The human-error side of cybersecurity

Want better enterprise cybersecurity? It may seem counterintuitive, but the answer probably isn't a surge in employee training or in hiring cybersecurity talent. That's because humans will always make errors, and humans can't cope with the scale and stealth of today's cyberattacks. To best protect information systems, including data, applications, networks, and mobile devices, look to more automation and artificial intelligence-based software to provide the defense in depth required to reduce risk and stop attacks.

That's one of the key conclusions of a new report from Oracle, "Security in the Age of AI," released in May. The report draws on a survey of 775 US-based respondents, including 341 CISOs, CSOs, and other CXOs at firms with at least $100 million in annual revenue; 110 federal or state government policy influencers; and 324 technology-engaged workers in non-managerial roles.

The CXO responses in the report show that corporate executives see human error as one of the biggest risks to information security. The most common planned response, cited by 47%, is to invest more in people, via training and hiring, than in technology over the next two years. Less common are plans to invest in new types of software with enhanced security, to upgrade infrastructure, or to buy artificial intelligence and machine learning tools for security, all of which could do more to minimize human error.

Learn more about this in my article, “You Can’t Improve Cybersecurity By Throwing People At The Problem,” published in Forbes.