In The Terminator, the Skynet artificial intelligence was turned on to track down a hacker attacking a military computer network. It turned out the hacker was Skynet itself. Is there a lesson there? Could AI turn against us, especially in the security domain?

That was one of the points I made while moderating a discussion of cybersecurity and AI back in October 2017. Here’s the start of a blog post written by my friend Tami Casey about the panel:

Mention artificial intelligence (AI) and security and a lot of people think of Skynet from The Terminator movies. Sure enough, at a recent Bay Area Cyber Security Meetup group panel on AI and machine learning, it was moderator Alan Zeichick – technology analyst, journalist and speaker – who first brought it up. But that wasn’t the only lively discussion during the panel, which focused on AI and cybersecurity.

I found two areas of discussion particularly interesting, which drew varying opinions from the panelists. One, around the topic of AI eliminating jobs and thoughts on how AI may change a security practitioner’s job, and two, about the possibility that AI could be misused or perhaps used by malicious actors with unintended negative consequences.

It was a great panel. I enjoyed working with the Meetup folks, and the participants: Allison Miller (Google), Ali Mesdaq (Proofpoint), Terry Ray (Imperva), Randy Dean (Launchpad.ai & Fellowship.ai).

You can read the rest of Tami’s blog here, and also watch a video of the panel.

Smart televisions, talking home assistants, consumer wearables – that’s not the real story of the Internet of Things. While those are fun and get great stories on blogs and morning news reports, the real IoT is the Industrial IoT. That’s where businesses will truly be transformed, with intelligent, connected devices working together to improve services, reduce friction, and disrupt everything. Everything.

According to Grand View Research, the Industrial IoT (IIoT) market will be $933.62 billion by 2025. “The ability of IoT to reduce costs has been the prime factor for its adoption in the industrial sector. However, several significant investment incentives, such as increased productivity, process automation, and time-to-market, have also been boosting this adoption. The falling prices of sensors have reduced the overall cost associated with data collection and analytics,” says the report.

The report continues,

An emerging trend among enterprises worldwide is the transformation of technical focus to improving connectivity in order to undertake data collection with the right security measures in place and with improved connections to the cloud. The emergence of low-power hardware devices, cloud integration, big data analytics, robotics & automation, and smart sensors are also driving IIoT market growth.

Markets and Markets

Markets & Markets predicts that IIoT will be worth $195.47 billion by 2022. The company says,

A key influencing factor for the growth of the IIoT market is the need to implement predictive maintenance techniques in industrial equipment to monitor their health and avoid unscheduled downtime in the production cycle. Factors driving the IIoT market include technological advancements in the semiconductor and electronics industry and the evolution of cloud computing technologies.

The manufacturing vertical is witnessing a transformation through the implementation of the smart factory concept and factory automation technologies. Government initiatives such as Industrie 4.0 in Germany and Plan Industriel in France are expected to promote the implementation of IIoT solutions in Europe. Moreover, leading countries in the manufacturing vertical such as the U.S., China, and India are expected to further expand their manufacturing industries and deploy smart manufacturing technologies to increase the contribution of this vertical to their national GDPs.

The IIoT market for camera systems is expected to grow at the highest rate between 2016 and 2022. Camera systems are mainly used in the retail and transportation verticals. The need for security and surveillance in these sectors is the key reason for the high growth rate of the market for camera systems. In the retail sector, camera systems are used for capturing customer behavior, moment tracking, people counting, and heat mapping. The benefits of installing surveillance systems include safety at the workplace and the prevention of theft and other losses, sweethearting, and other retail crimes. Video analytics plays a vital role in security across various areas of the transportation sector, including airports, railway stations, and large public places. Intelligent camera systems are also used for traffic monitoring, and for incident detection and reporting.

Accenture

The huge consulting firm Accenture says that the IIoT will add $14.2 trillion to the global economy by 2030. That’s not the size of the market, but the overall lift that the IIoT will provide. By any measure, that’s staggering. Accenture reports,

Today, the IIoT is helping to improve productivity, reduce operating costs and enhance worker safety. For example, in the petroleum industry, wearable devices sense dangerous chemicals and unmanned aerial vehicles can inspect remote pipelines.

However, the longer-term economic and employment potential will require companies to establish entirely new product and service hybrids that disrupt their own markets and generate fresh revenue streams. Many of these will underpin the emergence of the “outcome economy,” where organizations shift from selling products to delivering measurable outcomes. These may range from guaranteed energy savings in commercial buildings to guaranteed crop yields in a specific parcel of farmland.

IIoT Is a Work in Progress

The IIoT is going to have a huge impact. But it hasn’t yet, not on any large scale. As Accenture says,

When Accenture surveyed more than 1,400 C-suite decision makers—including 736 CEOs—from some of the world’s largest companies, the vast majority (84 percent) believe their organizations have the capability to create new, service-based income streams from the IIoT.

But scratch beneath the surface and the gloss comes off. Seventy-three percent confess that their companies have yet to make any concrete progress. Just 7 percent have developed a comprehensive strategy with investments to match.

Challenge and opportunity: That’s the Industrial Internet of Things. Watch this space.

The bad news: There are servers used in serverless computing. Real servers, with whirring fans and lots of blinking lights, installed in racks inside data centers inside the enterprise or up in the cloud.

The good news: You don’t need to think about those servers in order to use their functionality to write and deploy enterprise software. Your IT administrators don’t need to provision or maintain those servers, or think about their processing power, memory, storage, or underlying software infrastructure. It’s all invisible, abstracted away.

The whole point of serverless computing is that there are small blocks of code that do one thing very efficiently. Those blocks of code are designed to run in containers so that they are scalable, easy to deploy, and able to run in basically any computing environment. The open Docker platform has become the de facto industry standard for containers, and as a general rule, developers are seeing the benefits of writing code that targets Docker containers instead of, say, Windows servers or Red Hat Linux servers or SuSE Linux servers, or any other specific run-time environment. Docker can be hosted in a data center or in the cloud, and containers can be easily moved from one Docker host to another, adding to its appeal.

Currently, applications written for Docker containers still need to be managed by enterprise IT developers or administrators. That means deciding where to create the containers, ensuring that the container has sufficient resources (like memory and processing power) for the application, actually installing the application into the container, running/monitoring the application while it’s running, and then adding more resources if required. Helping do that is Kubernetes, an open container management and orchestration system for Docker. So while containers greatly assist developers and admins in creating portable code, the containers still need to be managed.
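To make that management burden concrete, here’s a minimal sketch of programmatic container handling using the Docker SDK for Python (the docker package). The image name, port mapping, and memory limit are hypothetical placeholders, not recommendations:

```python
# A minimal sketch of programmatic container management, assuming the
# Docker SDK for Python ("docker" package) and a hypothetical image name.
import docker

client = docker.from_env()  # connect to the local Docker daemon

# Run a containerized app, mapping container port 8080 to host port 8080.
container = client.containers.run(
    "example/inventory-api:1.2",   # hypothetical image
    detach=True,
    ports={"8080/tcp": 8080},
    mem_limit="512m",              # cap the container's memory
)

# Basic monitoring: check status and tail recent logs.
container.reload()
print(container.status)
print(container.logs(tail=10).decode("utf-8", errors="replace"))

# Tear down when finished.
container.stop()
container.remove()
```

Even this toy example shows how many operational decisions — resources, monitoring, teardown — still sit with whoever runs the container.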

That’s where serverless comes in. Developers write their bits of code (such as to read or write from a database, or encrypt/decrypt data, or search the Internet, or authenticate users, or to format output) to run in a Docker container. However, instead of deploying directly to Docker, or using Kubernetes to handle deployment, they write their code as a function, and then deploy that function onto a serverless platform, like the new Fn project. Other applications can call that function (perhaps using a RESTful API) to do the required operation, and the serverless platform then takes care of everything else automatically behind the scenes, running the code when needed, idling it when not needed.
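To illustrate the shape of a serverless function, here’s a toy sketch in Python. It uses a generic handler, not the Fn project’s actual FDK interface (which has its own conventions); the point is simply that the unit of deployment is one small, single-purpose function:

```python
# A toy sketch of the serverless pattern: a single-purpose function that
# receives a JSON payload and returns a JSON result. The handler signature
# is generic and illustrative -- it is not the Fn FDK's API.
import base64
import json

def handler(event: dict) -> dict:
    """Encode the supplied text; a stand-in for one small, focused operation."""
    text = event.get("text", "")
    encoded = base64.b64encode(text.encode("utf-8")).decode("ascii")
    return {"encoded": encoded}

if __name__ == "__main__":
    # Locally, the "platform" is just a direct call; in production the
    # serverless runtime invokes handler() per request and idles it otherwise.
    payload = json.loads('{"text": "hello, serverless"}')
    print(json.dumps(handler(payload)))
```

In production, the serverless platform — not you — decides when to spin the function up, how many copies to run, and when to idle it.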

Read my essay, “Serverless Computing: What It Is, Why You Should Care,” to find out more.

Critical information about 46 million Malaysians was leaked onto the Dark Web. The stolen data included mobile phone numbers from telcos and mobile virtual network operators (MVNOs), prepaid phone numbers, customer details including physical addresses – and even the unique IMEI and IMSI registration numbers associated with SIM cards.

An isolated instance from one rogue carrier? No. The carriers included Altel, Celcom, DiGi, Enabling Asia, Friendimobile, Maxis, MerchantTradeAsia, PLDT, RedTone, TuneTalk, Umobile and XOX; news about the breach was first published on 19 October 2017 by a Malaysian online community.

When did the breach occur? According to lowyat.net, “Time stamps on the files we downloaded indicate the leaked data was last updated between May and July 2014 between the various telcos.”

That’s more than three years between theft of the information and its discovery. We have no idea if the carriers had already discovered the losses, and chose not to disclose the breaches.

A huge delay between a breach and its disclosure is not unusual. Perhaps things will change once the General Data Protection Regulation (GDPR) kicks in next year, when organizations must reveal a breach within three days of discovery. That still leaves the question of discovery. It simply takes too long!

According to Mandiant, the global average dwell time (time between compromise and detection) is 146 days. In some areas, it’s far worse: the EMEA region has a dwell time of 469 days. Research from the Ponemon Institute says that it takes an average of 98 days for financial services companies to detect intrusion on their networks, and 197 days in retail. It’s not surprising that the financial services folks do a better job – but three months seems like a very long time.

An article headline from InfoSecurity Magazine says it all: “Hackers Spend 200+ Days Inside Systems Before Discovery.” Verizon’s Data Breach Investigations Report for 2017 has some depressing news: “Breach timelines continue to paint a rather dismal picture — with time-to-compromise being only seconds, time-to-exfiltration taking days, and times to discovery and containment staying firmly in the months camp. Not surprisingly, fraud detection was the most prominent discovery method, accounting for 85% of all breaches, followed by law enforcement which was seen in 4% of cases.”

What Can You Do?

There are two relevant statistics. The first is time-to-discovery, and the other is time-to-disclosure, whether to regulators or customers.

  • Time-to-disclosure is a matter of policy, not technology. There are legal, public-relations, financial (what if the breach happens during a “quiet period” prior to announcing results?), regulatory, and even law-enforcement considerations (what if investigators are laying a trap, and don’t want to tip off the attackers that the breach has been discovered?).
  • Time-to-discovery, on the other hand, is a matter of technology (and the willingness to use it). What doesn’t work? Scanning log files using manual or semi-automated methods. Excel spreadsheets won’t save you here!

What’s needed are comprehensive endpoint monitoring capabilities, coupled with excellent threat intelligence and real-time analytics driven by machine learning. Nothing else can correlate huge quantities of data from such widely disparate sources, and hope to discover outliers based on patterns.
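For a sense of what “outliers based on patterns” means in practice, here’s a toy sketch using scikit-learn’s IsolationForest. The endpoint features (logins per hour, megabytes sent, distinct destinations) are invented; a real deployment would ingest far richer telemetry plus threat intelligence:

```python
# A toy sketch of machine-learning-driven outlier detection over endpoint
# telemetry, assuming scikit-learn is installed. The feature set is hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated "normal" endpoint behavior: [logins/hour, MB sent, distinct hosts]
normal = rng.normal(loc=[5, 20, 3], scale=[1, 5, 1], size=(1000, 3))
# A handful of suspicious records: heavy exfiltration to many destinations.
suspicious = np.array([[6, 900, 40], [4, 1200, 55]])

model = IsolationForest(contamination=0.01, random_state=42).fit(normal)

# predict() returns +1 for inliers and -1 for outliers.
print(model.predict(suspicious))   # expect [-1, -1]
print(model.predict(normal[:5]))   # mostly +1
```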

Discovery and containment take months, says Verizon. You can’t have containment without discovery. With current methods, we’ve seen that discovery takes months or years, if the breach is ever detected at all. Endpoint monitoring technology, when coupled with machine learning — and with 24×7 managed security service providers — can reduce that to seconds or minutes.

There is no excuse for breaches staying hidden for three years or longer. None. That’s no way to run a business.

Humans can’t keep up. At least, not when it comes to meeting the rapidly expanding challenges inherent to enterprise cybersecurity. There are too many devices, too many applications, too many users, and too many megabytes of log files for humans to make sense of it all. Moving forward, effective cybersecurity is going to be a “Battle of the Bots,” or to put it less dramatically, machine versus machine.

Consider the 2015 breach at the U.S. Government’s Office of Personnel Management (OPM). According to a story in Wired, “The Office of Personnel Management repels 10 million attempted digital intrusions per month—mostly the kinds of port scans and phishing attacks that plague every large-scale Internet presence.” Yet despite sophisticated security mechanisms, hackers managed to steal millions of records on applications for security clearances, personnel files, and even 5.6 million digital images of government employee fingerprints. (In August 2017, the FBI arrested a Chinese national in connection with that breach.)

Traditional security measures are often slow and potentially ineffective. Take the practice of applying patches and updates to address newly found software vulnerabilities. Companies now have too many systems in play for the process of finding and installing patches to be handled effectively by hand.

Another practice that can’t be handled manually: scanning log files to identify abnormalities and outliers in data traffic. While there are many excellent tools for reviewing those files, they are often slow and aren’t good at aggregating logs across disparate silos (such as a firewall, a web application server, and an Active Directory user authentication system). Thus, results may not be comprehensive, patterns may be missed, and the results of deep analysis may not be returned in real time.

Read much more about this in my new essay, “Machine Versus Machine: The New Battle For Enterprise Cybersecurity.”

Still no pastrami sandwich. Still no guinea pig. What’s the deal with the cigarette?

I installed iOS 11.1 yesterday, tantalized by Apple’s boasting of tons of new emoji. Confession: Emoji are great fun. Guess what I looked for right after the software install completed?

Many of the 190 new emoji are skin-tone variations on new or existing people or body parts. That’s good: Not everyone is yellow, like the Simpsons. (If you don’t count the different skin-tone versions, there are about 70 new graphics.)

New emoji that I like:

  • Steak. Yum!
  • Shushing finger face. Shhhh!
  • Cute hedgehog. Awww!
  • Scottish flag. Och aye!

What’s still stupidly missing:

  • Pastrami sandwich. Sure, there’s a new sandwich emoji, but it’s not a pastrami sandwich. Boo.
  • There’s a cheeseburger (don’t get me started on the cheese top/bottom debate), but nothing for those who don’t put cheese on their burgers at all. Grrrr.
  • Onion rings. They’ve got fries, but no rings. Waah.
  • Coffee with creamer. I don’t drink my coffee black. Bleh.
  • Guinea pig. That’s our favorite pet, but no cute little caviidae in the emoji. Wheek!

I still don’t like the cigarette emoji, but I guess once they added it in 2015, they couldn’t delete it.

Here is a complete list of all the emoji, according to PopSugar. What else is missing?

You want to read Backlinko’s “The Definitive Guide To SEO In 2018.” Backlinko is an SEO consultancy founded by Brian Dean. The “Definitive Guide” is a cheerfully illustrated infographic – a lengthy infographic – broken up into several useful chapters:

  • RankBrain & User Experience Signals
  • Become a CTR Jedi
  • Comprehensive, In-Depth Content Wins
  • Get Ready for Google’s Mobile-first Index
  • Go All-In With Video (Or Get Left Behind)
  • Pay Attention to Voice Search
  • Don’t Forget: Content and Links Are Key
  • Quick Tips for SEO in 2018

Some of these sections had advice that I already knew; others were pretty much new to me, such as the voice search section. I’ll also admit to being very out-of-date on how Google’s ranking systems work; the algorithms change often, and my last deep dive was circa 2014. Oops.

The advice in this document is excellent and well-explained. For example, on RankBrain:

Last year Google announced that RankBrain was their third most important ranking factor: “In the few months it has been deployed, RankBrain has become the third-most important signal contributing to the result of a search query.”

And as Google refines its algorithm, RankBrain is going to become even MORE important in 2018. The question is: What is RankBrain, exactly? And how can you optimize for it?

RankBrain is a machine learning system that helps Google sort their search results. That might sound complicated, but it isn’t. RankBrain simply measures how users interact with the search results… and ranks them accordingly.

The document then goes into a very helpful example, digging into the concept of Dwell Time (that is, how long someone spends on the page). The “Definitive Guide” also provides some very useful metrics about targets for click-through rate (CTR), dwell time, length and depth of content, and more. For example, the document says,

One industry study found that organic CTR is down 37% since 2015. It’s no secret why: Google is crowding out the organic search results with Answer Boxes, Ads, Carousels, “People also ask” sections, and more. And to stand out, your result needs to scream “click on me!”…or else it’ll be ignored.
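The arithmetic behind a CTR figure like that is simple enough to sketch. The numbers below are made up, but the calculation (clicks divided by impressions) is the same one you’d run on your own search analytics export:

```python
# A quick sketch of organic CTR arithmetic, with invented page data.
pages = {
    "/definitive-guide-seo": {"impressions": 12_400, "clicks": 410},
    "/old-blog-post":        {"impressions": 8_900,  "clicks": 95},
}

for url, stats in pages.items():
    ctr = stats["clicks"] / stats["impressions"]  # CTR = clicks / impressions
    print(f"{url}: organic CTR = {ctr:.1%}")
```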

All of the advice is good, but of course, I’m not always going to follow it. For example, the “Definitive Guide” says:

How can you write the type of in-depth content that Google wants to see? First, publish content that’s at least 2,000 words. That way, you can cover everything a Google searcher needs to know about that topic. In fact, our ranking factors study found that longer content (like ultimate guides and long-form blog posts) outranked short articles in Google.

Well, this post isn’t even close to 2,000 words. Oops. Read the “Definitive Guide,” you’ll be glad you did.

Software developers and testers must be sick of hearing security nuts rant, “Beware SQL injection! Monitor for cross-site scripting! Watch for hijacked session credentials!” I suspect the developers tune us out. Why? Because we’ve been raving about the same defects for most of their careers. Truth is, though, the same set of major security vulnerabilities persists year after year, decade after decade.

The industry has generated newer tools, better testing suites, Agile methodologies, and other advances in writing and testing software. Despite all that, coders keep making the same dumb mistakes, peer reviews keep missing those mistakes, test tools fail to catch those mistakes, and hackers keep finding ways to exploit those mistakes.
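Take SQL injection, the longest-lived offender of the bunch. The sketch below, using Python’s built-in sqlite3 module, shows the mistake and the fix side by side; the table and payload are contrived, but the pattern is exactly what keeps recurring in production code:

```python
# The classic repeat offender, sketched with Python's built-in sqlite3 module.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"   # a typical injection payload

# Vulnerable: string concatenation lets the payload rewrite the query.
unsafe = f"SELECT role FROM users WHERE name = '{user_input}'"
print(conn.execute(unsafe).fetchall())    # returns data it shouldn't

# Safe: a parameterized query treats the payload as plain data.
safe = "SELECT role FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())   # returns nothing
```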

One way to see the repeat offenders is to look at the OWASP Top 10. That’s a sometimes controversial ranking of the 10 primary vulnerabilities, published every three or four years by the Open Web Application Security Project.

The OWASP Top 10 list is not controversial because it’s flawed. Rather, some believe that the list is too limited. By focusing only on the top 10 web code vulnerabilities, they assert, it leads organizations to neglect the long tail. What’s more, there’s often jockeying in the OWASP community about the Top 10 ranking, and whether the 11th or 12th items belong on the list instead of something else. There’s merit to those arguments, but for now, the OWASP Top 10 is an excellent common ground for discussing security-aware coding and testing practices.

Note that the top 10 list doesn’t directly represent the 10 most common attacks. Rather, it’s a ranking of risk. There are four factors used for this calculation. One is the likelihood that applications would have specific vulnerabilities; that’s based on data provided by companies. That’s the only “hard” metric in the OWASP Top 10. The other three risk factors are based on professional judgement.
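To make that concrete, here’s a rough sketch of how such a ranking could be computed, loosely modeled on the OWASP risk-rating approach (likelihood-side factors averaged, then multiplied by technical impact). The factor scores below are invented for illustration, not OWASP’s published numbers:

```python
# A rough, illustrative risk-ranking sketch; all scores are invented.
def risk_score(exploitability, prevalence, detectability, impact):
    """All factors on a 1 (low) to 3 (high) scale."""
    likelihood = (exploitability + prevalence + detectability) / 3
    return likelihood * impact

candidates = {
    "Injection":                  risk_score(3, 2, 3, 3),
    "Sensitive data exposure":    risk_score(2, 3, 2, 3),
    "Hypothetical long-tail bug": risk_score(1, 1, 2, 2),
}

for name, score in sorted(candidates.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.1f}")
```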

It boggles the mind that a majority of top 10 issues appear across the 2007, 2010, 2013, and draft 2017 OWASP lists. That doesn’t mean that these application security vulnerabilities have to remain on your organization’s list of top problems, though—you can swat those flaws.

Read more in my essay, “The OWASP Top 10 is killing me, and killing you!”

Apply patches. Apply updates. Those are considered to be among the lowest-hanging of the low-hanging fruit for IT cybersecurity. When commercial products release patches, download and install the code right away. When open-source projects disclose a vulnerability, do the appropriate update as soon as you can, everyone says.

A problem is that there are so many patches and updates. They’re found in everything from device firmware to operating systems, to back-end server software to mobile apps. To be able to even discover all the patches is a huge effort. You have to know:

  • All the hardware and software in your organization — so you can scan the vendors’ websites or emails for update notices. This may include the data center, the main office, remote offices, and employees’ homes. Oh, and rogue software installed without IT’s knowledge.
  • The versions of all the hardware and software instances — so you can tell which updates apply to you, and which don’t. Sometimes there may be an old version somewhere that’s never been patched.
  • The dependencies. Installing a new operating system may break some software. Installing a new version of a database may require changes on a web application server.
  • The location of each of those instances — so you can know which ones need patching. Sometimes this can be done remotely, but other times may require a truck roll.
  • The administrator access links, usernames, and passwords — hopefully, those are not set to “admin/admin.” The downside of changing default admin passwords is that you have to remember the new ones. Sure, sometimes you can make changes with, say, any Active Directory user account with the proper privileges. That won’t help you, though, with most firmware or mobile devices.

The above steps are merely for discovery of the issue and the existence of a patch. You haven’t protected anything until you’ve installed the patch, which often (but not always) requires taking the hardware, software, or service offline for minutes or hours. This requires scheduling. And inconvenience. Even if you have patch-management tools (and there are many available), too many low-hanging fruit can be overlooked.
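Even the discovery step alone is worth automating. Here’s a toy sketch that compares an installed-software inventory against a vulnerability feed; the package names, versions, and advisory IDs are invented, and the version comparison leans on the Python packaging library:

```python
# A toy sketch of patch discovery: compare installed versions against a feed.
# Package names, versions, and advisory IDs are invented.
from packaging.version import Version

installed = {"webframework": "2.3.30", "imagelib": "1.9.4", "dbdriver": "5.1.0"}

advisories = [
    {"package": "webframework", "fixed_in": "2.3.32", "id": "EXAMPLE-2017-001"},
    {"package": "dbdriver",     "fixed_in": "5.0.9",  "id": "EXAMPLE-2017-002"},
]

for adv in advisories:
    current = installed.get(adv["package"])
    if current and Version(current) < Version(adv["fixed_in"]):
        print(f"{adv['package']} {current} needs patching "
              f"(fixed in {adv['fixed_in']}, see {adv['id']})")
```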

You Can’t Wait for That Downtime Window

Oracle Chairman and CTO Larry Ellison made important points about patching during his keynote at OpenWorld 2017,

Our data centers are enormously complicated. There are lots of servers and storage and operating systems, virtual machines, containers and databases, data stores, file systems. And there are thousands of them, tens of thousands, hundreds of thousands of them. It’s hard for people to locate all these things and patch them. They have to be aware there’s a vulnerability. It’s got to be an automated process.

You can’t wait for a downtime window, where you say, “Oh, I can’t take the system down. I know I’ve got to patch this, but we have scheduled downtime middle of next month.” Well, that’s wrong thinking and that’s kind of lack of priority for security.

All that said, patching and updating must be a priority. Dr. Ron Layton, Deputy Assistant Director of the U.S. Secret Service, said at the NetEvents Global Press Summit, September 2017:

Most successful hacks and breaches – most of them – were because low-level controls were not in place. That’s it. That’s it. Patch management. It’s the low-level stuff that will get you to the extent that the bad guys will say, I’m not going to go here. I’m going to go somewhere else. That’s it.

The Scale of Security Issues Is Huge

I receive regular emails from various defect-tracking and patch-awareness lists. Here’s one weekly sample from the CERT team at the U.S. Department of Homeland Security. IT pros won’t be surprised at how large it is: https://www.us-cert.gov/ncas/bulletins/SB17-296

There are 25 high-severity vulnerabilities on this list, most from Microsoft, some from Oracle. There are lots of medium-severity vulnerabilities from Microsoft, OpenText, Oracle, and WPA – the latter being the widely reported bug in Wi-Fi Protected Access. In addition, there are a few low-severity vulnerabilities, and then page after page of those labeled “severity not yet assigned.” The list goes on and on, even hitting infrastructure products from Cisco and F5. And lots more Wi-Fi issues.

This is a typical week – and not all of the vulnerabilities in the CERT report have patches yet. CERT is only one source, by the way. Want more? Here’s a list of security-related updates from Apple. Here is a list of security updates from Juniper Networks. Another list from Microsoft. And from Red Hat, too.

So: When security analysts say that enterprises merely need to keep up with patches and fixes, well, yes, that’s the low-hanging fruit. However, nobody talks about how much of that low-hanging fruit there is. The amount is overwhelming in an enterprise. No wonder some rotten fruit slip through the cracks.

Open source software (OSS) offers many benefits for organizations large and small—not the least of which is the price tag, which is often zero. Zip. Nada. Free-as-in-beer. Beyond that compelling price tag, what you often get with OSS is a lack of a hidden agenda. You can see the project, you can see the source code, you can see the communications, you can see what’s going on in the support forums.

When OSS goes great, everyone is happy, from techies to accounting teams. Yes, the legal department may want to scrutinize the open source license to make sure your business is compliant, but in most well-performing scenarios, the lawyers are the only ones frowning. (But then again, the lawyers frown when scrutinizing commercial closed-source software license agreements too, so you can’t win.)

The challenge with OSS is that it can be hard to manage, especially when something goes wrong. Depending on the open source package, there can be a lot of mysteries, which can make ongoing support, including troubleshooting and performance tuning, a real challenge. That’s because OSS is complex.

It’s not like you can say, well, here’s my Linux distribution on my server. Oh, and here’s my open source application server, and my open source NoSQL database, and my open source log suite. In reality, those bits of OSS may be from separate OSS projects, which may (or may not) have been tested for how well they work together.

A separate challenge is that because OSS is often free-as-in-beer, the software may not be in the corporate inventory. That’s especially common if the OSS is in the form of a library or an API that might be built into other applications you’ve written yourself. The OSS might be invisible but with the potential to break or cause problems down the road.

You can’t manage what you don’t know about

When it comes to OSS, there may be a lot you don’t know about, such as those license terms or interoperability gotchas. Worse, there can be maintenance issues — and security issues. Ask yourself: Does your organization know all the OSS it has installed on servers on-prem or in the cloud? Coded into custom applications? Are you sure that all patches and fixes have been installed (and installed correctly), even on virtual machine templates, and that there are no security vulnerabilities?
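One small slice of that inventory problem can be sketched in a few lines — listing the open source Python packages and versions present in a given environment. A real inventory would also have to cover OS packages, containers, firmware, and libraries compiled into custom applications:

```python
# A small sketch of one slice of the OSS inventory problem: list installed
# Python packages and versions in the current environment (Python 3.8+).
from importlib.metadata import distributions

inventory = sorted(
    (dist.metadata["Name"], dist.version)
    for dist in distributions()
    if dist.metadata["Name"]
)

for name, version in inventory:
    print(f"{name}=={version}")
```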

In my essay “The six big gotchas: The impact of open source on data centers,” we’ll dig into the key topics: License management, security, patch management, maximizing uptime, maximizing performance, and supporting the OSS.

There are two popular ways of migrating enterprise assets to the cloud:

  1. Write new cloud-native applications.
  2. Lift-and-shift existing data center applications to the cloud.

Gartner’s definition: “Lift-and-shift means that workloads are migrated to cloud IaaS in as unchanged a manner as possible, and change is done only when absolutely necessary. IT operations management tools from the existing data center are deployed into the cloud environment largely unmodified.”

There’s no wrong answer, no wrong way of proceeding. Some data center applications (including servers and storage) may be easier to move than others. Some cloud-native apps may be easier to write than others. Much depends on how much interconnectivity there is between the applications and other software; that’s why, for example, public-facing websites are often the easiest to move to the cloud, while tightly coupled internal software, such as inventory control or factory-floor automation, can be trickier.

That’s why in some cases, a hybrid strategy is best. Some parts of the applications are moved up to the cloud, while others remain in the data centers, with SD-WANs or other connectivity linking everything together in a secure manner.

In other words, no one size fits all. And no one timeframe fits all, especially when it comes to lifting-and-shifting.

SaaS? PaaS? It Depends.

A recent survey from the Oracle Applications User Group (OAUG) showed that 70% of respondents who have plans to adopt Oracle Cloud solutions will do so in the next three years. About 35% plan to implement Software-as-a-Service (SaaS) solutions to run with their existing Oracle on-premises installations, and 29% plan to use Platform-as-a-Service (PaaS) services to accelerate software development efforts in the next 12 months.

Joe Paiva, CIO of the U.S. Commerce Department’s International Trade Administration (ITA), is a fan of lift-and-shift. He said at a cloud conference that “Sometimes it makes sense because it gets you there. That was the key. We had to get there because we would be no worse off or no better off, and we were still spending a lot of money, but it got us to the cloud. Then we started doing rationalization of hardware and applications, and dropped our bill to Amazon by 40 percent compared to what we were spending in our government data center. We were able to rationalize the way we use the service.” Paiva estimates government agencies could save 5%-15% using lift-and-shift.

The benefits of moving existing workloads to the cloud are almost entirely financial. If you can shut down a data center and pay less to run the application in the cloud, it can be a good short-term solution with immediate ROI. Gartner cautions, however, that lift and shift “generally results in little created value. Plus, it can be a more expensive option and does not deliver immediate cost savings.” Much depends on how much it costs to run that application today.

A Multi-Track Process for Cloud Migration

The real benefits of new cloud development and deployment architectures take time to realize. For many organizations, there may be a multi-track process:

First track: Lift-and-shift existing workloads that are relatively easy to migrate, while simultaneously writing cloud-native applications for new projects. Those provide the biggest and fastest return on investment, while leaving data center workloads in place and untouched.

Second track: Write cloud-native applications for the remaining data-center workloads, the ones impractical to migrate in their existing form. These projects will be slower, but the payoff is the ability to turn off some or all existing data centers – and eliminate their associated expenses, such as power and cooling, bandwidth, and physical space.

Third track: At some point, revisit the lifted-and-shifted workloads to see which would significantly benefit from being rewritten as cloud-native apps. Unless there is an order of magnitude increase in efficiency, or significant added functionality, the financial returns won’t be high – or may be nonexistent. For some applications, it may never make sense to redesign and rewrite them in a cloud-native way. So, those old enterprise applications may live on for years to come.

About a decade ago, I purchased a piece of a mainframe on eBay — the name ID bar. Carved from a big block of aluminum, it says “IBM System/370 168,” and it hangs proudly over my desk.

My time on mainframes was exclusively with the IBM System/370 series. With a beautiful IBM 3278 color display terminal on my desk, and, later, a TeleVideo 925 terminal and an acoustic coupler at home, I was happier than anyone had a right to be.

We refreshed our hardware often. The latest variant I worked on was the System/370 4341, introduced in early 1979, which ran faster and cooler than the very costly 3031 mainframes we had before. I just found this in the IBM archives: “The 4341, under a 24-month contract, can be leased for $5,975 a month with two million characters of main memory and for $6,725 a month with four million characters. Monthly rental prices are $7,021 and $7,902; purchase prices are $245,000 and $275,000, respectively.” And we had three, along with tape drives, disk drives (in IBM-speak, DASD, for Direct Access Storage Devices), and high-speed line printers. Not cheap!

Our operating system on those systems was called Virtual Machine, or VM/370. It consisted of two parts, Control Program and Conversational Monitor System. CP was the timesharing operating system – in modern virtualization terms, the hypervisor running on the bare metal. CMS was the user interface that users logged into, and it provided access not only to a text-based command console, but also to file storage and a library of tools, such as compilers. (We often referred to the platform as CP/CMS.)

Thanks to VM/370, each user believed she had access to a 100% dedicated and isolated System/370 mainframe, with every resource available and virtualized. (I.e., she appeared to have dedicated access to tape drives, but they appeared non-functional if her tape(s) weren’t loaded, or if she didn’t buy access to the drives.)

My story about mainframes isn’t just reminiscing about the time of dinosaurs. When programming those computers, which I did full-time in the late 1970s and early 1980s, I learned a lot of lessons that are very applicable today. Read all about that in my article for HP Enterprise Insights, “4 lessons for modern software developers from 1970s mainframe programming.”

To get the most benefit from the new world of cloud-native server applications, forget about the old way of writing software. In the old model, architects designed the software, programmers wrote the code, and testers tested it on a test server. Once testing was complete, the code was “thrown over the wall” to administrators, who installed the software on production servers and essentially owned the applications moving forward, going back to the developers only if problems occurred.

The new model, which appeared about 10 years ago, is called “DevOps,” or developer operations. In the DevOps model, architects, developers, testers, and administrators collaborate much more closely to create and manage applications. Specifically, developers play a much broader role in the day-to-day administration of deployed applications, and use information about how the applications are running to tune and enhance those applications.

The involvement of developers in administration made DevOps perfect for cloud computing. Because administrators had fewer responsibilities (i.e., no hardware to worry about), it was less threatening for those developers and administrators to collaborate as equals.

Change matters

In that old model of software development and deployment, developers were always change agents. They created new stuff, or added new capabilities to existing stuff. They embraced change, including new technologies – and the faster they created change (i.e., wrote code), the more competitive their business.

By contrast, administrators are tasked with maintaining uptime while ensuring security. Change is not a virtue to those departments. While admins must accept change as they install new applications, it’s secondary to maintaining stability. Indeed, admins could push back against deploying software if they believed those apps weren’t reliable, or might affect the stability of the data center as a whole.

With DevOps, everyone can embrace change. One of the ways that works, with cloud computing, is to reduce the risk that an unstable application can damage system reliability. In the cloud, applications can be built and deployed using bare-metal servers (as in a data center), or in virtual machines or containers.

DevOps works best when software is deployed in VMs or containers, since those are isolated from other systems – thereby reducing risk. Turns out that administrators do like change, if there’s minimal risk that changes will negatively affect overall system reliability, performance, and uptime.

Benefits of DevOps

Goodbye, CapEx; hello, OpEx. Cloud computing moves enterprises from capital-expense data centers (which must be built, electrified, cooled, networked, secured, stocked with servers, and refreshed periodically) to operational-expense services (where the business pays monthly for the processors, memory, bandwidth, and storage reserved and/or consumed). When you couple those benefits with virtual machines, containers, and DevOps, you get:

  • Easier Maintenance: It can be faster to apply patches and fixes to software running in virtual machines – and to use snapshots to roll back if needed.
  • Better Security: Cloud platform vendors offer some security monitoring tools, and it’s relatively easy to install top-shelf protections like next-generation firewalls – themselves offered as cloud services.
  • Improved Agility: Studies show that the process of designing, coding, testing, and deploying new applications can be 10x faster than traditional data center methods, because the cloud reduces friction and provides robust resources on demand.
  • Lower Cost: Vendors such as Amazon, Google, Microsoft, and Oracle are aggressively lowering prices to gain market share — and in many cases, those prices are an order of magnitude below what it would cost to provision an enterprise data center.
  • Massive Scale: Need more power? Need more bandwidth? Need more storage? Push a button, and the resources are live. If those needs are short-term, you can turn the dials back down, to lower the monthly bill. You can’t do that in a data center.

Rapidly evolving

The technologies used in creating cloud-native applications are evolving rapidly. Containers, for example, are relatively new, yet are becoming incredibly popular because they require 4x-10x fewer resources than VMs – thereby slashing OpEx costs even further. Software development and management tools, like Kubernetes (for orchestration of multiple containers), Chef (which makes it easy to manage cloud infrastructure), Puppet (which automates pushing out cloud service configurations), and OpenWhisk (which strips down cloud services to “serverless” basics) push the revolution farther.

DevOps is more important than the meaningless “developer operations” moniker implies. It’s a whole new, faster way of doing computing with cloud-native applications. Because rapid change means everything in achieving business agility, everyone wins.

“One of these things is not like the others,” the television show Sesame Street taught generations of children. Easy. Let’s move to the next level: “One or more of these things may or may not be like the others, and those variances may or may not represent systems vulnerabilities, failed patches, configuration errors, compliance nightmares, or imminent hardware crashes.” That’s a lot harder than distinguishing cookies from brownies.

Looking through gigabytes of log files and transactions records to spot patterns or anomalies is hard for humans: it’s slow, tedious, error-prone, and doesn’t scale. Fortunately, it’s easy for artificial intelligence (AI) software, such as the machine learning algorithms built into Oracle Management Cloud. What’s more, the machine learning algorithms can be used to direct manual or automated remediation efforts to improve security, compliance, and performance.

Consider how large-scale systems gradually drift away from their required (or desired) configuration, a key area of concern in the large enterprise. In his Monday, October 2 Oracle OpenWorld session on managing and securing systems at scale using AI, Prakash Ramamurthy, senior vice president of systems management at Oracle, talked about how drift happens. Imagine that you’ve applied a patch, but then later you spool up a virtual server that is running an old version of a critical service or contains an obsolete library with a known vulnerability. That server is out of compliance, Ramamurthy said. Drift.

Drift is bad, said Ramamurthy, and detecting and stopping drift is a core competency of Oracle Management Cloud. It starts with monitoring cloud and on-premises servers, services, applications, and logs, using machine learning to automatically understand normal behavior and identify anomalies. No training necessary here: A variety of machine learning algorithms teach themselves how to play the “one of these things is not like the others” game with your data, your systems, and your configuration, and also to classify the systems in ways that are operationally relevant. Even if those logs contain gigabytes of information on hundreds of thousands of transactions each second.
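A stripped-down illustration of one kind of drift check — hashing configuration files and comparing them against an approved baseline — might look like the sketch below. The paths and hashes are hypothetical, and Oracle Management Cloud’s machine learning goes far beyond simple checksums; this just shows the basic idea of “one of these things is not like the others”:

```python
# A stripped-down sketch of configuration drift detection: compare file
# hashes against a recorded baseline. Paths and hashes are hypothetical.
import hashlib
from pathlib import Path

baseline = {
    "/etc/myapp/app.conf": "9b74c9897bac770ffc029102a200c5de",
    "/etc/myapp/tls.conf": "4c2a904bafba06591225113ad17b5cec",
}

def md5(path: str) -> str:
    return hashlib.md5(Path(path).read_bytes()).hexdigest()

for path, expected in baseline.items():
    try:
        actual = md5(path)
    except FileNotFoundError:
        print(f"DRIFT: {path} is missing")
        continue
    if actual != expected:
        print(f"DRIFT: {path} no longer matches the approved baseline")
```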

Learn more in my article for Forbes, “Catch The Drift With Machine Learning — Before The Drift Catches You.”

AOL Instant Messenger will be dead before the end of 2017. Yet, instant messages have succeeded far beyond what anyone could have envisioned for either SMS (Short Message Service, carried by the phone company) or AOL, which arguably brought instant messaging to regular computers starting in 1997.

It would be wonderful to claim that there’s some great significance in the passing of AIM. However, my guess is that there simply wasn’t any business benefit to maintaining a service that nearly nobody used. The AIM service was said to carry far less than 1% of all instant messages across the Internet… and that was in 2011.

I have an AIM account, and although it’s linked into my Apple Messages client, I had completely forgotten about it. Yes, there was a little flurry of news back in March 2017, when AOL began closing APIs and shutting down some third-party AIM applications. However, that didn’t resonate. Then, on Oct. 6, came the email from AOL’s new corporate overlord, Oath, a subsidiary of Verizon:

Dear AIM user,

We see that you’ve used AOL Instant Messenger (AIM) in the past, so we wanted to let you know that AIM will be discontinued and will no longer work as of December 15, 2017.

Before December 15, you can continue to use the service. After December 15, you will no longer have access to AIM and your data will be deleted. If you use an @aim.com email address, your email account will not be affected and you will still be able to send and receive email as usual.

We’ve loved working on AIM for you. From setting the perfect away message to that familiar ring of an incoming chat, AIM will always have a special place in our hearts. As we move forward, all of us at AOL (now Oath) are excited to continue building the next generation of iconic brands and life-changing products for users around the world.

You can visit our FAQ to learn more. Thank you for being an AIM user.

Sincerely,

The AOL Instant Messenger team

Interestingly, my wife, who also has an AIM account but never uses it, thought that the message above was a phishing scam of some sort. So, AIM is dead. But not instant messaging, which is popular with both consumers and business users, on desktops, notebooks, and smartphones. There are many clients that consumers can use; according to Statista, here are the leaders as of January 2017, cited in millions of active monthly users. AIM didn’t make the list.

Then there are the corporate instant message platforms, Slack, Lync, and Symphony. And we’re not even talking social media, like Twitter, Google+, Kik, and Instagram. So – Instant messaging is alive and well. AIM was the pioneer, but it ceased being relevant a long, long time ago.

IT managers shouldn’t have to choose between cloud-driven innovation and data-center-style computing. Developers shouldn’t have to choose between the latest DevOps programming using containers and microservices, and traditional architectures and methodologies. CIOs shouldn’t have to choose between a fully automated and fully managed cloud and a self-managed model using their own on-staff administrators.

At an Oracle OpenWorld general session on infrastructure-as-a-service (IaaS) October 3, Don Johnson, senior vice president of product development at Oracle, lamented that CIOs are often forced to make such difficult choices. Sure, the cloud is excellent for purpose-built applications, he said, “and so what’s working for them is cloud-native, but what’s not working in the cloud are enterprise workloads. It’s an unnecessary set of bad choices.”

When it comes to moving existing business-critical applications to the cloud, Johnson explained the three difficult choices faced by many organizations:

  • First, CIOs can rewrite those applications from the ground up to run in the cloud in a platform-as-a-service (PaaS) model. That’s best in terms of achieving the greatest computational efficiency, as well as integration with other cloud services, but it can be time-consuming and costly.
  • Second, organizations can retrofit their existing applications to run in the cloud, but this can be challenging at best, or nearly impossible in some cases.
  • Or third, CIOs can “lift and shift” existing on-premises applications, including their full software stack, directly into the cloud, using the IaaS model.

Historically, those three models have required three different clouds. No longer. Only the Oracle Cloud Infrastructure, Johnson stated, “lets you run your full existing stack alongside cloud-native applications.” And this is important, he added, because migration to the cloud must be slow and deliberate. “Running in the cloud is very disruptive. It can’t happen overnight. You need to move when and how you want to move,” he said. And a deliberative movement to the cloud means a combination of new cloud-native PaaS applications and legacy applications migrated to IaaS.

Read more in my story for Forbes, “Lift And Shift Workloads — And Write Cloud-Native Apps — For The Same Cloud.”

Despite Elon Musk’s warnings this summer, there’s not a whole lot of reason to lose any sleep worrying about Skynet and the Terminator. Artificial Intelligence (AI) is far from becoming a maleficent, all-knowing force. The only “Apocalypse” on the horizon right now is an overreliance by humans on machine learning and expert systems, as demonstrated by the deaths of Tesla owners who took their hands off the wheel.

Examples of what currently passes for “artificial intelligence” — technologies such as expert systems and machine learning — are excellent for creating useful software. AI software provides truly valuable help in contexts that involve pattern recognition, automated decision-making, and human-to-machine conversations. Both types of AI have been around for decades. And both are only as good as the source information they are based on. For that reason, it’s unlikely that AI will replace human beings’ judgment on important tasks requiring decisions more complex than “yes or no” any time soon.

Expert systems, also known as rule-based or knowledge-based systems, are computer programs built on explicit rules written down by human experts. The computers can then apply the same rules much faster, 24×7, and come up with the same conclusions as the human experts. Imagine asking an oncologist how she diagnoses cancer and then programming medical software to follow those same steps. For a particular diagnosis, an oncologist can study which of those rules were activated to validate that the expert system is working correctly.

However, it takes a lot of time and specialized knowledge to create and maintain those rules, and extremely complex rule systems can be difficult to validate. Needless to say, expert systems can’t function beyond their rules.

By contrast, machine learning allows computers to come to a decision—but without being explicitly programmed. Instead, they are shown hundreds or thousands of sample data sets and told how they should be categorized, such as “cancer | no cancer,” or “stage 1 | stage 2 | stage 3 cancer.”
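The contrast is easy to sketch with a toy example (invented measurements, scikit-learn for the learning side): the expert system runs a rule a human wrote down, while the machine learning model infers its own rule from labeled samples:

```python
# A toy contrast between the two approaches, using made-up tumor measurements.
# The rule is written by a human "expert"; the model learns from labeled data.
from sklearn.tree import DecisionTreeClassifier

# Expert system: an explicit, human-authored rule.
def expert_rule(size_mm: float, irregular_border: bool) -> str:
    if size_mm > 20 and irregular_border:
        return "cancer"
    return "no cancer"

# Machine learning: no rule is written; the classifier infers one from examples.
X = [[5, 0], [8, 0], [25, 1], [30, 1], [12, 0], [28, 1]]   # [size_mm, irregular]
y = ["no cancer", "no cancer", "cancer", "cancer", "no cancer", "cancer"]
model = DecisionTreeClassifier(random_state=0).fit(X, y)

sample = [22, 1]
print("expert rule:", expert_rule(*sample))
print("learned model:", model.predict([sample])[0])
```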

Read more about this, including my thoughts on machine learning, pattern recognition, expert systems, and comparisons to human intelligence, in my story for Ars Technica, “Never mind the Elon—the forecast isn’t that spooky for AI in business.”

Long after intruders are removed and public scrutiny has faded, the impacts from a cyberattack can reverberate over a multi-year timeline. Legal costs can cascade as stolen data is leveraged in various ways over time; it can take years to recover pre-incident growth and profitability levels; and brand impact can play out in multiple ways.

That’s from a Deloitte report, “Beneath the surface of a cyberattack: A deeper look at business impacts,” released in late 2016. The report’s contents, and other statements on cyber security from Deloitte, are ironic given the company’s huge breach reported this week.

The big breach

The Deloitte breach was reported on Monday, Sept. 25. It appears to have leaked confidential emails and financial documents of some of its clients. According to the Guardian,

The Guardian understands Deloitte clients across all of these sectors had material in the company email system that was breached. The companies include household names as well as US government departments. So far, six of Deloitte’s clients have been told their information was “impacted” by the hack. Deloitte’s internal review into the incident is ongoing. The Guardian understands Deloitte discovered the hack in March this year, but it is believed the attackers may have had access to its systems since October or November 2016.

The Guardian asserts that hackers gained access to Deloitte’s global email server via an administrator’s account protected by only a single password. Without two-factor authentication, hackers could gain entry from any computer, as long as they guessed the right password (or obtained it via hacking, malware, or social engineering). The story continues,

In addition to emails, the Guardian understands the hackers had potential access to usernames, passwords, IP addresses, architectural diagrams for businesses and health information. Some emails had attachments with sensitive security and design details.

Okay, the breach was bad. What did Deloitte have to say about these sorts of incidents? Lots.

The Deloitte Cybersecurity Report

In its 2016 report, Deloitte’s researchers pointed to 14 cyberattack impact factors. Half are the directly visible costs of breach incidents; the others can be more subtle or hidden, and potentially never fully understood.

  • The “Above the Surface” incident costs include the expenses of technical investigations, consumer breach notifications, regulatory compliance, attorneys’ fees and litigation, post-breach customer protection, public relations, and cybersecurity protections.
  • Harder to tally are the “Below the Surface” costs: insurance premium increases, increased cost to raise debt, the impact of operational disruption or destruction, the value of lost contract revenue, devaluation of trade name, loss of intellectual property, and the lost value of customer relationships.

As the report says,

Common perceptions about the impact of a cyberattack are typically shaped by what companies are required to report publicly—primarily theft of personally identifiable information (PII), payment data, and personal health information (PHI). Discussions often focus on costs related to customer notification, credit monitoring, and the possibility of legal judgments or regulatory penalties. But especially when PII theft isn’t an attacker’s only objective, the impacts can be even more far-reaching.

Recovery can take a long time, as Deloitte says:

Beyond the initial incident triage, there are impact management and business recovery stages. These stages involve a wide range of business functions in efforts to rebuild operations, improve cybersecurity, and manage customer and third-party relationships, legal matters, investment decisions, and changes in strategic course.

Indeed, asserts Deloitte in the 2016 report, it can take months or years to repair the damage to the business. That includes redesigning processes and assets, and investing in cyber programs to emerge stronger after the incident. But wait, there’s more.

Intellectual Property and Lawsuits

A big part of the newly reported breach is the loss of intellectual property. That’s not necessarily only Deloitte’s IP, but also the IP of its biggest blue-chip customers. About the loss of IP, the 2016 report says:

Loss of IP is an intangible cost associated with loss of exclusive control over trade secrets, copyrights, investment plans, and other proprietary and confidential information, which can lead to loss of competitive advantage, loss of revenue, and lasting and potentially irreparable economic damage to the company. Types of IP include, but are not limited to, patents, designs, copyrights, trademarks, and trade secrets.

We’ll see some of those phrases in lawsuits filed by Deloitte’s customers as they try to get a handle on what hackers may have stolen. Oh, about lawsuits, here’s what the Deloitte report says:

Attorney fees and litigation costs can encompass a wide range of legal advisory fees and settlement costs externally imposed and costs associated with legal actions the company may take to defend its interests. Such fees could potentially be offset through the recovery of damages as a result of assertive litigation pursued against an attacker, especially in regards to the theft of IP. However, the recovery could take years to pursue through litigation and may not be ultimately recoverable, even after a positive verdict in favor of the company. Based on our analysis of publicly available data pertaining to recent consumer settlement cases and other legal costs relating to cyber incidents, we observed that, on average, it could cost companies approximately $10 million in attorney fees, potential settlement of loss claims, and other legal matters.

Who wants to bet that the legal costs from this breach will be significantly higher than $10 million?

Stay Vigilant

The back page of Deloitte’s 2016 report says something important:

To grow, streamline, and innovate, many organizations have difficulty keeping pace with the evolution of cyber threats. The traditional discipline of IT security, isolated from a more comprehensive risk-based approach, may no longer be enough to protect you. Through the lens of what’s most important to your organization, you must invest in cost-justified security controls to protect your most important assets, and focus equal or greater effort on gaining more insight into threats, and responding more effectively to reduce their impact. A Secure. Vigilant. Resilient. cyber risk program can help you become more confident in your ability to reap the value of your strategic investments.

Wise words — too bad Deloitte’s email administrators, SOC teams, and risk auditors didn’t heed them. Or read their own report.

Stupidity. Incompetence. Negligence. The huge, unprecedented data breach at Equifax has dominated the news cycle, infuriating IT managers, security experts, legislators, and attorneys — and scaring consumers. It appears that sensitive personally identifiable information (PII) on 143 million Americans was exfiltrated, as well as PII on some non-US nationals.

There are many troubling aspects. Reports say the tools that consumers can use to see if they are affected by the breach are inaccurate. Other articles say that by using those tools, consumers are waiving their rights to sue Equifax. Some worry that Equifax will actually make money off this by selling affected consumers its credit-monitoring services.

Let’s look at the technical aspects, though. While details about the breach are still widely lacking, two bits of information are making the rounds. One is that Equifax followed bad password practices, allowing hackers to easily gain access to at least one server. Another is that there was a flaw in a piece of open-source software – a patch had been available for months, yet Equifax didn’t apply it.

The veracity of those two possible causes of the breach is still unclear. Even so, they point to a troubling pattern of utter irresponsibility by Equifax’s IT and security operations teams.

Bad Password Practices

Username “admin.” Password “admin.” That’s often the default for hardware, like a home WiFi router. The first thing any owner should do is change both the username and password. Every IT professional knows that. Yet the fine techies at Equifax, or at least those in its Argentina office, didn’t know that. According to well-known security writer Brian Krebs, writing earlier this week,

Earlier today, this author was contacted by Alex Holden, founder of Milwaukee, Wisc.-based Hold Security LLC. Holden’s team of nearly 30 employees includes two native Argentinians who spent some time examining Equifax’s South American operations online after the company disclosed the breach involving its business units in North America.

It took almost no time for them to discover that an online portal designed to let Equifax employees in Argentina manage credit report disputes from consumers in that country was wide open, protected by perhaps the most easy-to-guess password combination ever: “admin/admin.”

What’s more, writes Krebs,

Once inside the portal, the researchers found they could view the names of more than 100 Equifax employees in Argentina, as well as their employee ID and email address. The “list of users” page also featured a clickable button that anyone authenticated with the “admin/admin” username and password could use to add, modify or delete user accounts on the system.

and

A review of those accounts shows all employee passwords were the same as each user’s username. Worse still, each employee’s username appears to be nothing more than their last name, or a combination of their first initial and last name. In other words, if you knew an Equifax Argentina employee’s last name, you also could work out their password for this credit dispute portal quite easily.

Idiots.
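Catching this sort of thing doesn’t require expensive tooling. Below is a minimal sketch in Python of a credential audit along the lines Krebs describes; the account list and the check itself are purely illustrative, not anything Equifax runs, but they show how cheap it is to flag passwords that equal the username or match well-known defaults.

    # Hypothetical audit: flag accounts whose password matches the username,
    # or matches an obvious default such as "admin"/"admin".
    WEAK_DEFAULTS = {"admin", "password", "123456"}

    def audit_accounts(accounts):
        """accounts: list of (username, password) tuples -- illustrative data only."""
        findings = []
        for username, password in accounts:
            if password.lower() == username.lower():
                findings.append((username, "password equals username"))
            elif password.lower() in WEAK_DEFAULTS:
                findings.append((username, "well-known default password"))
        return findings

    if __name__ == "__main__":
        sample = [("admin", "admin"), ("jsmith", "jsmith"), ("agarcia", "S7rong!pass")]
        for user, reason in audit_accounts(sample):
            print(f"WEAK CREDENTIAL: {user} -- {reason}")

Running a check like that against every internal portal, even quarterly, would have turned up “admin/admin” long before an outside researcher did.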

Patches Are Important, Kids

Apache’s Struts is a well-regarded open source framework for creating Web applications. It’s excellent — I’ve used it myself — but like all software, it can have bugs. One such defect was discovered in March 2017, and was given the name “CVE-2017-5638.” A patch was issued within days by the Struts team. Yet Equifax never installed that patch.
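This is exactly the kind of gap an automated dependency check can close. Here’s a minimal sketch that flags Struts versions older than the releases reported as fixing CVE-2017-5638 (2.3.32 and 2.5.10.1, per the public advisory); the project inventory is hypothetical, and in practice you’d lean on an established scanner such as OWASP Dependency-Check rather than a hand-rolled script.

    # Minimal sketch: flag Apache Struts versions still exposed to CVE-2017-5638.
    FIXED = {3: (2, 3, 32), 5: (2, 5, 10, 1)}  # first fixed release on each branch

    def parse(version):
        return tuple(int(part) for part in version.split("."))

    def vulnerable_to_cve_2017_5638(version):
        v = parse(version)
        fixed = FIXED.get(v[1])  # keyed on the branch: 2.3.x or 2.5.x
        if fixed is None:
            return False  # unknown branch -- check the advisory manually
        return v < fixed

    if __name__ == "__main__":
        inventory = {"billing-app": "2.3.31", "partner-portal": "2.5.10.1"}  # hypothetical
        for name, version in inventory.items():
            if vulnerable_to_cve_2017_5638(version):
                print(f"{name}: Struts {version} is vulnerable -- patch immediately")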

Even so, the company is blaming the U.S. breach on that defect:

Equifax has been intensely investigating the scope of the intrusion with the assistance of a leading, independent cybersecurity firm to determine what information was accessed and who has been impacted. We know that criminals exploited a U.S. website application vulnerability. The vulnerability was Apache Struts CVE-2017-5638. We continue to work with law enforcement as part of our criminal investigation, and have shared indicators of compromise with law enforcement.

Keeping up with vulnerability reports, and applying patches right away, is essential for good security. Everyone knows this. Including, I’m sure, Equifax’s IT team. There is no excuse. Idiots.

HP-35 slide rule calculator

At the current rate of rainfall, when will your local reservoir overflow its banks? If you shoot a rocket at an angle of 60 degrees into a headwind, how far will it fly with 40 pounds of propellant and a 5-pound payload? Assuming a 100-month loan for $75,000 at 5.11 percent, what will the payoff balance be after four years? If a lab culture is doubling every 14 hours, how many viruses will there be in a week?

Those sorts of questions aren’t asked by mathematicians, who are the people who derive equations to solve problems in a general way. Rather, they are asked by working engineers, technicians, military ballistics officers, and financiers, all of whom need an actual number: Given this set of inputs, tell me the answer.
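To see why an actual number mattered so much, take the loan question above. The payoff balance falls straight out of the standard annuity formulas, which is easy for a machine and tedious by hand; here’s the arithmetic as a short Python sketch, using only the figures from the question.

    # Remaining balance on a $75,000, 100-month loan at 5.11% annual interest,
    # after 48 monthly payments (four years) -- standard annuity arithmetic.
    principal = 75_000.00
    monthly_rate = 0.0511 / 12
    n_payments = 100

    payment = principal * monthly_rate / (1 - (1 + monthly_rate) ** -n_payments)

    def balance_after(k):
        """Balance remaining after k on-time monthly payments."""
        growth = (1 + monthly_rate) ** k
        return principal * growth - payment * (growth - 1) / monthly_rate

    print(f"Monthly payment: ${payment:,.2f}")
    print(f"Payoff balance after four years: ${balance_after(48):,.2f}")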

Before the modern era (say, the 1970s), these problems could be hard to solve. They required a lot of pencils and paper, a book of tables, or a slide rule. Mathematicians never carried slide rules, but astronauts did, as their backup computers.

However, slide rules had limitations. They were good to about three digits of accuracy, no more, in the hands of a skilled operator. Three digits was fine for real-world engineering, but not enough for finance. With slide rules, you had to keep track of the decimal point yourself: The slide rule might tell you the answer is 641, but you had to know if that was 64.1 or 0.641 or 641.0. And if you were chaining calculations (needed in all but the simplest problems), accuracy dropped with each successive operation.
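That loss of precision is easy to demonstrate. The sketch below chains a handful of multiplications, rounding to three significant figures after every step the way a slide-rule user effectively must, and compares the result with full precision; the factors are arbitrary, chosen only to show the drift.

    # Illustration: keeping only 3 significant figures after each operation,
    # as a slide rule forces you to, lets rounding error accumulate.
    from math import floor, log10

    def round_sig(x, sig=3):
        return round(x, -int(floor(log10(abs(x)))) + (sig - 1))

    factors = [3.14159, 2.71828, 1.41421, 1.73205, 2.23607]  # arbitrary values

    exact = 1.0
    chained = 1.0
    for f in factors:
        exact *= f
        chained = round_sig(chained * round_sig(f))

    print(f"Full precision : {exact:.4f}")
    print(f"3-digit chained: {chained:.4f}")
    print(f"Relative error : {abs(chained - exact) / exact:.2%}")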

Everything the slide rule could do, a so-called slide-rule calculator could do better—and more accurately. Slide rules are really good at a few things. Multiplication and division? Easy. Exponents, like 6¹³? Easy. Doing trig, like sines, cosines, and tangents? Easy. Logarithms? Easy.

Hewlett-Packard unleashed a monster when it created the HP-9100A desktop calculator, released in 1968 at a price of about $5,000. The HP-9100A did everything a slide rule could do, and more—such as trig, polar/rectangular conversions, and exponents and roots. However, it was big and it was expensive—about $35,900 in 2017 dollars, or the price of a nice car! HP had a market for the HP-9100A, since it already sold test equipment into many labs. However, something better was needed, something affordable, something that could become a mass-market item. And that became the pocket slide-rule calculator revolution, starting off with the amazing HP-35.

If you look at the HP-35 today, it seems laughably simplistic. The calculator app in your smartphone is much more powerful. However, back in 1972, and at a price of only $395 ($2,350 in 2017 dollars), the HP-35 changed the world. Companies like General Electric ordered tens of thousands of units. It was crazy, especially for a device that had a few minor math bugs in its first shipping batch (HP gave everyone a free replacement).

Read more about early slide-rule calculators, and the more advanced card-programmable models like the HP-65 and HP-67, in my story, “The early history of HP calculators.”

HP-65 and HP-67 card-programmable calculators

When was the last time most organizations discussed the security of their Oracle E-Business Suite? How about SAP S/4HANA? Microsoft Dynamics? IBM’s DB2? Discussions about on-prem server software security too often begin and end with ensuring that operating systems are at the latest level, and are current with patches.

That’s not good enough. Just as clicking on a phishing email or opening a malicious document in Microsoft Word can corrupt a desktop, so too server applications can be vulnerable. When those server applications are involved with customer records, billing systems, inventory, transactions, financials, or human resources, a hack into ERP or CRM systems can threaten an entire organization. Worse, if that hack leveraged stolen credentials, the business may never realize that competitors or criminals are stealing its data, and potentially even corrupting its records.

A new study from the Ponemon Institute points to the potential severity of the problem. Sixty percent of the respondents to the “Cybersecurity Risks to Oracle E-Business Suite” study say that information theft, modification of data and disruption of business processes on their company’s Oracle E-Business Suite applications would be catastrophic. While 70% of respondents said a material security or data breach due to insecure Oracle E-Business Suite applications is likely, 67% of respondents believe their top executives are not aware of this risk. (The research was sponsored by Onapsis, which sells security solutions for ERP suites, so apply a little sodium chloride to your interpretation of the study’s results.)

The audience for this study was businesses that rely upon Oracle E-Business Suite. About 24% of respondents said that it was the most critical application they ran, and altogether, 93% said it was one of the top 10 critical applications. Bearing in mind that large businesses run thousands of server applications, that’s saying something.

Yet more than half of respondents – 53% — said that it was Oracle’s responsibility to ensure that its applications and platforms are safe and secure. Unless they’ve contracted with Oracle to manage their on-prem applications, and to proactively apply patches and fixes, well, they are delusional.

Another area of delusion: That software must be connected to the Internet to pose a risk. In this study, 52% of respondents agree or strongly agree that “Oracle E-Business applications that are not connected to the Internet are not a security threat.” They’ve never heard of insider threats? Credentials theft? Penetrations of enterprise networks?

What About Non-Oracle Packages?

This Ponemon/Onapsis study represents only one data point. It does not adequately discuss the role of vendors in this space, including ERP/CRM value-added resellers, consultants and MSSPs (managed security service providers). It also doesn’t differentiate between Oracle instances running on-prem compared to the Oracle ERP Cloud – where Oracle does manage all the security.

Surprisingly, packaged software isn’t talked about very often. For all the chatter at security conferences, on bulletin boards, and the like, packaged applications like these on-prem ERP and CRM suites rarely figure in conversations about security. Instead, everyone is seemingly focused on the endpoint, firewalls, and operating systems. Sometimes we’ll see discussions of the various tiers in an n-tier architecture, such as databases, application servers, and presentation systems (like web servers or mobile app back ends).

Another company that offers ERP security, ERPScan, conducted a study with Crowd Research Partners focused on SAP. The “ERP Cybersecurity Study 2017” said that (and I quote from the report on these bullet points):

  • 89% of respondents expect that the number of cyber-attacks against ERP systems will grow in next 12 months.
  • An average cost of a security breach in SAP is estimated at $5m with fraud considered as the costliest risk. A third of organizations assesses the damage of fraudulent actions at more than 10m USD.
  • There is a lack of awareness towards ERP Security, worryingly, even among people who are engaged in ERP Security. One-third of them haven’t even heard about any SAP Security incident. Only 4% know about the episode with the direst consequences – USIS data breach started with an SAP vulnerability, which resulted in the company’s bankruptcy.
  • One of three respondents hasn’t taken any ERP Security initiative yet and is going to do so this year.
  • Cybersecurity professionals are most concerned about protecting customer data (72%), employee data (66%), and emails (54%). Due to this information being stored in different SAP systems (e.g. ERP, HR, or others), they are one of the most important assets to protect.
  • It is still unclear who is in charge of ERP Security: 43% of responders suppose that CIO takes responsibilities, while 28% consider it CISO’s duty.

Of course, we still must secure our operating systems, network perimeters, endpoints, mobile applications, WiFi networks, and so-on. Let’s not forget, however, the crucial applications our organizations depend upon. Breaches into those systems could be invisible – and ruinous to any business.

The water is rising up over your desktops, your servers, and your data center. Glug, glug, gurgle.

You’d better hope that the disaster recovery plans included the word “offsite.” Hope the backup IT site wasn’t another local business that’s also destroyed by the hurricane, the flood, the tornado, the fire, or the earthquake.

Disasters are real, as August’s Hurricane Harvey and immense floods in Southeast Asia have taught us all. With tens of thousands of people displaced, it’s hard to rebuild a business. Even with a smaller disaster, like a power outage that lasts a couple of days, the business impact can be tremendous.

I once worked for a company in New York that was hit by a blizzard that snapped the power and telephone lines to the office building. Down went the PBX, down went the phone system and the email servers. Remote workers (I was in California) were massively impaired. Worse, incoming phone calls simply rang and rang; incoming email messages bounced back to the sender.

With that storm, electricity was gone for more than a week, and broadband took additional time to be restored. You’d better believe our first order of business, once we began the recovery phase, was to move our internal Microsoft Exchange Server to a colocation facility with redundant T1 lines, and move our internal PBX to a hosted solution from the phone company. We didn’t like the cost, but we simply couldn’t afford to be shut down again the next time a storm struck.

These days, the answer lies within the cloud, either for primary data center operations, or for the source of a backup. (Forget trying to salvage anything from a submerged server rack or storage system.)

Be very prepared

Are you ready for a disaster? In a February 2017 study conducted by the Disaster Recovery Journal and Forrester Research, “The State Of Disaster Recovery Preparedness 2017,” only 18% of disaster recovery decision makers said they were “very prepared” to recover their data center in the event of a site failure or disaster event. Another 37% were prepared, 34% were somewhat prepared, and 11% not prepared at all.

That’s not good enough if you’re in Houston or Bangladesh or even New York during a blizzard. And that’s clear even among the survey respondents, 43% of whom said there was a business requirement to stay online and competitive 24×7. The cloud is considered to be one option for disaster recovery (DR) planning, but it’s not the only one. Says the study:

DR in the cloud has been a hot topic that has garnered a significant amount of attention during the past few years. Adoption is increasing but at a slow rate. According to the latest survey, 18 percent of companies are now using the cloud in some way as a recovery site – an increase of 3 percent. This includes 10 percent who use a fully packaged DR-as-a-Service (DRaaS) offering and 8 percent who use Infrastructure-as-a-Service (IaaS) to configure their own DR in the cloud configuration. Use of colocation for recovery sites remains consistent at 37 percent (roughly the same as the prior study). However, the most common method of sourcing recovery sites is still in-house at 43 percent.

The study shows that 43% own their site and IT infrastructure. Also, 37% use a colocation site with their own infrastructure, 20% use a shared, fixed-site IT IaaS provider, 10% use a DRaaS offering in the cloud, and only 8% use public cloud IaaS as a recovery site.

For the very largest companies, the public cloud, or even a DRaaS provider, may not be the way to go. If the organization is still maintaining a significant data center (or multiple data centers), the cost and risks of moving to the cloud are significant. Unless a data center is heavily virtualized, it will be difficult to replicate the environment – including servers, storage, networking, and security – at a cloud provider.

For smaller businesses, however, moving to a cloud system is becoming increasingly cost-effective. It’s attractive for scalability and OpEx reasons, and agile for deploying new applications. This month’s hurricanes offer an urgent reason to move away from on-prem or hybrid to a full cloud environment — or at least explore DRaaS. With the right service provider, offering redundancy and portability, the cloud could be the only real hope in a significant disaster.

The more advanced the military technology, the greater the opportunities for intentional or unintentional failure in a cyberwar. As Scotty says in Star Trek III: The Search for Spock, “The more they overthink the plumbing, the easier it is to stop up the drain.”

In the case of a couple of recent accidents involving the U.S. Navy, the plumbing might actually be the computer systems that control navigation. In mid-August, the destroyer U.S.S. John S. McCain rammed into an oil tanker near Singapore. A month or so earlier, a container ship hit the nearly identical U.S.S. Fitzgerald off Japan. Why didn’t those hugely sophisticated ships see the much-larger merchant vessels, and move out of the way?

There has been speculation, and only speculation, that both ships might have been victims of cyber foul play, perhaps as a test of offensive capabilities by a hostile state actor. The U.S. Navy has not given a high rating to that possibility, and let’s admit, the odds are against it.

Even so, the military hasn’t dismissed the idea, writes Bill Gertz in the Washington Free Beacon:

On the possibility that China may have triggered the collision, Chinese military writings indicate there are plans to use cyber attacks to “weaken, sabotage, or destroy enemy computer network systems or to degrade their operating effectiveness.” The Chinese military intends to use electronic, cyber, and military influence operations for attacks against military computer systems and networks, and for jamming American precision-guided munitions and the GPS satellites that guide them, according to one Chinese military report.

The data centers of those ships are hardened and well protected. Still, given the sophistication of today’s warfare, what if systems are hacked?

Imagine what would happen if, say, foreign powers were able to break into drones or cruise missiles. This might cause them to crash prematurely, self-destruct, or hit a friendly target, or perhaps even “land” and become captured. What about disruptions to fighter aircraft, such as jets or helicopters? Radar systems? Gear carried by troops?

It’s a chilling thought. It reminds me that many gun owners in the United States, including law enforcement officers, don’t like so-called “smart” pistols that require fingerprint matching before they can fire – because those systems might fail in a crisis, or if the weapon is dropped or becomes wet, leaving the police officer effectively unarmed.

The Council on Foreign Relations published a blog by David P. Fidler, “A Cyber Norms Hypothetical: What If the USS John S. McCain Was Hacked?” In the post, Fidler says, “The Fitzgerald and McCain accidents resulted in significant damage to naval vessels and deaths and injuries to sailors. If done by a foreign nation, then hacking the navigation systems would be an illegal use of force under international law.”

Fidler believes this could lead to a real shooting war:

In this scenario, the targets were naval vessels not merchant ships, which means the hacking threatened and damaged core national security interests and military assets of the United States. In the peacetime circumstances of these incidents, no nation could argue that such a use of force had a plausible justification under international law. And every country knows the United States reserves the right to use force in self-defense if it is the victim of an illegal use of force.

There is precedent. In May and June 2017, two Sukhoi 30 fighter jets belonging to the Indian Air Force crashed – and there was speculation that these were caused by China. In one case, reports Naveen Goud in Cybersecurity Insiders,

The inquiry made by IAF led to the discovery of a fact that the flying aircraft was cyber attacked when it was airborne which led to the death of the two IAF officers- squadron leader D Pankaj and Flight Lieutenant Achudev who were flying the aircraft. The death was caused due to the failure in initiating the ejection process of the pilot’s seat due to a cyber interference caused in the air.

Let us hope that we’re not entering a hot phase of active cyberwarfare.

No organization likes to reveal that its network has been breached, or its data has been stolen by hackers or disclosed through human error. Yet under the European Union’s new General Data Protection Regulation (GDPR), breaches must be disclosed.

The GDPR is a broad set of regulations designed to protect citizens of the European Union. The rules apply to every organization and business that collects or stores information about people in Europe. It doesn’t matter if the company has offices in Europe: If data is collected about Europeans, the GDPR applies.

Traditionally, most organizations hide all information about security incidents, especially if data is compromised. That makes sense: If a business is seen to be careless with people’s data, its reputation can suffer, competitors can attack, and there can be lawsuits or government penalties.

We tend to hear about security incidents only if there’s a breach sufficiently massive that the company must disclose to regulators, or if there’s a leak to the media. Even then, the delay between the breach and its disclosure can be weeks or months — meaning that folks aren’t given enough time to engage identity theft protection companies, monitor their credit/debit payments, or even change their passwords.

Thanks to GDPR, organizations must now disclose all incidents where personal data may have been compromised – and make that disclosure quickly. Not only that, but the GDPR says that the disclosure must be to the general public, or at least to those people affected; the disclosure can’t be buried in a regulatory filing.

Important note: The GDPR says absolutely nothing about disclosing successful cyberattacks where personal data is not stolen or placed at risk. That includes distributed denial-of-service (DDoS) attacks, ransomware, theft of financial data, or espionage of intellectual property. That doesn’t mean that such cyberattacks can be kept secret, but in reality, good luck finding out about them, unless the company has other reasons to disclose. For example, after some big ransomware attacks earlier this year, some publicly traded companies revealed to investors that those attacks could materially affect their quarterly profits. This type of disclosure is mandated by financial regulation – not by the GDPR, which is focused on protecting individuals’ personal data.

The Clock Is Ticking

How long does the organization have to disclose the breach? Three days from when the breach was discovered. That’s pretty quick, though of course, sometimes breaches themselves can take weeks or months to be discovered, especially if the hackers are extremely skilled, or if human error was involved. (An example of human error: Storing unencrypted data in a public cloud without strong password protection. It’s been happening more and more often.)

Here’s what the GDPR says about such breaches — and the language is pretty clear. The first step is to disclose to authorities within three days:

A personal data breach may, if not addressed in an appropriate and timely manner, result in physical, material or non-material damage to natural persons such as loss of control over their personal data or limitation of their rights, discrimination, identity theft or fraud, financial loss, unauthorised reversal of pseudonymisation, damage to reputation, loss of confidentiality of personal data protected by professional secrecy or any other significant economic or social disadvantage to the natural person concerned. Therefore, as soon as the controller becomes aware that a personal data breach has occurred, the controller should notify the personal data breach to the supervisory authority without undue delay and, where feasible, not later than 72 hours after having become aware of it, unless the controller is able to demonstrate, in accordance with the accountability principle, that the personal data breach is unlikely to result in a risk to the rights and freedoms of natural persons. Where such notification cannot be achieved within 72 hours, the reasons for the delay should accompany the notification and information may be provided in phases without undue further delay.
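The 72-hour clock is simple arithmetic, but it’s worth wiring into incident-response runbooks so nobody is doing date math during a crisis. A minimal sketch, with a made-up discovery time:

    # Compute the 72-hour notification deadline that starts when the controller
    # becomes aware of a personal data breach. The discovery time is hypothetical.
    from datetime import datetime, timedelta, timezone

    became_aware = datetime(2018, 6, 4, 9, 30, tzinfo=timezone.utc)  # hypothetical
    deadline = became_aware + timedelta(hours=72)
    remaining = deadline - datetime.now(timezone.utc)

    print(f"Notify the supervisory authority by: {deadline.isoformat()}")
    if remaining.total_seconds() < 0:
        print("Deadline passed -- the notification must explain the reasons for the delay.")
    else:
        print(f"Time remaining: {remaining}")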

The GDPR does not specify how quickly the organization must notify the individuals whose data was compromised, beyond “as soon as reasonably feasible”:

The controller should communicate to the data subject a personal data breach, without undue delay, where that personal data breach is likely to result in a high risk to the rights and freedoms of the natural person in order to allow him or her to take the necessary precautions. The communication should describe the nature of the personal data breach as well as recommendations for the natural person concerned to mitigate potential adverse effects. Such communications to data subjects should be made as soon as reasonably feasible and in close cooperation with the supervisory authority, respecting guidance provided by it or by other relevant authorities such as law-enforcement authorities. For example, the need to mitigate an immediate risk of damage would call for prompt communication with data subjects whereas the need to implement appropriate measures against continuing or similar personal data breaches may justify more time for communication.

The phrase “personal data breach” doesn’t only mean theft or accidental disclosure of a person’s private information. The GDPR defines the phrase as “a breach of security leading to the accidental or unlawful destruction, loss, alteration, unauthorised disclosure of, or access to, personal data transmitted, stored or otherwise processed.” So the loss of important data (think health records) would qualify as a personal data breach.

Big Fines and Penalties

What happens if an organization does not disclose? It can be fined up to €20 million or 4% of annual global turnover, whichever is greater.

These GDPR rules about breaches are good, and so are the penalties. Too many organizations prefer to hide this type of information, or dribble out disclosures as slowly and quietly as possible, to protect the company’s reputation and share prices. The new EU regulation recognizes that individuals have a vested interest in data that organizations collect or store about them – and need to be told if that data is stolen or compromised.

The European Union is taking computer security, data breaches, and individual privacy seriously. The EU’s General Data Protection Regulation (GDPR) will take effect on May 25, 2018 – but it’s not only a regulation for companies based in Europe.

The GDPR is designed to protect European consumers. That means that every business that stores information about European residents will be affected, no matter where that business operates or is headquartered. That means the United States, and also a post-Brexit United Kingdom.

There’s a hefty price for non-compliance: Businesses can be fined up to €20 million or 4% of their worldwide top-line revenue, whichever is greater. No matter how you slice it, that’s going to hurt; for the tech industry’s giants, 4% of global revenue is no slap on the wrist.

A big topic within the GDPR is “data portability.” That is the notion that individuals have the right to see the information they have shared with an organization (or have given permission to be collected), in a commonly used machine-readable format. Details need to be worked out to make that effective.

Another topic is that individuals have the right to make changes to some of their information, or to delete all or part of their information. No, customers can’t delete their transaction history, for example, or delete that they owe the organization money. However, they may choose to delete information that the organization may have collected, such as their age, where they went to college, or the names of their children. They also have the right to request corrections to the data, such as a misspelled name or an incorrect address.

That’s not as trivial as it may seem. It is not uncommon for organizations to have multiple versions of, say, a person’s name and spelling, or to have the information contain differences in formatting. This can have implications when records don’t match. In some countries, there have been problems with a traveler’s passport information not 100% exactly matching the information on a driver’s license, airline ticket, or frequent traveller program. While the variations might appear trivial to a human — a missing middle name, a missing accent mark, an extra space — it can be enough to throw off automated data processing systems, which therefore can’t 100% match the traveler to a ticket. Without rules like the GDPR, organizations haven’t been required to make it easy, or even possible, for customers to make corrections.
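Record matching is one place where a little normalization goes a long way. Below is a minimal sketch of the kind of cleanup that removes those trivial mismatches, folding case, collapsing whitespace, and stripping accent marks before comparing; real identity matching is far more involved, so treat this only as an illustration.

    # Normalize names before matching: fold case, collapse whitespace,
    # and strip accent marks so "José  Pérez" matches "Jose Perez".
    import unicodedata

    def normalize_name(name):
        decomposed = unicodedata.normalize("NFKD", name)
        no_accents = "".join(c for c in decomposed if not unicodedata.combining(c))
        return " ".join(no_accents.casefold().split())

    passport_name = "José  Pérez"
    ticket_name = "Jose Perez"
    print(normalize_name(passport_name) == normalize_name(ticket_name))  # True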

Not a Complex Document, But a Tricky One

A cottage industry has arisen, with consultancies offering to help European and global companies ensure GDPR compliance before the regulation takes effect. Astonishingly, for such an important regulation, the GDPR itself is relatively short – only 88 pages of fairly easy-to-read prose. Of course, some parts of the GDPR refer back to other European Union directives. Still, the intended meaning is clear.

For example, this clause on sensitive data sounds simple – but how exactly will it be processed? This is why we have consultants.

Personal data which are, by their nature, particularly sensitive in relation to fundamental rights and freedoms merit specific protection as the context of their processing could create significant risks to the fundamental rights and freedoms. Those personal data should include personal data revealing racial or ethnic origin, whereby the use of the term ‘racial origin’ in this Regulation does not imply an acceptance by the Union of theories which attempt to determine the existence of separate human races. The processing of photographs should not systematically be considered to be processing of special categories of personal data as they are covered by the definition of biometric data only when processed through a specific technical means allowing the unique identification or authentication of a natural person. Such personal data should not be processed, unless processing is allowed in specific cases set out in this Regulation, taking into account that Member States law may lay down specific provisions on data protection in order to adapt the application of the rules of this Regulation for compliance with a legal obligation or for the performance of a task carried out in the public interest or in the exercise of official authority vested in the controller. In addition to the specific requirements for such processing, the general principles and other rules of this Regulation should apply, in particular as regards the conditions for lawful processing. Derogations from the general prohibition for processing such special categories of personal data should be explicitly provided, inter alia, where the data subject gives his or her explicit consent or in respect of specific needs in particular where the processing is carried out in the course of legitimate activities by certain associations or foundations the purpose of which is to permit the exercise of fundamental freedoms.

The Right to Be Forgotten

Various EU member states already have “right to be forgotten” rules, which let individuals request that some data about them be deleted. These rules are tricky for rest-of-world organizations, which may have no such regulations of their own, or whose local rules may conflict (such as freedom of the press in the U.S.). Still, the GDPR strengthens those rules – and this will likely be one of the first areas tested with lawsuits and penalties, particularly with children:

A data subject should have the right to have personal data concerning him or her rectified and a ‘right to be forgotten’ where the retention of such data infringes this Regulation or Union or Member State law to which the controller is subject. In particular, a data subject should have the right to have his or her personal data erased and no longer processed where the personal data are no longer necessary in relation to the purposes for which they are collected or otherwise processed, where a data subject has withdrawn his or her consent or objects to the processing of personal data concerning him or her, or where the processing of his or her personal data does not otherwise comply with this Regulation. That right is relevant in particular where the data subject has given his or her consent as a child and is not fully aware of the risks involved by the processing, and later wants to remove such personal data, especially on the internet. The data subject should be able to exercise that right notwithstanding the fact that he or she is no longer a child. However, the further retention of the personal data should be lawful where it is necessary, for exercising the right of freedom of expression and information, for compliance with a legal obligation, for the performance of a task carried out in the public interest or in the exercise of official authority vested in the controller, on the grounds of public interest in the area of public health, for archiving purposes in the public interest, scientific or historical research purposes or statistical purposes, or for the establishment, exercise or defence of legal claims.

Time to Get Up to Speed

In less than a year, many organizations around the world will be subject to the European Union’s GDPR. European businesses are working hard to comply with those regulations. For everyone else, it’s time to start – and yes, you probably do want a consultant.

The late, great science fiction writer Isaac Asimov frequently referred to the “Frankenstein Complex”: a deep-seated and irrational phobia that robots (i.e., artificial intelligence) would rise up and destroy their creators. Whether it’s HAL in “2001: A Space Odyssey,” or the mainframe in “Colossus: The Forbin Project,” or Arnold Schwarzenegger in “Terminator,” or even the classic Star Trek episode “The Ultimate Computer,” sci-fi carries the message that AI will soon render us obsolescent… or obsolete… or extinct. Many people are worried this fantasy will become reality.

No, Facebook didn’t have to kill creepy bots 

To listen to the breathless news reports, Facebook created some chatbots that were out of control. The bots, designed to test AI’s ability to negotiate, had created their own language – and scientists were alarmed that they could no longer understand what those devious rogues were up to. So, the plug had to be pulled before Armageddon. Said Poulami Nag in the International Business Times:

Facebook may have just created something, which may cause the end of a whole Homo sapien species in the hand of artificial intelligence. You think I am being over dramatic? Not really. These little baby Terminators that we’re breeding could start talking about us behind our backs! They could use this language to plot against us, and the worst part is that we won’t even understand.

Well, no. Not even close. The development of an optimized negotiating language was no surprise, and had little to do with the conclusion of Facebook’s experiment, explain the engineers at FAIR – Facebook Artificial Intelligence Research.

The program’s goal was to create dialog agents (i.e., chatbots) that would negotiate with people. To quote a Facebook blog,

Similar to how people have differing goals, run into conflicts, and then negotiate to come to an agreed-upon compromise, the researchers have shown that it’s possible for dialog agents with differing goals (implemented as end-to-end-trained neural networks) to engage in start-to-finish negotiations with other bots or people while arriving at common decisions or outcomes.

And then,

To go beyond simply trying to imitate people, the FAIR researchers instead allowed the model to achieve the goals of the negotiation. To train the model to achieve its goals, the researchers had the model practice thousands of negotiations against itself, and used reinforcement learning to reward the model when it achieved a good outcome. To prevent the algorithm from developing its own language, it was simultaneously trained to produce humanlike language.

The language produced by the chatbots was indeed humanlike – but they didn’t talk like humans. Instead, they used English words, but in ways slightly different from how human speakers would use them. For example, explains tech journalist Wayne Rash in eWeek,

The blog discussed how researchers were teaching an AI program how to negotiate by having two AI agents, one named Bob and the other Alice, negotiate with each other to divide a set of objects, which consisted a hats, books and balls. Each AI agent was assigned a value to each item, with the value not known to the other ‘bot. Then the chatbots were allowed to talk to each other to divide up the objects.

The goal of the negotiation was for each chatbot to accumulate the most points. While the ‘bots started out talking to each other in English, that quickly changed to a series of words that reflected meaning to the bots, but not to the humans doing the research. Here’s a typical exchange between the ‘bots, using English words but with different meaning:

Bob: “I can i i everything else.”

Alice responds: “Balls have zero to me to me to me to me to me to me to me to me to,”

The conversation continues with variations of the number of the times Bob said “i” and the number of times Alice said “to me” in the discussion.
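A toy way to see how that shorthand can carry meaning: treat the number of times a token repeats as a quantity. This is emphatically not FAIR’s model, just an illustration of the kind of convention the bots appear to have drifted into.

    # Toy decoder: read repetition as quantity, the way "to me to me to me"
    # might stand for "three for me" in an invented shorthand. Purely illustrative.
    def count_token(utterance, token):
        """Count how many times a token appears -- the 'quantity' in the toy scheme."""
        return utterance.lower().count(token)

    alice = "Balls have zero to me to me to me to me to me"
    print(count_token(alice, "to me"))  # 5 -- read as "five for me" in this toy scheme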

A natural evolution of natural language

Those aren’t glitches; those repetitions have meaning to the chatbots. The experiment showed that some parameters needed to be changed – after all, FAIR wanted chatbots that could negotiate with humans, and these programs weren’t accomplishing that goal. According to Gizmodo’s Tom McKay,

When Facebook directed two of these semi-intelligent bots to talk to each other, FastCo reported, the programmers realized they had made an error by not incentivizing the chatbots to communicate according to human-comprehensible rules of the English language. In their attempts to learn from each other, the bots thus began chatting back and forth in a derived shorthand—but while it might look creepy, that’s all it was.

“Agents will drift off understandable language and invent codewords for themselves,” FAIR visiting researcher Dhruv Batra said. “Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create shorthands.”

Facebook did indeed shut down the conversation, but not because they were panicked they had untethered a potential Skynet. FAIR researcher Mike Lewis told FastCo they had simply decided “our interest was having bots who could talk to people,” not efficiently to each other, and thus opted to require them to write to each other legibly.

No panic, fingers on the missiles, no mushroom clouds. Whew, humanity dodged certain death yet again! Must click “like” so the killer robots like me.

It’s hard to know which was better: The pitch for my writing about an infographic, or the infographic itself.

About the pitch: The writer said, “I’ve been tasked with the job of raising some awareness around the graphic (in the hope that people actually like my work lol) and wondered if you thought it might be something entertaining for your audience? If not I completely understand – I’ll just lose my job and won’t be able to eat for a month (think of my poor cats).” Since I don’t want this lady and her cats to starve, I caved.

If you like the pitch, you’ll enjoy the infographic, “10 Marketing Lessons from Apple.” One piece from it is reproduced above. Very cute.

It’s difficult to recruit qualified security staff because there are more openings than humans to fill them. It’s also difficult to retain IT security professionals because someone else is always hiring. But don’t worry: Unless you work for an organization that refuses to pay the going wage, you’ve got this.

Two recent studies present dire, but somewhat conflicting, views of the availability of qualified cybersecurity professionals over the next four or five years. The first study is the Global Information Security Workforce Study from the Center for Cyber Safety and Education, which predicts a shortfall of 1.8 million cybersecurity workers by 2022. Among the highlights from that research, which drew on data from 19,000 cybersecurity professionals:

  • The cybersecurity workforce gap will hit 1.8 million by 2022. That’s a 20 percent increase since 2015.
  • Sixty-eight percent of workers in North America believe this workforce shortage is due to a lack of qualified personnel.
  • A third of hiring managers globally are planning to increase the size of their departments by 15 percent or more.
  • There aren’t enough workers to address current threats, according to 66 percent of respondents.
  • Around the globe, 70 percent of employers are looking to increase the size of their cybersecurity staff this year.
  • Nine in ten security specialists are male. The majority have technical backgrounds, suggesting that recruitment channels and tactics need to change.
  • While 87 percent of cybersecurity workers globally did not start in cybersecurity, 94 percent of hiring managers indicate that security experience in the field is an important consideration.

The second study is the Cybersecurity Jobs Report, created by the editors of Cybersecurity Ventures. Here are some highlights:

  • There will be 3.5 million cybersecurity job openings by 2021.
  • Cybercrime will more than triple the number of job openings over the next five years. India alone will need 1 million security professionals by 2020 to meet the demands of its rapidly growing economy.
  • Today, the U.S. employs nearly 780,000 people in cybersecurity positions. But a lot more are needed: There are approximately 350,000 current cybersecurity job openings, up from 209,000 in 2015.

So, whether you’re hiring a chief information security officer or a cybersecurity operations specialist, expect a lot of competition. What can you do about it? How can you beat the staffing shortage? Read my suggestion in “How to beat the cybersecurity staffing shortage.”

“Ransomware! Ransomware! Ransomware!” Those words may lack the timeless resonance of Steve Ballmer’s epic “Developers! Developers! Developers!” scream in 2000, but ransomware was seemingly an obsession at Black Hat USA 2017, happening this week in Las Vegas.

There are good reasons for attendees and vendors to be focused on ransomware. For one thing, ransomware is real. Rates of ransomware attacks have exploded off the charts in 2017, helped in part by the disclosures of top-secret vulnerabilities and hacking tools allegedly stolen from the United States’ three-letter-initial agencies.

For another, the costs of ransomware are significant. Looking only at a few attacks in 2017, including WannaCry, Petya, and NotPetya, corporations have been forced to revise their earnings downward to account for IT downtime and lost productivity. Those include Reckitt, Nuance, and FedEx. Those types of impact grab the attention of every CFO and every CEO.

Another analyst I talked with at Black Hat observed that just about every vendor on the expo floor had managed to incorporate ransomware into its magic show. My quip: “I wouldn’t be surprised to see a company marketing network cables as specially designed to protect against ransomware.” His quick retort: “The queue would be half a mile long for samples. They’d make a fortune.”

While we seek mezzanine funding for our Ransomware-Proof CAT-6 Cables startup, let’s talk about what organizations can and should do to handle ransomware. It’s not rocket science, and it’s not brain surgery.

  • Train, train, train. End users will slip up, and they will click to open emails they shouldn’t open. They will visit websites they shouldn’t visit. And they will ignore security warnings. That’s true for the lowest-level trainee – and true for the CEO as well. Constant training can reduce the amount of stupidity. It can make a difference. By the way, also test your employees’ preparedness by sending out fake malware, and see who clicks on it.
  • Invest in tools that can detect ransomware and other advanced malware. Users will make mistakes, and we’ve seen that there are some ransomware variants that can spread without user intervention. Endpoint security technology is required, and if possible, such tools should do more than passively warn end users if a problem is detected. There are many types of solutions available; look into them, and make sure there are no coverage gaps.
  • Aggressively patch and update software. Patches existed for months to close the vulnerabilities exploited by the recent flurry of ransomware attacks. It’s understandable that consumers wouldn’t be up to date – but it’s inexcusable for corporations to have either not known about the patches, or to have failed to install them. In other words, these attacks were basically 100% avoidable. Maybe they won’t be in the future if the hackers exploit true zero-days, but you can’t protect your organization with out-of-date operating systems, applications, and security tools.
  • Backup, backup, backup. Use backup technology that moves data securely into the data center or into the cloud, so that ransomware can’t access the backup drive directly. Too many small businesses lost data on laptops, notebooks, and servers because there weren’t backups. We know better than this! By the way, one should assume that malware attacks, even ransomware, can be designed to destroy data and devices. Don’t assume you can write a check and get your data back safely. (See the backup-verification sketch after this list.)
  • Stay up to date on threat data. You can’t rely upon the tech media, or vendor blogs, to keep you up to date with everything going on with cybersecurity. There are many threat data feeds, some curated and expensive, some free and lower-quality. You should find a threat data source that seems to fit your requirements and subscribe to it – and act on what you read. If you’re not going to consume the threat data yourself, find someone else to do so. An urgent warning about your database software version won’t do you any good if it’s in your trashcan.
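On the backup point, even a crude integrity check beats discovering after an attack that the “backups” are stale or were encrypted along with everything else. Here’s a minimal sketch that recomputes SHA-256 hashes of backed-up files and compares them against a manifest written when the backup was taken; the paths and manifest format are hypothetical, and nothing here replaces periodic full restore tests.

    # Minimal backup sanity check: recompute SHA-256 hashes of backed-up files
    # and compare against a manifest captured at backup time.
    # Paths and manifest layout are hypothetical.
    import hashlib
    import json
    from pathlib import Path

    def sha256_of(path):
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def verify_backup(backup_dir, manifest_path):
        manifest = json.loads(Path(manifest_path).read_text())  # {"relative/path": "hexdigest"}
        problems = []
        for rel_path, expected in manifest.items():
            target = Path(backup_dir) / rel_path
            if not target.exists():
                problems.append(f"missing: {rel_path}")
            elif sha256_of(target) != expected:
                problems.append(f"hash mismatch: {rel_path}")
        return problems

    if __name__ == "__main__":
        for issue in verify_backup("/mnt/offsite-backup", "/mnt/offsite-backup/manifest.json"):
            print(issue)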

Ransomware! Ransomware! Ransomware! When it comes to ransomware and advanced malware, it’s not a question of if, or even a question of when. Your organization, your servers, your network, your end-users, are under constant attack. It only takes one slip-up to wreak havoc on one endpoint, and potentially on multiple endpoints. Learn from what’s going on at Black Hat – and be ready for the worst.

A major global cyberattack could cause US$53 billion in economic losses. That’s on the scale of a catastrophic disaster like 2012’s Hurricane Sandy.

Lloyd’s of London, the famous insurance company, partnered with Cyence, a risk analysis firm specializing in cybersecurity. The result is a fascinating report, “Counting the Cost: Cyber Exposure Decoded.” This partnership makes sense: Lloyd’s must understand the risk before deciding whether to underwrite a venture — and when it comes to cybersecurity, this is an emerging science. Traditional actuarial methods used to calculate the risk of a cargo ship falling prey to pirates, or an office block to a devastating flood, simply don’t apply.

Lloyd’s says that in 2016, cyberattacks cost businesses as much as $450 billion. While insurers can help organizations manage that risk, the risk is increasing. The report points to those risks covering “everything from individual breaches caused by malicious insiders and hackers, to wider losses such as breaches of retail point-of-sale devices, ransomware attacks such as BitLocker, WannaCry and distributed denial-of-service attacks such as Mirai.”

The worry? Despite writing $1.35 billion in cyberinsurance in 2016, “insurers’ understanding of cyber liability and risk aggregation is an evolving process as experience and knowledge of cyber-attacks grows. Insureds’ use of the internet is also changing, causing cyber-risk accumulation to change rapidly over time in a way that other perils do not.”

And that is why the lack of time-tested actuarial tables can cause disaster, says Lloyd’s. “Traditional insurance risk modelling relies on authoritative information sources such as national or industry data, but there are no equivalent sources for cyber-risk and the data for modelling accumulations must be collected at scale from the internet. This makes data collection, and the regular update of it, key components of building a better understanding of the evolving risk.”

Where the Risk Is Growing

The report points to six significant trends that are causing increased risk of an expensive attack – and therefore, increased liability:

  • Volume of contributors: The number of people developing software has grown significantly over the past three decades; each contributor could potentially add vulnerability to the system unintentionally through human error.
  • Volume of software: In addition to the growing number of people amending code, the amount of it in existence is increasing. More code means the potential for more errors and therefore greater vulnerability.
  • Open source software: The open-source movement has led to many innovative initiatives. However, many open-source libraries are uploaded online and while it is often assumed they have been reviewed in terms of their functionality and security, this is not always the case. Any errors in the primary code could then be copied unwittingly into subsequent iterations.
  • Old software: The longer software is out in the market, the more time malicious actors have to find and exploit vulnerabilities. Many individuals and companies run obsolete software that has more secure alternatives.
  • Multi-layered software: New software is typically built on top of prior software code. This makes software testing and correction very difficult and resource intensive.
  • “Generated” software: Code can be produced through automated processes that can be modified for malicious intent.

Based on those points, and other factors, Lloyd’s and Cyence have come up with two primary scenarios that could lead to widespread, and costly, damages. The first is a successful hack of a major cloud service provider, which hosts websites, applications, and data for many companies. The second is a mass vulnerability attack that affects many client systems. One could argue that some of the recent ransomware attacks fit into that second scenario.

Huge Liability Costs

The “Counting the Cost” report makes for some depressing reading. Here are three of the key findings, quoted verbatim.

  • The direct economic impacts of cyber events lead to a wide range of potential economic losses. For the cloud service disruption scenario in the report, these losses range from US$4.6 billion for a large event to US$53.1 billion for an extreme event; in the mass software vulnerability scenario, the losses range from US$9.7 billion for a large event to US$28.7 billion for an extreme event.
  • Economic losses could be much lower or higher than the average in the scenarios because of the uncertainty around cyber aggregation. For example, while average losses in the cloud service disruption scenario are US$53 billion for an extreme event, they could be as high as US$121.4 billion or as low as US$15.6 billion, depending on factors such as the different organisations involved and how long the cloud-service disruption lasts for.
  • Cyber-attacks have the potential to trigger billions of dollars of insured losses. For example, in the cloud-services scenario insured losses range from US$620 million for a large loss to US$8.1 billion for an extreme loss. For the mass software vulnerability scenario, the insured losses range from US$762 million (large loss) to US$2.1 billion (extreme loss).

Read the 56-page report to dig deeply into the scenarios, and the damages. You may not sleep well afterwards.