A friend insists that “the Internet is down” whenever he can’t get a strong wireless connection on his smartphone. With that type of logic, enjoy this photo found on the aforementioned Internet:

“Wi-Fi” is apparently now synonymous with “Internet” or “network.” It’s clear that we have come a long way from the origins of the Wi-Fi Alliance, which originally defined the term as meaning “Wireless Fidelity.” The vendor-driven alliance was formed in 1999 to jointly promote the broad family of IEEE 802.11 wireless local-area networking standards, as well as to ensure interoperability through certifications.

But that was so last millennium! It’s all Wi-Fi, all the time. In that vein, let me propose four new acronyms:

  • Wi-Fi-Wi – Wireless networking, i.e., 802.11
  • Wi-Fi-Cu – Any conventionally cabled network
  • Wi-Fi-Fi – Networking over fiber optics (but not Fibre Channel)
  • Wi-Fi-FC – Wireless Fibre Channel, I suppose

You get the idea….

It’s all about the tradeoffs! You can have the chicken or the fish, but not both. You can have the big engine in your new car, but that means a stick shift—you can’t have the V8 and an automatic. Same for that cake you want to have and eat. Your business applications can be easy to use or secure—not both.

But some of those are false dichotomies, especially when it comes to security for data center and cloud applications. You can have it both ways. The systems can be easy to use and maintain, and they can be secure.

On the consumer side, consider two-factor authentication (2FA), whereby users receive a code number, often by text message to their phones, which they must type into a webpage to confirm their identity. There’s no doubt that 2FA makes systems more secure. The problem is that 2FA is a nuisance for the individual end user, because it slows down access to a desired resource or application. Unless you’re protecting your personal bank account, there’s little incentive for you to use 2FA. Thus, services that require 2FA frequently aren’t used, get phased out, are subverted, or are simply loathed.
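Under the hood, many of those 2FA codes are simply time-based one-time passwords. Here’s a minimal Python sketch in the spirit of RFC 6238; the secret, window size, and digit count are illustrative, and a real deployment should use a vetted library rather than hand-rolled crypto:

```python
import hashlib
import hmac
import struct


def totp(secret: bytes, at: float, step: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time password (RFC 6238 style)."""
    counter = int(at // step)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)


def verify(secret: bytes, submitted: str, at: float) -> bool:
    # Accept the current and previous 30-second window to tolerate clock skew.
    return any(hmac.compare_digest(totp(secret, at - d), submitted)
               for d in (0, 30))
```

The nuisance factor is visible right in the design: the code expires every 30 seconds, which is exactly what makes it secure and exactly what annoys users.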

Likewise, security measures specified by corporate policies can be seen as a nuisance or an impediment. Consider dividing an enterprise network into small “trusted” networks, such as by using virtual LANs or other forms of authenticating users, applications, or API calls. This setup can require considerable effort for internal developers to create, and even more effort to modify or update.

When IT decides to migrate an application from a data center to the cloud, the steps required to create API-level authentication across such a hybrid deployment can be substantial. The effort required to debug that security scheme can be horrific. As for audits to ensure adherence to the policy? Forget it. How about we just bypass it, or change the policy instead?
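To make that effort concrete, here’s a rough sketch of one common approach: HMAC-signing each API call with a shared key held by both the data-center and cloud sides. The header names and key handling are illustrative only; a real deployment would pull keys from a secrets manager and likely use a standard scheme such as OAuth or mutual TLS:

```python
import hashlib
import hmac
import json
import time

SHARED_KEY = b"rotate-me-regularly"  # illustrative; use a secrets manager in practice


def sign_request(method: str, path: str, body: dict, key: bytes = SHARED_KEY) -> dict:
    """Attach an HMAC signature so either side of a hybrid deployment can verify the call."""
    payload = json.dumps(body, sort_keys=True)
    ts = str(int(time.time()))
    msg = "\n".join([method, path, ts, payload]).encode()
    sig = hmac.new(key, msg, hashlib.sha256).hexdigest()
    return {"X-Timestamp": ts, "X-Signature": sig}


def verify_request(method, path, body, headers, key=SHARED_KEY, max_skew=300):
    ts = headers.get("X-Timestamp", "0")
    if abs(time.time() - int(ts)) > max_skew:  # reject stale or replayed calls
        return False
    payload = json.dumps(body, sort_keys=True)
    msg = "\n".join([method, path, ts, payload]).encode()
    expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, headers.get("X-Signature", ""))
```

Even this toy version hints at the debugging pain: clock skew, key rotation, and payload canonicalization all have to agree on both sides of the hybrid divide.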

Multiply that simple scenario by 1,000 for all the interlinked applications and users at a typical midsize company. Or 10,000 or 100,000 at big ones. That’s why post-mortem examinations of so many security breaches show what appears to be an obvious lack of “basic” security. However, my guess is that in many of those incidents, the chief information security officer or IT staffers were under pressure to make systems, including applications and data sources, extremely easy for employees to access, and there was no appetite for creating, maintaining, and enforcing strong security measures.

Read more about these tradeoffs in my article on Forbes for Oracle Voice: “You Can Have Your Security Cake And Eat It, Too.”

I’m #1! Well, actually #4 and #7. During 2017, I wrote several articles for Hewlett Packard Enterprise’s online magazine, Enterprise.nxt Insights, and two of them were quite successful, landing at #4 and #7 on the site’s list of Top 10 Articles for 2017.

Article #7 was, “4 lessons for modern software developers from 1970s mainframe programming.” Based entirely on my own experiences, the article began,

Eight megabytes of memory is plenty. Or so we believed back in the late 1970s. Our mainframe programs usually ran in 8 MB virtual machines (VMs) that had to contain the program, shared libraries, and working storage. Though these days, you might liken those VMs more to containers, since the timesharing operating system didn’t occupy VM space. In fact, users couldn’t see the OS at all.

In that mainframe environment, we programmers learned how to be parsimonious with computing resources, which were expensive, limited, and not always available on demand. We learned how to minimize the costs of computation, develop headless applications, optimize code up front, and design for zero defects. If the very first compilation and execution of a program failed, I was seriously angry with myself.

Please join me on a walk down memory lane as I revisit four lessons I learned while programming mainframes and teaching mainframe programming in the era of Watergate, disco on vinyl records, and Star Wars—and which remain relevant today.

Article #4 was, “The OWASP Top 10 is killing me, and killing you!” It began,

Software developers and testers must be sick of hearing security nuts rant, “Beware SQL injection! Monitor for cross-site scripting! Watch for hijacked session credentials!” I suspect the developers tune us out. Why? Because we’ve been raving about the same defects for most of their careers. Truth is, though, the same set of major security vulnerabilities persists year after year, decade after decade.

The industry has generated newer tools, better testing suites, Agile methodologies, and other advances in writing and testing software. Despite all that, coders keep making the same dumb mistakes, peer reviews keep missing those mistakes, test tools fail to catch those mistakes, and hackers keep finding ways to exploit those mistakes.

One way to see the repeat offenders is to look at the OWASP Top 10, a sometimes controversial ranking of the 10 primary vulnerabilities, published every three or four years by the Open Web Application Security Project.

The OWASP Top 10 list is not controversial because it’s flawed. Rather, some believe that the list is too limited. By focusing only on the top 10 web code vulnerabilities, they assert, it leads to neglect of the long tail. What’s more, there’s often jockeying in the OWASP community over the Top 10 ranking and whether the 11th or 12th items belong on the list instead of something else. There’s merit to those arguments, but for now, the OWASP Top 10 is an excellent common ground for discussing security-aware coding and testing practices.
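Those repeat offenders have concrete, well-known fixes. SQL injection, the perennial chart-topper, is defused by parameterized queries; here’s a minimal sketch using Python’s built-in sqlite3 (the table and data are invented for illustration):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, role TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 'admin')")


def find_user(name: str):
    # Parameterized query: user input is bound as data, never spliced
    # into the SQL string, so injection payloads are inert.
    return db.execute("SELECT role FROM users WHERE name = ?", (name,)).fetchall()
```

The classic `' OR '1'='1` payload simply matches no rows, because it is treated as a literal name rather than as SQL.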

Click the links (or pictures) above and enjoy the articles! And kudos to my prolific friend Steven J. Vaughan-Nichols, whose articles took the #3, #2 and #1 slots. He’s good. Damn good.

Amazon says that a cloud-connected speaker/microphone was at the top of the charts: “This holiday season was better than ever for the family of Echo products. The Echo Dot was the #1 selling Amazon Device this holiday season, and the best-selling product from any manufacturer in any category across all of Amazon, with millions sold.”

The Echo products are an ever-expanding family of inexpensive consumer electronics from Amazon, which connect to a cloud-based service called Alexa. The devices are always listening for spoken commands, and will respond through conversation, playing music, turning on/off lights and other connected gadgets, making phone calls, and even by showing videos.

While Amazon doesn’t release sales figures for its Echo products, it’s clear that consumers love them. In fact, Echo is about to hit the road, as BMW will integrate the Echo technology (and Alexa cloud service) into some cars beginning this year. Expect other automakers to follow.

Why the Echo – and Apple’s Siri and Google’s Home? Speech.

The traditional way of “talking” to computers has been through the keyboard, augmented with a mouse used to select commands or input areas. Computers initially responded only to typed instructions using a command-line interface (CLI); this was replaced in the era of the Apple Macintosh and the first iterations of Microsoft Windows with windows, icons, menus, and pointing devices (WIMP). Some refer to the modern interface used on standard computers as a graphical user interface (GUI); embedded devices, such as network routers, might be controlled by either a GUI or a CLI.

Smartphones, tablets, and some computers (notably those running Windows) also include touchscreens. While touchscreens have been around for decades, it’s only in the past few years that they’ve gone mainstream. Even so, the primary way to input data has been through a keyboard – even if it’s a “soft” keyboard implemented on a touchscreen, as on a smartphone.

Talk to me!

Enter speech. Sometimes it’s easier to talk, simply talk, to a device than to use a physical interface. Speech can be used for commands (“Alexa, turn up the thermostat” or “Hey Google, turn off the kitchen lights”) or for dictation.

Speech recognition is not easy for computers; in fact, it’s pretty difficult. However, improved microphones and powerful artificial-intelligence algorithms make speech recognition a lot easier. Helping the process: Cloud computing, which can throw nearly unlimited resources at speech recognition, including predictive analytics. Another helper: Constrained inputs, which means that when it comes to understanding commands, there are only so many words for the speech recognition system to decode. (Free-form dictation, like writing an essay using speech recognition, is a far harder problem.)
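Constrained input is easy to illustrate: if the device only has to choose among a handful of commands, even a noisy transcript can be snapped to the nearest one. Here’s a toy sketch (the command table and thresholds are invented for illustration, not any vendor’s actual grammar):

```python
import difflib

# A constrained command grammar: the recognizer only has to pick from these.
COMMANDS = {
    "turn up the thermostat": ("thermostat", "+1"),
    "turn down the thermostat": ("thermostat", "-1"),
    "turn on the kitchen lights": ("kitchen_lights", "on"),
    "turn off the kitchen lights": ("kitchen_lights", "off"),
}


def decode(transcript: str, cutoff: float = 0.6):
    """Map a (possibly imperfect) transcript onto the nearest known command."""
    match = difflib.get_close_matches(transcript.lower(), COMMANDS, n=1, cutoff=cutoff)
    return COMMANDS[match[0]] if match else None
```

A slightly garbled transcript like “turn of the kitchen lights” still lands on the right command, while free-form speech that matches nothing falls below the cutoff and is rejected.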



It’s a big market

Speech recognition is only going to get better – and bigger. According to one report, “The speech and voice recognition market is expected to be valued at USD 6.19 billion in 2017 and is likely to reach USD 18.30 billion by 2023, at a CAGR of 19.80% between 2017 and 2023. The growing impact of artificial intelligence (AI) on the accuracy of speech and voice recognition and the increased demand for multifactor authentication are driving the market growth.” The report continues:

“The speech recognition technology is expected to hold the largest share of the market during the forecast period due to its growing use in multiple applications owing to the continuously decreasing word error rate (WER) of speech recognition algorithm with the developments in natural language processing and neural network technology. The speech recognition technology finds applications mainly across healthcare and consumer electronics sectors to produce health data records and develop intelligent virtual assistant devices, respectively.

“The market for the consumer vertical is expected to grow at the highest CAGR during the forecast period. The key factor contributing to this growth is the ability to integrate speech and voice recognition technologies into other consumer devices, such as refrigerators, ovens, mixers, and thermostats, with the growth of Internet of Things.”

Right now, many of us are talking to Alexa, talking to Siri, and talking to Google Home. Back in 2009, I owned a Ford car that had a primitive (and laughably inaccurate) infotainment system – today, a new car might do a lot better, perhaps due to embedded Alexa. Will we soon be talking to our ovens, to our laser printers and photocopiers, to our medical implants, to our assembly-line equipment, and to our network infrastructure? It wouldn’t surprise Alexa in the least.

Agility – the ability to deliver projects quickly. That applies to new projects, as well as updates to existing projects. The agile software movement began when many smart people became frustrated with the classic model of development, where the organization first went through a complex process to develop requirements (which took months or years), and then wrote software to address those requirements (which took months or years, or maybe was never finished). By then, not only had the organization missed out on many opportunities, but perhaps the requirements were no longer valid – if they ever were.

With agile methodologies, the goal is to build software (or accomplish some complex task or action) in small, incremental iterations. Each iteration delivers some immediate value, and after each iteration comes an evaluation of how satisfied those who requested the project (the stakeholders) are, and what they want to do next. No laborious up-front requirements. No years of investment before there is any return on that investment.

One of the best-known agile frameworks is Scrum, developed by Jeff Sutherland and Ken Schwaber in the early 1990s. In my view, Scrum is noteworthy for several innovations, including:

  • The Scrum framework is simple enough for everyone involved to understand.
  • The Scrum framework is not a product.
  • Scrum itself is not tied to a specific vendor’s project-management tools.
  • Work is performed in two-week increments, called Sprints.
  • Every day there is a brief meeting called a Daily Scrum.
  • Development is iterative and incremental, and outcomes are predictable.
  • The work must be transparent, as much as possible, to everyone involved.
  • The roles of participants in the project are defined extremely clearly.
  • The relationship between people in the various roles is also clearly defined.
  • A key participant is the Scrum Master, who helps everyone maximize the value of the team and the project.
  • There is a clear, unambiguous definition of what “Done” means for every action item.

Scrum itself is refined every year or two by Sutherland and Schwaber. The most recent version (if you can call it a version) is Scrum 2017; before that, it was revised in 2016 and 2013. While there aren’t that many significant changes from the original vision unveiled in 1995, here are three recent changes that, in my view, make Scrum better than ever – enough that it might be called Scrum 2.0. Well, maybe Scrum 1.5. You decide:

  1. The latest version acknowledges more clearly that Scrum, like other agile methodologies, is used for all sorts of projects, not merely creating or enhancing software. While the Scrum Guide is still development-focused, Scrum can be used for market research, product development, developing cloud services, and even managing schools and governments.
  2. The Daily Scrum is now more focused on exploring how well the work is driving toward the Sprint Goal planned for the two-week Sprint. For example: What work will be done today to drive toward the goal? What impediments are likely to prevent us from meeting the goal? (Previously, the Daily Scrum was often viewed as a glorified status-report meeting.)
  3. Scrum has a set of values, and those are now spelled out: “When the values of commitment, courage, focus, openness and respect are embodied and lived by the Scrum Team, the Scrum pillars of transparency, inspection, and adaptation come to life and build trust for everyone. The Scrum Team members learn and explore those values as they work with the Scrum events, roles and artifacts. Successful use of Scrum depends on people becoming more proficient in living these five values… Scrum Team members respect each other to be capable, independent people.”

The word “agile” is thrown around too often in business and technology, covering everything from planning a business acquisition to planning a network upgrade. Scrum is one of the best-known agile methodologies, and the framework is very well suited for all sorts of projects where it’s not feasible to determine a full set of requirements up front, and there’s a need to immediately begin delivering some functionality (or accomplish parts of the tasks). That Scrum continues to evolve will help ensure its value in the coming years… and decades.

Criminals steal money from banks. Nothing new there: As Willie Sutton famously said, “I rob banks because that’s where the money is.”

Criminals steal money from other places too. While many cybercriminals target banks, the reality is that there are better places to steal money, or at least to steal information that can be used to steal money. That’s because banks are generally well protected, while gas stations, convenience stores, smaller online retailers, and even payment processors are likely to have inadequate defenses – or to make stupid mistakes that aren’t caught by security professionals.

Take TIO Networks, a bill-payment service purchased by PayPal for US$233 million in July 2017. TIO processed more than $7 billion in bill payments last year, serving more than 10,000 vendors and 16 million consumers.

Hackers now know critical information about all 16 million TIO customers. According to PYMNTS.com, “… the data that may have been impacted included names, addresses, bank account details, Social Security numbers and login information. How much of those details fell into the hands of cybercriminals depends on how many of TIO’s services the consumers used.”

PayPal has said,

“The ongoing investigation has uncovered evidence of unauthorized access to TIO’s network, including locations that stored personal information of some of TIO’s customers and customers of TIO billers. TIO has begun working with the companies it services to notify potentially affected individuals. We are working with a consumer credit reporting agency to provide free credit monitoring memberships. Individuals who are affected will be contacted directly and receive instructions to sign up for monitoring.”

Card Skimmers and EMV Chips

Another common place where money changes hands: The point-of-purchase device. Consider payment-card skimmers – that is, a hardware device secretly installed into a retail location’s card reader, often at an unattended location like a gasoline pump.

The amount of fraud caused by skimmers copying information on payment cards is expected to rise from $3.1 billion in 2015 to $6.4 billion in 2018, affecting about 16 million cardholders. Those are for payment cards that don’t have the integrated EMV chip, or for transactions that don’t use the EMV system.

EMV chips, also known as chip-and-PIN or chip-and-signature, are named for the three companies behind the technology standards – Europay, MasterCard, and Visa. Chip technology, which is seen as a nuisance by consumers, has dramatically reduced the amount of fraud by generating a unique, non-repeatable transaction code for each purchase.
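The real EMV cryptogram scheme involves card-resident keys and issuer-side validation, but the core idea – a code that changes with every transaction, so a skimmed copy is worthless – can be sketched loosely like this (all names, fields, and parameters are illustrative, not the actual EMV algorithm):

```python
import hashlib
import hmac


def transaction_code(card_key: bytes, counter: int,
                     amount_cents: int, merchant_id: str) -> str:
    """Derive a one-time code bound to this specific transaction.
    The incrementing counter makes every code unique, so a captured
    code cannot be replayed -- loosely analogous to an EMV cryptogram."""
    msg = f"{counter}|{amount_cents}|{merchant_id}".encode()
    return hmac.new(card_key, msg, hashlib.sha256).hexdigest()[:16]
```

A skimmer that copies one code has nothing of value: the next purchase uses a new counter, and therefore a completely different code.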

The rollout of EMV, especially in the United States, is painfully slow. Many merchants still haven’t upgraded to the new card-reader devices or the back-end financial services needed to handle those transactions. For example, very few U.S. fuel stations use chips to validate transactions, so pay-at-the-pump still depends almost universally on the magnetic-stripe reader. That gives thieves numerous opportunities to install skimmers on those stripe readers and steal payment-card information.

For an excellent, well-illustrated primer on skimmers and skimmer-related fraud at gas stations, see “As gas station skimmer card fraud increases, here’s how to cut your risk.” Theft at the point of purchase, or at payment processors, will continue as long as companies fail to execute solid security practices – and continue to accept non-EMV payment card transactions, including allowing customers to type their credit- or debit-card numbers into websites. Both threats will remain for the foreseeable future, especially since desktops, notebooks, and mobile devices don’t have built-in EMV chip readers.

Crooks are clever, and are everywhere. They always have been. Money theft and fraud – no matter how secure the banks are, it’s not going away any time soon.

SysSecOps is a new phrase, still unfamiliar to many IT and security administrators – but it’s being discussed within the market, by analysts, and at technical conferences. SysSecOps, or Systems & Security Operations, describes the practice of combining security teams and IT operations teams to ensure the health of enterprise technology – and to have the tools to respond effectively when issues arise.

SysSecOps focuses on tearing down the information walls – the silos – that separate security teams from IT administrators. IT operations staff are there to make sure that end users can access applications and that critical infrastructure is running at all times. They want to optimize access and availability, and they need data to do that job – such as knowing that a new employee needs to be provisioned, that a hard drive in a RAID array has failed, that a new partner needs access to a secure document repository, or that an Oracle database is ready to be moved to the cloud. It’s all about technology driving the business.

Same Data, Different Use Cases

Endpoint and network monitoring views and analytics are customized to fit the distinct needs of IT and security. The underlying raw data, however, is exactly the same. The IT and security teams are simply looking at their own domain’s issues and scenarios – and acting on those use cases.

Yet in some cases the IT and security teams have to work together. Take provisioning that new business partner: it must touch all the right systems, and it must be done securely. Or if there’s a problem with a remote endpoint, such as a mobile phone or a device on the Industrial Internet of Things, IT and security may need to collaborate to determine exactly what’s going on. When both teams share the same data sources and have access to the same tools, the job becomes much easier – hence SysSecOps.

Imagine that an IT administrator notices a server hard drive nearing full capacity – unexpectedly. Perhaps the network has been breached, and the server is now being used to stream pirated films across the Web. It happens, and finding and resolving that issue is a job for both IT and security. The data gathered by endpoint instrumentation, and displayed through a SysSecOps-ready monitoring platform, can help both sides work together more effectively than they would with conventional, separate IT and security tools.
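As a sketch of that shared-data idea: the same endpoint telemetry record can feed both an operations rule and a security rule. The field names and thresholds below are invented for illustration, not from any particular monitoring product:

```python
def it_alerts(endpoint: dict) -> list:
    """Operations view: keep services healthy and available."""
    alerts = []
    if endpoint["disk_used_pct"] >= 90:
        alerts.append("disk nearly full")
    if not endpoint["patched"]:
        alerts.append("patches outstanding")
    return alerts


def security_alerts(endpoint: dict) -> list:
    """Security view of the very same record: spot anomalies."""
    alerts = []
    # Sudden disk growth plus heavy outbound traffic suggests the box
    # may be serving content it shouldn't (e.g., pirated media).
    if endpoint["disk_growth_gb_per_day"] > 50 and endpoint["egress_mbps"] > 100:
        alerts.append("possible breach: abnormal storage growth and egress")
    return alerts


record = {"disk_used_pct": 94, "patched": True,
          "disk_growth_gb_per_day": 80, "egress_mbps": 250}
```

One record, two lenses: IT sees a capacity problem, security sees a potential breach, and both are looking at the same raw telemetry.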

SysSecOps: It’s a new term, and a new idea, and it’s resonating with both IT and security teams. You can learn more in a brief nine-minute video, in which I talk with several industry experts about the subject: “Exactly what is SysSecOps?”

Ransomware is real, and it’s threatening individuals, businesses, schools, hospitals, and governments – and there’s no sign that ransomware is stopping. In fact, it’s probably increasing. Why? Let’s be honest: ransomware is probably the single most effective attack hackers have ever created. Anyone can build ransomware using readily available tools; any payments received are likely in untraceable Bitcoin; and if something goes wrong with decrypting someone’s hard drive, the hacker isn’t affected.

A business is hit with ransomware every 40 seconds, according to some sources, and 60% of all malware payloads were ransomware. It strikes all sectors; no industry is safe. And with the rise of Ransomware-as-a-Service (RaaS), it’s going to get worse.

The good news: We can fight back. Here’s a four-step battle plan.

Four steps to good fundamental hygiene

  1. Train employees to handle malicious email. There are spoofed messages from business partners. There’s phishing and targeted spearphishing. Some will get past email spam/malware filters; employees must be taught not to click links in those messages and, of course, not to give permission for plugins or apps to be installed. Even so, some malware, like ransomware, will get through, typically by exploiting outdated software or unpatched systems, as in the Equifax breach.
  2. Patch everything. Ensure that endpoints are fully patched and fully updated with the latest, most secure OS, applications, utilities, device drivers, and code libraries. That way, if there is an attack, the endpoint is healthy and best able to fight off the infection.
  3. Treat ransomware as a business problem, not merely a technology or security problem. And it costs far more than the ransom demanded. That’s peanuts compared to the loss of productivity from downtime, bad public relations, angry customers if service is disrupted, and the expense of rebuilding lost data. (And that assumes valuable intellectual property, or protected financial or consumer health data, isn’t stolen.)
  4. Back up, back up, back up – and safeguard those backups. If you don’t have safe, protected backups, you can’t restore data and core infrastructure in a timely fashion. That includes making daily snapshots of virtual machines, databases, applications, source code, and configuration files.
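As a minimal illustration of step 4, a daily snapshot can be as simple as copying a file into a dated directory and recording its hash so a later restore can be verified. The paths and naming are illustrative; real backups should also be kept offsite or offline, out of ransomware’s reach:

```python
import hashlib
import shutil
import time
from pathlib import Path


def snapshot(source: Path, backup_root: Path) -> Path:
    """Copy a file into a dated snapshot directory and record its hash,
    so a restore can be verified after a ransomware incident."""
    stamp = time.strftime("%Y-%m-%d")
    dest_dir = backup_root / stamp
    dest_dir.mkdir(parents=True, exist_ok=True)
    dest = dest_dir / source.name
    shutil.copy2(source, dest)
    digest = hashlib.sha256(dest.read_bytes()).hexdigest()
    (dest_dir / (source.name + ".sha256")).write_text(digest)
    return dest
```

The recorded hash is what lets you trust the backup later: if the stored hash no longer matches the file, the snapshot itself may have been tampered with.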

Beyond those steps, businesses need tools to detect, identify, and prevent malware like ransomware from spreading. That requires continuous visibility into, and reporting of, what’s happening in the environment – including “zero-day” attacks that haven’t been seen before. Part of that is monitoring endpoints, from the smartphone to the PC to the server to the cloud, to make sure they are up to date and secure, and that no unexpected changes have been made to their underlying configuration. That way, if a machine is infected by ransomware or other malware, the breach can be detected quickly, and the device isolated and shut down pending forensics and recovery. If an endpoint is breached, quick containment is critical.
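That “no unexpected changes” check can be sketched as a simple file-integrity baseline: hash the monitored files once, then compare later. This is a cheap stand-in for a real endpoint-monitoring product, shown only to make the idea concrete:

```python
import hashlib
from pathlib import Path


def baseline(paths):
    """Record a content hash for each monitored file."""
    return {str(p): hashlib.sha256(Path(p).read_bytes()).hexdigest()
            for p in paths}


def drift(baseline_hashes, paths):
    """Report files whose contents changed since the baseline was taken."""
    current = baseline(paths)
    return [p for p, h in baseline_hashes.items() if current.get(p) != h]
```

Ransomware that encrypts files, or an attacker that edits a configuration, shows up as drift on the next comparison pass.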

Read more in my guest story for Chuck Leaver’s blog, “Prevent And Manage Ransomware With These 4 Steps.”

AI is an emerging technology – always has been, always will be. Back in the early 1990s, I was editor of AI Expert Magazine. Looking for something else in my archives, I found this editorial, dated February 1991.

What do you think? Is AI real yet?

In The Terminator, the Skynet artificial intelligence was turned on to track down the hacker attacking a military computer network. Turns out the hacker was Skynet itself. Is there a lesson there? Could AI turn against us, especially in the security domain?

That was one of the points I made while moderating a discussion of cybersecurity and AI back in October 2017. Here’s the start of a blog post written by my friend Tami Casey about the panel:

Mention artificial intelligence (AI) and security and a lot of people think of Skynet from The Terminator movies. Sure enough, at a recent Bay Area Cyber Security Meetup group panel on AI and machine learning, it was moderator Alan Zeichick – technology analyst, journalist and speaker – who first brought it up. But that wasn’t the only lively discussion during the panel, which focused on AI and cybersecurity.

I found two areas of discussion particularly interesting, which drew varying opinions from the panelists. One, around the topic of AI eliminating jobs and thoughts on how AI may change a security practitioner’s job, and two, about the possibility that AI could be misused or perhaps used by malicious actors with unintended negative consequences.

It was a great panel. I enjoyed working with the Meetup folks, and the participants: Allison Miller (Google), Ali Mesdaq (Proofpoint), Terry Ray (Imperva), Randy Dean (Launchpad.ai & Fellowship.ai).

You can read the rest of Tami’s blog here, and also watch a video of the panel.

Smart televisions, talking home assistants, consumer wearables – that’s not the real story of the Internet of Things. While those are fun and get great stories on blogs and morning news reports, the real IoT is the Industrial IoT. That’s where businesses will truly be transformed, with intelligent, connected devices working together to improve services, reduce friction, and disrupt everything. Everything.

According to Grand View Research, the Industrial IoT (IIoT) market will be $933.62 billion by 2025. “The ability of IoT to reduce costs has been the prime factor for its adoption in the industrial sector. However, several significant investment incentives, such as increased productivity, process automation, and time-to-market, have also been boosting this adoption. The falling prices of sensors have reduced the overall cost associated with data collection and analytics,” says the report.

The report continues,

An emerging trend among enterprises worldwide is the transformation of technical focus to improving connectivity in order to undertake data collection with the right security measures in place and with improved connections to the cloud. The emergence of low-power hardware devices, cloud integration, big data analytics, robotics & automation, and smart sensors are also driving IIoT market growth.

Markets and Markets

Markets & Markets predicts that IIoT will be worth $195.47 billion by 2022. The company says,

A key influencing factor for the growth of the IIoT market is the need to implement predictive maintenance techniques in industrial equipment to monitor their health and avoid unscheduled downtimes in the production cycle. Factors driving the IIoT market include technological advancements in the semiconductor and electronics industry and the evolution of cloud computing technologies.

The manufacturing vertical is witnessing a transformation through the implementation of the smart factory concept and factory automation technologies. Government initiatives such as Industrie 4.0 in Germany and Plan Industriel in France are expected to promote the implementation of IIoT solutions in Europe. Moreover, leading countries in the manufacturing vertical such as the U.S., China, and India are expected to further expand their manufacturing industries and deploy smart manufacturing technologies to increase the contribution of this vertical to their national GDPs.

The IIoT market for camera systems is expected to grow at the highest rate between 2016 and 2022. Camera systems are mainly used in the retail and transportation verticals. The need for security and surveillance in these sectors is the key reason for the high growth rate of the market for camera systems. In the retail sector, camera systems are used for capturing customer behavior, movement tracking, people counting, and heat mapping. The benefits of installing surveillance systems include safety at the workplace, and the prevention of theft and other losses, sweethearting, and other retail crimes. Video analytics plays a vital role for security purposes in various areas of the transportation sector, including airports, railway stations, and large public places. Intelligent camera systems are also used for traffic monitoring, and incident detection and reporting.

Accenture

The huge consulting firm Accenture says that the IIoT will add $14.2 trillion to the global economy by 2030. That’s not the size of the market, but the overall lift that IIoT will provide. By any measure, that’s staggering. Accenture reports,

Today, the IIoT is helping to improve productivity, reduce operating costs and enhance worker safety. For example, in the petroleum industry, wearable devices sense dangerous chemicals and unmanned aerial vehicles can inspect remote pipelines.

However, the longer-term economic and employment potential will require companies to establish entirely new product and service hybrids that disrupt their own markets and generate fresh revenue streams. Many of these will underpin the emergence of the “outcome economy,” where organizations shift from selling products to delivering measurable outcomes. These may range from guaranteed energy savings in commercial buildings to guaranteed crop yields in a specific parcel of farmland.

IIoT Is a Work in Progress

The IIoT is going to have a huge impact. But it hasn’t yet, not on any large scale. As Accenture says,

When Accenture surveyed more than 1,400 C-suite decision makers—including 736 CEOs—from some of the world’s largest companies, the vast majority (84 percent) believe their organizations have the capability to create new, service-based income streams from the IIoT.

But scratch beneath the surface and the gloss comes off. Seventy-three percent confess that their companies have yet to make any concrete progress. Just 7 percent have developed a comprehensive strategy with investments to match.

Challenge and opportunity: That’s the Industrial Internet of Things. Watch this space.

The bad news: There are servers used in serverless computing. Real servers, with whirring fans and lots of blinking lights, installed in racks inside data centers inside the enterprise or up in the cloud.

The good news: You don’t need to think about those servers in order to use their functionality to write and deploy enterprise software. Your IT administrators don’t need to provision or maintain those servers, or think about their processing power, memory, storage, or underlying software infrastructure. It’s all invisible, abstracted away.

The whole point of serverless computing is that there are small blocks of code that do one thing very efficiently. Those blocks of code are designed to run in containers so that they are scalable, easy to deploy, and able to run in basically any computing environment. The open Docker platform has become the de facto industry standard for containers, and as a general rule, developers are seeing the benefits of writing code that targets Docker containers, instead of, say, Windows servers or Red Hat Linux servers or SuSE Linux servers, or any other specific run-time environment. Docker can be hosted in a data center or in the cloud, and containers can be easily moved from one Docker host to another, adding to the platform’s appeal.

Currently, applications written for Docker containers still need to be managed by enterprise IT developers or administrators. That means deciding where to create the containers, ensuring that the container has sufficient resources (like memory and processing power) for the application, actually installing the application into the container, running/monitoring the application while it’s running, and then adding more resources if required. Helping do that is Kubernetes, an open container management and orchestration system for Docker. So while containers greatly assist developers and admins in creating portable code, the containers still need to be managed.

That’s where serverless comes in. Developers write their bits of code (such as to read or write from a database, or encrypt/decrypt data, or search the Internet, or authenticate users, or to format output) to run in a Docker container. However, instead of deploying directly to Docker, or using Kubernetes to handle deployment, they write their code as a function, and then deploy that function onto a serverless platform, like the new Fn project. Other applications can call that function (perhaps using a RESTful API) to do the required operation, and the serverless platform then takes care of everything else automatically behind the scenes, running the code when needed, idling it when not needed.
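To make that concrete, here is a minimal sketch in Python of what such a function might look like. The handler signature, event shape, and user store are invented for illustration; a real deployment would use the platform’s own function development kit and a proper identity service.

```python
import json

def handler(event):
    """A single-purpose function: authenticate a user.
    On a serverless platform, this would be invoked on demand
    (e.g., via a RESTful trigger) and idled when not needed."""
    users = {"alice": "s3cret"}  # hypothetical user store
    ok = users.get(event.get("username")) == event.get("password")
    return {"authenticated": ok}

# Simulate the platform invoking the function with a JSON payload
payload = json.loads('{"username": "alice", "password": "s3cret"}')
print(handler(payload))  # -> {'authenticated': True}
```

The appeal is that everything outside the function body, from provisioning to scaling to idling, is the platform’s problem, not yours.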

Read my essay, “Serverless Computing: What It Is, Why You Should Care,” to find out more.

Critical information about 46 million Malaysians was leaked onto the Dark Web. The stolen data included mobile phone numbers from telcos and mobile virtual network operators (MVNOs), prepaid phone numbers, customer details including physical addresses – and even the unique IMEI and IMSI registration numbers associated with SIM cards.

An isolated incident at one rogue carrier? No. The carriers included Altel, Celcom, DiGi, Enabling Asia, Friendimobile, Maxis, MerchantTradeAsia, PLDT, RedTone, TuneTalk, Umobile and XOX; news about the breach was first published 19 October 2017 by a Malaysian online community.

When did the breach occur? According to lowyat.net, “Time stamps on the files we downloaded indicate the leaked data was last updated between May and July 2014 between the various telcos.”

That’s more than three years between theft of the information and its discovery. We have no idea if the carriers had already discovered the losses, and chose not to disclose the breaches.

A huge delay between a breach and its disclosure is not unusual. Perhaps things will change once the General Data Protection Regulation (GDPR) kicks in next year, when organizations must reveal a breach within three days of discovery. That still leaves the question of discovery. It simply takes too long!

According to Mandiant, the global average dwell time (time between compromise and detection) is 146 days. In some areas, it’s far worse: the EMEA region has a dwell time of 469 days. Research from the Ponemon Institute says that it takes an average of 98 days for financial services companies to detect intrusion on their networks, and 197 days in retail. It’s not surprising that the financial services folks do a better job – but three months seems like a very long time.

An article headline from InfoSecurity Magazine says it all: “Hackers Spend 200+ Days Inside Systems Before Discovery.” Verizon’s Data Breach Investigations Report for 2017 has some depressing news: “Breach timelines continue to paint a rather dismal picture — with time-to-compromise being only seconds, time-to-exfiltration taking days, and times to discovery and containment staying firmly in the months camp. Not surprisingly, fraud detection was the most prominent discovery method, accounting for 85% of all breaches, followed by law enforcement which was seen in 4% of cases.”

What Can You Do?

There are two relevant statistics. The first is time-to-discovery, and the other is time-to-disclosure, whether to regulators or customers.

  • Time-to-disclosure is a matter of policy, not technology. There are legal, public-relations, financial (what if the breach happens during a “quiet period” prior to announcing results?), regulatory, and even law-enforcement considerations (what if investigators are laying a trap, and don’t want to tip off the intruders that the breach has been discovered?).
  • Time-to-discovery, on the other hand, is a matter of technology (and the willingness to use it). What doesn’t work? Scanning log files using manual or semi-automated methods. Excel spreadsheets won’t save you here!

What’s needed are comprehensive endpoint monitoring capabilities, coupled with excellent threat intelligence and real-time analytics driven by machine learning. Nothing else can correlate huge quantities of data from such widely disparate sources, and hope to discover outliers based on patterns.
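As a toy illustration of what that analytics layer does, the Python sketch below flags any hour whose event volume strays far from the baseline. The failed-login counts and the three-sigma threshold are invented; real platforms use far richer models and many more signals.

```python
from statistics import mean, stdev

def find_outliers(counts, threshold=3.0):
    """Flag indexes whose value deviates more than `threshold`
    standard deviations from the mean of the series."""
    mu, sigma = mean(counts), stdev(counts)
    return [i for i, c in enumerate(counts) if abs(c - mu) > threshold * sigma]

# 24 hours of failed-login counts from one monitored endpoint (made up)
hourly_failures = [3, 2, 4, 3, 2, 3, 5, 4, 3, 2, 3, 4,
                   3, 2, 3, 4, 3, 2, 3, 4, 250, 3, 2, 3]
print(find_outliers(hourly_failures))  # -> [20]: the 250-failure hour
```

A human scanning raw logs might never notice that spike; a model watching every endpoint, every hour, can surface it immediately.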

Discovery and containment take months, says Verizon. You can’t have containment without discovery. With current methods, we’ve seen that discovery takes months or years, if the breach is ever detected at all. Endpoint monitoring technology, when coupled with machine learning — and with 24×7 managed security service providers — can reduce that to seconds or minutes.

There is no excuse for breaches staying hidden for three years or longer. None. That’s no way to run a business.

Humans can’t keep up. At least, not when it comes to meeting the rapidly expanding challenges inherent to enterprise cybersecurity. There are too many devices, too many applications, too many users, and too many megabytes of log files for humans to make sense of it all. Moving forward, effective cybersecurity is going to be a “Battle of the Bots,” or to put it less dramatically, machine versus machine.

Consider the 2015 breach at the U.S. Government’s Office of Personnel Management (OPM). According to a story in Wired, “The Office of Personnel Management repels 10 million attempted digital intrusions per month—mostly the kinds of port scans and phishing attacks that plague every large-scale Internet presence.” Yet despite sophisticated security mechanisms, hackers managed to steal millions of records on applications for security clearances, personnel files, and even 5.6 million digital images of government employee fingerprints. (In August 2017, the FBI arrested a Chinese national in connection with that breach.)

Traditional security measures are often slow, and potentially ineffective. Take the practice of applying patches and updates to address newfound software vulnerabilities: companies now have too many systems in play for the process of finding and installing patches to be handled manually.

Another practice that can’t be handled manually: Scanning log files to identify abnormalities and outliers in data traffic. While there are many excellent tools for reviewing those files, they are often slow and aren’t good at aggregating logs across disparate silos (such as a firewall, a web application server, and an Active Directory user authentication system). Thus, results may not be comprehensive, patterns may be missed, and results of deep analysis may not be returned in real time.
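To show what aggregating across silos might look like, this sketch joins hypothetical firewall alerts with failed-login records and flags source IPs that appear in both within a short window. The record layout and the 60-second window are assumptions, not any product’s format.

```python
from collections import defaultdict

def correlate(firewall_events, auth_failures, window=60):
    """Return source IPs that triggered both a firewall alert and a
    failed login within `window` seconds of each other."""
    alerts = defaultdict(list)
    for ip, t in firewall_events:
        alerts[ip].append(t)
    suspicious = {ip for ip, t in auth_failures
                  if any(abs(t - ft) <= window for ft in alerts.get(ip, []))}
    return sorted(suspicious)

fw = [("10.0.0.5", 1000), ("10.0.0.9", 5000)]    # (ip, timestamp)
auth = [("10.0.0.5", 1030), ("10.0.0.7", 1100)]
print(correlate(fw, auth))  # -> ['10.0.0.5']
```

Neither log is alarming on its own; it’s the correlation across the two silos that surfaces the suspect.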

Read much more about this in my new essay, “Machine Versus Machine: The New Battle For Enterprise Cybersecurity.”

Still no pastrami sandwich. Still no guinea pig. What’s the deal with the cigarette?

I installed iOS 11.1 yesterday, tantalized by Apple’s boasting of tons of new emoji. Confession: Emoji are great fun. Guess what I looked for right after the software install completed?

Many of the 190 new emoji are skin-tone variations on new or existing people or body parts. That’s good: Not everyone is yellow, like the Simpsons. (If you don’t count the different skin-tone versions, there are about 70 new graphics.)

New emoji that I like:

  • Steak. Yum!
  • Shushing finger face. Shhhh!
  • Cute hedgehog. Awww!
  • Scottish flag. Och aye!

What’s still stupidly missing:

  • Pastrami sandwich. Sure, there’s a new sandwich emoji, but it’s not a pastrami sandwich. Boo.
  • There’s a cheeseburger (don’t get me started on the cheese top/bottom debate), but nothing for those who don’t put cheese on their burgers at all. Grrrr.
  • Onion rings. They’ve got fries, but no rings. Waah.
  • Coffee with creamer. I don’t drink my coffee black. Bleh.
  • Guinea pig. That’s our favorite pet, but no cute little Caviidae in the emoji. Wheek!

I still don’t like the cigarette emoji, but I guess once they added it in 2015, they couldn’t delete it.

Here is a complete list of all the emoji, according to PopSugar. What else is missing?

You want to read Backlinko’s “The Definitive Guide To SEO In 2018.” Backlinko is an SEO consultancy founded by Brian Dean. The “Definitive Guide” is a cheerfully illustrated infographic – a lengthy infographic – broken up into several useful chapters:

  • RankBrain & User Experience Signals
  • Become a CTR Jedi
  • Comprehensive, In-Depth Content Wins
  • Get Ready for Google’s Mobile-first Index
  • Go All-In With Video (Or Get Left Behind)
  • Pay Attention to Voice Search
  • Don’t Forget: Content and Links Are Key
  • Quick Tips for SEO in 2018

Some of these sections had advice that I knew; others were pretty much new to me, such as the voice search section. I’ll also admit to being very out-of-date on how Google’s ranking system works; it changes often, and my last deep dive was circa 2014. Oops.

The advice in this document is excellent and well-explained. For example, on RankBrain:

Last year Google announced that RankBrain was their third most important ranking factor: “In the few months it has been deployed, RankBrain has become the third-most important signal contributing to the result of a search query.”

And as Google refines its algorithm, RankBrain is going to become even MORE important in 2018. The question is: What is RankBrain, exactly? And how can you optimize for it?

RankBrain is a machine learning system that helps Google sort their search results. That might sound complicated, but it isn’t. RankBrain simply measures how users interact with the search results… and ranks them accordingly.

The document then goes into a very helpful example, digging into the concept of Dwell Time (that is, how long someone spends on the page). The “Definitive Guide” also provides some very useful metrics about targets for click-through rate (CTR), dwell time, length and depth of content, and more. For example, the document says,

One industry study found that organic CTR is down 37% since 2015. It’s no secret why: Google is crowding out the organic search results with Answer Boxes, Ads, Carousels, “People also ask” sections, and more. And to stand out, your result needs to scream “click on me!”…or else it’ll be ignored.

All of the advice is good, but of course, I’m not always going to follow it. For example, the “Definitive Guide” says:

How can you write the type of in-depth content that Google wants to see? First, publish content that’s at least 2,000 words. That way, you can cover everything a Google searcher needs to know about that topic. In fact, our ranking factors study found that longer content (like ultimate guides and long-form blog posts) outranked short articles in Google.

Well, this post isn’t even close to 2,000 words. Oops. Read the “Definitive Guide,” you’ll be glad you did.

Software developers and testers must be sick of hearing security nuts rant, “Beware SQL injection! Monitor for cross-site scripting! Watch for hijacked session credentials!” I suspect the developers tune us out. Why? Because we’ve been raving about the same defects for most of their careers. Truth is, though, the same set of major security vulnerabilities persists year after year, decade after decade.

The industry has generated newer tools, better testing suites, Agile methodologies, and other advances in writing and testing software. Despite all that, coders keep making the same dumb mistakes, peer reviews keep missing those mistakes, test tools fail to catch those mistakes, and hackers keep finding ways to exploit those mistakes.
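To see why SQL injection in particular refuses to die, compare a vulnerable query with its parameterized equivalent. This sketch uses Python’s built-in sqlite3 module; the table and the hostile input are contrived for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

name = "alice' OR '1'='1"  # hostile input from a web form

# Vulnerable: string concatenation lets the input rewrite the query
rows = conn.execute(
    "SELECT role FROM users WHERE name = '" + name + "'").fetchall()
print(rows)  # -> [('admin',)] -- the injection succeeded

# Safe: a parameterized query treats the input as data, never as SQL
rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (name,)).fetchall()
print(rows)  # -> [] -- no user is literally named that
```

The safe version is no harder to write than the vulnerable one, which is exactly why its continued absence from production code is so maddening.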

One way to see the repeat offenders is to look at the OWASP Top 10. That’s a sometimes controversial ranking of the 10 primary vulnerabilities, published every three or four years by the Open Web Application Security Project.

The OWASP Top 10 list is not controversial because it’s flawed. Rather, some believe that the list is too limited. By focusing only on the top 10 web code vulnerabilities, they assert, it causes neglect of the long tail. What’s more, there’s often jockeying in the OWASP community about the Top 10 ranking and whether the 11th- or 12th-ranked items belong in the list instead of something else. There’s merit to those arguments, but for now, the OWASP Top 10 is an excellent common ground for discussing security-aware coding and testing practices.

Note that the top 10 list doesn’t directly represent the 10 most common attacks. Rather, it’s a ranking of risk. There are four factors used for this calculation. One is the likelihood that applications would have specific vulnerabilities; that’s based on data provided by companies. That’s the only “hard” metric in the OWASP Top 10. The other three risk factors are based on professional judgement.

It boggles the mind that a majority of top 10 issues appear across the 2007, 2010, 2013, and draft 2017 OWASP lists. That doesn’t mean that these application security vulnerabilities have to remain on your organization’s list of top problems, though—you can swat those flaws.

Read more in my essay, “The OWASP Top 10 is killing me, and killing you!”

Apply patches. Apply updates. Those are considered to be among the lowest-hanging of the low-hanging fruit for IT cybersecurity. When commercial products release patches, download and install the code right away. When open-source projects disclose a vulnerability, do the appropriate update as soon as you can, everyone says.

A problem is that there are so many patches and updates. They’re found in everything from device firmware to operating systems, from back-end server software to mobile apps. Even discovering all the patches is a huge effort. You have to know:

  • All the hardware and software in your organization — so you can scan the vendors’ websites or emails for update notices. This may include the data center, the main office, remote offices, and employees’ homes. Oh, and rogue software installed without IT’s knowledge.
  • The versions of all the hardware and software instances — so you can tell which updates apply to you, and which don’t. Sometimes there may be an old version somewhere that’s never been patched.
  • The dependencies. Installing a new operating system may break some software. Installing a new version of a database may require changes on a web application server.
  • The location of each of those instances — so you can know which ones need patching. Sometimes this can be done remotely, but other times may require a truck roll.
  • The administrator access links, usernames, and passwords — hopefully, those are not set to “admin/admin.” The downside of changing default admin passwords is that you have to remember the new ones. Sure, sometimes you can make changes with, say, any Active Directory user account with the proper privileges. That won’t help you, though, with most firmware or mobile devices.

The above steps are merely for discovering the issue and the existence of a patch. You haven’t protected anything until you’ve installed the patch, which often (but not always) requires taking the hardware, software, or service offline for minutes or hours. That requires scheduling. And inconvenience. Even with patch-management tools (and there are many available), too much of that low-hanging fruit can be overlooked.
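The discovery step boils down to matching an asset inventory against vendor advisories. Here is a deliberately tiny Python sketch of that matching; the hosts, packages, and fixed-in versions are all invented, and real patch-management tools do this at vastly larger scale.

```python
def parse_version(v):
    """Turn '1.0.2' into (1, 0, 2) so versions compare numerically."""
    return tuple(int(x) for x in v.split("."))

def needs_patch(inventory, advisories):
    """List (host, package) pairs running below the fixed-in version."""
    return [(host, pkg) for host, pkg, ver in inventory
            if pkg in advisories
            and parse_version(ver) < parse_version(advisories[pkg])]

inventory = [("web01", "openssl", "1.0.2"),   # (host, package, version)
             ("web02", "openssl", "1.1.1"),
             ("db01", "postgres", "9.6")]
advisories = {"openssl": "1.1.1"}             # package -> fixed-in version
print(needs_patch(inventory, advisories))  # -> [('web01', 'openssl')]
```

The logic is trivial; what’s hard, as the bullet list above shows, is assembling an inventory that is actually complete.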

You Can’t Wait for That Downtime Window

Oracle founder Larry Ellison made important points about patching during his keynote at OpenWorld 2017,

Our data centers are enormously complicated. There are lots of servers and storage and operating systems, virtual machines, containers and databases, data stores, file systems. And there are thousands of them, tens of thousands, hundreds of thousands of them. It’s hard for people to locate all these things and patch them. They have to be aware there’s a vulnerability. It’s got to be an automated process.

You can’t wait for a downtime window, where you say, “Oh, I can’t take the system down. I know I’ve got to patch this, but we have scheduled downtime middle of next month.” Well, that’s wrong thinking and that’s kind of lack of priority for security.

All that said, patching and updating must be a priority. Dr. Ron Layton, Deputy Assistant Director of the U.S. Secret Service, said at the NetEvents Global Press Summit, September 2017:

Most successful hacks and breaches – most of them – were because low-level controls were not in place. That’s it. That’s it. Patch management. It’s the low-level stuff that will get you to the extent that the bad guys will say, I’m not going to go here. I’m going to go somewhere else. That’s it.

The Scale of Security Issues Is Huge

I receive regular emails from various defect-tracking and patch-awareness lists. Here’s one weekly sample from the CERT teams at the U.S. Dept. of Homeland Security. IT pros won’t be surprised at how large it is: https://www.us-cert.gov/ncas/bulletins/SB17-296

There are 25 high-severity vulnerabilities on this list, most from Microsoft, some from Oracle. There are lots of medium-severity vulnerabilities from Microsoft, OpenText, Oracle, and WPA – the latter being the widely reported bug in Wi-Fi Protected Access. In addition, there are a few low-severity vulnerabilities, and then page after page of those labeled “severity not yet assigned.” The list goes on and on, even hitting infrastructure products from Cisco and F5. And lots more Wi-Fi issues.

This is a typical week – and not all the vulnerabilities in the CERT report have patches yet. CERT is only one source, by the way. Want more? Here’s a list of security-related updates from Apple. Here’s a list of security updates from Juniper Networks. A list from Microsoft. And Red Hat, too.

So: When security analysts say that enterprises merely need to keep up with patches and fixes, well, yes, that’s the low-hanging fruit. However, nobody talks about how much of that low-hanging fruit there is. In an enterprise, the amount is overwhelming. No wonder some rotten fruit slips through the cracks.

Open source software (OSS) offers many benefits for organizations large and small—not the least of which is the price tag, which is often zero. Zip. Nada. Free-as-in-beer. Beyond that compelling price tag, what you often get with OSS is a lack of a hidden agenda. You can see the project, you can see the source code, you can see the communications, you can see what’s going on in the support forums.

When OSS goes great, everyone is happy, from techies to accounting teams. Yes, the legal department may want to scrutinize the open source license to make sure your business is compliant, but in most well-performing scenarios, the lawyers are the only ones frowning. (But then again, the lawyers frown when scrutinizing commercial closed-source software license agreements too, so you can’t win.)

The challenge with OSS is that it can be hard to manage, especially when something goes wrong. Depending on the open source package, there can be a lot of mysteries, which can make ongoing support, including troubleshooting and performance tuning, a real challenge. That’s because OSS is complex.

It’s not like you can say, well, here’s my Linux distribution on my server. Oh, and here’s my open source application server, and my open source NoSQL database, and my open source log suite. In reality, those bits of OSS may be from separate OSS projects, which may (or may not) have been tested for how well they work together.

A separate challenge is that because OSS is often free-as-in-beer, the software may not be in the corporate inventory. That’s especially common if the OSS is in the form of a library or an API that might be built into other applications you’ve written yourself. The OSS might be invisible but with the potential to break or cause problems down the road.

You can’t manage what you don’t know about

When it comes to OSS, there may be a lot you don’t know about, such as those license terms or interoperability gotchas. Worse, there can be maintenance issues — and security issues. Ask yourself: Does your organization know all the OSS it has installed on servers on-prem or in the cloud? Coded into custom applications? Are you sure that all patches and fixes have been installed (and installed correctly), even on virtual machine templates, and that there are no security vulnerabilities?
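A first step toward answering those questions is simply enumerating what’s declared. As a minimal sketch, assuming a Python shop with a hypothetical requirements.txt, this fragment lists the declared dependencies and flags any without a pinned version, since an unpinned package can’t be reliably matched against vulnerability advisories.

```python
import re

def audit_requirements(text):
    """Split a requirements-style file into pinned and unpinned
    dependencies, ignoring blank lines and comments."""
    pinned, unpinned = [], []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        m = re.match(r"[A-Za-z0-9_.-]+==\S+$", line)
        (pinned if m else unpinned).append(line)
    return pinned, unpinned

reqs = """\
requests==2.18.4
flask
# internal tooling
lxml==4.1.0
"""
print(audit_requirements(reqs))
# -> (['requests==2.18.4', 'lxml==4.1.0'], ['flask'])
```

Even this toy version illustrates the principle: until a dependency appears in an inventory with a version attached, no one can tell whether it’s patched.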

In my essay “The six big gotchas: The impact of open source on data centers,” we’ll dig into the key topics: License management, security, patch management, maximizing uptime, maximizing performance, and supporting the OSS.

There are two popular ways of migrating enterprise assets to the cloud:

  1. Write new cloud-native applications.
  2. Lift-and-shift existing data center applications to the cloud.

Gartner’s definition: “Lift-and-shift means that workloads are migrated to cloud IaaS in as unchanged a manner as possible, and change is done only when absolutely necessary. IT operations management tools from the existing data center are deployed into the cloud environment largely unmodified.”

There’s no wrong answer, no wrong way of proceeding. Some data center applications (including servers and storage) may be easier to move than others. Some cloud-native apps may be easier to write than others. Much depends on how much interconnectivity there is between the applications and other software; that’s why, for example, public-facing websites are often easiest to move to the cloud, while tightly coupled internal software, such as inventory control or factory-floor automation, can be trickier.

That’s why in some cases, a hybrid strategy is best. Some parts of the applications are moved up to the cloud, while others remain in the data centers, with SD-WANs or other connectivity linking everything together in a secure manner.

In other words, no one size fits all. And no one timeframe fits all, especially when it comes to lifting-and-shifting.

SaaS? PaaS? It Depends.

A recent survey from the Oracle Applications User Group (OAUG) showed that 70% of respondents who have plans to adopt Oracle Cloud solutions will do so in the next three years. About 35% plan to implement Software-as-a-Service (SaaS) solutions to run with their existing Oracle on-premises installations, and 29% plan to use Platform-as-a-Service (PaaS) services to accelerate software development efforts in the next 12 months.

Joe Paiva, CIO of the U.S. Commerce Department’s International Trade Administration (ITA), is a fan of lift-and-shift. He said at a cloud conference that “Sometimes it makes sense because it gets you there. That was the key. We had to get there because we would be no worse off or no better off, and we were still spending a lot of money, but it got us to the cloud. Then we started doing rationalization of hardware and applications, and dropped our bill to Amazon by 40 percent compared to what we were spending in our government data center. We were able to rationalize the way we use the service.” Paiva estimates government agencies could save 5%-15% using lift-and-shift.

The benefits of moving existing workloads to the cloud are almost entirely financial. If you can shut down a data center and pay less to run the application in the cloud, it can be a good short-term solution with immediate ROI. Gartner cautions, however, that lift and shift “generally results in little created value. Plus, it can be a more expensive option and does not deliver immediate cost savings.” Much depends on how much it costs to run that application today.

A Multi-Track Process for Cloud Migration

The real benefits of new cloud development and deployment architectures take time to realize. For many organizations, there may be a multi-track process:

First track: Lift-and-shift existing workloads that are relatively easy to migrate, while simultaneously writing cloud-native applications for new projects. Those provide the biggest and fastest return on investment, while leaving data center workloads in place and untouched.

Second track: Write cloud-native applications for the remaining data-center workloads, the ones impractical to migrate in their existing form. This will be slower, but the payoff is the ability to turn off some or all existing data centers, eliminating their associated expenses, such as power and cooling, bandwidth, and physical space.

Third track: At some point, revisit the lifted-and-shifted workloads to see which would significantly benefit from being rewritten as cloud-native apps. Unless there is an order of magnitude increase in efficiency, or significant added functionality, the financial returns won’t be high – or may be nonexistent. For some applications, it may never make sense to redesign and rewrite them in a cloud-native way. So, those old enterprise applications may live on for years to come.

About a decade ago, I purchased a piece of a mainframe on eBay — the name ID bar. Carved from a big block of aluminum, it says “IBM System/370 168,” and it hangs proudly over my desk.

My time on mainframes was exclusively with the IBM System/370 series. With a beautiful IBM 3278 color display terminal on my desk, and, later, a TeleVideo 925 terminal and an acoustic coupler at home, I was happier than anyone had a right to be.

We refreshed our hardware often. The latest variant I worked on was the System/370 4341, introduced in early 1979, which ran faster and cooler than the slower, very costly 3031 mainframes we had before. I just found this on the IBM archives: “The 4341, under a 24-month contract, can be leased for $5,975 a month with two million characters of main memory and for $6,725 a month with four million characters. Monthly rental prices are $7,021 and $7,902; purchase prices are $245,000 and $275,000, respectively.” And we had three, along with tape drives, disk drives (in IBM-speak, DASD, for Direct Access Storage Devices), and high-speed line printers. Not cheap!

Our operating system on those systems was called Virtual Machine, or VM/370. It consisted of two parts: the Control Program (CP) and the Conversational Monitor System (CMS). CP was the timesharing operating system – in modern virtualization terms, the hypervisor running on the bare metal. CMS was the environment users logged into, providing access not only to a text-based command console, but also to file storage and a library of tools, such as compilers. (We often referred to the platform as CP/CMS.)

Thanks to VM/370, each user believed she had access to a 100% dedicated and isolated System/370 mainframe, with every resource available and virtualized. (For example, she appeared to have dedicated access to tape drives, but they appeared non-functional if her tapes weren’t loaded, or if she hadn’t bought access to the drives.)

My story about mainframes isn’t just reminiscing about the time of dinosaurs. When programming those computers, which I did full-time in the late 1970s and early 1980s, I learned a lot of lessons that are very applicable today. Read all about that in my article for HP Enterprise Insights, “4 lessons for modern software developers from 1970s mainframe programming.”

To get the most benefit from the new world of cloud-native server applications, forget about the old way of writing software. In the old model, architects designed software. Programmers wrote the code, and testers tested it on a test server. Once the testing was complete, the code was “thrown over the wall” to administrators, who installed the software on production servers, and who essentially owned the applications moving forward, only going back to the developers if problems occurred.

The new model, which appeared about 10 years ago, is called “DevOps,” a blend of “development” and “operations.” In the DevOps model, architects, developers, testers, and administrators collaborate much more closely to create and manage applications. Specifically, developers play a much broader role in the day-to-day administration of deployed applications, and use information about how the applications are running to tune and enhance them.

The involvement of developers in administration made DevOps perfect for cloud computing. Because administrators had fewer responsibilities (i.e., no hardware to worry about), it was less threatening for those developers and administrators to collaborate as equals.

Change matters

In that old model of software development and deployment, developers were always change agents. They created new stuff, or added new capabilities to existing stuff. They embraced change, including new technologies – and the faster they created change (i.e., wrote code), the more competitive their business.

By contrast, administrators are tasked with maintaining uptime, while ensuring security. Change is not a virtue to those departments. While admins must accept change as they install new applications, it’s secondary to maintaining stability. Indeed, admins could push back against deploying software if they believed those apps weren’t reliable, or if they might affect the overall stability of the data center as a whole.

With DevOps, everyone can embrace change. One way that works, with cloud computing, is by reducing the risk that an unstable application can damage system reliability. In the cloud, applications can be built and deployed on bare-metal servers (as in a data center), or in virtual machines or containers.

DevOps works best when software is deployed in VMs or containers, since those are isolated from other systems – thereby reducing risk. Turns out that administrators do like change, if there’s minimal risk that changes will negatively affect overall system reliability, performance, and uptime.

Benefits of DevOps

Goodbye, CapEx; hello, OpEx. Cloud computing moves enterprises from capital-expense data centers (which must be built, electrified, cooled, networked, secured, stocked with servers, and refreshed periodically) to operational-expense services (where the business pays monthly for the processors, memory, bandwidth, and storage reserved and/or consumed). When you couple those benefits with virtual machines, containers, and DevOps, you get:

  • Easier Maintenance: It can be faster to apply patches and fixes to software in virtual machines – and use snapshots to roll back if needed.
  • Better Security: Cloud platform vendors offer some security monitoring tools, and it’s relatively easy to install top-shelf protections like next-generation firewalls – themselves offered as cloud services.
  • Improved Agility: Studies show that the process of designing, coding, testing, and deploying new applications can be 10x faster than traditional data center methods, because the cloud reduces provisioning delays and provides robust resources.
  • Lower Cost: Vendors such as Amazon, Google, Microsoft, and Oracle are aggressively lowering prices to gain market share — and in many cases, those prices are an order of magnitude below what it would cost to provision an enterprise data center.
  • Massive Scale: Need more power? Need more bandwidth? Need more storage? Push a button, and the resources are live. If those needs are short-term, you can turn the dials back down, to lower the monthly bill. You can’t do that in a data center.

Rapidly evolving

The technologies used in creating cloud-native applications are evolving rapidly. Containers, for example, are relatively new, yet are becoming incredibly popular because they require 4x-10x fewer resources than VMs – thereby slashing OpEx costs even further. Software development and management tools, like Kubernetes (for orchestration of multiple containers), Chef (which makes it easy to manage cloud infrastructure), Puppet (which automates pushing out cloud service configurations), and OpenWhisk (which strips down cloud services to “serverless” basics) push the revolution farther.

DevOps is more important than the meaningless “developer operations” moniker implies. It’s a whole new, faster way of doing computing with cloud-native applications. Because rapid change means everything in achieving business agility, everyone wins.

AOL Instant Messenger will be dead before the end of 2017. Yet, instant messages have succeeded far beyond what anyone could have envisioned for either SMS (Short Message Service, carried by the phone company) or AOL, which arguably brought instant messaging to regular computers starting in 1997.

It would be wonderful to claim that there’s some great significance in the passing of AIM. However, my guess is that there simply wasn’t any business benefit to maintaining a service that almost nobody used. The AIM service was said to carry far less than 1% of all instant messages across the Internet… and that was in 2011.

I have an AIM account, and although it’s linked into my Apple Messages client, I had completely forgotten about it. Yes, there was a little flurry of news back in March 2017, when AOL began closing APIs and shutting down some third-party AIM applications. However, that didn’t resonate. Then, on Oct. 6, came the email from AOL’s new corporate overlord, Oath, a subsidiary of Verizon:

Dear AIM user,

We see that you’ve used AOL Instant Messenger (AIM) in the past, so we wanted to let you know that AIM will be discontinued and will no longer work as of December 15, 2017.

Before December 15, you can continue to use the service. After December 15, you will no longer have access to AIM and your data will be deleted. If you use an @aim.com email address, your email account will not be affected and you will still be able to send and receive email as usual.

We’ve loved working on AIM for you. From setting the perfect away message to that familiar ring of an incoming chat, AIM will always have a special place in our hearts. As we move forward, all of us at AOL (now Oath) are excited to continue building the next generation of iconic brands and life-changing products for users around the world.

You can visit our FAQ to learn more. Thank you for being an AIM user.

Sincerely,

The AOL Instant Messenger team

Interestingly, my wife, who also has an AIM account but never uses it, thought that the message above was a phishing scam of some sort. So, AIM is dead. But not instant messaging, which is popular with both consumers and business users, on desktops/notebooks and smartphones. There are many clients that consumers can use; according to Statista’s January 2017 ranking of the leading services by millions of active monthly users, AIM didn’t make the list.

Then there are the corporate instant message platforms, Slack, Lync, and Symphony. And we’re not even talking about social media, like Twitter, Google+, Kik, and Instagram. So: instant messaging is alive and well. AIM was the pioneer, but it ceased being relevant a long, long time ago.

Despite Elon Musk’s warnings this summer, there’s not a whole lot of reason to lose any sleep worrying about Skynet and the Terminator. Artificial Intelligence (AI) is far from becoming a maleficent, all-knowing force. The only “Apocalypse” on the horizon right now is an overreliance by humans on machine learning and expert systems, as demonstrated by the deaths of Tesla owners who took their hands off the wheel.

Examples of what currently pass for “Artificial Intelligence” — technologies such as expert systems and machine learning — are excellent tools for creating software. AI is truly valuable in contexts that involve pattern recognition, automated decision-making, and human-to-machine conversations. Both types of AI have been around for decades. And both are only as good as the source information they are based on. For that reason, it’s unlikely that AI will replace human judgment on important tasks requiring decisions more complex than “yes or no” any time soon.

Expert systems, also known as rule-based or knowledge-based systems, are computers programmed with explicit rules written down by human experts. The computers can then run the same rules, but much faster and 24×7, to come up with the same conclusions as the human experts. Imagine asking an oncologist how she diagnoses cancer and then programming medical software to follow those same steps. For a particular diagnosis, an oncologist can study which of those rules were activated to validate that the expert system is working correctly.

However, it takes a lot of time and specialized knowledge to create and maintain those rules, and extremely complex rule systems can be difficult to validate. Needless to say, expert systems can’t function beyond their rules.
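As a sketch of the idea (the rule names and thresholds here are hypothetical illustrations, not real medical criteria), a rule-based system boils down to explicit, human-authored tests whose firing can be audited:

```python
# A minimal rule-based (expert system) sketch. The rules and thresholds
# below are invented for illustration, not real diagnostic criteria.

def diagnose(findings):
    """Apply explicit, human-authored rules and report which ones fired."""
    rules = [
        ("tumor_size_cm > 2", lambda f: f.get("tumor_size_cm", 0) > 2),
        ("lymph_nodes_involved", lambda f: f.get("lymph_nodes", 0) > 0),
        ("biomarker_positive", lambda f: f.get("biomarker", False)),
    ]
    fired = [name for name, test in rules if test(findings)]
    # The conclusion is itself a rule: flag for review if two rules fire
    conclusion = "refer for biopsy" if len(fired) >= 2 else "routine follow-up"
    return conclusion, fired

conclusion, fired = diagnose({"tumor_size_cm": 3.1, "lymph_nodes": 1})
print(conclusion, fired)  # prints the conclusion plus the auditable rule list
```

The `fired` list is the key property: just as with the oncologist above, a human can inspect exactly which rules led to the conclusion.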

By contrast, machine learning allows computers to come to a decision—but without being explicitly programmed. Instead, they are shown hundreds or thousands of sample data sets and told how they should be categorized, such as “cancer | no cancer,” or “stage 1 | stage 2 | stage 3 cancer.”
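A minimal sketch of that approach, using a nearest-centroid classifier trained on synthetic two-feature samples (all numbers invented for illustration):

```python
# Machine learning in miniature: instead of explicit rules, the model
# learns per-label centroids from labeled examples, then classifies new
# inputs by proximity. Data is synthetic, purely for illustration.

def train(samples):
    """samples: list of (features, label). Returns per-label centroids."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {lab: [s / counts[lab] for s in vec] for lab, vec in sums.items()}

def predict(centroids, features):
    """Return the label whose centroid is closest (squared distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(centroids, key=lambda lab: dist(centroids[lab]))

training = [
    ([3.0, 2.5], "cancer"), ([2.8, 3.1], "cancer"),
    ([0.5, 0.4], "no cancer"), ([0.7, 0.2], "no cancer"),
]
model = train(training)
print(predict(model, [2.9, 2.7]))  # -> cancer
```

Note what’s missing: no human wrote a rule saying what “cancer” looks like; the categorization is induced entirely from the labeled samples, which is why the quality of the training data matters so much.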

Read more about this, including my thoughts on machine learning, pattern recognition, expert systems, and comparisons to human intelligence, in my story for Ars Technica, “Never mind the Elon—the forecast isn’t that spooky for AI in business.”

Long after intruders are removed and public scrutiny has faded, the impacts from a cyberattack can reverberate over a multi-year timeline. Legal costs can cascade as stolen data is leveraged in various ways over time; it can take years to recover pre-incident growth and profitability levels; and brand impact can play out in multiple ways.

That’s from a Deloitte report, “Beneath the surface of a cyberattack: A deeper look at business impacts,” released in late 2016. The report’s contents, and other statements on cyber security from Deloitte, are ironic given the company’s huge breach reported this week.

The big breach

The Deloitte breach was reported on Monday, Sept. 25. It appears to have leaked confidential emails and financial documents of some of its clients. According to the Guardian,

The Guardian understands Deloitte clients across all of these sectors had material in the company email system that was breached. The companies include household names as well as US government departments. So far, six of Deloitte’s clients have been told their information was “impacted” by the hack. Deloitte’s internal review into the incident is ongoing. The Guardian understands Deloitte discovered the hack in March this year, but it is believed the attackers may have had access to its systems since October or November 2016.

The Guardian asserts that hackers gained access to Deloitte’s global email server via an administrator’s account protected by only a single password. Without two-factor authentication, hackers could gain entry via any computer, as long as they guessed the right password (or obtained it via hacking, malware, or social engineering). The story continues,

In addition to emails, the Guardian understands the hackers had potential access to usernames, passwords, IP addresses, architectural diagrams for businesses and health information. Some emails had attachments with sensitive security and design details.

Okay, the breach was bad. What did Deloitte have to say about these sorts of incidents? Lots.

The Deloitte Cybersecurity Report

In its 2016 report, Deloitte’s researchers pointed to 14 cyberattack impact factors. Half are the directly visible costs of breach incidents; the others can be more subtle or hidden, and potentially never fully understood.

  • The “Above the Surface” incident costs include the expenses of technical investigations, consumer breach notifications, regulatory compliance, attorneys’ fees and litigation, post-breach customer protection, public relations, and cybersecurity protections.
  • Harder to tally are the “Below the Surface” costs of insurance premium increases, increased cost to raise debt, impact of operational disruption/destruction, value of lost contract revenue, devaluation of trade name, loss of intellectual property, and lost value of customer relationships.

As the report says,

Common perceptions about the impact of a cyberattack are typically shaped by what companies are required to report publicly—primarily theft of personally identifiable information (PII), payment data, and personal health information (PHI). Discussions often focus on costs related to customer notification, credit monitoring, and the possibility of legal judgments or regulatory penalties. But especially when PII theft isn’t an attacker’s only objective, the impacts can be even more far-reaching.

Recovery can take a long time, as Deloitte says:

Beyond the initial incident triage, there are impact management and business recovery stages. These stages involve a wide range of business functions in efforts to rebuild operations, improve cybersecurity, and manage customer and third-party relationships, legal matters, investment decisions, and changes in strategic course.

Indeed, asserts Deloitte in the 2016 report, it can take months or years to repair the damage to the business. That includes redesigning processes and assets, and investing in cyber programs to emerge stronger after the incident. But wait, there’s more.

Intellectual Property and Lawsuits

A big part of the newly reported breach is the loss of intellectual property. That’s not necessarily only Deloitte’s IP, but also the IP of its biggest blue-chip customers. About the loss of IP, the 2016 report says:

Loss of IP is an intangible cost associated with loss of exclusive control over trade secrets, copyrights, investment plans, and other proprietary and confidential information, which can lead to loss of competitive advantage, loss of revenue, and lasting and potentially irreparable economic damage to the company. Types of IP include, but are not limited to, patents, designs, copyrights, trademarks, and trade secrets.

We’ll see some of those phrases in lawsuits filed by Deloitte’s customers as they try to get a handle on what hackers may have stolen. Oh, about lawsuits, here’s what the Deloitte report says:

Attorney fees and litigation costs can encompass a wide range of legal advisory fees and settlement costs externally imposed and costs associated with legal actions the company may take to defend its interests. Such fees could potentially be offset through the recovery of damages as a result of assertive litigation pursued against an attacker, especially in regards to the theft of IP. However, the recovery could take years to pursue through litigation and may not be ultimately recoverable, even after a positive verdict in favor of the company. Based on our analysis of publicly available data pertaining to recent consumer settlement cases and other legal costs relating to cyber incidents, we observed that, on average, it could cost companies approximately $10 million in attorney fees, potential settlement of loss claims, and other legal matters.

Who wants to bet that the legal costs from this breach will be significantly higher than $10 million?

Stay Vigilant

The back page of Deloitte’s 2016 report says something important:

To grow, streamline, and innovate, many organizations have difficulty keeping pace with the evolution of cyber threats. The traditional discipline of IT security, isolated from a more comprehensive risk-based approach, may no longer be enough to protect you. Through the lens of what’s most important to your organization, you must invest in cost-justified security controls to protect your most important assets, and focus equal or greater effort on gaining more insight into threats, and responding more effectively to reduce their impact. A Secure. Vigilant. Resilient. cyber risk program can help you become more confident in your ability to reap the value of your strategic investments.

Wise words — too bad Deloitte’s email administrators, SOC teams, and risk auditors didn’t heed them. Or read their own report.

Stupidity. Incompetence. Negligence. The huge, unprecedented data breach at Equifax has dominated the news cycle, infuriating IT managers, security experts, legislators, and attorneys — and scaring consumers. It appears that sensitive personally identifiable information (PII) on 143 million Americans was exfiltrated, as well as PII on some non-US nationals.

There are many troubling aspects. Reports say the tools that consumers can use to see if they are affected by the breach are inaccurate. Articles say that by using those tools, consumers waive their rights to sue Equifax. Some worry that Equifax will actually make money off this breach by selling affected consumers its credit-monitoring services.

Let’s look at the technical aspects, though. While details about the breach are still largely lacking, two bits of information are making the rounds. One is that Equifax followed bad password practices, allowing hackers to easily gain access to at least one server. Another is that there was a flaw in a piece of open-source software; a patch had been available for months, yet Equifax didn’t apply it.

The veracity of those two possible causes is unclear. Even so, they point to a troubling pattern of utter irresponsibility by Equifax’s IT and security operations teams.

Bad Password Practices

Username “admin.” Password “admin.” That’s often the default for hardware, like a home WiFi router. The first thing any owner should do is change both the username and password. Every IT professional knows that. Yet the fine techies at Equifax, or at least their Argentina office, didn’t know that. According to well-known security writer Brian Krebs, earlier this week,

Earlier today, this author was contacted by Alex Holden, founder of Milwaukee, Wisc.-based Hold Security LLC. Holden’s team of nearly 30 employees includes two native Argentinians who spent some time examining Equifax’s South American operations online after the company disclosed the breach involving its business units in North America.

It took almost no time for them to discover that an online portal designed to let Equifax employees in Argentina manage credit report disputes from consumers in that country was wide open, protected by perhaps the most easy-to-guess password combination ever: “admin/admin.”

What’s more, writes Krebs,

Once inside the portal, the researchers found they could view the names of more than 100 Equifax employees in Argentina, as well as their employee ID and email address. The “list of users” page also featured a clickable button that anyone authenticated with the “admin/admin” username and password could use to add, modify or delete user accounts on the system.

and

A review of those accounts shows all employee passwords were the same as each user’s username. Worse still, each employee’s username appears to be nothing more than their last name, or a combination of their first initial and last name. In other words, if you knew an Equifax Argentina employee’s last name, you also could work out their password for this credit dispute portal quite easily.

Idiots.

Patches Are Important, Kids

Apache’s Struts is a well-regarded open source framework for creating Web applications. It’s excellent — I’ve used it myself — but like all software, it can have bugs. One such defect was discovered in March 2017, and was given the name “CVE-2017-5638.” A patch was issued within days by the Struts team. Yet Equifax never installed that patch.

Even so, the company is blaming the U.S. breach on that defect:

Equifax has been intensely investigating the scope of the intrusion with the assistance of a leading, independent cybersecurity firm to determine what information was accessed and who has been impacted. We know that criminals exploited a U.S. website application vulnerability. The vulnerability was Apache Struts CVE-2017-5638. We continue to work with law enforcement as part of our criminal investigation, and have shared indicators of compromise with law enforcement.

Keeping up with vulnerability reports, and applying patches right away, is essential for good security. Everyone knows this. Including, I’m sure, Equifax’s IT team. There is no excuse. Idiots.

HP-35 slide rule calculator

At the current rate of rainfall, when will your local reservoir overflow its banks? If you shoot a rocket at an angle of 60 degrees into a headwind, how far will it fly with 40 pounds of propellant and a 5-pound payload? Assuming a 100-month loan for $75,000 at 5.11 percent, what will the payoff balance be after four years? If a lab culture is doubling every 14 hours, how many viruses will there be in a week?

Those sorts of questions aren’t asked by mathematicians, who are the people who derive equations to solve problems in a general way. Rather, they are asked by working engineers, technicians, military ballistics officers, and financiers, all of whom need an actual number: Given this set of inputs, tell me the answer.
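The loan question above, for instance, is just an amortization formula. Here is a minimal Python sketch, assuming the 5.11 percent is a nominal annual rate compounded monthly:

```python
# Payoff balance on a fixed-rate, fully amortizing loan.
# Assumes the quoted rate is nominal annual, compounded monthly.

def payoff_balance(principal, annual_rate, months, months_paid):
    r = annual_rate / 12.0                           # monthly interest rate
    payment = principal * r / (1 - (1 + r) ** -months)
    grown = principal * (1 + r) ** months_paid       # principal grown by interest
    paid = payment * ((1 + r) ** months_paid - 1) / r  # value of payments made
    return grown - paid

balance = payoff_balance(75_000, 0.0511, 100, 48)    # 4 years = 48 payments
print(f"${balance:,.2f}")  # roughly $43,000 still owed
```

No tables, no slide rule, and the decimal point takes care of itself: exactly the kind of “given these inputs, tell me the answer” problem the early HP calculators were built for.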

Before the modern era (say, the 1970s), these problems could be hard to solve. They required a lot of pencils and paper, a book of tables, or a slide rule. Mathematicians never carried slide rules, but astronauts did, as their backup computers.

However, slide rules had limitations. They were good to about three digits of accuracy, no more, in the hands of a skilled operator. Three digits was fine for real-world engineering, but not enough for finance. With slide rules, you had to keep track of the decimal point yourself: The slide rule might tell you the answer is 641, but you had to know if that was 64.1 or 0.641 or 641.0. And if you were chaining calculations (needed in all but the simplest problems), accuracy dropped with each successive operation.
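That loss of accuracy is easy to simulate: round every intermediate result to three significant digits, as a slide rule effectively forces you to, and watch the chained answer drift from the exact one. A small Python sketch (the sample numbers are arbitrary):

```python
# Simulating slide-rule precision: each intermediate result in a chained
# calculation is limited to three significant digits.
from math import floor, log10

def round_sig(x, digits=3):
    """Round x to the given number of significant digits."""
    if x == 0:
        return 0.0
    return round(x, digits - 1 - floor(log10(abs(x))))

exact = 1.234 * 5.678 / 3.456 * 7.891   # full floating-point precision
approx = 1.234
for op, val in (("*", 5.678), ("/", 3.456), ("*", 7.891)):
    approx = approx * val if op == "*" else approx / val
    approx = round_sig(approx)          # each step limited to 3 digits
print(exact, approx)  # the 3-digit chain drifts away from the exact value
```

Even in this short chain the rounded result disagrees with the exact one in the fourth digit; a longer chain of operations drifts further, which is exactly the accuracy problem the paragraph above describes.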

Everything the slide rule could do, a so-called slide-rule calculator could do better—and more accurately. Slide rules are really good at a few things. Multiplication and division? Easy. Exponents, like 6¹³? Easy. Doing trig, like sines, cosines, and tangents? Easy. Logarithms? Easy.

Hewlett-Packard unleashed a monster when it created the HP-9100A desktop calculator, released in 1968 at a price of about $5,000. The HP-9100A did everything a slide rule could do, and more—such as trig, polar/rectangular conversions, and exponents and roots. However, it was big and it was expensive—about $35,900 in 2017 dollars, or the price of a nice car! HP had a market for the HP-9100A, since it already sold test equipment into many labs. However, something better was needed, something affordable, something that could become a mass-market item. And that became the pocket slide-rule calculator revolution, starting off with the amazing HP-35.

If you look at the HP-35 today, it seems laughably simplistic. The calculator app in your smartphone is much more powerful. However, back in 1972, and at a price of only $395 ($2,350 in 2017 dollars), the HP-35 changed the world. Companies like General Electric ordered tens of thousands of units. It was crazy, especially for a device that had a few minor math bugs in its first shipping batch (HP gave everyone a free replacement).

Read more about early slide-rule calculators — and the more advanced card-programmable models like the HP-65 and HP-67, in my story, “The early history of HP calculators.”

HP-65 and HP-67 card-programmable calculators

When was the last time most organizations discussed the security of their Oracle E-Business Suite? How about SAP S/4HANA? Microsoft Dynamics? IBM’s DB2? Discussions about on-prem server software security too often begin and end with ensuring that operating systems are at the latest level, and are current with patches.

That’s not good enough. Just as clicking on a phishing email or opening a malicious document in Microsoft Word can corrupt a desktop, so too can server applications be vulnerable. When those server applications are involved with customer records, billing systems, inventory, transactions, financials, or human resources, a hack into ERP or CRM systems can threaten an entire organization. Worse, if that hack leveraged stolen credentials, the business may never realize that competitors or criminals are stealing its data, and potentially even corrupting its records.

A new study from the Ponemon Institute points to the potential severity of the problem. Sixty percent of the respondents to the “Cybersecurity Risks to Oracle E-Business Suite” say that information theft, modification of data, and disruption of business processes on their company’s Oracle E-Business Suite applications would be catastrophic. While 70% of respondents said a material security or data breach due to insecure Oracle E-Business Suite applications is likely, 67% believe their top executives are not aware of this risk. (The research was sponsored by Onapsis, which sells security solutions for ERP suites, so apply a little sodium chloride to your interpretation of the study’s results.)

The study’s audience was businesses that rely upon Oracle E-Business Suite. About 24% of respondents said that it was the most critical application they ran, and altogether, 93% said it was one of their top 10 critical applications. Bearing in mind that large businesses run thousands of server applications, that’s saying something.

Yet more than half of respondents (53%) said that it was Oracle’s responsibility to ensure that its applications and platforms are safe and secure. Unless they’ve contracted with Oracle to manage their on-prem applications, and to proactively apply patches and fixes, well, they are delusional.

Another area of delusion: That software must be connected to the Internet to pose a risk. In this study, 52% of respondents agree or strongly agree that “Oracle E-Business applications that are not connected to the Internet are not a security threat.” They’ve never heard of insider threats? Credentials theft? Penetrations of enterprise networks?

What About Non-Oracle Packages?

This Ponemon/Onapsis study represents only one data point. It does not adequately discuss the role of vendors in this space, including ERP/CRM value-added resellers, consultants and MSSPs (managed security service providers). It also doesn’t differentiate between Oracle instances running on-prem compared to the Oracle ERP Cloud – where Oracle does manage all the security.

Surprisingly, packaged software isn’t talked about very often. Given the amount of chatter at most security conferences, on bulletin boards, and the like, packaged applications like these on-prem ERP or CRM suites are rarely a factor in conversations about security. Instead, everyone is seemingly focused on the endpoint, firewalls, and operating systems. Sometimes we’ll see discussions of the various tiers in an n-tier architecture, such as databases, application servers, and presentation systems (like web servers or mobile app back ends).

Another company that offers ERP security, ERPScan, conducted a study with Crowd Research Partners focused on SAP. The “ERP Cybersecurity Study 2017” said that (and I quote from the report on these bullet points):

  • 89% of respondents expect that the number of cyber-attacks against ERP systems will grow in next 12 months.
  • An average cost of a security breach in SAP is estimated at $5m with fraud considered as the costliest risk. A third of organizations assesses the damage of fraudulent actions at more than 10m USD.
  • There is a lack of awareness towards ERP Security, worryingly, even among people who are engaged in ERP Security. One-third of them haven’t even heard about any SAP Security incident. Only 4% know about the episode with the direst consequences – USIS data breach started with an SAP vulnerability, which resulted in the company’s bankruptcy.
  • One of three respondents hasn’t taken any ERP Security initiative yet and is going to do so this year.
  • Cybersecurity professionals are most concerned about protecting customer data (72%), employee data (66%), and emails (54%). Due to this information being stored in different SAP systems (e.g. ERP, HR, or others), they are one of the most important assets to protect.
  • It is still unclear who is in charge of ERP Security: 43% of responders suppose that CIO takes responsibilities, while 28% consider it CISO’s duty.

Of course, we still must secure our operating systems, network perimeters, endpoints, mobile applications, WiFi networks, and so on. Let’s not forget, however, the crucial applications our organizations depend upon. Breaches into those systems could be invisible – and ruinous to any business.

The water is rising up over your desktops, your servers, and your data center. Glug, glug, gurgle.

You’d better hope that the disaster recovery plans included the word “offsite.” Hope the backup IT site wasn’t another local business that’s also destroyed by the hurricane, the flood, the tornado, the fire, or the earthquake.

Disasters are real, as August’s Hurricane Harvey and immense floods in Southeast Asia have taught us all. With tens of thousands of people displaced, it’s hard to rebuild a business. Even with a smaller disaster, like a power outage that lasts a couple of days, the business impact can be tremendous.

I once worked for a company in New York that was hit by a blizzard that snapped the power and telephone lines to the office building. Down went the PBX, down went the phone system and the email servers. Remote workers (I was in California) were massively impaired. Worse, incoming phone calls simply rang and rang; incoming email messages bounced back to the sender.

With that storm, electricity was gone for more than a week, and broadband took additional time to be restored. You’d better believe our first order of business, once we began the recovery phase, was to move our internal Microsoft Exchange Server to a colocation facility with redundant T1 lines, and move our internal PBX to a hosted solution from the phone company. We didn’t like the cost, but we simply couldn’t afford to be shut down again the next time a storm struck.

These days, the answer lies within the cloud, either for primary data center operations, or for the source of a backup. (Forget trying to salvage anything from a submerged server rack or storage system.)

Be very prepared

Are you ready for a disaster? In a February 2017 study conducted by the Disaster Recovery Journal and Forrester Research, “The State Of Disaster Recovery Preparedness 2017,” only 18% of disaster recovery decision makers said they were “very prepared” to recover their data center in the event of a site failure or disaster event. Another 37% were prepared, 34% were somewhat prepared, and 11% not prepared at all.

That’s not good enough if you’re in Houston or Bangladesh or even New York during a blizzard. And that’s clear even among the survey respondents, 43% of whom said there was a business requirement to stay online and competitive 24×7. The cloud is considered to be one option for disaster recovery (DR) planning, but it’s not the only one. Says the study:

DR in the cloud has been a hot topic that has garnered a significant amount of attention during the past few years. Adoption is increasing but at a slow rate. According to the latest survey, 18 percent of companies are now using the cloud in some way as a recovery site – an increase of 3 percent. This includes 10 percent who use a fully packaged DR-as-a-Service (DRaaS) offering and 8 percent who use Infrastructure-as-a-Service (IaaS) to configure their own DR in the cloud configuration. Use of colocation for recovery sites remains consistent at 37 percent (roughly the same as the prior study). However, the most common method of sourcing recovery sites is still in-house at 43 percent.

The study shows that 43% own their recovery site and IT infrastructure. Also, 37% use a colocation site with their own infrastructure, 20% use a shared, fixed-site IT IaaS provider, 10% use a DRaaS offering in the cloud, and only 8% use public cloud IaaS as a recovery site.

For the very largest companies, the public cloud, or even a DRaaS provider, may not be the way to go. If the organization is still maintaining a significant data center (or multiple data centers), the cost and risks of moving to the cloud are significant. Unless a data center is heavily virtualized, it will be difficult to replicate the environment – including servers, storage, networking, and security – at a cloud provider.

For smaller businesses, however, moving to a cloud system is becoming increasingly cost-effective. It’s attractive for scalability and OpEx reasons, and agile for deploying new applications. This month’s hurricanes offer an urgent reason to move away from on-prem or hybrid to a full cloud environment — or at least explore DRaaS. With the right service provider, offering redundancy and portability, the cloud could be the only real hope in a significant disaster.

The more advanced the military technology, the greater the opportunities for intentional or unintentional failure in a cyberwar. As Scotty says in Star Trek III: The Search for Spock, “The more they overthink the plumbing, the easier it is to stop up the drain.”

In the case of a couple of recent accidents involving the U.S. Navy, the plumbing might actually be the computer systems that control navigation. In mid-August, the destroyer U.S.S. John S. McCain rammed into an oil tanker near Singapore. A month or so earlier, a container ship hit the nearly identical U.S.S. Fitzgerald off Japan. Why didn’t those hugely sophisticated ships see the much-larger merchant vessels, and move out of the way?

There has been speculation, and only speculation, that both ships might have been victims of cyber foul play, perhaps as a test of offensive capabilities by a hostile state actor. The U.S. Navy does not consider that possibility likely, and, let’s admit, the odds are against it.

Even so, the military hasn’t dismissed the idea, writes Bill Gertz in the Washington Free Beacon:

On the possibility that China may have triggered the collision, Chinese military writings indicate there are plans to use cyber attacks to “weaken, sabotage, or destroy enemy computer network systems or to degrade their operating effectiveness.” The Chinese military intends to use electronic, cyber, and military influence operations for attacks against military computer systems and networks, and for jamming American precision-guided munitions and the GPS satellites that guide them, according to one Chinese military report.

The data centers of those ships are hardened and well protected. Still, given the sophistication of today’s warfare, what if systems are hacked?

Imagine what would happen if, say, foreign powers were able to break into drones or cruise missiles. This might cause them to crash prematurely, self-destruct, hit a friendly target, or perhaps even “land” and be captured. What about disruptions to fighter aircraft, such as jets or helicopters? Radar systems? Gear carried by troops?

It’s a chilling thought. It reminds me that many gun owners in the United States, including law enforcement officers, don’t like so-called “smart” pistols that require fingerprint matching before they can fire – because those systems might fail in a crisis, or if the weapon is dropped or becomes wet, leaving the police officer effectively unarmed.

The Council on Foreign Relations published a blog post by David P. Fidler, “A Cyber Norms Hypothetical: What If the USS John S. McCain Was Hacked?” In the post, Fidler says, “The Fitzgerald and McCain accidents resulted in significant damage to naval vessels and deaths and injuries to sailors. If done by a foreign nation, then hacking the navigation systems would be an illegal use of force under international law.”

Fidler believes this could lead to a real shooting war:

In this scenario, the targets were naval vessels not merchant ships, which means the hacking threatened and damaged core national security interests and military assets of the United States. In the peacetime circumstances of these incidents, no nation could argue that such a use of force had a plausible justification under international law. And every country knows the United States reserves the right to use force in self-defense if it is the victim of an illegal use of force.

There is precedent. In May and June 2017, two Sukhoi 30 fighter jets belonging to the Indian Air Force crashed – and there was speculation that China caused the crashes. In one case, reports Naveen Goud in Cybersecurity Insiders,

The inquiry made by IAF led to the discovery of a fact that the flying aircraft was cyber attacked when it was airborne which led to the death of the two IAF officers- squadron leader D Pankaj and Flight Lieutenant Achudev who were flying the aircraft. The death was caused due to the failure in initiating the ejection process of the pilot’s seat due to a cyber interference caused in the air.

Let us hope that we’re not entering a hot phase of active cyberwarfare.