The more advanced the military technology, the greater the opportunities for intentional or unintentional failure in a cyberwar. As Scotty says in Star Trek III: The Search for Spock, “The more they overthink the plumbing, the easier it is to stop up the drain.”

In the case of a couple of recent accidents involving the U.S. Navy, the plumbing might actually be the computer systems that control navigation. In mid-August, the destroyer U.S.S. John S. McCain rammed into an oil tanker near Singapore. A month or so earlier, a container ship hit the nearly identical U.S.S. Fitzgerald off Japan. Why didn’t those hugely sophisticated ships see the much-larger merchant vessels, and move out of the way?

There has been speculation, and only speculation, that both ships might have been victims of cyber foul play, perhaps as a test of offensive capabilities by a hostile state actor. The U.S. Navy considers that possibility unlikely – and, let’s admit, the odds are against it.

Even so, the military hasn’t dismissed the idea, writes Bill Gertz in the Washington Free Beacon:

On the possibility that China may have triggered the collision, Chinese military writings indicate there are plans to use cyber attacks to “weaken, sabotage, or destroy enemy computer network systems or to degrade their operating effectiveness.” The Chinese military intends to use electronic, cyber, and military influence operations for attacks against military computer systems and networks, and for jamming American precision-guided munitions and the GPS satellites that guide them, according to one Chinese military report.

The data centers of those ships are hardened and well protected. Still, given the sophistication of today’s warfare, what if systems are hacked?

Imagine what would happen if, say, foreign powers were able to break into drones or cruise missiles. This might cause them to crash prematurely, self-destruct, or hit a friendly target, or perhaps even “land” and become captured. What about disruptions to fighter aircraft, such as jets or helicopters? Radar systems? Gear carried by troops?

It’s a chilling thought. It reminds me that many gun owners in the United States, including law enforcement officers, don’t like so-called “smart” pistols that require fingerprint matching before they can fire – because those systems might fail in a crisis, or if the weapon is dropped or becomes wet, leaving the police officer effectively unarmed.

The Council on Foreign Relations published a blog post by David P. Fidler, “A Cyber Norms Hypothetical: What If the USS John S. McCain Was Hacked?” In the post, Fidler says, “The Fitzgerald and McCain accidents resulted in significant damage to naval vessels and deaths and injuries to sailors. If done by a foreign nation, then hacking the navigation systems would be an illegal use of force under international law.”

Fidler believes this could lead to a real shooting war:

In this scenario, the targets were naval vessels not merchant ships, which means the hacking threatened and damaged core national security interests and military assets of the United States. In the peacetime circumstances of these incidents, no nation could argue that such a use of force had a plausible justification under international law. And every country knows the United States reserves the right to use force in self-defense if it is the victim of an illegal use of force.

There is precedent. In May and June 2017, two Sukhoi 30 fighter jets belonging to the Indian Air Force crashed – and there was speculation that the crashes were caused by China. In one case, reports Naveen Goud in Cybersecurity Insiders,

The inquiry made by IAF led to the discovery of a fact that the flying aircraft was cyber attacked when it was airborne which led to the death of the two IAF officers- squadron leader D Pankaj and Flight Lieutenant Achudev who were flying the aircraft. The death was caused due to the failure in initiating the ejection process of the pilot’s seat due to a cyber interference caused in the air.

Let us hope that we’re not entering a hot phase of active cyberwarfare.

No organization likes to reveal that its network has been breached, or its data has been stolen by hackers or disclosed through human error. Yet under the European Union’s new General Data Protection Regulation (GDPR), breaches must be disclosed.

The GDPR is a broad set of regulations designed to protect citizens of the European Union. The rules apply to every organization and business that collects or stores information about people in Europe. It doesn’t matter if the company has offices in Europe: If data is collected about Europeans, the GDPR applies.

Traditionally, most organizations hide all information about security incidents, especially if data is compromised. That makes sense: If a business is seen to be careless with people’s data, its reputation can suffer, competitors can attack, and there can be lawsuits or government penalties.

We tend to hear about security incidents only if a breach is sufficiently massive that the company must disclose it to regulators, or if there’s a leak to the media. Even then, the delay between the breach and its disclosure can stretch to weeks or months – meaning that people aren’t given enough time to engage identity-theft protection services, monitor their credit/debit payments, or even change their passwords.

Thanks to GDPR, organizations must now disclose all incidents where personal data may have been compromised – and make that disclosure quickly. Not only that, but the GDPR says that the disclosure must be to the general public, or at least to those people affected; the disclosure can’t be buried in a regulatory filing.

Important note: The GDPR says absolutely nothing about disclosing successful cyberattacks where personal data is not stolen or placed at risk. That includes distributed denial-of-service (DDoS) attacks, ransomware, theft of financial data, or espionage of intellectual property. That doesn’t mean that such cyberattacks can be kept secret, but in reality, good luck finding out about them, unless the company has other reasons to disclose. For example, after some big ransomware attacks earlier this year, some publicly traded companies revealed to investors that those attacks could materially affect their quarterly profits. This type of disclosure is mandated by financial regulation – not by the GDPR, which is focused on protecting individuals’ personal data.

The Clock Is Ticking

How long does the organization have to disclose the breach? Three days from when the breach was discovered. That’s pretty quick, though of course, sometimes breaches themselves can take weeks or months to be discovered, especially if the hackers are extremely skilled, or if human error was involved. (An example of human error: Storing unencrypted data in a public cloud without strong password protection. It’s been happening more and more often.)
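The 72-hour clock starts at discovery, not at the moment of the breach itself. A minimal sketch of the deadline arithmetic (the function name is mine, not from the regulation):

```python
from datetime import datetime, timedelta, timezone

# GDPR Article 33: notify the supervisory authority no later than
# 72 hours after becoming aware of the breach.
NOTIFICATION_WINDOW = timedelta(hours=72)

def notification_deadline(discovered_at: datetime) -> datetime:
    """Return the latest moment the supervisory authority must be notified."""
    return discovered_at + NOTIFICATION_WINDOW

discovered = datetime(2018, 5, 25, 9, 0, tzinfo=timezone.utc)
print(notification_deadline(discovered))  # 2018-05-28 09:00:00+00:00
```

Note that a breach discovered on a Friday morning runs its 72 hours through the weekend; the regulation counts hours, not business days.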

Here’s what the GDPR says about such breaches — and the language is pretty clear. The first step is to disclose to authorities within three days:

A personal data breach may, if not addressed in an appropriate and timely manner, result in physical, material or non-material damage to natural persons such as loss of control over their personal data or limitation of their rights, discrimination, identity theft or fraud, financial loss, unauthorised reversal of pseudonymisation, damage to reputation, loss of confidentiality of personal data protected by professional secrecy or any other significant economic or social disadvantage to the natural person concerned. Therefore, as soon as the controller becomes aware that a personal data breach has occurred, the controller should notify the personal data breach to the supervisory authority without undue delay and, where feasible, not later than 72 hours after having become aware of it, unless the controller is able to demonstrate, in accordance with the accountability principle, that the personal data breach is unlikely to result in a risk to the rights and freedoms of natural persons. Where such notification cannot be achieved within 72 hours, the reasons for the delay should accompany the notification and information may be provided in phases without undue further delay.

The GDPR does not specify how quickly the organization must notify the individuals whose data was compromised, beyond “as soon as reasonably feasible”:

The controller should communicate to the data subject a personal data breach, without undue delay, where that personal data breach is likely to result in a high risk to the rights and freedoms of the natural person in order to allow him or her to take the necessary precautions. The communication should describe the nature of the personal data breach as well as recommendations for the natural person concerned to mitigate potential adverse effects. Such communications to data subjects should be made as soon as reasonably feasible and in close cooperation with the supervisory authority, respecting guidance provided by it or by other relevant authorities such as law-enforcement authorities. For example, the need to mitigate an immediate risk of damage would call for prompt communication with data subjects whereas the need to implement appropriate measures against continuing or similar personal data breaches may justify more time for communication.

The phrase “personal data breach” doesn’t only mean theft or accidental disclosure of a person’s private information. The GDPR defines the phrase as “a breach of security leading to the accidental or unlawful destruction, loss, alteration, unauthorised disclosure of, or access to, personal data transmitted, stored or otherwise processed.” So the loss of important data (think health records) would qualify as a personal data breach.

Big Fines and Penalties

What happens if an organization does not disclose? It can be fined up to €20 million or 4% of annual global turnover, whichever is greater.
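Under Article 83, the maximum fine is the greater of €20 million or 4% of annual worldwide turnover, so the flat figure acts as a floor for smaller firms, not a ceiling for large ones. A purely illustrative sketch of that arithmetic:

```python
# Maximum GDPR fine (Article 83): the greater of EUR 20 million
# or 4% of worldwide annual turnover.
def max_fine(annual_turnover_eur: float) -> float:
    return max(20_000_000, 0.04 * annual_turnover_eur)

print(max_fine(100_000_000))    # smaller company: the EUR 20M floor applies
print(max_fine(5_000_000_000))  # large company: 4% of turnover, EUR 200M
```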

These GDPR rules about breaches are good, and so are the penalties. Too many organizations prefer to hide this type of information, or dribble out disclosures as slowly and quietly as possible, to protect the company’s reputation and share prices. The new EU regulation recognizes that individuals have a vested interest in data that organizations collect or store about them – and need to be told if that data is stolen or compromised.

The European Union is taking computer security, data breaches, and individual privacy seriously. The EU’s General Data Protection Regulation (GDPR) will take effect on May 25, 2018 – but it’s not only a regulation for companies based in Europe.

The GDPR is designed to protect European consumers. That means that every business that stores information about European residents will be affected, no matter where that business operates or is headquartered. That means the United States, and also a post-Brexit United Kingdom.

There’s a hefty price for non-compliance: Businesses can be fined up to €20 million or 4% of their worldwide top-line revenue, whichever is greater. No matter how you slice it, that’s going to hurt – and because the 4% figure scales with revenue, even the tech industry’s giants can’t shrug it off as a slap on the wrist.

A big topic within the GDPR is “data portability”: the notion that individuals have the right to see the information they have shared with an organization (or have given permission to be collected), in a commonly used machine-readable format. Details need to be worked out to make that effective.
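As a sketch of what “a commonly used machine-readable format” might look like in practice, here is a hypothetical subject-access export serialized as JSON (the record fields are invented for illustration; the GDPR does not mandate any particular format):

```python
import json

# Hypothetical data-portability export: the record an organization
# might return for a subject-access request, in machine-readable JSON.
def export_subject_data(record: dict) -> str:
    return json.dumps(record, indent=2, ensure_ascii=False)

profile = {
    "name": "Anna Müller",
    "email": "anna@example.com",
    "consents": {"marketing": False, "analytics": True},
}
print(export_subject_data(profile))
```

The key property is round-trippability: another organization’s systems can parse the export without human intervention, which is the whole point of portability.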

Another topic is that individuals have the right to make changes to some of their information, or to delete all or part of their information. No, customers can’t delete their transaction history, for example, or delete that they owe the organization money. However, they may choose to delete information that the organization may have collected, such as their age, where they went to college, or the names of their children. They also have the right to request corrections to the data, such as a misspelled name or an incorrect address.

That’s not as trivial as it may seem. It is not uncommon for organizations to hold multiple versions of, say, the spelling of a person’s name, or to store the same information with different formatting. This matters when records don’t match. In some countries, travelers have run into problems because their passport information doesn’t exactly match the information on a driver’s license, airline ticket, or frequent-traveler program. While the variations might appear trivial to a human – a missing middle name, a missing accent mark, an extra space – they can be enough to throw off automated data-processing systems, which then can’t match the traveler to a ticket. Without rules like the GDPR, organizations haven’t been required to make it easy, or even possible, for customers to make corrections.
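A toy example of why such variations break exact matching, and how a normalization pass (strip accents, collapse whitespace, ignore case) can recover it. This is purely illustrative, not any airline’s or agency’s actual matching logic:

```python
import unicodedata

def normalize(name: str) -> str:
    # Decompose accented characters, drop the combining marks,
    # lowercase, and collapse runs of whitespace.
    decomposed = unicodedata.normalize("NFKD", name)
    ascii_only = decomposed.encode("ascii", "ignore").decode("ascii")
    return " ".join(ascii_only.lower().split())

passport = "José  García"   # accent marks, double space
ticket = "Jose Garcia"
print(passport == ticket)                        # exact match fails
print(normalize(passport) == normalize(ticket))  # normalized match succeeds
```

Real matching systems are far more elaborate (phonetic codes, edit distance, transliteration tables), but the failure mode is the same: byte-for-byte comparison treats trivial variations as different people.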

Not a Complex Document, But a Tricky One

A cottage industry has arisen, with consultancies offering to help European and global companies ensure GDPR compliance before the regulation takes effect. Astonishingly, for such an important regulation, the GDPR itself is relatively short – only 88 pages of fairly easy-to-read prose. Of course, some parts of the GDPR refer back to other European Union directives. Still, the intended meaning is clear.

For example, this clause on sensitive data sounds simple – but how exactly will it be processed? This is why we have consultants.

Personal data which are, by their nature, particularly sensitive in relation to fundamental rights and freedoms merit specific protection as the context of their processing could create significant risks to the fundamental rights and freedoms. Those personal data should include personal data revealing racial or ethnic origin, whereby the use of the term ‘racial origin’ in this Regulation does not imply an acceptance by the Union of theories which attempt to determine the existence of separate human races. The processing of photographs should not systematically be considered to be processing of special categories of personal data as they are covered by the definition of biometric data only when processed through a specific technical means allowing the unique identification or authentication of a natural person. Such personal data should not be processed, unless processing is allowed in specific cases set out in this Regulation, taking into account that Member States law may lay down specific provisions on data protection in order to adapt the application of the rules of this Regulation for compliance with a legal obligation or for the performance of a task carried out in the public interest or in the exercise of official authority vested in the controller. In addition to the specific requirements for such processing, the general principles and other rules of this Regulation should apply, in particular as regards the conditions for lawful processing. Derogations from the general prohibition for processing such special categories of personal data should be explicitly provided, inter alia, where the data subject gives his or her explicit consent or in respect of specific needs in particular where the processing is carried out in the course of legitimate activities by certain associations or foundations the purpose of which is to permit the exercise of fundamental freedoms.

The Right to Be Forgotten

Various EU member states already have “right to be forgotten” rules, which let individuals request that some data about them be deleted. These rules are tricky for rest-of-world organizations, which may have no such regulations of their own, and whose local rules may conflict with them (such as freedom of the press in the U.S.). Still, the GDPR strengthens those rules – and this will likely be one of the first areas tested with lawsuits and penalties, particularly where children are involved:

A data subject should have the right to have personal data concerning him or her rectified and a ‘right to be forgotten’ where the retention of such data infringes this Regulation or Union or Member State law to which the controller is subject. In particular, a data subject should have the right to have his or her personal data erased and no longer processed where the personal data are no longer necessary in relation to the purposes for which they are collected or otherwise processed, where a data subject has withdrawn his or her consent or objects to the processing of personal data concerning him or her, or where the processing of his or her personal data does not otherwise comply with this Regulation. That right is relevant in particular where the data subject has given his or her consent as a child and is not fully aware of the risks involved by the processing, and later wants to remove such personal data, especially on the internet. The data subject should be able to exercise that right notwithstanding the fact that he or she is no longer a child. However, the further retention of the personal data should be lawful where it is necessary, for exercising the right of freedom of expression and information, for compliance with a legal obligation, for the performance of a task carried out in the public interest or in the exercise of official authority vested in the controller, on the grounds of public interest in the area of public health, for archiving purposes in the public interest, scientific or historical research purposes or statistical purposes, or for the establishment, exercise or defence of legal claims.

Time to Get Up to Speed

In less than a year, many organizations around the world will be subject to the European Union’s GDPR. European businesses are working hard to comply with those regulations. For everyone else, it’s time to start – and yes, you probably do want a consultant.

The late, great science fiction writer Isaac Asimov frequently referred to the “Frankenstein Complex”: a deep-seated, irrational fear that robots (i.e., artificial intelligence) would rise up and destroy their creators. Whether it’s HAL in “2001: A Space Odyssey,” the mainframe in “Colossus: The Forbin Project,” Arnold Schwarzenegger in “Terminator,” or even the classic Star Trek episode “The Ultimate Computer,” sci-fi carries the message that AI will soon render us obsolescent… or obsolete… or extinct. Many people worry this fantasy will become reality.

No, Facebook didn’t have to kill creepy bots 

To hear the breathless news reports tell it, Facebook had created chatbots that were out of control. The bots, designed to test AI’s ability to negotiate, had created their own language – and scientists were alarmed that they could no longer understand what those devious rogues were up to. So the plug had to be pulled before Armageddon. Said Poulami Nag in the International Business Times:

Facebook may have just created something, which may cause the end of a whole Homo sapien species in the hand of artificial intelligence. You think I am being over dramatic? Not really. These little baby Terminators that we’re breeding could start talking about us behind our backs! They could use this language to plot against us, and the worst part is that we won’t even understand.

Well, no. Not even close. The development of an optimized negotiating language was no surprise, and had little to do with the conclusion of Facebook’s experiment, explain the engineers at FAIR – Facebook Artificial Intelligence Research.

The program’s goal was to create dialog agents (i.e., chatbots) that would negotiate with people. To quote a Facebook blog,

Similar to how people have differing goals, run into conflicts, and then negotiate to come to an agreed-upon compromise, the researchers have shown that it’s possible for dialog agents with differing goals (implemented as end-to-end-trained neural networks) to engage in start-to-finish negotiations with other bots or people while arriving at common decisions or outcomes.

And then,

To go beyond simply trying to imitate people, the FAIR researchers instead allowed the model to achieve the goals of the negotiation. To train the model to achieve its goals, the researchers had the model practice thousands of negotiations against itself, and used reinforcement learning to reward the model when it achieved a good outcome. To prevent the algorithm from developing its own language, it was simultaneously trained to produce humanlike language.

The language produced by the chatbots was indeed humanlike – but they didn’t talk like humans. Instead, they used English words in ways slightly different from how human speakers would use them. For example, explains tech journalist Wayne Rash in eWeek,

The blog discussed how researchers were teaching an AI program how to negotiate by having two AI agents, one named Bob and the other Alice, negotiate with each other to divide a set of objects, which consisted a hats, books and balls. Each AI agent was assigned a value to each item, with the value not known to the other ‘bot. Then the chatbots were allowed to talk to each other to divide up the objects.

The goal of the negotiation was for each chatbot to accumulate the most points. While the ‘bots started out talking to each other in English, that quickly changed to a series of words that reflected meaning to the bots, but not to the humans doing the research. Here’s a typical exchange between the ‘bots, using English words but with different meaning:

Bob: “I can i i everything else.”

Alice responds: “Balls have zero to me to me to me to me to me to me to me to me to,”

The conversation continues with variations of the number of the times Bob said “i” and the number of times Alice said “to me” in the discussion.
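One reading of exchanges like this, suggested by the researchers’ own analogy quoted below, is that repetition encodes quantity. A toy decoder, purely illustrative and nothing like Facebook’s actual model:

```python
from collections import Counter

# Toy illustration of repetition-as-quantity shorthand: saying a
# token five times is read as wanting five of the associated item.
def decode_repetition(utterance: str, token: str) -> int:
    """Count how many times `token` appears in the utterance."""
    return Counter(utterance.lower().split())[token]

print(decode_repetition("i can i i everything else", "i"))  # 3
```

Such an encoding is opaque to a human reader but perfectly consistent between two agents that converged on it, which is exactly why the transcripts looked like gibberish.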

A natural evolution of natural language

Those aren’t glitches; those repetitions have meaning to the chatbots. The experiment showed that some parameters needed to be changed – after all, FAIR wanted chatbots that could negotiate with humans, and these programs weren’t accomplishing that goal. According to Gizmodo’s Tom McKay,

When Facebook directed two of these semi-intelligent bots to talk to each other, FastCo reported, the programmers realized they had made an error by not incentivizing the chatbots to communicate according to human-comprehensible rules of the English language. In their attempts to learn from each other, the bots thus began chatting back and forth in a derived shorthand—but while it might look creepy, that’s all it was.

“Agents will drift off understandable language and invent codewords for themselves,” FAIR visiting researcher Dhruv Batra said. “Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create shorthands.”

Facebook did indeed shut down the conversation, but not because they were panicked they had untethered a potential Skynet. FAIR researcher Mike Lewis told FastCo they had simply decided “our interest was having bots who could talk to people,” not efficiently to each other, and thus opted to require them to write to each other legibly.

No panic, no fingers on the missiles, no mushroom clouds. Whew, humanity dodged certain death yet again! Must click “like” so the killer robots like me.

It’s hard to know which was better: The pitch for my writing about an infographic, or the infographic itself.

About the pitch: The writer said, “I’ve been tasked with the job of raising some awareness around the graphic (in the hope that people actually like my work lol) and wondered if you thought it might be something entertaining for your audience? If not I completely understand – I’ll just lose my job and won’t be able to eat for a month (think of my poor cats).” Since I don’t want this lady and her cats to starve, I caved.

If you like the pitch, you’ll enjoy the infographic, “10 Marketing Lessons from Apple.” One piece from it is reproduced above. Very cute.

It’s difficult to recruit qualified security staff because there are more openings than humans to fill them. It’s also difficult to retain IT security professionals because someone else is always hiring. But don’t worry: Unless you work for an organization that refuses to pay the going wage, you’ve got this.

Two recent studies present dire, but somewhat conflicting, views of the availability of qualified cybersecurity professionals over the next four or five years. The first study is the Global Information Security Workforce Study from the Center for Cyber Safety and Education, which predicts a shortfall of 1.8 million cybersecurity workers by 2022. Among the highlights from that research, which drew on data from 19,000 cybersecurity professionals:

  • The cybersecurity workforce gap will hit 1.8 million by 2022. That’s a 20 percent increase since 2015.
  • Sixty-eight percent of workers in North America believe this workforce shortage is due to a lack of qualified personnel.
  • A third of hiring managers globally are planning to increase the size of their departments by 15 percent or more.
  • There aren’t enough workers to address current threats, according to 66 percent of respondents.
  • Around the globe, 70 percent of employers are looking to increase the size of their cybersecurity staff this year.
  • Nine in ten security specialists are male. The majority have technical backgrounds, suggesting that recruitment channels and tactics need to change.
  • While 87 percent of cybersecurity workers globally did not start in cybersecurity, 94 percent of hiring managers indicate that security experience in the field is an important consideration.

The second study is the Cybersecurity Jobs Report, created by the editors of Cybersecurity Ventures. Here are some highlights:

  • There will be 3.5 million cybersecurity job openings by 2021.
  • Cybercrime will more than triple the number of job openings over the next five years. India alone will need 1 million security professionals by 2020 to meet the demands of its rapidly growing economy.
  • Today, the U.S. employs nearly 780,000 people in cybersecurity positions. But a lot more are needed: There are approximately 350,000 current cybersecurity job openings, up from 209,000 in 2015.

So, whether you’re hiring a chief information security officer or a cybersecurity operations specialist, expect a lot of competition. What can you do about it? How can you beat the staffing shortage? Read my suggestion in “How to beat the cybersecurity staffing shortage.”