No organization likes to reveal that its network has been breached, or its data has been stolen by hackers or disclosed through human error. Yet under the European Union’s new General Data Protection Regulation (GDPR), breaches must be disclosed.

The GDPR is a broad set of regulations designed to protect citizens of the European Union. The rules apply to every organization and business that collects or stores information about people in Europe. It doesn’t matter whether the company has offices in Europe: If data is collected about Europeans, the GDPR applies.

Traditionally, most organizations hide all information about security incidents, especially if data is compromised. That makes sense: If a business is seen to be careless with people’s data, its reputation can suffer, competitors can attack, and there can be lawsuits or government penalties.

We tend to hear about security incidents only if there’s a breach sufficiently massive that the company must disclose it to regulators, or if there’s a leak to the media. Even then, disclosure can lag the breach by weeks or months, which means that victims aren’t given enough time to engage identity theft protection services, monitor their credit/debit payments, or even change their passwords.

Thanks to GDPR, organizations must now disclose all incidents where personal data may have been compromised – and make that disclosure quickly. Not only that, but the GDPR says that the disclosure must be to the general public, or at least to those people affected; the disclosure can’t be buried in a regulatory filing.

Important note: The GDPR says absolutely nothing about disclosing successful cyberattacks where personal data is not stolen or placed at risk. That includes distributed denial-of-service (DDoS) attacks, ransomware, theft of financial data, or espionage of intellectual property. That doesn’t mean that such cyberattacks can be kept secret, but in reality, good luck finding out about them, unless the company has other reasons to disclose. For example, after some big ransomware attacks earlier this year, some publicly traded companies revealed to investors that those attacks could materially affect their quarterly profits. This type of disclosure is mandated by financial regulation – not by the GDPR, which is focused on protecting individuals’ personal data.

The Clock Is Ticking

How long does the organization have to disclose the breach? Three days from when the breach was discovered. That’s pretty quick, though of course, sometimes breaches themselves can take weeks or months to be discovered, especially if the hackers are extremely skilled, or if human error was involved. (An example of human error: Storing unencrypted data in a public cloud without strong password protection. It’s been happening more and more often.)
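The deadline arithmetic itself is trivial to automate. Here’s a minimal Python sketch (function names are mine, not the regulation’s) showing the 72-hour window running from discovery, not from the breach itself:

```python
from datetime import datetime, timedelta

# GDPR Article 33: notify the supervisory authority within 72 hours
# of becoming aware of the breach.
NOTIFICATION_WINDOW = timedelta(hours=72)

def notification_deadline(discovered_at: datetime) -> datetime:
    """Latest time to notify the supervisory authority."""
    return discovered_at + NOTIFICATION_WINDOW

def is_overdue(discovered_at: datetime, now: datetime) -> bool:
    """True if the notification window has already closed."""
    return now > notification_deadline(discovered_at)

discovered = datetime(2018, 5, 28, 9, 0)
print(notification_deadline(discovered))                    # 2018-05-31 09:00:00
print(is_overdue(discovered, datetime(2018, 6, 1, 9, 0)))   # True
```

Note that the clock starts at discovery; a breach that sits undetected for months doesn’t extend the window once it is found.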

Here’s what the GDPR says about such breaches — and the language is pretty clear. The first step is to disclose to authorities within three days:

A personal data breach may, if not addressed in an appropriate and timely manner, result in physical, material or non-material damage to natural persons such as loss of control over their personal data or limitation of their rights, discrimination, identity theft or fraud, financial loss, unauthorised reversal of pseudonymisation, damage to reputation, loss of confidentiality of personal data protected by professional secrecy or any other significant economic or social disadvantage to the natural person concerned. Therefore, as soon as the controller becomes aware that a personal data breach has occurred, the controller should notify the personal data breach to the supervisory authority without undue delay and, where feasible, not later than 72 hours after having become aware of it, unless the controller is able to demonstrate, in accordance with the accountability principle, that the personal data breach is unlikely to result in a risk to the rights and freedoms of natural persons. Where such notification cannot be achieved within 72 hours, the reasons for the delay should accompany the notification and information may be provided in phases without undue further delay.

The GDPR does not specify how quickly the organization must notify the individuals whose data was compromised, beyond “as soon as reasonably feasible”:

The controller should communicate to the data subject a personal data breach, without undue delay, where that personal data breach is likely to result in a high risk to the rights and freedoms of the natural person in order to allow him or her to take the necessary precautions. The communication should describe the nature of the personal data breach as well as recommendations for the natural person concerned to mitigate potential adverse effects. Such communications to data subjects should be made as soon as reasonably feasible and in close cooperation with the supervisory authority, respecting guidance provided by it or by other relevant authorities such as law-enforcement authorities. For example, the need to mitigate an immediate risk of damage would call for prompt communication with data subjects whereas the need to implement appropriate measures against continuing or similar personal data breaches may justify more time for communication.

The phrase “personal data breach” doesn’t only mean theft or accidental disclosure of a person’s private information. The GDPR defines the phrase as “a breach of security leading to the accidental or unlawful destruction, loss, alteration, unauthorised disclosure of, or access to, personal data transmitted, stored or otherwise processed.” So the loss of important data (think health records) would qualify as a personal data breach.
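As a rough illustration of that definition (the event labels below are my paraphrase, not the regulation’s wording), a classifier for whether an incident counts as a “personal data breach” might look like:

```python
# Events covered by the GDPR's definition of "personal data breach":
# destruction, loss, alteration, unauthorised disclosure, or access.
BREACH_EVENTS = {"destruction", "loss", "alteration",
                 "unauthorised disclosure", "unauthorised access"}

def is_personal_data_breach(event: str, involves_personal_data: bool) -> bool:
    """True if the incident falls under the GDPR definition.

    Mere loss qualifies (think deleted health records);
    theft by an outsider is not required.
    """
    return involves_personal_data and event in BREACH_EVENTS

print(is_personal_data_breach("loss", True))    # True
print(is_personal_data_breach("ddos", False))   # False
```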

Big Fines and Penalties

What happens if an organization does not disclose? It can be fined up to 4% of annual global turnover, or €20 million, whichever is greater.
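The fine formula is worth spelling out. A minimal sketch of the Article 83(5) upper bound, the greater of €20 million or 4% of worldwide annual turnover:

```python
def max_gdpr_fine(annual_global_turnover_eur: float) -> float:
    """Upper bound on a GDPR fine for the most serious infringements:
    the greater of EUR 20 million or 4% of worldwide annual turnover."""
    return max(20_000_000.0, 0.04 * annual_global_turnover_eur)

print(max_gdpr_fine(100_000_000))      # small firm: the EUR 20M floor applies
print(max_gdpr_fine(10_000_000_000))   # large firm: 4% = EUR 400M
```

For small companies the €20 million floor dominates; for large ones, the 4% figure quickly climbs into the hundreds of millions.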

These GDPR rules about breaches are good, and so are the penalties. Too many organizations prefer to hide this type of information, or dribble out disclosures as slowly and quietly as possible, to protect the company’s reputation and share prices. The new EU regulation recognizes that individuals have a vested interest in data that organizations collect or store about them – and need to be told if that data is stolen or compromised.

The European Union is taking computer security, data breaches, and individual privacy seriously. The EU’s General Data Protection Regulation (GDPR) will take effect on May 25, 2018 – but it’s not only a regulation for companies based in Europe.

The GDPR is designed to protect European consumers. That means that every business that stores information about European residents will be affected, no matter where that business operates or is headquartered. That means the United States, and also a post-Brexit United Kingdom.

There’s a hefty price for non-compliance: Businesses can be fined up to 4% of their worldwide top-line revenue, or €20 million, whichever is greater. No matter how you slice it, that’s going to hurt: for most businesses the €20 million floor is painful, and for the tech industry’s giants, 4% of worldwide revenue runs into the billions.

A big topic within the GDPR is “data portability.” That is the notion that individuals have the right to see the information they have shared with an organization (or have given permission to be collected), in a commonly used machine-readable format. Details need to be worked out to make that effective.
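For illustration only (the record fields here are hypothetical, and the GDPR doesn’t mandate any particular format), JSON is one plausible “commonly used machine-readable format” for such an export:

```python
import json

# Hypothetical data-subject record; field names are illustrative.
user_record = {
    "name": "Marie Dupont",
    "email": "marie@example.com",
    "consented_marketing": True,
    "collected": {"age": 34, "college": "Université de Lyon"},
}

def export_portable(record: dict) -> str:
    """Serialize a data subject's record to human- and
    machine-readable JSON."""
    return json.dumps(record, ensure_ascii=False, indent=2)

print(export_portable(user_record))
```

The hard part, of course, isn’t the serialization; it’s assembling a complete, accurate record from every system that holds pieces of it.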

Another topic is that individuals have the right to make changes to some of their information, or to delete all or part of their information. No, customers can’t delete their transaction history, for example, or delete that they owe the organization money. However, they may choose to delete information that the organization may have collected, such as their age, where they went to college, or the names of their children. They also have the right to request corrections to the data, such as a misspelled name or an incorrect address.

That’s not as trivial as it may seem. It is not uncommon for organizations to hold multiple versions of, say, the spelling of a person’s name, or records that differ only in formatting. This has real consequences when records don’t match. In some countries, travelers have run into problems because their passport information doesn’t exactly match the information on a driver’s license, airline ticket, or frequent-traveler program. While the variations might appear trivial to a human (a missing middle name, a missing accent mark, an extra space), they can be enough to throw off automated data processing systems, which then can’t match the traveler to a ticket. Without rules like the GDPR, organizations haven’t been required to make it easy, or even possible, for customers to make corrections.
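Here’s a minimal sketch of why exact matching fails and how normalization helps. This handles only accents, case, and extra spacing; real record linkage (missing middle names, transliterations) is far harder:

```python
import unicodedata

def normalize_name(name: str) -> str:
    """Fold the trivial variations -- accents, case, spacing --
    that break exact-match record comparison."""
    # Decompose accented characters, then drop the combining marks.
    decomposed = unicodedata.normalize("NFKD", name)
    no_accents = "".join(c for c in decomposed
                         if not unicodedata.combining(c))
    # Lowercase and collapse runs of whitespace.
    return " ".join(no_accents.lower().split())

# Exact comparison fails; normalized comparison succeeds.
a, b = "José  García", "Jose Garcia"
print(a == b)                                   # False
print(normalize_name(a) == normalize_name(b))   # True
```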

Not a Complex Document, But a Tricky One

A cottage industry has arisen, with consultancies offering to help European and global companies ensure GDPR compliance before the regulation takes effect. Astonishingly, for such an important regulation, the GDPR itself is relatively short: only 88 pages of fairly easy-to-read prose. Of course, some parts of the GDPR refer back to other European Union directives. Still, the intended meaning is clear.

For example, this clause on sensitive data sounds simple, but how exactly should such processing work in practice? This is why we have consultants.

Personal data which are, by their nature, particularly sensitive in relation to fundamental rights and freedoms merit specific protection as the context of their processing could create significant risks to the fundamental rights and freedoms. Those personal data should include personal data revealing racial or ethnic origin, whereby the use of the term ‘racial origin’ in this Regulation does not imply an acceptance by the Union of theories which attempt to determine the existence of separate human races. The processing of photographs should not systematically be considered to be processing of special categories of personal data as they are covered by the definition of biometric data only when processed through a specific technical means allowing the unique identification or authentication of a natural person. Such personal data should not be processed, unless processing is allowed in specific cases set out in this Regulation, taking into account that Member States law may lay down specific provisions on data protection in order to adapt the application of the rules of this Regulation for compliance with a legal obligation or for the performance of a task carried out in the public interest or in the exercise of official authority vested in the controller. In addition to the specific requirements for such processing, the general principles and other rules of this Regulation should apply, in particular as regards the conditions for lawful processing. Derogations from the general prohibition for processing such special categories of personal data should be explicitly provided, inter alia, where the data subject gives his or her explicit consent or in respect of specific needs in particular where the processing is carried out in the course of legitimate activities by certain associations or foundations the purpose of which is to permit the exercise of fundamental freedoms.

The Right to Be Forgotten

Various EU member states have “right to be forgotten” rules, which let individuals request that some data about them be deleted. These rules are tricky for organizations in the rest of the world, where no such regulations may exist and where they may conflict with other rules (such as freedom of the press in the U.S.). Still, the GDPR strengthens those rules, and this will likely be one of the first areas tested with lawsuits and penalties, particularly where children are involved:

A data subject should have the right to have personal data concerning him or her rectified and a ‘right to be forgotten’ where the retention of such data infringes this Regulation or Union or Member State law to which the controller is subject. In particular, a data subject should have the right to have his or her personal data erased and no longer processed where the personal data are no longer necessary in relation to the purposes for which they are collected or otherwise processed, where a data subject has withdrawn his or her consent or objects to the processing of personal data concerning him or her, or where the processing of his or her personal data does not otherwise comply with this Regulation. That right is relevant in particular where the data subject has given his or her consent as a child and is not fully aware of the risks involved by the processing, and later wants to remove such personal data, especially on the internet. The data subject should be able to exercise that right notwithstanding the fact that he or she is no longer a child. However, the further retention of the personal data should be lawful where it is necessary, for exercising the right of freedom of expression and information, for compliance with a legal obligation, for the performance of a task carried out in the public interest or in the exercise of official authority vested in the controller, on the grounds of public interest in the area of public health, for archiving purposes in the public interest, scientific or historical research purposes or statistical purposes, or for the establishment, exercise or defence of legal claims.

Time to Get Up to Speed

In less than a year, many organizations around the world will be subject to the European Union’s GDPR. European businesses are working hard to comply with those regulations. For everyone else, it’s time to start – and yes, you probably do want a consultant.

The late, great science fiction writer Isaac Asimov frequently referred to the “Frankenstein Complex”: a deep-seated and irrational phobia that robots (i.e., artificial intelligence) would rise up and destroy their creators. Whether it’s HAL in “2001: A Space Odyssey,” or the mainframe in “Colossus: The Forbin Project,” or Arnold Schwarzenegger in “The Terminator,” or even the classic Star Trek episode “The Ultimate Computer,” sci-fi carries the message that AI will soon render us obsolescent… or obsolete… or extinct. Many people worry that this fantasy will become reality.

No, Facebook didn’t have to kill creepy bots 

To listen to the breathless news reports, Facebook created some chatbots that were out of control. The bots, designed to test AI’s ability to negotiate, had created their own language – and scientists were alarmed that they could no longer understand what those devious rogues were up to. So, the plug had to be pulled before Armageddon. Said Poulami Nag in the International Business Times:

Facebook may have just created something, which may cause the end of a whole Homo sapien species in the hand of artificial intelligence. You think I am being over dramatic? Not really. These little baby Terminators that we’re breeding could start talking about us behind our backs! They could use this language to plot against us, and the worst part is that we won’t even understand.

Well, no. Not even close. The development of an optimized negotiating language was no surprise, and had little to do with the conclusion of Facebook’s experiment, explain the engineers at FAIR – Facebook Artificial Intelligence Research.

The program’s goal was to create dialog agents (i.e., chatbots) that would negotiate with people. To quote a Facebook blog,

Similar to how people have differing goals, run into conflicts, and then negotiate to come to an agreed-upon compromise, the researchers have shown that it’s possible for dialog agents with differing goals (implemented as end-to-end-trained neural networks) to engage in start-to-finish negotiations with other bots or people while arriving at common decisions or outcomes.

And then,

To go beyond simply trying to imitate people, the FAIR researchers instead allowed the model to achieve the goals of the negotiation. To train the model to achieve its goals, the researchers had the model practice thousands of negotiations against itself, and used reinforcement learning to reward the model when it achieved a good outcome. To prevent the algorithm from developing its own language, it was simultaneously trained to produce humanlike language.

The language produced by the chatbots was indeed humanlike, but they didn’t talk like humans. Instead, they used English words in ways slightly different from how human speakers would use them. For example, explains tech journalist Wayne Rash in eWeek,

The blog discussed how researchers were teaching an AI program how to negotiate by having two AI agents, one named Bob and the other Alice, negotiate with each other to divide a set of objects, which consisted a hats, books and balls. Each AI agent was assigned a value to each item, with the value not known to the other ‘bot. Then the chatbots were allowed to talk to each other to divide up the objects.

The goal of the negotiation was for each chatbot to accumulate the most points. While the ‘bots started out talking to each other in English, that quickly changed to a series of words that reflected meaning to the bots, but not to the humans doing the research. Here’s a typical exchange between the ‘bots, using English words but with different meaning:

Bob: “I can i i everything else.”

Alice responds: “Balls have zero to me to me to me to me to me to me to me to me to,”

The conversation continues with variations of the number of the times Bob said “i” and the number of times Alice said “to me” in the discussion.

A natural evolution of natural language

Those aren’t glitches; those repetitions have meaning to the chatbots. The experiment showed that some parameters needed to be changed – after all, FAIR wanted chatbots that could negotiate with humans, and these programs weren’t accomplishing that goal. According to Gizmodo’s Tom McKay,

When Facebook directed two of these semi-intelligent bots to talk to each other, FastCo reported, the programmers realized they had made an error by not incentivizing the chatbots to communicate according to human-comprehensible rules of the English language. In their attempts to learn from each other, the bots thus began chatting back and forth in a derived shorthand—but while it might look creepy, that’s all it was.

“Agents will drift off understandable language and invent codewords for themselves,” FAIR visiting researcher Dhruv Batra said. “Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create shorthands.”
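Batra’s example is easy to mimic. Here’s a toy sketch of that kind of drifted shorthand, where repetition encodes quantity; this is entirely my illustration, not FAIR’s code:

```python
def encode(word: str, count: int) -> str:
    """Toy shorthand: repeat the item's word once per copy wanted,
    like a bot saying 'the' five times to mean five copies."""
    return " ".join([word] * count)

def decode(message: str):
    """Recover (item, quantity) from a repeated-word message."""
    words = message.split()
    return words[0], len(words)

msg = encode("ball", 4)
print(msg)          # ball ball ball ball
print(decode(msg))  # ('ball', 4)
```

Opaque to a casual human reader, perfectly unambiguous to both agents; that's the whole "invented language" story in miniature.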

Facebook did indeed shut down the conversation, but not because they were panicked they had untethered a potential Skynet. FAIR researcher Mike Lewis told FastCo they had simply decided “our interest was having bots who could talk to people,” not efficiently to each other, and thus opted to require them to write to each other legibly.

No panic, no fingers on the missiles, no mushroom clouds. Whew, humanity dodged certain death yet again! Must click “like” so the killer robots like me.

It’s hard to know which was better: The pitch for my writing about an infographic, or the infographic itself.

About the pitch: The writer said, “I’ve been tasked with the job of raising some awareness around the graphic (in the hope that people actually like my work lol) and wondered if you thought it might be something entertaining for your audience? If not I completely understand – I’ll just lose my job and won’t be able to eat for a month (think of my poor cats).” Since I don’t want this lady and her cats to starve, I caved.

If you like the pitch, you’ll enjoy the infographic, “10 Marketing Lessons from Apple.” One piece from it is reproduced above. Very cute.

It’s difficult to recruit qualified security staff because there are more openings than humans to fill them. It’s also difficult to retain IT security professionals because someone else is always hiring. But don’t worry: Unless you work for an organization that refuses to pay the going wage, you’ve got this.

Two recent studies present dire, but somewhat conflicting, views of the availability of qualified cybersecurity professionals over the next four or five years. The first study is the Global Information Security Workforce Study from the Center for Cyber Safety and Education, which predicts a shortfall of 1.8 million cybersecurity workers by 2022. Among the highlights from that research, which drew on data from 19,000 cybersecurity professionals:

  • The cybersecurity workforce gap will hit 1.8 million by 2022. That’s a 20 percent increase since 2015.
  • Sixty-eight percent of workers in North America believe this workforce shortage is due to a lack of qualified personnel.
  • A third of hiring managers globally are planning to increase the size of their departments by 15 percent or more.
  • There aren’t enough workers to address current threats, according to 66 percent of respondents.
  • Around the globe, 70 percent of employers are looking to increase the size of their cybersecurity staff this year.
  • Nine in ten security specialists are male. The majority have technical backgrounds, suggesting that recruitment channels and tactics need to change.
  • While 87 percent of cybersecurity workers globally did not start in cybersecurity, 94 percent of hiring managers indicate that security experience in the field is an important consideration.

The second study is the Cybersecurity Jobs Report, created by the editors of Cybersecurity Ventures. Here are some highlights:

  • There will be 3.5 million cybersecurity job openings by 2021.
  • Cybercrime will more than triple the number of job openings over the next five years. India alone will need 1 million security professionals by 2020 to meet the demands of its rapidly growing economy.
  • Today, the U.S. employs nearly 780,000 people in cybersecurity positions. But a lot more are needed: There are approximately 350,000 current cybersecurity job openings, up from 209,000 in 2015.

So, whether you’re hiring a chief information security officer or a cybersecurity operations specialist, expect a lot of competition. What can you do about it? How can you beat the staffing shortage? Read my suggestion in “How to beat the cybersecurity staffing shortage.”

“Ransomware! Ransomware! Ransomware!” Those words may lack the timeless resonance of Steve Ballmer’s epic “Developers! Developers! Developers!” scream in 2000, but ransomware was seemingly an obsession at Black Hat USA 2017, happening this week in Las Vegas.

There are good reasons for attendees and vendors to be focused on ransomware. For one thing, ransomware is real. Rates of ransomware attacks have exploded off the charts in 2017, helped in part by the disclosure of top-secret vulnerabilities and hacking tools allegedly stolen from the United States’ three-letter-initial agencies.

For another, the costs of ransomware are significant. Looking only at a few attacks in 2017, including WannaCry, Petya, and NotPetya, corporations have been forced to revise their earnings downward to account for IT downtime and lost productivity. Those include Reckitt Benckiser, Nuance, and FedEx. Those types of impact grab the attention of every CFO and every CEO.

Another analyst at Black Hat observed to me that just about every vendor on the expo floor had managed to incorporate ransomware into its magic show. My quip: “I wouldn’t be surprised to see a company marketing network cables specially designed to protect against ransomware.” His quick retort: “The queue would be half a mile long for samples. They’d make a fortune.”

While we seek mezzanine funding for our Ransomware-Proof CAT-6 Cables startup, let’s talk about what organizations can and should do to handle ransomware. It’s not rocket science, and it’s not brain surgery.

  • Train, train, train. End users will slip up, and they will click to open emails they shouldn’t open. They will visit websites they shouldn’t visit. And they will ignore security warnings. That’s true for the lowest-level trainee – and true for the CEO as well. Constant training can reduce the amount of stupidity. It can make a difference. By the way, also test your employees’ preparedness by sending out fake malware, and see who clicks on it.
  • Invest in tools that can detect ransomware and other advanced malware. Users will make mistakes, and we’ve seen that there are some ransomware variants that can spread without user intervention. Endpoint security technology is required, and if possible, such tools should do more than passively warn end users if a problem is detected. There are many types of solutions available; look into them, and make sure there are no coverage gaps.
  • Aggressively patch and update software. Patches existed for months to close the vulnerabilities exploited by the recent flurry of ransomware attacks. It’s understandable that consumers wouldn’t be up to date – but it’s inexcusable for corporations to have either not known about the patches, or to have failed to install them. In other words, these attacks were basically 100% avoidable. Maybe they won’t be in the future if the hackers exploit true zero-days, but you can’t protect your organization with out-of-date operating systems, applications, and security tools.
  • Backup, backup, backup. Use backup technology that moves data securely into the data center or into the cloud, so that ransomware can’t access the backup drive directly. Too many small businesses have lost data on laptops, notebooks, and servers because there were no backups. We know better than this! By the way, one should assume that malware attacks, even ransomware, can be designed to destroy data and devices. Don’t assume you can write a check and get your data back safely.
  • Stay up to date on threat data. You can’t rely upon the tech media, or vendor blogs, to keep you up to date with everything going on with cybersecurity. There are many threat data feeds, some curated and expensive, some free and lower-quality. You should find a threat data source that seems to fit your requirements and subscribe to it – and act on what you read. If you’re not going to consume the threat data yourself, find someone else to do so. An urgent warning about your database software version won’t do you any good if it’s in your trashcan.
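The patching bullet above lends itself to automation. Here’s a toy sketch (the hostnames, dates, and 30-day threshold are all invented; a real inventory would come from your asset-management tooling) that flags hosts whose patches are overdue:

```python
from datetime import date, timedelta

MAX_PATCH_AGE = timedelta(days=30)  # illustrative policy threshold

# Hypothetical inventory: host -> date patches were last applied.
last_patched = {
    "web-01": date(2017, 7, 20),
    "db-01": date(2017, 2, 11),   # months behind: WannaCry territory
    "mail-01": date(2017, 7, 1),
}

def overdue_hosts(inventory: dict, today: date) -> list:
    """Return hosts whose patch age exceeds the policy threshold."""
    return sorted(host for host, patched in inventory.items()
                  if today - patched > MAX_PATCH_AGE)

print(overdue_hosts(last_patched, date(2017, 7, 31)))  # ['db-01']
```

Even a report this crude, run weekly, would have surfaced the unpatched systems that the 2017 ransomware waves exploited.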

Ransomware! Ransomware! Ransomware! When it comes to ransomware and advanced malware, it’s not a question of if, or even a question of when. Your organization, your servers, your network, your end-users, are under constant attack. It only takes one slip-up to wreak havoc on one endpoint, and potentially on multiple endpoints. Learn from what’s going on at Black Hat – and be ready for the worst.

A major global cyberattack could cause US$53 billion in economic losses. That’s on the scale of a catastrophic natural disaster like 2012’s Hurricane Sandy.

Lloyds of London, the famous insurance company, partnered with Cyence, a risk analysis firm specializing in cybersecurity. The result is a fascinating report, “Counting the Cost: Cyber Exposure Decoded.” This partnership makes sense: Lloyds must understand the risk before deciding whether to underwrite a venture — and when it comes to cybersecurity, this is an emerging science. Traditional actuarial methods used to calculate the risk of a cargo ship falling prey to pirates, or an office block to a devastating flood, simply don’t apply.

Lloyds says that in 2016, cyberattacks cost businesses as much as $450 billion. While insurers can help organizations manage that risk, the risk is increasing. The report points to those risks covering “everything from individual breaches caused by malicious insiders and hackers, to wider losses such as breaches of retail point-of-sale devices, ransomware attacks such as BitLocker, WannaCry and distributed denial-of-service attacks such as Mirai.”

The worry? Despite writing $1.35 billion in cyberinsurance in 2016, “insurers’ understanding of cyber liability and risk aggregation is an evolving process as experience and knowledge of cyber-attacks grows. Insureds’ use of the internet is also changing, causing cyber-risk accumulation to change rapidly over time in a way that other perils do not.”

And that is why the lack of time-tested actuarial tables can cause disaster, says Lloyds. “Traditional insurance risk modelling relies on authoritative information sources such as national or industry data, but there are no equivalent sources for cyber-risk and the data for modelling accumulations must be collected at scale from the internet. This makes data collection, and the regular update of it, key components of building a better understanding of the evolving risk.”

Where the Risk Is Growing

The report points to six significant trends that are causing increased risk of an expensive attack – and therefore, increased liability:

  • Volume of contributors: The number of people developing software has grown significantly over the past three decades; each contributor could potentially add vulnerability to the system unintentionally through human error.
  • Volume of software: In addition to the growing number of people amending code, the amount of it in existence is increasing. More code means the potential for more errors and therefore greater vulnerability.
  • Open source software: The open-source movement has led to many innovative initiatives. However, many open-source libraries are uploaded online and while it is often assumed they have been reviewed in terms of their functionality and security, this is not always the case. Any errors in the primary code could then be copied unwittingly into subsequent iterations.
  • Old software: The longer software is out in the market, the more time malicious actors have to find and exploit vulnerabilities. Many individuals and companies run obsolete software that has more secure alternatives.
  • Multi-layered software: New software is typically built on top of prior software code. This makes software testing and correction very difficult and resource intensive.
  • “Generated” software: Code can be produced through automated processes that can be modified for malicious intent.

Based on those points, and other factors, Lloyds and Cyence came up with two primary scenarios that could lead to widespread, and costly, damage. The first is a successful hack of a major cloud service provider, which hosts websites, applications, and data for many companies. The second is a mass vulnerability attack that affects many client systems. One could argue that some of the recent ransomware attacks fit into that second scenario.

Huge Liability Costs

The “Counting the Cost” report makes for some depressing reading. Here are three of the key findings, quoted verbatim.

  • The direct economic impacts of cyber events lead to a wide range of potential economic losses. For the cloud service disruption scenario in the report, these losses range from US$4.6 billion for a large event to US$53.1 billion for an extreme event; in the mass software vulnerability scenario, the losses range from US$9.7 billion for a large event to US$28.7 billion for an extreme event.
  • Economic losses could be much lower or higher than the average in the scenarios because of the uncertainty around cyber aggregation. For example, while average losses in the cloud service disruption scenario are US$53 billion for an extreme event, they could be as high as US$121.4 billion or as low as US$15.6 billion, depending on factors such as the different organisations involved and how long the cloud-service disruption lasts for.
  • Cyber-attacks have the potential to trigger billions of dollars of insured losses. For example, in the cloud- services scenario insured losses range from US$620 million for a large loss to US$8.1 billion for an extreme loss. For the mass software vulnerability scenario, the insured losses range from US$762 million (large loss) to US$2.1 billion (extreme loss).

Read the 56-page report to dig deeply into the scenarios, and the damages. You may not sleep well afterwards.

Automotive ECU (engine control unit)


In my everyday life, I trust that if I make a panic stop, my car’s antilock brake system will work. The hardware, software, and servos will work together to ensure that my wheels don’t lock up—helping me avoid an accident. If that’s not sufficient, I trust that the impact sensors embedded behind the front bumper will fire the airbag actuators with the correct force to protect me from harm, even though they’ve never been tested. I trust that the bolts holding the seat in its proper place won’t shear. I trust the seat belts will hold me tight, and that cargo in the trunk won’t smash through the rear seats into the passenger cabin.

Engineers working on nearly every automobile sold worldwide ensure that their work practices conform to ISO 26262. That standard describes how to manage the functional safety of the electrical and electronic systems in passenger cars. A significant portion of ISO 26262 involves ensuring that software embedded into cars—whether in the emissions system, the antilock braking systems, the security systems, or the entertainment system—is architected, coded, and tested to be as reliable as possible.

I’ve worked with ISO 26262 and related standards on a variety of automotive software security projects. Don’t worry, we’re not going to get into the hairy bits of those standards because unless you are personally designing embedded real-time software for use in automobile components, they don’t really apply. Also, ISO 26262 is focused on the real-world safety of two-ton machines hurtling at 60-plus miles per hour—that is, things that will kill or hurt people if they don’t work as expected.

Instead, here are five IT systems management ideas that are inspired by ISO 26262. We’ll help you ensure your systems are designed to be Reliable, with a capital R, and Safe, with a capital S.

Read the list, and more, in my article for HP Enterprise Insights, “5 lessons for data center pros, inspired by automotive engineering standards.”

MacKenzie Brown has nailed the problem — and has good ideas for the solution. As she points out in her three-part blog series, “The Unicorn Extinction” (links in a moment):

  • Overall, [only] 25% of women hold occupations in technology alone.
  • Women’s Society of Cyberjutsu (WSC), a nonprofit for empowering women in cybersecurity, states that females make up 11% of the cybersecurity workforce while (ISC)2, a non-profit specializing in education and certification, reports a whopping estimation of 10%.
  • Lastly, put those current numbers against the 1 million employment opportunities predicted for 2017, with a global demand of up to 6 million by 2019.

While many would decry the systemic sexism and misogyny in cybersecurity, Ms. Brown sees opportunity:

…the cybersecurity industry, a market predicted to have global expenditure exceeding $1 trillion between now and 2021(4), will have plenty of demand for not only information security professionals. How can we proceed to find solutions and a fixed approach towards resolving this gender gap and optimizing this employment fluctuation? Well, we promote unicorn extinction.

The problem of a lack of technically developed and specifically qualified women in Cybersecurity is not unique to this industry alone; however the proliferation of women in tangential roles associated with our industry shows that there is a barrier to entry, whatever that barrier may be. In the next part of this series we will examine the ideas and conclusions of senior leadership and technical women in the industry in order to gain a woman’s point of view.

She continues to write about analyzing the problem from a woman’s point of view:

Innovating solutions to improve this scarcity of female representation, requires breaking “the first rule about Fight Club; don’t talk about Fight Club!” The “Unicorn Law”, this anecdote, survives by the circling routine of the “few women in Cybersecurity” invoking a conversation about the “few women in Cybersecurity” on an informal basis. Yet, driving the topic continuously and identifying the values will ensure more involvement from the entirety of the Cybersecurity community. Most importantly, the executive members of Fortune 500 companies who apply a hiring strategy which includes diversity, can begin to fill those empty chairs with passionate professionals ready to impact the future of cyber.

Within any tale of triumph, obstacles are inevitable. Therefore, a comparative analysis of successful women may be the key to balancing employment supply and demand. I had the pleasure of interviewing a group of women; all successful, eclectic in roles, backgrounds of technical proficiency, and amongst the same wavelength of empowerment. These interviews identified commonalities and distinct perspectives on the current gender gap within the technical community.

What’s the Unicorn thing?

Ms. Brown writes,

During hours of research and writing, I kept coming across a peculiar yet comically exact tokenism deemed, The Unicorn Law. I had heard this in my industry before, attributed to me, “unicorn,” which is described (even in the cybersecurity industry) as: a woman-in-tech, eventually noticed for their rarity and the assemblage toward other females within the industry. In technology and cybersecurity, this is a leading observation many come across based upon the current metrics. When applied to the predicted demand of employment openings for years to come, we can see an enormous opportunity for women.

Where’s the opportunity?

She concludes,

There may be a notable gender gap within cybersecurity, but there also lies great opportunity as well. Organizations can help narrow the gap, but there is also tremendous opportunity in women helping each other as well.

Some things that companies can do to help include:

  • Providing continuous education, empowering and encouraging women to acquire new skills through additional training and certifications.
  • Using this development training to promote from within.
  • Reaching out to communities to encourage young women from junior to high school levels to consider cybersecurity as a career.
  • Seeking out women candidates for jobs, both independently and through outside recruiters if need be.
  • At events, refusing to field all-male panels.
  • And most importantly, encouraging discussion about the benefits of a diverse team.

If you care about the subject of gender opportunity in cybersecurity, I urge you to read these three essays.

The Unicorn Extinction Series: An Introspective Analysis of Women in Cybersecurity, Part 1

The Unicorn Extinction Series: An Introspective Analysis of Women in Cybersecurity, Part 2

The Unicorn Extinction Series: An Introspective Analysis of Women in Cybersecurity, Part 3

Did they tell their customers that data was stolen? No, not right away. When AA — a large automobile club and insurer in the United Kingdom — was hacked in April, the company was completely mum for months, in part because it didn’t believe the stolen data was sensitive. AA’s customers only learned about it when information about the breach was publicly disclosed in late June.

There are no global laws that require companies to disclose information about data thefts to customers. There are similarly no global laws that require companies to disclose defects in their software or hardware products, including those that might introduce security vulnerabilities.

It’s obvious why companies wouldn’t want to disclose problems with their products (such as bugs or vulnerabilities) or with their back-end operations (such as system breaches or data exfiltration). If customers think you’re insecure, they’ll leave. If investors think you’re insecure, they’ll leave. If competitors think you’re insecure, they’ll pounce. And if lawyers or regulators think you’re insecure, they might file lawsuits.

No matter how you slice it, disclosures about problems are not good for business. Far better to share information about new products, exciting features, customer wins, market share increases, additional platforms, and pricing promotions.

It’s Not Always Hidden

That’s not to say that all companies hide bad news. Microsoft, for example, is considered to be very proactive about disclosing flaws in its products and platforms, including those that affect security. When Microsoft learned about the Server Message Block (SMB) flaw that enabled malware like WannaCry and Petya, it quickly issued a Security Bulletin in March 2017 that explained the problem — and supplied the necessary patches. If customers had read the bulletin and applied the patches, those ransomware outbreaks wouldn’t have occurred.

When you get outside the domain of large software companies, such disclosures are rare. Automobile manufacturers do share information about vehicle defects with regulators, as per national laws, but resist recalls because of the expense and bad publicity. Beyond that, companies share information about problems with products, services, and operations unwillingly – and with delays.

In the AA case, as SC Magazine wrote,

The leaky database was first discovered by the AA on April 22 and fixed by April 25. In the time that it had been exposed, it had reportedly been accessed by several unauthorised parties. An investigation by the AA deemed the leaky data to be not sensitive, meaning that the organisation did not feel it necessary to tell customers.

Yet the breach contained over 13 gigabytes of data with information about 100,000 customers. Not sensitive? Well, the stolen information included email addresses along with names, IP addresses, and credit card details. That data seems sensitive to me!

Everything Will Change Under GDPR

The European Union’s new General Data Protection Regulation (GDPR) goes into effect in May 2018. GDPR will for the first time require companies to tell customers and regulators about data breaches in a timely manner. Explains the U.K. Information Commissioner’s Office,

The GDPR will introduce a duty on all organisations to report certain types of data breach to the relevant supervisory authority, and in some cases to the individuals affected.

What is a personal data breach?

A personal data breach means a breach of security leading to the destruction, loss, alteration, unauthorised disclosure of, or access to, personal data. This means that a breach is more than just losing personal data.

Example

A hospital could be responsible for a personal data breach if a patient’s health record is inappropriately accessed due to a lack of appropriate internal controls.

When do individuals have to be notified?

Where a breach is likely to result in a high risk to the rights and freedoms of individuals, you must notify those concerned directly.

A ‘high risk’ means the threshold for notifying individuals is higher than for notifying the relevant supervisory authority.

What information must a breach notification contain?

  • The nature of the personal data breach including, where possible:
    • the categories and approximate number of individuals concerned; and
    • the categories and approximate number of personal data records concerned;
  • The name and contact details of the data protection officer (if your organisation has one) or other contact point where more information can be obtained;
  • A description of the likely consequences of the personal data breach; and
  • A description of the measures taken, or proposed to be taken, to deal with the personal data breach and, where appropriate, of the measures taken to mitigate any possible adverse effects.
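Those required items map naturally onto a simple record. Here is an illustrative Python sketch; the field names are my own invention, since the GDPR specifies the content of a notification, not a data format:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BreachNotification:
    """Illustrative record of the items a GDPR breach notification must
    contain. Field names are invented for this sketch; the regulation
    mandates the content, not any particular structure."""
    nature_of_breach: str          # what happened
    individuals_affected: int      # approximate number of individuals concerned
    records_affected: int          # approximate number of data records concerned
    data_categories: list          # e.g. ["names", "email addresses"]
    contact_point: str             # DPO or other contact for more information
    likely_consequences: str       # e.g. fraud or phishing risk
    measures_taken: str            # remediation and mitigation steps
    dpo_name: Optional[str] = None # only if the organisation has a DPO

    def summary(self) -> str:
        return (f"{self.nature_of_breach}: ~{self.individuals_affected} individuals, "
                f"~{self.records_affected} records "
                f"({', '.join(self.data_categories)})")
```

Filling in such a record for something like the AA incident (about 100,000 customers) makes it obvious how little of this information breached companies volunteer today.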

Also, says the regulation,

If the breach is sufficiently serious to warrant notification to the public, the organisation responsible must do so without undue delay. Failing to notify a breach when required to do so can result in a significant fine up to 10 million Euros or 2 per cent of your global turnover.
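As a quick worked example of that penalty cap: under the GDPR, the failure-to-notify fine can reach 10 million Euros or 2 percent of global annual turnover, whichever is greater.

```python
def max_notification_fine(global_turnover_eur: float) -> float:
    """Cap on a GDPR failure-to-notify fine: EUR 10 million or 2% of
    global annual turnover, whichever is greater."""
    return max(10_000_000.0, 0.02 * global_turnover_eur)

# A company with EUR 2 billion in turnover faces a cap of EUR 40 million;
# a EUR 100 million company is capped at the EUR 10 million floor.
```

In other words, for any business with more than 500 million Euros in turnover, the 2 percent figure dominates.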

Bottom line: Next year, companies in the E.U. must do a better job of disclosing data breaches that affect their customers. Let’s hope this practice extends to more of the world.

The Federal Bureau of Investigation is warning about potential attacks from a hacking group called Lizard Squad. This information, released today, was labeled “TLP:Green” by the FBI and CERT, which means that it shouldn’t be publicly shared – but I am sharing it because this information was published on a publicly accessible blog run by the New York State Bar Association. I do not know why distribution of this information was restricted.

The FBI said:

Summary

An individual or group claiming to be “Anonymous” or “Lizard Squad” sent extortion emails to private-sector companies threatening to conduct distributed denial of service (DDoS) attacks on their network unless they received an identified amount of Bitcoin. No victims to date have reported DDoS activity as a penalty for non-payment.

Threat

In April and May 2017, at least six companies received emails claiming to be from “Anonymous” and “Lizard Squad” threatening their companies with DDoS attacks within 24 hours unless the company sent an identified amount of Bitcoin to the email sender. The email stated the demanded amount of Bitcoin would increase each day the amount went unpaid. No victims to date have reported DDoS activity as a penalty for nonpayment.

Reporting on schemes of this nature goes back at least three years.

In 2016, a group identifying itself as “Lizard Squad” sent extortion demands to at least twenty businesses in the United Kingdom, threatening DDoS attacks if they were not paid five Bitcoins (as of 14 June, each Bitcoin was valued at 2,698 USD). No victims reported actual DDoS activity as a penalty for non-payment.

Between 2014 and 2015, a cyber extortion group known as “DDoS ‘4’ Bitcoin” (DD4BC) victimized hundreds of individuals and businesses globally. DD4BC would conduct an initial, demonstrative low-level DDoS attack on the victim company, followed by an email message introducing themselves, demanding a ransom paid in Bitcoins, and threatening a higher level attack if the ransom was not paid within the stated time limit. While no significant disruption or DDoS activity was noted, it is probable companies paid the ransom to avoid the threat of DDoS activity.

Background

Lizard Squad is a hacking group known for their DDoS attacks primarily targeting gaming-related services. On 25 December 2014, Lizard Squad was responsible for taking down the Xbox Live and PlayStation networks. Lizard Squad also successfully conducted DDoS attacks on the UK’s National Crime Agency’s (NCA) website in 2015.

Anonymous is a hacking collective known for several significant DDoS attacks on government, religious, and corporate websites conducted for ideological reasons.

Recommendations

The FBI suggests precautionary measures to mitigate DDoS threats, including but not limited to:
  • Have a DDoS mitigation strategy ready ahead of time.
  • Implement an incident response plan that includes DDoS mitigation and practice this plan before an actual incident occurs. This plan may involve external organizations such as your Internet Service Provider, technology companies that offer DDoS mitigation services, and law enforcement.
  • Ensure your plan includes the appropriate contacts within these external organizations. Test activating your incident response team and third party contacts.
  • Implement a data back-up and recovery plan to maintain copies of sensitive or proprietary data in a separate and secure location. Backup copies of sensitive data should not be readily accessible from local networks.
  • Ensure upstream firewalls are in place to block incoming User Datagram Protocol (UDP) packets.
  • Ensure software or firmware updates are applied as soon as the device manufacturer releases them.

If you have received one of these demands:

  • Do not make the demand payment.
  • Retain the original emails with headers.
  • If applicable, maintain a timeline of the attack, recording all times and content of the attack.

The FBI encourages recipients of this document to report information concerning suspicious or criminal activity to their local FBI field office or the FBI’s 24/7 Cyber Watch (CyWatch). Field office contacts can be identified at www.fbi.gov/contact-us/field. CyWatch can be contacted by phone at (855) 292-3937. When available, each report submitted should include the date, time, location, type of activity, number of people, and type of equipment used for the activity, the name of the submitting company or organization, and a designated point of contact. Press inquiries should be directed to the FBI’s National Press Office at (202) 324-3691.
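The FBI’s recommendations are operational rather than code-level, but one building block that DDoS mitigation services commonly rely on is rate limiting. Here is a minimal token-bucket sketch in Python; it is my own illustration of the general idea, not anything from the FBI advisory:

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: allow at most `rate` requests
    per second, with bursts of up to `capacity` requests."""
    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, up to capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # over the limit: drop or challenge this request
```

Real DDoS defenses operate at the network edge and at far larger scale, of course; the point is simply that excess traffic gets shed before it reaches the application.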

I am unapologetically mocking this company’s name. Agylytyx emailed me this press release today, and only the name captured my attention. Plus, their obvious love of the ™ symbol — even people they quote use the ™. Amazing!

Beyond that, I’ve never talked to the company or used its products, and have no opinion about them. (My guess is that it’s supposed to be pronounced as “Agil-lytics.”)

Agylytyx Announces Availability of New IOT Data Analysis Application

SUNNYVALE, Calif., June 30, 2017 /PRNewswire/ — Agylytyx, a leading cloud-based analytic software vendor, today announced a new platform for analyzing IoT data. The Agylytyx Generator™ IoT platform represents an application of the vendor’s novel Construct Library™ approach to the IoT marketplace. For the first time, companies can both explore their IoT data and make it actionable much more quickly than previously thought possible.

From PLC data streams archived as tags in traditional historians to time series data streaming from sensors attached to devices, the Agylytyx Generator™ aggregates and presents IoT data in a decision-ready format. The company’s unique Construct Library™ (“building block”) approach allows decision makers to create and explore aggregated data such as pressure, temperature, output productivity, worker status, waste removal, fuel consumption, heat transfer, conductivity, condensation or just about any “care abouts.” This data can be instantly explored visually at any level such as region, plant, line, work cell or even device. Best of all, the company’s approach eliminates the need to build charts or write queries.

One of the company’s long-time advisors, John West of Clean Tech Open, noticed the Agylytyx Generator™ potential from the outset. West’s wide angle on data analysis led him to stress the product’s broad applicability. West said “Even as the company was building the initial product, I advised the team that I thought there was strong applicability of the platform to operational data. The idea of applying Constructs to a received data set has broad usage. Their evolution of the Agylytyx Generator™ platform to IoT data is a very natural one.”

The company’s focus on industrial process data was the brainchild of one the company’s investors, Jim Smith. Jim is a chemical engineer with extensive experience working with plant floor data. Smith stated “I recognized the potential in the company’s approach for analyzing process data. Throughout the brainstorming process, we all gradually realized we were on to something groundbreaking.”

This unique approach to analytics attracted the attention of PrecyseTech, a pioneer of Industrial IoT (IIoT) Systems providing end-to-end management of high-value physical assets and personnel. Paul B. Silverman, the CEO of PrecyseTech, has had a longstanding relationship with the company. Silverman noted: “The ability of the Agylytyx Generator™ to address cloud-based IoT data analytic solutions is a good fit with PrecyseTech’s strategy. Agylytyx is working with the PrecyseTech team to develop our inPALMSM Solutions IoT applications, and we are working collaboratively to identify and develop IoT data opportunities targeting PrecyseTech’s clients. Our plans are to integrate the Agylytyx Generator™ within our inPALMSM Solutions product portfolio and also to offer users access to the Agylytyx Generator™ via subscription.”

Creating this IoT focus made the ideal use of the Agylytyx Generator™. Mark Chang, a data scientist for Agylytyx, noted: “All of our previous implementations – financial, entertainment, legal, customer service – had data models with common ‘units of measure’ – projects, media, timekeepers, support cases, etc. IoT data is dissimilar in that there is no common ‘unit of measure’ across devices. This dissimilarity is exactly what makes our Construct Library™ approach so useful to IoT data. The logical next step for us will be to apply machine learning and cluster inference to enable optimization of resource deployment and predictive analytics like predictive maintenance.”

About Agylytyx

Agylytyx provides cloud-based enterprise business analytic software. The company’s flagship product, the Agylytyx Generator™, frees up analyst time and results in better decision making across corporations. Agylytyx is based in Sunnyvale, California, and has locations in Philadelphia and Chicago, IL. For more information about Agylytyx visit www.agylytyx.com.

Virtual reality and augmented reality are the darlings of the tech industry. Seemingly every company is interested, even though one of the most prominent AR products, Google Glass, crashed and burned a few years ago.

What’s the difference?

  • Virtual reality (VR) is when you are totally immersed in a virtual world. You only see (and hear) what’s presented to you as part of that virtual world, generated by software and displayed in stereo goggles and headphones. The goggles can detect motion, and can let you move around in the virtual world. Games and simulations take place in VR.
  • Augmented reality (AR) means visual overlays. You see the real world, with digital information superimposed on it. Google Glass was AR. So, too, are apps where you aim your smartphone’s camera at the sky, and the AR software overlays the constellations on top of the stars, and shows where Saturn is right now. AR also can guide a doctor to a blood clot, or an emergency worker away from a hot wire, or a game player to a Pokemon character in a local park.

Both AR and VR have been around for decades, although the technology has become smaller and less expensive. There are consumer-oriented devices, such as the Oculus, and many professional systems. Drivers for the success of AR and VR are more powerful computing devices (such as smartphones and game consoles), and advances in both high-resolution displays and motion sensors for goggles.

That doesn’t mean that AR/VR are the next Facebook or Instagram, though both those companies are looking at AR/VR. According to a study, “VR/AR Innovation Report,” presented by the UBM Game Network, VR’s biggest failures include a lack of subsidized hardware, enterprise applications, and native VR experiences. The gear is too expensive, developers say, and manufacturers are perceived to have failed in marketing VR systems and software.

Keep that airsick bag handy

It’s well known that VR hardware must work exactly right: if image motion is not properly synchronized to head motion, many VR users experience nausea. That’s not good. To quote from the UBM study:

Notably, we saw that many still feel like VR’s greatest unsolved problem is the high risk of causing nausea and physical discomfort.

“The biggest issue is definitely the lack of available ‘simulator sickness’ mitigation techniques,” opined one respondent. “Since each VR application offers a unique user experience, no one mitigation technique can service all applications. Future designs must consider the medium/genre they are developing for and continue to investigate new mitigation techniques to ensure optimal user enjoyment.”

Lots of good applications

That doesn’t mean that VR and AR are worthless. Pokemon Go, which was a hit a few summers ago, demonstrated that AR can engage consumers without stereo goggles. Google Earth VR provides immersive mapping experiences.

The hardware is also moving forward. A startup in Helsinki, called Varjo, has made a breakthrough in optimizing goggles for AR and VR. Varjo is addressing a trade-off: if the goggles’ resolution is low enough that you can refresh the image quickly, the picture doesn’t look realistic. But if you increase the resolution to match that of the human eye, it’s harder to drive the image seamlessly in real time.

Varjo’s answer is to see where the eye is looking – using a technology called gaze tracking – and seamlessly drive that part of the display in super-high resolution. Where you’re not looking? That can be at a lower resolution, to provide context. Varjo says they can shift the high-resolution spot as fast as you can move your eye – and by tracking the gaze on both eyes, they can see if you are looking at virtual objects “close” or “far away.” The result, Varjo claims, is a display that’s about 35x higher resolution than other commercial systems, without nausea.

Varjo is focusing on the professional market with headsets that will cost thousands (not hundreds) of dollars when they ship at the end of 2017. However, it shows the promise of realistic, affordable AR/VR technology. Augmented reality and virtual reality are becoming more real every day.

The folks at Varjo think they’ve made a breakthrough in how goggles for virtual reality and augmented reality work. They are onto something.

Most VR/AR goggles have two displays, one for each eye, and they strive to drive those displays at the highest resolution possible. Their hardware and software take into account that as the goggles move, the viewpoint has to move in a seamless way, without delay. If there’s delay, the “willing suspension of disbelief” required to make VR work fails, and in some cases, the user experiences nausea and disorientation. Not good.

The challenge comes from making the display sufficiently high resolution that objects look photorealistic. That lets users manipulate virtual machine controls, operate flight simulators, read virtual text, and so on. Most AR/VR systems try to make the display uniformly high resolution, so that no matter where the user looks, the resolution is there.

Varjo, based in Finland, has a different approach. They take advantage of the fact that the human eye sees in high resolution only in the spot the fovea is pointed at – and in much lower resolution elsewhere. So while the whole display is capable of high resolution, Varjo uses fovea detectors to do “gaze tracking” to see what the user is looking at, and makes that area super high resolution. When the fovea moves to another spot, that area is almost instantly bumped up to super high resolution, while the original area is downgraded to a reduced resolution.
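As a rough illustration of that gaze-contingent idea, here is a toy Python sketch that assigns each screen tile a rendering resolution based on its distance from the tracked gaze point. The radius and falloff numbers are my own assumptions, not Varjo’s:

```python
import math

def resolution_scale(tile_center, gaze, fovea_radius=0.1):
    """Return the fraction of full resolution at which to render a tile,
    based on its distance from the gaze point.

    Coordinates are normalized [0, 1] screen space. The foveal radius
    and peripheral falloff are illustrative values only.
    """
    dist = math.hypot(tile_center[0] - gaze[0], tile_center[1] - gaze[1])
    if dist <= fovea_radius:
        return 1.0  # foveal region: render at full resolution
    # Peripheral region: degrade smoothly with distance, but never below
    # 25%, so the surround still provides visual context.
    return max(0.25, 1.0 - (dist - fovea_radius) * 2.0)
```

When the gaze tracker reports a new fixation point, the renderer simply recomputes these scales, which is why the high-resolution “spot” can follow the eye.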

Sound simple? It’s not, and that’s why the initial Varjo technology will be targeted at professional applications, like doctors, computer-aided design workers, or remote instrument operators. Prototypes of the goggles will be available this year to software developers, and the first products should ship to customers at the end of 2018. The price of the goggles is said to be “thousands, not tens of thousands” of dollars, according to Urho Konttori, the company’s founder. We talked by phone; he was in the U.S. doing demos in San Francisco and New York, but unfortunately, I wasn’t able to attend one of them.

Now, Varjo isn’t the first to use gaze tracking technology to try to optimize the image. According to Konttori, other vendors use medium resolution where the eye is pointing, and low resolution elsewhere, just enough to establish context. By contrast, he says that Varjo uses super high resolution where the user looks, and high resolution elsewhere. Because each eye’s motion is tracked separately, the system can also tell when the user is looking at objects close to user (because the eyes are at a more converged angle) or farther away (the eyes are at a more parallel angle).

“In our prototype, wherever you are looking, that’s the center of the high resolution display,” he said. “The whole image looks to be in focus, no matter where you look. Even in our prototype, we can move the display projection ten times faster than the human eye.”

Konttori says that the effective resolution of the product, called 20/20, is 70 megapixels, updated in real time based on head motion and gaze tracking. That compares to fewer than 2 megapixels for Oculus, Vive, HoloLens and Magic Leap. (This graphic from Varjo compares their display to an unnamed competitor.) What’s more, he said the CPU/GPU power needed to drive this display isn’t huge. “The total pixel count is less than in a single 4K monitor. You need roughly 2x the GPU compared to a conventional VR set for the same scene.”
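That pixel-count claim is easy to sanity-check with arithmetic. The 70 megapixels are effective resolution (what the eye perceives where it looks), not pixels actually driven. With an assumed split of a small full-density foveal inset plus a lower-resolution surround per eye (my numbers, purely illustrative), the driven total indeed stays under a single 4K monitor’s roughly 8.3 megapixels:

```python
# Sanity check of the "less than a single 4K monitor" claim, using an
# assumed foveated split -- these panel sizes are illustrative, not Varjo's.
UHD_4K = 3840 * 2160            # 8,294,400 pixels in one 4K frame

inset_per_eye = 1000 * 1000     # small high-density foveal inset
context_per_eye = 1280 * 1440   # lower-resolution peripheral surround
total = 2 * (inset_per_eye + context_per_eye)

assert total < UHD_4K  # the foveated budget fits inside one 4K frame
```

Under these assumptions the headset drives about 5.7 million pixels, which is why the GPU cost can be modest even though the perceived resolution is enormous.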

The current prototypes use two video connectors and two USB connectors. Konttori says that this will drop to one video connector and one USB connector shortly, so that the device can be driven by smaller professional-grade computers, such as a gaming laptop, though he expects most will be connected to workstations.

Konttori will be back in the U.S. later this year. I’m looking forward to getting my hands (and eyes) on a Varjo prototype. Will report back when I’ve actually seen it.

What do PR people do right? What do they do wrong? Khali Henderson, a senior partner in BuzzTheory Strategies, recently interviewed me (and a few other technology editors) about “Things Editors Hate (and Like) About Your Press Relations.”

She started the story with,

I asked these veteran editors what they think about interfacing with business executives and/or their PR representatives in various ways – from press releases to pitches to interviews.

The results are a set of guidelines on what to do and, more importantly, what NOT to do when interfacing with media.

If you’re new to media relations, this advice will start you off on the right track.

Even if you’ve been around the press pool a lap or two, you may learn something new.

After that, Khali asked a number of practical questions, including:

  • When you receive a press release, what makes you most likely to follow up?
  • What makes you skip a press release and go to the next one?
  • When a company executive pitches you a story, what makes you take notice?
  • What makes you pass on a story pitch?
  • When you are reporting on a story, what are you looking for in a source?
  • What do you wish business executives and/or their PR representatives knew about your job?

Read and enjoy the story, and my answers to Khali’s questions!

I received this awesome tech spam message today from LaserVault. (It’s spam because it went to my company’s info@ address).

There’s only one thought: “Lordy, I hope there are backup tapes.”

Free White Paper: Is A Tape-Related Data Disaster In Your Future?

Is a tape-related data disaster in your future? It may be if you currently use tape for your backup and recovery.

This paper discusses the many risks you take by using tape and relying on it to keep your data safe in case of a disaster.

Read how you can better protect your data from the all too common dangers that threaten your business, and learn about using D2D technology, specifically tape emulation, instead of tape for iSeries, AIX, UNIX, and Windows.

This white paper should be required reading for anyone involved in overseeing their company’s tape backup operations.

Don’t be caught short when the need to recover your data is most critical. Download the free white paper now.

Ha ha ha ha ha. I slay me.

The WannaCry (WannaCrypt) malware attack spread through unpatched old software. Old software is the bane of the tech industry. Software vendors hate old software for many reasons. One, of course, is that the old software has vulnerabilities that must be patched. Another is that the support costs for older software keep going and growing. Plus, of course, newer software has new features that can generate business. Meanwhile, customers running old software aren’t generating much revenue.

Enterprises, too, hate old software. They don't like the support costs, either, or the security vulnerabilities. However, there are huge costs in licensing and installing new software – which might require training users and IT staff, buying new hardware, updating templates, adjusting integrations, and so on. Plus, old software has been tested and certified, and better the risk you know than the risk you don't. So, they keep using old software.

Think about a family that's torn between keeping a paid-for 13-year-old car, like my 2004 BMW, and leasing a newer, safer, more reliable model. The decision about whether to upgrade is complicated. There's no good answer, and in case of doubt, the default decision is to simply wait until next year's budget.

However: What about a family that decides to go car-shopping after paying for a scary breakdown or an unexpectedly large repair bill? Similarly, companies are inspired to upgrade critical software after suffering a data breach or learning about irreparable vulnerabilities in the old code.

The call to action?

WannaCry might be that call to action for some organizations. Take Windows, for example – but let me be quick to stress that this issue isn’t entirely about Microsoft products. Smartphones running old versions of Android or Apple’s iOS, or old Mac laptops that can’t be moved to the latest edition of OS X, are just as vulnerable.

Okay, back to Windows and WannaCry. In its critical March 14, 2017, security update, Microsoft identified a flaw in its Server Message Block (SMB) code that could be exploited; the flaw had been disclosed in documents stolen by hackers from the U.S. National Security Agency. Given the severity of that flaw, Microsoft offered patches for old software, including Windows Server 2008 and Windows Vista.

It’s important to note that customers who applied those patches were not affected by WannaCry: Microsoft fixed the flaw. Many customers didn’t install the fix because they didn’t know about it, couldn’t find the IT staff resources, or simply thought the vulnerability was no big deal. Some made the wrong bet, and paid for it.

Patches keep coming; they aren’t enough

This week, Microsoft blogged,

On May 12, 2017, the WannaCrypt ransomware served as an all too real example of the danger of cyber attacks to individuals and businesses globally.

In reviewing the updates for this month, some vulnerabilities were identified that pose elevated risk of cyber attacks by government organizations, sometimes referred to as nation-state actors or other copycat organizations. To address this risk, today we are providing additional security updates along with our regular Update Tuesday service. These security updates are being made available to all customers, including those using older versions of Windows. Due to the elevated risk for destructive cyber attacks at this time, we made the decision to take this action because applying these updates provides further protection against potential attacks with characteristics similar to WannaCrypt.

The new patches go back even farther than those issued in March, covering Windows XP and Windows Server 2003. While Microsoft is to be complimented on releasing those patches, customers should not be complacent. It is dangerous for consumers or companies to keep running Windows XP, or heaven forbid, Windows 95. It’s equally dangerous to run Windows Server 2003 at all; anything left on that platform should be migrated. The same is true of smartphones running old versions of Android or iOS, laptops or notebooks running old versions of the Macintosh OS, or even old versions of Linux. In some cases, those systems may seem super-reliable – but they are not secure, and can’t be secured.

Unfortunately, upgrades to the latest operating system may require hardware updates (such as more memory) – or a complete replacement. That’s often the case with phones and notebooks, and even servers might require a forklift upgrade. That’s the price of security. Forget about the new features of new software; forget about the improved reliability or higher performance that comes along with new hardware. Old software simply can’t be secured. It must go. As my friend Jason Perlow wrote in mid-May, “If you’re still using Windows XP, you’re a menace to society.” He’s right. Get it done.
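The "find it and migrate it" advice above starts with knowing what you're running. Here's a minimal sketch of an end-of-life audit over a hypothetical asset inventory; the host names and the unsupported-OS set are illustrative only (check the vendor's lifecycle documentation for authoritative end-of-support dates):

```python
# Sketch: flag inventory hosts running operating systems past end of support.
# The UNSUPPORTED set and the inventory below are hypothetical examples.

UNSUPPORTED = {"Windows XP", "Windows Server 2003", "Windows Vista"}

inventory = [
    {"host": "hr-desktop-01", "os": "Windows XP"},
    {"host": "web-01",        "os": "Windows Server 2016"},
    {"host": "legacy-erp",    "os": "Windows Server 2003"},
    {"host": "dev-laptop",    "os": "Windows 10"},
]

# Hosts that cannot be secured and should be migrated or replaced.
must_migrate = [m["host"] for m in inventory if m["os"] in UNSUPPORTED]
print(must_migrate)
```

In a real shop the inventory would come from an asset-management or MDM system rather than a hard-coded list, but the principle is the same: the audit is trivial once the inventory exists.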

Have you ever suffered through the application process for cybersecurity insurance? You know that “suffered” is the right word because of a triple whammy.

  • First, the general risk factors involved in cybersecurity are constantly changing. Consider the rapid rise in ransomware, for example.
  • Second, it is extremely labor-intensive for businesses to document how “safe” they are, in terms of their security maturity, policies, practices and technology.
  • Third, it’s hard for insurers, underwriters, and their actuaries to feel confident that they truly understand how risky a potential customer may be — information and knowledge that’s required for quoting a policy that offers sufficient coverage at reasonable rates.

That is, of course, assuming that everyone is on the same page and agrees that cybersecurity insurance is important to consider for the organization. Is cybersecurity insurance a necessary evil for every company to consider? Or, is it only a viable option for a small few? That’s a topic for a separate conversation. For now, let’s assume that you’re applying for insurance.

For their part, insurance carriers aren’t equipped to go into your business and examine your IT infrastructure. They won’t examine firewall settings or audit your employee anti-phishing training materials. Instead, they rely upon your answers to questionnaires developed and interpreted by their own engineers. Unfortunately, those questionnaires may not capture the nuances, especially if you’re in a vertical where the risks are especially high – and so are the rewards for successful hackers.

According to InformationAge, 77% of ransomware attacks occur in four industries: business & professional services (28%), government (19%), healthcare (15%) and retail (15%). In 2016 and 2017, healthcare organizations like hospitals and medical practices were repeatedly hit by ransomware. Give that data to the actuaries, and they might ask those types of organizations to fill out even more questionnaires.

About those questionnaires? “Applications tend to have a lot of yes/no answers… so that doesn’t give the entire picture of what the IT framework actually looks like,” says Michelle Chia, Vice President, Zurich North America. She explained that an insurance company’s internal assessment engineers have to dig deeper to understand what is really going on: “They interview the more complex clients to get a robust picture of what the combination of processes and controls actually looks like and how secure the network and the IT infrastructure are.”

Read more in my latest for ITSP Magazine, “How to Streamline the Cybersecurity Insurance Process.”

Twenty years ago, my friend Philippe Kahn introduced the first camera-phone. You may know Philippe as the founder of Borland, and as a serial entrepreneur who has started many companies and accomplished many things. He’s also a sailor, jazz musician, and, well, a fun guy to hang out with.

About camera phones: At first, I was a skeptic. Twenty years ago I was still shooting film, and then made the transition to digital SLR platforms. Today, I shoot with big Canon DSLRs for birding and general photography, Leica digital rangefinders when I want to be artistic, and pocket-sized digital cameras when I travel. Yet most of my pictures, especially those posted to social media, come from the built-in camera in my smartphone.

Philippe has blogged about this special anniversary – which also marks the birth of his daughter Sophie. To excerpt from his post, The Creation of the Camera-Phone and Instant-Picture-Mail:

Twenty years ago on June 11th 1997, I shared instantly the first camera-phone photo of the birth of my daughter Sophie. Today she is a university student and over 2 trillion photos will be instantly shared this year alone. Every smartphone is a camera-phone. Here is how it all happened in 1997, when the web was only 4 years old and cellular phones were analog with ultra limited wireless bandwidth.

First step 1996/1997: Building the server service infrastructure: For a whole year before June 1997 I had been working on a web/notification system that was capable of uploading a picture and text annotations securely and reliably and sending link-backs through email notifications to a stored list on a server and allowing list members to comment.

Remember it was 1996/97, the web was very young and nothing like this existed. The server architecture that I had designed and deployed is in general the blueprint for all social media today: Store once, broadcast notifications and let people link back on demand and comment. That’s how Instagram, Twitter, Facebook, LinkedIn and many others function. In 1997 this architecture was key to scalability because bandwidth was limited and it was prohibitive, for example, to send the same picture to 500 friends. Today the same architecture is essential because while there is bandwidth, we are working with millions of views and potential viral phenomena. Therefore the same smart “frugal architecture” makes sense. I called this “Instant-Picture-Mail” at the time.

He adds:

What about other claims of inventions: Many companies put photo-sensors in phones or wireless modules in cameras, including Kodak, Polaroid, Motorola. None of them understood that the success of the camera-phone is all about instantly sharing pictures with the cloud-based Instant-Picture-Mail software/server/service-infrastructure. In fact, it’s even amusing to think that none of these projects was interesting enough that anyone has kept shared pictures. You’d think that if you’d created something new and exciting like the camera-phone you’d share a picture or two or at least keep some!

Read more about the fascinating story here — he goes into a lot of technical detail. Thank you, Philippe, for your amazing invention!
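The store-once, broadcast-notifications, link-back-on-demand pattern Philippe describes can be sketched in a few lines. This is a toy illustration of the pattern, not his 1997 implementation; the function names and the example URL are hypothetical:

```python
# Store-once / notify / link-back: the photo is stored exactly once,
# each subscriber receives only a short link, and the bytes are
# served on demand when someone follows the link.

import hashlib

store = {}          # photo_id -> photo bytes (stored exactly once)
notifications = []  # outgoing "emails" carrying only a link back

def share(photo: bytes, subscribers: list) -> str:
    photo_id = hashlib.sha256(photo).hexdigest()[:12]
    store[photo_id] = photo                      # store once...
    for who in subscribers:                      # ...broadcast links only
        notifications.append((who, f"https://example.net/pic/{photo_id}"))
    return photo_id

def fetch(photo_id: str) -> bytes:              # link back on demand
    return store[photo_id]

pid = share(b"sophie.jpg bytes", ["grandma", "friend1", "friend2"])
assert len(store) == 1          # one stored copy, not three
assert len(notifications) == 3  # three lightweight notifications
```

The frugality is visible even in the toy: sharing with 500 friends would add 500 tiny notification records, not 500 copies of the picture.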

Doing business in China has always been a rollercoaster. For Internet businesses, the ride just became scarier.

The Chinese government has rolled out new cybersecurity laws, which begin affecting foreign companies today, June 1, 2017. The new rules give the Chinese government more control over Internet companies. The government says that the rules are designed to help address threats caused by terrorists and hackers – but the terms are broad enough to confuse anyone doing business in China.

Two of the biggest requirements of the new legislation:

  • Companies that do business in China must store all data related to that business, including customer data, within China.
  • Consumers must register with their real names on retail sites, community sites, news sites, and social media, including messaging services.

According to many accounts, the wording of the new law is too ambiguous to assure compliance. Perhaps the drafters were careless, or lacked understanding of technical issues. However, it’s possible that the ambiguity is intentional, to give Chinese regulators room to selectively apply the new laws based on political or business objectives. To quote coverage in The New York Times,

One instance cited by Mats Harborn, president of the European Union Chamber of Commerce in China, in a round-table discussion with journalists, was that the government said it wanted to regulate “critical information infrastructure,” but had not defined what that meant.

“The way it’s enforced and implemented today and the way it might be enforced and implemented in a year is a big question mark,” added Lance Noble, the chamber’s policy and communications manager. He warned that uncertainty surrounding the law could make foreign technology firms reluctant to bring their best innovations to China.

The government organization behind these laws, the Cyberspace Administration of China, offers an English-language website.

Keep Local Data Local

The rules state that companies that store data about Chinese customers overseas without approval can have their businesses shut down. All businesses operating in China must provide technical support to the country’s security agencies in order to investigate anything that the authorities claim threatens national security or might represent a crime. According to the South China Morning Post, the new rules can affect nearly any company that moves data:

For example, rules limiting the transfer of data outside China’s borders originally applied only to “critical information infrastructure operators”. But that was changed mid-April to “network operators,” which could mean just about any business.

“Even a small e-business or email system could be considered a network,” said Richard Zhang, director of KPMG Advisory in Shanghai.

Another provision requires IT hardware and services to undergo inspection and verification as “secure and controllable” before companies can deploy them in China. That appears to be already tilting purchasing decisions at state-owned enterprises.

Compliance Will Be Tricky

According to a report on CNBC,

The American Chamber of Commerce in Shanghai has called the data localization and data transfer regulations “unnecessarily onerous,” with a potential impact on cross-border trade worth billions of dollars.

Multinationals may be better equipped to take on the cost of compliance, but “a lot of the small and medium sized companies may not be able to afford to put in the control that the Chinese government is asking for, and if they can’t put in those controls, it may actually push them out of that country and that market,” said James Carder, vice president of cybersecurity firm LogRhythm Labs.

It’s clear that, well, it’s not clear. There do seem to be legitimate concerns about the privacy of Chinese citizens, and about the ability of the Chinese government to examine data relevant to crime or terrorism. It’s also true, however, that these rules will help Chinese firms, which have a home-court advantage – and which don’t face similar rules when they expand to the rest of Asia, Europe or North America. To quote again from CNBC:

While Chinese firms are also subject to the same data localization and transfer requirements — a potential challenge as many domestic companies are going global — experts said the regulation could help China bolster its domestic tech sector as more companies are forced to store data onshore. But that could mean continued uneven market access for foreign versus Chinese companies, which is also a long-time challenge.

“The asymmetry between the access that Chinese companies enjoy in other markets and the access foreign companies have in China has been growing for some time,” said Kenneth Jarrett, the president of the American Chamber in Shanghai.

One example is that Chinese firms usually can fully own and control data centers and cloud-related services around the world without foreign equity restrictions or technology transfer requirements, but foreign cloud companies in China don’t enjoy the same environment.

The opportunities are huge, so Internet firms have no choice but to ride that Chinese rollercoaster. 

March 2003: The U.S. International Trade Commission released a 32-page paper called, “Protecting U.S. Intellectual Property Rights and the Challenge of Digital Piracy.” The authors, Christopher Johnson and Daniel J. Walworth, cited an article I wrote for the Red Herring in 1999.

Here’s the abstract of the ITC’s paper:

ABSTRACT: According to U.S. industry and government officials, intellectual property rights (IPR) infringement has reached critical levels in the United States as well as abroad. The speed and ease with which the duplication of products protected by IPR can occur has created an urgent need for industries and governments alike to address the protection of IPR in order to keep markets open to trade in the affected goods. Copyrighted products such as software, movies, music and video recordings, and other media products have been particularly affected by inadequate IPR protection. New tools, such as writable compact discs (CDs) and, of course, the Internet have made duplication not only effortless and low-cost, but anonymous as well. This paper discusses the merits of IPR protection and its importance to the U.S. economy. It then provides background on various technical, legal, and trade policy methods that have been employed to control the infringement of IPR domestically and internationally. This is followed by an analysis of current and future challenges facing U.S. industry with regard to IPR protection, particularly the challenges presented by the Internet and digital piracy.

Here’s where they cited yours truly:

To improve upon the basic encryption strategy, several methods have evolved that fall under the classification of “watermarks” and “digital fingerprints” (also known as steganography). Watermarks have been considered extensively by record labels in order to protect their content.44 However, some argue that “watermarking” is better suited to tracking content than it is to protecting against reproduction. This technology is based on a set of rules embedded in the content itself that define the conditions under which one can legally access the data. For example, a digital music file can be manipulated to have a secret pattern of noise, undetectable to the ear, but recorded such that different versions of the file distributed along different channels can be uniquely identified.45 Unlike encryption, which scrambles a file unless someone has a ‘key’ to unlock the process, watermarking does not intrinsically prevent use of a file. Instead it requires a player–a DVD machine or MP3 player, for example–to have instructions built in that can read watermarks and accept only correctly marked files.46

Reference 45 goes to

Alan Zeichick, “Digital Watermarks Explained,” Red Herring, Dec. 1999

Another paper that referenced that Red Herring article is “Information Technology and the Increasing Efficacy of Non-Legal Sanctions in Financing Transactions.” It was written by Ronald J. Mann of the University of Michigan Law School.

Sadly, my digital watermarks article is no longer available online.
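The idea described in the excerpt – a secret, imperceptible pattern embedded in the content itself, so that copies distributed through different channels can be told apart – can be illustrated with a toy least-significant-bit scheme. This is purely illustrative; real audio watermarks use far more robust techniques (spread-spectrum, echo hiding) that survive compression, which LSB embedding does not:

```python
# Toy digital watermark: hide a channel ID in the least-significant
# bits of audio samples. The change is inaudible (+/-1 per sample),
# but each distribution channel's copy is uniquely identifiable.

def embed(samples, watermark_bits):
    """Return a copy of samples with watermark_bits written into the LSBs."""
    marked = list(samples)
    for i, bit in enumerate(watermark_bits):
        marked[i] = (marked[i] & ~1) | bit  # overwrite the LSB
    return marked

def extract(samples, n_bits):
    """Recover the first n_bits hidden in the LSBs."""
    return [s & 1 for s in samples[:n_bits]]

# Two "distribution channels" receive differently marked copies of the
# same audio, so a leaked file can be traced back to its channel.
audio = [1000, 1003, 998, 1001, 1005, 997, 1002, 999]
channel_a = embed(audio, [1, 0, 1, 1])
channel_b = embed(audio, [0, 1, 0, 0])

assert extract(channel_a, 4) == [1, 0, 1, 1]
assert extract(channel_b, 4) == [0, 1, 0, 0]
```

Note that, exactly as the excerpt says, nothing here prevents playback of an unmarked file; enforcement would require the player to check the mark and refuse files without one.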

Technical diligence starts when a startup or company has been approved for outside capital, but needs to be inspected to ensure the value of the technology is “good enough” to accept investment. The average startup has something like 1-in-100 odds of receiving funding once it pitches a VC firm, which is why, if investment is offered, the ball shouldn’t be dropped during technical diligence. Most issues in technical diligence can be prevented. Since technical diligence is part of the investigation process for receiving venture capital, any business in theory could proactively prepare for it.

So advises my friend Ellie Cachette, General Partner at CCM Capital Management, a fund-of-funds specializing in venture capital investments. In her two-part series for Inc. Magazine, Ellie shares insights — real insights — in the following areas:

  • Intellectual property and awareness
  • Scaling
  • Security
  • Documentation
  • Risk management
  • Development budget
  • Development meeting and reporting
  • Development ROI
  • Having the right development talent in place

Here are the links:

Five “Business Things” to Understand for Technical Diligence: Part One

Five “Tech Things” to Understand for Technical Diligence: Part Two

While we’re at it, here’s another great article by Ellie in Inc.:

When Your Customers Want One Thing — And Your Investors Want Another

Got a business? Want to do better? Learn from Ellie Cachette. Follow her @ecachette.

The endpoint is vulnerable. That’s where many enterprise cyber breaches begin: An employee clicks on a phishing link and installs malware, such as ransomware, or is tricked into providing login credentials. A browser can open a webpage that installs malware. An infected USB flash drive is another source of attacks. Servers can be subverted with SQL injection or other attacks; even cloud-based servers are not immune from being probed and subverted by hackers. As the number of endpoints proliferates — think Internet of Things — the odds of an endpoint being compromised, and then used to gain access to the enterprise network and its assets, only increase.

Which are the most vulnerable endpoints? Which need extra protection? All of them, especially devices running some flavor of Windows, according to Mike Spanbauer, Vice President of Security at testing firm NSS Labs. “All of them. So the reality is that Windows is where most targets attack, where the majority of malware and exploits ultimately target. So protecting your Windows environment, your Windows users, both inside your businesses as well as when they’re remote is the core feature, the core component.”

Roy Abutbul, Co-Founder and CEO of security firm Javelin Networks, agreed. “The main endpoints that need the extra protection are those endpoints that are connected to the [Windows] domain environment, as literally they are the gateway for attackers to get the most sensitive information about the entire organization.” He continued, “From one compromised machine, attackers can get 100 per cent visibility of the entire corporate, just from one single endpoint. Therefore, a machine that’s connected to the domain must get extra protection.”

Scott Scheferman, Director of Consulting at endpoint security company Cylance, is concerned about non-PC devices, as well as traditional computers. That might include the Internet of Things, or unprotected routers, switches, or even air-conditioning controllers. “In any organization, every endpoint is really important, now more than ever with the internet of Things. There are a lot of devices on the network that are open holes for an attacker to gain a foothold. The problem is, once a foothold is gained, it’s very easy to move laterally and also elevate your privileges to carry out further attacks into the network.”

At the other end of the spectrum is cloud computing. Think about enterprise-controlled virtual servers, containers, and other resources configured as Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS). Anything connected to the corporate network is an attack vector, explained Roark Pollock, Vice President at security firm Ziften.

Microsoft, too, takes a broad view of endpoint security. “I think every endpoint can be a target of an attack. So usually companies start first with high privilege boxes, like administrator consoles onboard to service, but everybody can be a victim,” said Heike Ritter, a Product Manager for Security and Networking at Microsoft.

I’ve written a long, detailed article on this subject for NetEvents, “From Raw Data to Actionable Intelligence: The Art and Science of Endpoint Security.”

You can also watch my 10-minute video interview with these people here.

Many IT professionals were caught by surprise by last week’s huge cyberattack. Why? They didn’t expect ransomware to spread across their networks on its own.

The reports came swiftly on Friday morning, May 12. The first I saw said that dozens of hospitals in England were affected by ransomware, denying physicians access to patient medical records and causing surgery and other treatments to be delayed.

The infections spread quickly, reportedly hitting as many as 100 countries, with Russian systems affected apparently more than others. What was going on? The details came out quickly: This was a relatively unknown ransomware variant, dubbed WannaCry or WCry. WannaCry exploited a flaw revealed in information stolen from the U.S. National Security Agency (NSA) by hackers; affected machines were Windows desktops, notebooks and servers that were not up to date on security patches.

Most alarming, WannaCry did not spread across networks in the usual way, through people clicking on email attachments. Rather, once one system on a Windows network was infected, WannaCry propagated itself to other unpatched machines without any human interaction. The industry term for this type of super-vigorous ransomware: ransomworm.

I turned to one of the experts on malware that spreads across Windows networks, Roi Abutbul. A former cybersecurity researcher with the Israeli Air Force’s famous OFEK Unit, he is founder and CEO of Javelin Networks, a security company that uses artificial intelligence to fight malware.

Abutbul told me, “The WannaCry/Wcry ransomware—the largest ransomware infection in history—is a next-gen ransomware. Opposed to the regular ransomware that encrypts just the local machine it lands on, this type spreads throughout the organization’s network from within, without having users open an email or malicious attachment. This is why they call it ransomworm.”

He continued, “This ransomworm moves laterally inside the network and encrypts every PC and server, including the organization’s backup.” Read more about this, and my suggestions for coping with the situation, in my story for Network World, “Self-propagating ransomware: What the WannaCry ransomworm means for you.”

If you’re in London in a couple weeks, look for me. I’ll be at the NetEvents European Media Spotlight on Innovators in Cloud, IoT, AI and Security, on June 5.

At NetEvents, I’ll be doing lots of things:

  • Acting as the Master of Ceremonies for the day-long conference.
  • Introducing the keynote speaker, Brian Lord, OBE, who is former GCHQ Deputy Director for Intelligence and Cyber Operations
  • Conducting an on-stage interview with Mr. Lord, Arthur Snell, formerly of the British Foreign and Commonwealth Office, and Guy Franco, formerly with the Israeli Defense Forces.
  • Giving a brief talk on the state of endpoint cybersecurity risks and technologies.
  • Moderating a panel discussion about endpoint security.

The one-day conference will be at the Chelsea Harbour Hotel. Looking forward to it, and maybe will see you there?

The reports came quickly on Friday morning, May 12 – the first alert I read referenced dozens of hospitals in England affected by ransomware (without anyone realizing it was a ransomworm), denying physicians access to their patients’ medical records and causing delays to surgeries and ongoing treatments. Said the BBC,

The malware spread quickly on Friday, with medical staff in the United Kingdom reportedly watching computers knocked out of service “one by one.”

NHS staff shared screenshots of the WannaCry program, which demanded a payment of $300 (£230) in the virtual currency Bitcoin to unlock the files on each computer.

Throughout the day, other countries, mostly European, reported infections.

Some reports said that Russia had seen the largest number of infections on the planet. National banks, the interior and health ministries, the state-owned Russian railway company and the second-largest mobile phone network were all reported as affected.

The infections spread rapidly, reportedly hitting as many as 150 countries, with Russian systems apparently affected more than others.

Read the rest of my article, “Ransomworm golpea a más de 150 Países,” in IT Connect Latam.

In the United States, Sunday, May 14, is Mother’s Day. (Mothering Sunday was March 27 this year in the United Kingdom.) This is a good time to reflect on the status of women of all marital statuses and family situations in information technology. The results continue to disappoint.

According to the United States Department of Labor, 57.2% of all women participate in the labor force in the United States, and 46.9% of the people employed in all occupations are women. So far, so good. Yet when it comes to information technology, women lag far, far behind. Based on 2014 stats:

  • Web developers – 35.2% women
  • Computer systems analysts – 34.2% women
  • Database administrators – 28.0% women
  • Computer and information systems managers – 26.7% women
  • Computer support specialists – 26.6% women
  • Computer programmers – 21.4% women
  • Software developers, applications and systems software – 19.8% women
  • Network and computer systems administrators – 19.1% women
  • Information security analysts – 18.1% women
  • Computer network architects – 12.4% women

The job area with the highest projected growth rate over the next few years will be information security analysts, says the Labor Department. The question is: Will women continue to be underrepresented in this high-paying, fast-growing field? Or will the demand for analysts provide new opportunities for women to enter the security profession? Impossible to say, really.

The U.S. Equal Employment Opportunity Commission (EEOC) shows that the biggest high tech companies lag behind in diversity. That’s something that anyone working in Silicon Valley can sense intuitively, in large part due to the bro culture (and brogrammer culture) there. Says the EEOC’s extensive report, “Diversity in High Tech,”

Modern manufacturing requires a computer literate worker capable of dealing with highly specialized machines and tools that require advanced skills (STEM Education Coalition).

However, other sources note that stereotyping and bias, often implicit and unconscious, have led to underutilization of the available workforce. The result is an overwhelming dominance of white men and scant participation of African Americans and other racial minorities, Hispanics, and women in STEM and high tech related occupations. The Athena Factor: Reversing the Brain Drain in Science, Engineering, and Technology published data in 2008 showing that while the female talent pipeline in STEM was surprisingly robust, women were dropping out of the field in large numbers. Other accounts emphasize the importance of stereotypes and implicit bias in limiting the perceived labor pool (see discussion below).

Moughari et al., 2012 noted that men comprise at least 70 percent of graduates in engineering, mathematics, and computer science, while women dominate in the lower paying fields. Others point out that this is not uniformly the case in all science and math occupations and that, while underrepresented among those educated for the industry, women and minorities are more underrepresented among those actually employed in the industry. It has been shown, for example, that men are twice as likely as women to be hired for a job in mathematics when the only difference between candidates is gender.

and

Women account for relatively small percentages of degree recipients in certain STEM fields: only 18.5 percent of bachelor’s degrees in engineering went to women in 2008.

Women Heading for the Exit

The EEOC report is very discouraging in its section on Existing Tech & Related Fields:

Over time, over half of highly qualified women working in science, engineering and technology companies quit their jobs. In 2013, just 26 percent of computing jobs in the U.S. were held by women, down from 35 percent in 1990, according to a study by the American Association of University Women. Although 80 percent of U.S. women working in STEM fields say they love their work, 32 percent also say they feel stalled and are likely to quit within a year. Research by The Center for Work-Life Policy shows that 41 percent of qualified scientists, engineers and technologists are women at the lower rungs of corporate ladders but more than half quit their jobs.

This loss appears attributable to the following: 1) inhospitable work cultures; 2) isolation; 3) conflict between women’s preferred work rhythms and the “firefighting” work style generally rewarded; 4) long hours and travel schedules conflict with women’s heavy household management workload; and 5) women’s lack of advancement in the professions and corporate ladders. If corporate initiatives to stem the brain drain reduced attrition by just 25 percent, there would be 220,000 additional highly qualified female STEM workers.

Based on a survey and in-depth interviews of female scientists, the report observes:

  • Two-thirds of women reported having to prove themselves over and over, with their success discounted and their expertise questioned.
  • Three-fourths of Black women reported this phenomenon.
  • Thirty-four percent reported pressure to play a traditionally feminine role, including 41 percent of Asian women.
  • Fifty-three percent reported backlash for speaking their minds directly or being outspoken or decisive.
  • Women, particularly Black and Latina women, are seen as angry when they fail to conform to female stereotypes.
  • Almost two-thirds of women with children said their commitment and competence were questioned, and their opportunities decreased, after having children.

The EEOC report adds that in tech, only 20.44% of executives, senior officials and managers are women – compared to 28.81% in all private industries in the U.S. Women certainly are succeeding in tech, and there are some high-profile women executives in the field — think Meg Whitman at HP, Marissa Mayer at Yahoo (now heading for the exit herself with a huge payout), Sheryl Sandberg at Facebook, Susan Wojcicki at YouTube, Virginia Rometty at IBM, Safra Catz at Oracle, and Ursula Burns at Xerox. That’s still a very short list. The opportunities for and presence of women in tech remain sadly underwhelming.

I have a new research paper in Elsevier’s technical journal, Network Security. Here’s the abstract:

Lock it down! Button it up tight! That’s the default reaction of many computer security professionals to anything and everything that’s perceived as introducing risk. Given the rapid growth of cybercrime such as ransomware and the non-stop media coverage of data theft of everything from customer payment card information through pre-release movies to sensitive political email databases, this is hardly surprising.

In attempting to lower risk, however, they also exclude technologies and approaches that could contribute significantly to the profitability and agility of the organisation. Alan Zeichick of Camden Associates explains how to make the most of technology by opening up networks and embracing innovation – but safely.

You can read the whole article, “Enabling innovation by opening up the network,” here.

In 2016, Carnival Cruises was alleged to have laid off its entire 200-person IT department – and forced its workers to train their foreign replacements. The same year, about 80 IT workers at the University of California San Francisco were laid off and forced to train their replacements: lower-paid tech workers from an Indian outsourcing firm. And according to the Daily Mail:

Walt Disney Parks and Resorts is being sued by 30 former IT staff from its Florida offices who claim they were unfairly replaced by foreign workers — but only after being forced to train them up.

The suit, filed Monday in an Orlando court, alleges that Disney laid off 250 of its US IT staff because it wanted to replace them with staff from India, who were hired in on H-1B foreign employee visas.

On one hand, these organizations were presumably quite successful with hiring American tech workers… but such workers are expensive. Thanks to a type of U.S. visa, called the H-1B, outsource contractors can bring in foreign workers, place them with those same corporations, and pay them a lot less than American workers. The U.S. organization, like Carnival Cruises, saves money. The outsource contractor, which might be a high-profile organization like the Indian firm Infosys, makes money. The low-cost offshore talent gets decent jobs and a chance to live in the U.S. Everyone wins, right? Except the laid-off American tech workers.

This type of bargain outsourcing is not what the H-1B was designed for. It wasn’t for laying off expensive U.S. workers and hiring or contracting with lower-paid foreign workers. It was intended to help companies bring in overseas experts when they can’t fill the job with qualified local applicants. Clearly that’s not what’s happening here.

It’s Not Supposed to Be About Cheap Labor

Also, the goal was definitely not to let companies reduce their payroll costs. To quote from the U.S. Citizenship & Immigration Services website about H-1B requirements:

Requirement 4— You must be paid at least the actual or prevailing wage for your occupation, whichever is higher.

The prevailing wage is determined based on the position in which you will be employed and the geographic location where you will be working (among other factors).

The challenge lies in how H-1B visas are allocated: through a lottery drawn from all applications received. There’s a cap of only 65,000 visas each year. Outsourcing companies flood the system with hundreds of thousands of applications, whereas the companies that truly need a few specialized tech experts ask for a relative handful. (There are separate rules for educational institutions, like universities, and for those hiring workers with advanced post-graduate degrees.)
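To see why flooding the lottery works, here’s a toy simulation. The applicant counts are hypothetical (not actual USCIS figures); the point is simply that a uniform lottery awards visas in proportion to application volume, so whoever files the most applications wins the most visas:

```python
import random

CAP = 65_000  # annual H-1B regular-quota cap

# Hypothetical pools: bulk outsourcing filers vs. firms seeking a few specialists.
applications = ["outsourcer"] * 200_000 + ["specialist"] * 40_000

random.seed(1)
winners = random.sample(applications, CAP)  # uniform draw without replacement

share = winners.count("outsourcer") / CAP
print(f"Outsourcer share of visas: {share:.0%}")
```

With these made-up numbers, outsourcers hold 200,000 of 240,000 applications (about 83 percent), so they capture roughly that same share of the 65,000 visas. A firm filing ten applications for genuinely scarce specialists is competing against that flood on equal per-application odds.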

H-1B visas have been in the news for decades, as tech companies lobby to increase the quota. Everyone, remember, likes the H-1B visa, except for American tech workers whose jobs are displaced.

Most recently, the U.S. government has warned about a crackdown on H-1B abuses. According to CNN,

While H-1B visas are used to fill the U.S. skills gap, the Trump administration has voiced concerns about abuse of the program. In some cases, outsourcing firms flood the system with applicants, obtaining visas for foreign workers and then contracting them out to tech companies. American jobs are sometimes replaced in the process, critics say.

In response, Infosys, the Indian outsourcing giant, has revealed plans to hire U.S. workers. Says Computerworld,

IT offshore outsourcing giant Infosys — a firm in the Trump administration’s H-1B reform bulls eye — said Tuesday it plans to hire 10,000 “American workers” over the next two years.

The India-based Infosys will hire those employees in four separate locations in the U.S., first in Indiana, which offered the company more than $30 million in tax credits. The other locations weren’t announced.

Look for the H-1B visa issue to remain in the U.S. news spotlight all year during the battle over immigration, employment, and the power of Silicon Valley.

Did you know that last year, 75% of data breaches were perpetrated by outsiders, and fully 25% involved internal actors? Did you know that 18% were conducted by state-affiliated actors, and 51% involved organized criminal groups?

That’s according to the newly released 2017 Data Breach Investigations Report from Verizon. It’s the 10th edition of the DBIR, and as always, it’s fascinating – and frightening at the same time.

The most successful tactic, if you want to call it that, used by hackers: stolen or weak (i.e., easily guessed) passwords. They were used in 81% of breaches. The report says that 62% of breaches featured hacking of some sort, and 51% involved malware.

More disturbing is that fully 66% of malware was installed via malicious email attachments. This means we’re doing a poor job of training our employees not to click suspicious links or open untrusted documents. We teach, we train, we test, we yell, we scream, and workers open documents anyway. Sigh. According to the report,

People are still falling for phishing—yes still. This year’s DBIR found that around 1 in 14 users were tricked into following a link or opening an attachment — and a quarter of those went on to be duped more than once. Where phishing successfully opened the door, malware was then typically put to work to capture and export data—or take control of systems.

Ransomware is big

We should not be surprised that the DBIR fingers ransomware as a major tool in the hacker’s toolbox:

Ransomware is the latest scourge of the internet, extorting millions of dollars from people and organizations after infecting and encrypting their systems. It has moved from the 22nd most common variety of malware in the 2014 DBIR to the fifth most common in this year’s data.

The Verizon report spends a lot of time on ransomware, saying,

Encouraged by the profitability of ransomware, criminals began offering ransomware-as-a-service, enabling anyone to extort their favorite targets, while taking a cut of the action. This approach was followed by a variety of experiments in ransom demands. Criminals introduced time limits after which files would be deleted, ransoms that increased over time, ransoms calculated based on the estimated sensitivity of filenames, and even options to decrypt files for free if the victims became attackers themselves and infected two or more other people. Multi-level marketing at its finest!

And this, showing another alarming year-on-year increase:

Perhaps the most significant change to ransomware in 2016 was the swing away from infecting individual consumer systems toward targeting vulnerable organizations. Overall, ransomware is still very opportunistic, relying on infected websites and traditional malware delivery for most attacks. Looking again through the lens of DBIR data, web drive-by downloads were the number one malware vector in the 2016 report, but were supplanted by email this year. Social actions, notably phishing, were found in 21% of incidents, up from just 8% in the 2016 DBIR. These emails are often targeted at specific job functions, such as HR and accounting—whose employees are most likely to open attachments or click on links—or even specific individuals.

Read the report

The DBIR covers everything from cyber-espionage to the dangers caused by failing to keep up with patches, fixes, and updates. There are also industry-specific breakouts, covering healthcare, finance, and so on. It’s a big report, but worth reading. And sharing.