The European Union is taking computer security, data breaches, and individual privacy seriously. The EU’s General Data Protection Regulation (GDPR) will take effect on May 25, 2018 – but it’s not only a regulation for companies based in Europe.

The GDPR is designed to protect European consumers. That means every business that stores information about European residents will be affected, no matter where that business operates or is headquartered – including in the United States, and in a post-Brexit United Kingdom.

There’s a hefty price for non-compliance: Businesses can be fined up to €20 million or 4% of their worldwide annual revenue, whichever is greater. No matter how you slice it, for most businesses that’s going to hurt – and the 4% provision ensures that even the tech industry’s giants can’t shrug off the penalty as a slap on the wrist.

A big topic within GDPR is “data portability”: the notion that individuals have the right to see the information they have shared with an organization (or have given permission to be collected), in a commonly used machine-readable format. Details need to be worked out to make that effective.
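
In practice, “a commonly used machine-readable format” usually means something like JSON, CSV, or XML. As a rough sketch of what a portability export could look like (the field names here are my own illustration, not anything the regulation prescribes):

```python
import json

def export_subject_data(record: dict) -> str:
    """Serialize the personal data a subject has shared into a
    machine-readable format (JSON here; CSV or XML would also qualify).
    The field names are illustrative, not mandated by the GDPR."""
    portable = {
        "name": record.get("name"),
        "email": record.get("email"),
        "data_shared": record.get("data_shared", {}),
    }
    return json.dumps(portable, indent=2, ensure_ascii=False)

print(export_subject_data({
    "name": "Anna Müller",
    "email": "anna@example.com",
    "data_shared": {"newsletter_opt_in": True},
}))
```

The hard part, of course, is not the serialization but knowing where all of a person’s data lives in the first place.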

Another topic is that individuals have the right to change some of their information, or to delete all or part of it. No, customers can’t delete their transaction history, for example, or erase the fact that they owe the organization money. However, they may choose to delete information that the organization may have collected, such as their age, where they went to college, or the names of their children. They also have the right to request corrections to the data, such as a misspelled name or an incorrect address.

That’s not as trivial as it may seem. It is not uncommon for organizations to hold multiple versions of, say, a person’s name, or to store the same information with different formatting, and the implications surface when records don’t match. In some countries, travelers have run into problems because their passport information didn’t exactly match the information on a driver’s license, airline ticket, or frequent-traveler program. While the variations might appear trivial to a human (a missing middle name, a missing accent mark, an extra space), they can be enough to throw off automated data processing systems, which then can’t match the traveler to a ticket. Without rules like the GDPR, organizations haven’t been required to make it easy, or even possible, for customers to make corrections.
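
To see why a missing accent mark or an extra space defeats automated matching, consider a minimal normalization sketch (the rules shown are illustrative; real record-linkage systems are far more sophisticated):

```python
import unicodedata

def normalize_name(name: str) -> str:
    """Collapse the trivial variations described above: case,
    extra whitespace, and accent marks. Illustrative only."""
    # Strip accents by decomposing and dropping combining marks
    decomposed = unicodedata.normalize("NFKD", name)
    no_accents = "".join(c for c in decomposed if not unicodedata.combining(c))
    # Collapse runs of whitespace and ignore case
    return " ".join(no_accents.lower().split())

# "José  Martínez " and "Jose Martinez" fail a byte-for-byte comparison,
# but match once normalized.
assert normalize_name("José  Martínez ") == normalize_name("Jose Martinez")
```

Even this toy version matches strings that an exact comparison never would; note that it does nothing about a missing middle name, which needs fuzzier matching still.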

Not a Complex Document, But a Tricky One

A cottage industry has arisen, with consultancies offering to help European and global companies ensure GDPR compliance before the regulation takes effect. Astonishingly, for such an important regulation, the GDPR itself is relatively short – only 88 pages of fairly easy-to-read prose. Of course, some parts of the GDPR refer back to other European Union directives. Still, the intended meaning is clear.

For example, this clause on sensitive data sounds simple – but how exactly will it be implemented? This is why we have consultants.

Personal data which are, by their nature, particularly sensitive in relation to fundamental rights and freedoms merit specific protection as the context of their processing could create significant risks to the fundamental rights and freedoms. Those personal data should include personal data revealing racial or ethnic origin, whereby the use of the term ‘racial origin’ in this Regulation does not imply an acceptance by the Union of theories which attempt to determine the existence of separate human races. The processing of photographs should not systematically be considered to be processing of special categories of personal data as they are covered by the definition of biometric data only when processed through a specific technical means allowing the unique identification or authentication of a natural person. Such personal data should not be processed, unless processing is allowed in specific cases set out in this Regulation, taking into account that Member States law may lay down specific provisions on data protection in order to adapt the application of the rules of this Regulation for compliance with a legal obligation or for the performance of a task carried out in the public interest or in the exercise of official authority vested in the controller. In addition to the specific requirements for such processing, the general principles and other rules of this Regulation should apply, in particular as regards the conditions for lawful processing. Derogations from the general prohibition for processing such special categories of personal data should be explicitly provided, inter alia, where the data subject gives his or her explicit consent or in respect of specific needs in particular where the processing is carried out in the course of legitimate activities by certain associations or foundations the purpose of which is to permit the exercise of fundamental freedoms.

The Right to Be Forgotten

Various EU member states already have “right to be forgotten” rules, which let individuals request that some data about them be deleted. These rules are tricky for organizations elsewhere in the world, which may face no such regulations at home, or regulations that conflict with other rules (such as freedom of the press in the U.S.). Still, the GDPR strengthens those rules – and this will likely be one of the first areas tested with lawsuits and penalties, particularly with children:

A data subject should have the right to have personal data concerning him or her rectified and a ‘right to be forgotten’ where the retention of such data infringes this Regulation or Union or Member State law to which the controller is subject. In particular, a data subject should have the right to have his or her personal data erased and no longer processed where the personal data are no longer necessary in relation to the purposes for which they are collected or otherwise processed, where a data subject has withdrawn his or her consent or objects to the processing of personal data concerning him or her, or where the processing of his or her personal data does not otherwise comply with this Regulation. That right is relevant in particular where the data subject has given his or her consent as a child and is not fully aware of the risks involved by the processing, and later wants to remove such personal data, especially on the internet. The data subject should be able to exercise that right notwithstanding the fact that he or she is no longer a child. However, the further retention of the personal data should be lawful where it is necessary, for exercising the right of freedom of expression and information, for compliance with a legal obligation, for the performance of a task carried out in the public interest or in the exercise of official authority vested in the controller, on the grounds of public interest in the area of public health, for archiving purposes in the public interest, scientific or historical research purposes or statistical purposes, or for the establishment, exercise or defence of legal claims.

Time to Get Up to Speed

In less than a year, many organizations around the world will be subject to the European Union’s GDPR. European businesses are working hard to comply with those regulations. For everyone else, it’s time to start – and yes, you probably do want a consultant.

It’s difficult to recruit qualified security staff because there are more openings than humans to fill them. It’s also difficult to retain IT security professionals because someone else is always hiring. But don’t worry: Unless you work for an organization that refuses to pay the going wage, you’ve got this.

Two recent studies present dire, but somewhat conflicting, views of the availability of qualified cybersecurity professionals over the next four or five years. The first study is the Global Information Security Workforce Study from the Center for Cyber Safety and Education, which predicts a shortfall of 1.8 million cybersecurity workers by 2022. Among the highlights from that research, which drew on data from 19,000 cybersecurity professionals:

  • The cybersecurity workforce gap will hit 1.8 million by 2022. That’s a 20 percent increase since 2015.
  • Sixty-eight percent of workers in North America believe this workforce shortage is due to a lack of qualified personnel.
  • A third of hiring managers globally are planning to increase the size of their departments by 15 percent or more.
  • There aren’t enough workers to address current threats, according to 66 percent of respondents.
  • Around the globe, 70 percent of employers are looking to increase the size of their cybersecurity staff this year.
  • Nine in ten security specialists are male. The majority have technical backgrounds, suggesting that recruitment channels and tactics need to change.
  • While 87 percent of cybersecurity workers globally did not start in cybersecurity, 94 percent of hiring managers indicate that security experience in the field is an important consideration.

The second study is the Cybersecurity Jobs Report, created by the editors of Cybersecurity Ventures. Here are some highlights:

  • There will be 3.5 million cybersecurity job openings by 2021.
  • Cybercrime will more than triple the number of job openings over the next five years. India alone will need 1 million security professionals by 2020 to meet the demands of its rapidly growing economy.
  • Today, the U.S. employs nearly 780,000 people in cybersecurity positions. But a lot more are needed: There are approximately 350,000 current cybersecurity job openings, up from 209,000 in 2015.

So, whether you’re hiring a chief information security officer or a cybersecurity operations specialist, expect a lot of competition. What can you do about it? How can you beat the staffing shortage? Read my suggestion in “How to beat the cybersecurity staffing shortage.”

“Ransomware! Ransomware! Ransomware!” Those words may lack the timeless resonance of Steve Ballmer’s epic “Developers! Developers! Developers!” scream in 2000, but ransomware was seemingly an obsession at Black Hat USA 2017, happening this week in Las Vegas.

There are good reasons for attendees and vendors to be focused on ransomware. For one thing, ransomware is real. Rates of ransomware attacks have exploded off the charts in 2017, helped in part by the disclosures of top-secret vulnerabilities and hacking tools allegedly stolen from the United States’ three-letter-initial agencies.

For another, the costs of ransomware are significant. Looking only at a few attacks in 2017, including WannaCry, Petya, and NotPetya, corporations have been forced to revise their earnings downward to account for IT downtime and lost productivity. Those include Reckitt, Nuance, and FedEx. Those types of impact grab the attention of every CFO and every CEO.

Another analyst I talked with at Black Hat observed that just about every vendor on the expo floor had managed to incorporate ransomware into its magic show. My quip: “I wouldn’t be surprised to see a company marketing network cables as specially designed to protect against ransomware.” His quick retort: “The queue would be half a mile long for samples. They’d make a fortune.”

While we seek mezzanine funding for our Ransomware-Proof CAT-6 Cables startup, let’s talk about what organizations can and should do to handle ransomware. It’s not rocket science, and it’s not brain surgery.

  • Train, train, train. End users will slip up, and they will click to open emails they shouldn’t open. They will visit websites they shouldn’t visit. And they will ignore security warnings. That’s true for the lowest-level trainee – and true for the CEO as well. Constant training can reduce the amount of stupidity. It can make a difference. By the way, also test your employees’ preparedness by sending out fake malware, and see who clicks on it.
  • Invest in tools that can detect ransomware and other advanced malware. Users will make mistakes, and we’ve seen that there are some ransomware variants that can spread without user intervention. Endpoint security technology is required, and if possible, such tools should do more than passively warn end users if a problem is detected. There are many types of solutions available; look into them, and make sure there are no coverage gaps.
  • Aggressively patch and update software. Patches existed for months to close the vulnerabilities exploited by the recent flurry of ransomware attacks. It’s understandable that consumers wouldn’t be up to date – but it’s inexcusable for corporations to have either not known about the patches, or to have failed to install them. In other words, these attacks were basically 100% avoidable. Maybe they won’t be in the future if the hackers exploit true zero-days, but you can’t protect your organization with out-of-date operating systems, applications, and security tools.
  • Backup, backup, backup. Use backup technology that moves data securely into the data center or into the cloud, so that ransomware can’t access the backup drive directly. Too many small businesses lost data on laptops, notebooks, and servers because there weren’t backups. We know better than this! By the way, one should assume that malware attacks, even ransomware, can be designed to destroy data and devices. Don’t assume you can write a check and get your data back safely.
  • Stay up to date on threat data. You can’t rely upon the tech media, or vendor blogs, to keep you up to date with everything going on with cybersecurity. There are many threat data feeds, some curated and expensive, some free and lower-quality. You should find a threat data source that seems to fit your requirements and subscribe to it – and act on what you read. If you’re not going to consume the threat data yourself, find someone else to do so. An urgent warning about your database software version won’t do you any good if it’s in your trashcan.
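
As one concrete example of the backup advice above, a scheduled job can at least verify that the newest backup is recent; ransomware victims too often discover that their last good copy is weeks old. This is an illustrative sketch, not a product recommendation, and the directory layout and threshold are assumptions:

```python
import os
import time

def newest_backup_age_hours(backup_dir: str) -> float:
    """Return the age, in hours, of the most recently modified file
    in backup_dir. Raises if the directory holds no files."""
    paths = [os.path.join(backup_dir, name) for name in os.listdir(backup_dir)]
    files = [p for p in paths if os.path.isfile(p)]
    if not files:
        raise RuntimeError(f"no backups found in {backup_dir}")
    newest = max(os.path.getmtime(p) for p in files)
    return (time.time() - newest) / 3600.0

def check_backups(backup_dir: str, max_age_hours: float = 24.0) -> bool:
    """Return False (i.e., alert) if the newest backup is older than
    the threshold; a stale backup is almost as bad as none."""
    return newest_backup_age_hours(backup_dir) <= max_age_hours
```

A real deployment would also verify that backups can actually be restored, and that the backup target is not writable from the endpoints it protects.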

Ransomware! Ransomware! Ransomware! When it comes to ransomware and advanced malware, it’s not a question of if, or even a question of when. Your organization, your servers, your network, your end-users, are under constant attack. It only takes one slip-up to wreak havoc on one endpoint, and potentially on multiple endpoints. Learn from what’s going on at Black Hat – and be ready for the worst.

A major global cyberattack could cause as much as US$53 billion in economic losses. That’s on the scale of a catastrophic disaster like 2012’s Hurricane Sandy.

Lloyd’s of London, the famous insurance market, partnered with Cyence, a risk analysis firm specializing in cybersecurity. The result is a fascinating report, “Counting the Cost: Cyber Exposure Decoded.” This partnership makes sense: Lloyd’s must understand the risk before deciding whether to underwrite a venture — and when it comes to cybersecurity, this is an emerging science. Traditional actuarial methods used to calculate the risk of a cargo ship falling prey to pirates, or an office block to a devastating flood, simply don’t apply.

Lloyd’s says that in 2016, cyberattacks cost businesses as much as $450 billion. While insurers can help organizations manage that risk, the risk is increasing. The report points to those risks covering “everything from individual breaches caused by malicious insiders and hackers, to wider losses such as breaches of retail point-of-sale devices, ransomware attacks such as BitLocker, WannaCry and distributed denial-of-service attacks such as Mirai.”

The worry? Despite writing $1.35 billion in cyberinsurance in 2016, “insurers’ understanding of cyber liability and risk aggregation is an evolving process as experience and knowledge of cyber-attacks grows. Insureds’ use of the internet is also changing, causing cyber-risk accumulation to change rapidly over time in a way that other perils do not.”

And that is why the lack of time-tested actuarial tables can cause disaster, says Lloyd’s. “Traditional insurance risk modelling relies on authoritative information sources such as national or industry data, but there are no equivalent sources for cyber-risk and the data for modelling accumulations must be collected at scale from the internet. This makes data collection, and the regular update of it, key components of building a better understanding of the evolving risk.”

Where the Risk Is Growing

The report points to six significant trends that are causing increased risk of an expensive attack – and therefore, increased liability:

  • Volume of contributors: The number of people developing software has grown significantly over the past three decades; each contributor could potentially add vulnerability to the system unintentionally through human error.
  • Volume of software: In addition to the growing number of people amending code, the amount of it in existence is increasing. More code means the potential for more errors and therefore greater vulnerability.
  • Open source software: The open-source movement has led to many innovative initiatives. However, many open-source libraries are uploaded online and while it is often assumed they have been reviewed in terms of their functionality and security, this is not always the case. Any errors in the primary code could then be copied unwittingly into subsequent iterations.
  • Old software: The longer software is out in the market, the more time malicious actors have to find and exploit vulnerabilities. Many individuals and companies run obsolete software that has more secure alternatives.
  • Multi-layered software: New software is typically built on top of prior software code. This makes software testing and correction very difficult and resource intensive.
  • “Generated” software: Code can be produced through automated processes that can be modified for malicious intent.

Based on those points, and other factors, Lloyd’s and Cyence have come up with two primary scenarios that could lead to widespread, and costly, damages. The first is a successful hack of a major cloud service provider, which hosts websites, applications, and data for many companies. The second is a mass vulnerability attack that affects many client systems. One could argue that some of the recent ransomware attacks fit that second scenario.

Huge Liability Costs

The “Counting the Cost” report makes for some depressing reading. Here are three of the key findings, quoted verbatim.

  • The direct economic impacts of cyber events lead to a wide range of potential economic losses. For the cloud service disruption scenario in the report, these losses range from US$4.6 billion for a large event to US$53.1 billion for an extreme event; in the mass software vulnerability scenario, the losses range from US$9.7 billion for a large event to US$28.7 billion for an extreme event.
  • Economic losses could be much lower or higher than the average in the scenarios because of the uncertainty around cyber aggregation. For example, while average losses in the cloud service disruption scenario are US$53 billion for an extreme event, they could be as high as US$121.4 billion or as low as US$15.6 billion, depending on factors such as the different organisations involved and how long the cloud-service disruption lasts for.
  • Cyber-attacks have the potential to trigger billions of dollars of insured losses. For example, in the cloud-services scenario insured losses range from US$620 million for a large loss to US$8.1 billion for an extreme loss. For the mass software vulnerability scenario, the insured losses range from US$762 million (large loss) to US$2.1 billion (extreme loss).

Read the 56-page report to dig deeply into the scenarios, and the damages. You may not sleep well afterwards.

Automotive ECU (engine control unit)

In my everyday life, I trust that if I make a panic stop, my car’s antilock brake system will work. The hardware, software, and servos will work together to ensure that my wheels don’t lock up—helping me avoid an accident. If that’s not sufficient, I trust that the impact sensors embedded behind the front bumper will fire the airbag actuators with the correct force to protect me from harm, even though they’ve never been tested. I trust that the bolts holding the seat in its proper place won’t shear. I trust the seat belts will hold me tight, and that cargo in the trunk won’t smash through the rear seats into the passenger cabin.

Engineers working on nearly every automobile sold worldwide ensure that their work practices conform to ISO 26262. That standard describes how to manage the functional safety of the electrical and electronic systems in passenger cars. A significant portion of ISO 26262 involves ensuring that software embedded into cars—whether in the emissions system, the antilock braking systems, the security systems, or the entertainment system—is architected, coded, and tested to be as reliable as possible.

I’ve worked with ISO 26262 and related standards on a variety of automotive software security projects. Don’t worry, we’re not going to get into the hairy bits of those standards because unless you are personally designing embedded real-time software for use in automobile components, they don’t really apply. Also, ISO 26262 is focused on the real-world safety of two-ton machines hurtling at 60-plus miles per hour—that is, things that will kill or hurt people if they don’t work as expected.

Instead, here are five IT systems management ideas that are inspired by ISO 26262. We’ll help you ensure your systems are designed to be Reliable, with a capital R, and Safe, with a capital S.

Read the list, and more, in my article for HP Enterprise Insights, “5 lessons for data center pros, inspired by automotive engineering standards.”

MacKenzie Brown has nailed the problem — and has good ideas for the solution. As she points out in her three-part blog series, “The Unicorn Extinction” (links in a moment):

  • Overall, [only] 25% of women hold occupations in technology alone.
  • Women’s Society of Cyberjutsu (WSC), a nonprofit for empowering women in cybersecurity, states that females make up 11% of the cybersecurity workforce while (ISC)2, a non-profit specializing in education and certification, reports a whopping estimation of 10%.
  • Lastly, put those current numbers against the 1 million employment opportunities predicted for 2017, with a global demand of up to 6 million by 2019.

While many would decry the systemic sexism and misogyny in cybersecurity, Ms. Brown sees opportunity:

…the cybersecurity industry, a market predicted to have global expenditure exceeding $1 trillion between now and 2021(4), will have plenty of demand for not only information security professionals. How can we proceed to find solutions and a fixed approach towards resolving this gender gap and optimizing this employment fluctuation? Well, we promote unicorn extinction.

The problem of a lack of technically developed and specifically qualified women in Cybersecurity is not unique to this industry alone; however the proliferation of women in tangential roles associated with our industry shows that there is a barrier to entry, whatever that barrier may be. In the next part of this series we will examine the ideas and conclusions of senior leadership and technical women in the industry in order to gain a woman’s point of view.

She continues to write about analyzing the problem from a woman’s point of view:

Innovating solutions to improve this scarcity of female representation, requires breaking “the first rule about Fight Club; don’t talk about Fight Club!” The “Unicorn Law”, this anecdote, survives by the circling routine of the “few women in Cybersecurity” invoking a conversation about the “few women in Cybersecurity” on an informal basis. Yet, driving the topic continuously and identifying the values will ensure more involvement from the entirety of the Cybersecurity community. Most importantly, the executive members of Fortune 500 companies who apply a hiring strategy which includes diversity, can begin to fill those empty chairs with passionate professionals ready to impact the future of cyber.

Within any tale of triumph, obstacles are inevitable. Therefore, a comparative analysis of successful women may be the key to balancing employment supply and demand. I had the pleasure of interviewing a group of women; all successful, eclectic in roles, backgrounds of technical proficiency, and amongst the same wavelength of empowerment. These interviews identified commonalities and distinct perspectives on the current gender gap within the technical community.

What’s the Unicorn thing?

Ms. Brown writes,

During hours of research and writing, I kept coming across a peculiar yet comically exact tokenism deemed, The Unicorn Law. I had heard this in my industry before, attributed to me, “unicorn,” which is described (even in the cybersecurity industry) as: a woman-in-tech, eventually noticed for their rarity and the assemblage toward other females within the industry. In technology and cybersecurity, this is a leading observation many come across based upon the current metrics. When applied to the predicted demand of employment openings for years to come, we can see an enormous opportunity for women.

Where’s the opportunity?

She concludes,

There may be a notable gender gap within cybersecurity, but there also lies great opportunity as well. Organizations can help narrow the gap, but there is also tremendous opportunity in women helping each other as well.

Some things that companies can do to help include:

  • Providing continuous education, empowering and encouraging women to acquire new skills through additional training and certifications.
  • Using this development training to promote from within.
  • Reaching out to communities to encourage young women, from junior high through high school, to consider cybersecurity as a career.
  • Seeking out women candidates for jobs, both independently and through outside recruiters if need be.
  • Refusing to field all-male panels at events.
  • And most importantly, encouraging discussion of the benefits of a diverse team.

If you care about the subject of gender opportunity in cybersecurity, I urge you to read these three essays.

The Unicorn Extinction Series: An Introspective Analysis of Women in Cybersecurity, Part 1

The Unicorn Extinction Series: An Introspective Analysis of Women in Cybersecurity, Part 2

The Unicorn Extinction Series: An Introspective Analysis of Women in Cybersecurity, Part 3

Did they tell their customers that data was stolen? No, not right away. When AA — a large automobile club and insurer in the United Kingdom — was hacked in April, the company was completely mum for months, in part because it didn’t believe the stolen data was sensitive. AA’s customers only learned about it when information about the breach was publicly disclosed in late June.

There are no global laws that require companies to disclose information about data thefts to customers. There are similarly no global laws that require companies to disclose defects in their software or hardware products, including those that might introduce security vulnerabilities.

It’s obvious why companies wouldn’t want to disclose problems with their products (such as bugs or vulnerabilities) or with their back-end operations (such as system breaches or data exfiltration). If customers think you’re insecure, they’ll leave. If investors think you’re insecure, they’ll leave. If competitors think you’re insecure, they’ll pounce on it. And if lawyers or regulators think you’re insecure, they might file lawsuits.

No matter how you slice it, disclosing problems is not good for business. Far better to share information about new products, exciting features, customer wins, market share increases, additional platforms, and pricing promotions.

It’s Not Always Hidden

That’s not to say that all companies hide bad news. Microsoft, for example, is considered to be very proactive on disclosing flaws in its products and platforms, including those that affect security. When Microsoft learned about the Server Message Block (SMB) flaw that enabled malware like WannaCry and Petya in March, it quickly issued a Security Bulletin that explained the problem — and supplied the necessary patches. If customers had read the bulletin and applied the patches, those ransomware outbreaks wouldn’t have occurred.

When you get outside the domain of large software companies, such disclosures are rare. Automobile manufacturers do share information about vehicle defects with regulators, as per national laws, but resist recalls because of the expense and bad publicity. Beyond that, companies share information about problems with products, services, and operations unwillingly – and with delays.

In the AA case, as SC Magazine wrote,

The leaky database was first discovered by the AA on April 22 and fixed by April 25. In the time that it had been exposed, it had reportedly been accessed by several unauthorised parties. An investigation by the AA deemed the leaky data to be not sensitive, meaning that the organisation did not feel it necessary to tell customers.

Yet the breach contained over 13 gigabytes of data with information about 100,000 customers. Not sensitive? Well, the stolen information included email addresses along with names, IP addresses, and credit card details. That data seems sensitive to me!

Everything Will Change Under GDPR

The European Union’s new General Data Protection Regulation (GDPR) goes into effect in May 2018. The GDPR will, for the first time, require companies to tell customers and regulators about data breaches in a timely manner. As the U.K. Information Commissioner’s Office explains,

The GDPR will introduce a duty on all organisations to report certain types of data breach to the relevant supervisory authority, and in some cases to the individuals affected.

What is a personal data breach?

A personal data breach means a breach of security leading to the destruction, loss, alteration, unauthorised disclosure of, or access to, personal data. This means that a breach is more than just losing personal data.

Example

A hospital could be responsible for a personal data breach if a patient’s health record is inappropriately accessed due to a lack of appropriate internal controls.

When do individuals have to be notified?

Where a breach is likely to result in a high risk to the rights and freedoms of individuals, you must notify those concerned directly.

A ‘high risk’ means the threshold for notifying individuals is higher than for notifying the relevant supervisory authority.

What information must a breach notification contain?

  • The nature of the personal data breach including, where possible:
      • the categories and approximate number of individuals concerned; and
      • the categories and approximate number of personal data records concerned;
  • The name and contact details of the data protection officer (if your organisation has one) or other contact point where more information can be obtained;
  • A description of the likely consequences of the personal data breach; and
  • A description of the measures taken, or proposed to be taken, to deal with the personal data breach and, where appropriate, of the measures taken to mitigate any possible adverse effects.
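
The required contents listed above map naturally onto a simple record that an incident-response team could fill in as the facts emerge. As a hedged sketch (the field names are my own, not prescribed by the regulation):

```python
from dataclasses import dataclass

@dataclass
class BreachNotification:
    """Fields mirror the ICO's list of what a GDPR breach
    notification must contain. Names are illustrative."""
    nature_of_breach: str
    categories_of_individuals: list
    approx_individuals_affected: int
    categories_of_records: list
    approx_records_affected: int
    dpo_contact: str                 # data protection officer, or other contact point
    likely_consequences: str
    measures_taken: str

    def is_complete(self) -> bool:
        """Crude sanity check before the notification goes to the
        supervisory authority or to affected individuals."""
        return all([
            self.nature_of_breach,
            self.approx_individuals_affected >= 0,
            self.dpo_contact,
            self.likely_consequences,
            self.measures_taken,
        ])
```

Treating the notification as structured data, rather than free-form prose, makes it easier to meet the “without undue delay” requirement when a breach actually happens.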

Also, says the regulation,

If the breach is sufficiently serious to warrant notification to the public, the organisation responsible must do so without undue delay. Failing to notify a breach when required to do so can result in a significant fine up to 10 million Euros or 2 per cent of your global turnover.

Bottom line: Next year, companies in the E.U. must do better disclosing data breaches that affect their customers. Let’s hope this practice extends to more of the world.

The Federal Bureau of Investigation is warning about potential attacks from a hacking group called Lizard Squad. This information, released today, was labeled “TLP:Green” by the FBI and CERT, which means that it shouldn’t be publicly shared – but I am sharing it because this information was published on a publicly accessible blog run by the New York State Bar Association. I do not know why distribution of this information was restricted.

The FBI said:

Summary

An individual or group claiming to be “Anonymous” or “Lizard Squad” sent extortion emails to private-sector companies threatening to conduct distributed denial of service (DDoS) attacks on their network unless they received an identified amount of Bitcoin. No victims to date have reported DDoS activity as a penalty for non-payment.

Threat

In April and May 2017, at least six companies received emails claiming to be from “Anonymous” and “Lizard Squad” threatening their companies with DDoS attacks within 24 hours unless the company sent an identified amount of Bitcoin to the email sender. The email stated the demanded amount of Bitcoin would increase each day the amount went unpaid. No victims to date have reported DDoS activity as a penalty for nonpayment.

Reporting on schemes of this nature goes back at least three years.

In 2016, a group identifying itself as “Lizard Squad” sent extortion demands to at least twenty businesses in the United Kingdom, threatening DDoS attacks if they were not paid five Bitcoins (as of 14 June, each Bitcoin was valued at 2,698 USD). No victims reported actual DDoS activity as a penalty for non-payment.

Between 2014 and 2015, a cyber extortion group known as “DDoS ‘4’ Bitcoin” (DD4BC) victimized hundreds of individuals and businesses globally. DD4BC would conduct an initial, demonstrative low-level DDoS attack on the victim company, followed by an

email message introducing themselves, demanding a ransom paid in Bitcoins, and threatening a higher level attack if the ransom was not paid within the stated time limit. While no significant disruption or DDoS activity was noted, it is probable companies paid the ransom to avoid the threat of DDoS activity.

Background

Lizard Squad is a hacking group known for their DDoS attacks primarily targeting gaming-related services. On 25 December 2014, Lizard Squad was responsible for taking down the Xbox Live and PlayStation networks. Lizard Squad also successfully conducted DDoS attacks on the UK’s National Crime Agency’s (NCA) website in 2015.

Anonymous is a hacking collective known for several significant DDoS attacks on government, religious, and corporate websites conducted for ideological reasons.

Recommendations

The FBI suggests precautionary measures to mitigate DDoS threats, including but not limited to the following:
  • Have a DDoS mitigation strategy ready ahead of time.
  • Implement an incident response plan that includes DDoS mitigation and practice this plan before an actual incident occurs. This plan may involve external organizations such as your Internet Service Provider, technology companies that offer DDoS mitigation services, and law enforcement.
  • Ensure your plan includes the appropriate contacts within these external organizations. Test activating your incident response team and third party contacts.
  • Implement a data back-up and recovery plan to maintain copies of sensitive or proprietary data in a separate and secure location. Backup copies of sensitive data should not be readily accessible from local networks.
  • Ensure upstream firewalls are in place to block incoming User Datagram Protocol (UDP) packets.
  • Ensure software or firmware updates are applied as soon as the device manufacturer releases them.

If you have received one of these demands:

  • Do not make the demand payment.
  • Retain the original emails with headers.
  • If applicable, maintain a timeline of the attack, recording all times and content of the attack.

The FBI encourages recipients of this document to report information concerning suspicious or criminal activity to their local FBI field office or the FBI’s 24/7 Cyber Watch (CyWatch). Field office contacts can be identified at www.fbi.gov/contact-us/field. CyWatch can be contacted by phone at (855) 292-3937. When available, each report submitted should include the date, time, location, type of activity, number of people, type of equipment used for the activity, the name of the submitting company or organization, and a designated point of contact. Press inquiries should be directed to the FBI’s National Press Office at (202) 324-3691.

Petya may indicate the start of real cyberwar. This week’s newest ransomware attack is technically similar to the WannaCry (aka WannaCrypt) cyberattack. However, the intent, and the results, are quite different – one wants to make money, the other to destroy data.

Both Petya and WannaCry are the results of an exploitable flaw in many versions of Windows. Microsoft learned about the flaw after NSA data was stolen, and quickly issued an effective patch. However, many customers have not installed the patch, and therefore their systems remain vulnerable. Making the situation more complicated, many of those Windows systems run pirated copies of the operating system, which means that the system owners may not have been notified about the vulnerability and patch – and not all may have been able to install the patch in any case, because Microsoft verifies the license of Windows during upgrades.

Here’s what Microsoft says:

As happened recently with WannaCrypt, we again face a malicious attack in the form of ransomware, Petya. In early reports, there was a lot of conflicting information reported on the attacks, including conflation of unrelated and misleading pieces of data, so Microsoft teams mobilized to investigate and analyze, enabling our Malware Protection team to release signatures to detect and protect against the malware.

Based on our investigation, the malware was initially delivered via a Ukrainian company’s (M.E.doc) update service for their finance application, which is popular in Ukraine and Russia. Once the initial compromise took hold, the ransomware used multiple tools in its arsenal to spread across impacted networks. If unpatched, the malware uses vulnerabilities CVE-2017-0144 and CVE-2017-0145 to spread across networks. Microsoft released MS17-010 in March that addressed the vulnerabilities exploited by Petya. If that technique was not effective, the malware uses other methods like harvesting of credentials and traversing networks to infect other machines. (read the Microsoft Malware Protection Center analysis here for more details.)

The information from the Malware Protection Center goes into considerable technical detail. It’s fascinating, and worth reading if you like that sort of thing (as I do).

Goodbye, Data

Analysts believe that Petya is something new: This malware pretends to be plain old ransomware that asks for $300 to unlock encrypted data – but is actually intended to steal passwords and destroy data. In other words, it’s a true weaponized cyberattack.

To quote TechCrunch:

Petya appears to have been modified specifically to make the encoding of user data irreversible by overwriting the master boot record. The attackers’ email address also appears to have been taken offline, preventing ransoms from being paid.

Says Krebs on Security:

Nicholas Weaver, a security researcher at the International Computer Science Institute and a lecturer at UC Berkeley, said Petya appears to have been well engineered to be destructive while masquerading as a ransomware strain.

Weaver noted that Petya’s ransom note includes the same Bitcoin address for every victim, whereas most ransomware strains create a custom Bitcoin payment address for each victim.

Also, he said, Petya urges victims to communicate with the extortionists via an email address, while the majority of ransomware strains require victims who wish to pay or communicate with the attackers to use Tor, a global anonymity network that can be used to host Web sites which can be very difficult to take down.

“I’m willing to say with at least moderate confidence that this was a deliberate, malicious, destructive attack or perhaps a test disguised as ransomware,” Weaver said. “The best way to put it is that Petya’s payment infrastructure is a fecal theater.”

Kaspersky Labs agrees:

After an analysis of the encryption routine of the malware used in the Petya/ExPetr attacks, we have thought that the threat actor cannot decrypt victims’ disk, even if a payment was made.

This supports the theory that this malware campaign was not designed as a ransomware attack for financial gain. Instead, it appears it was designed as a wiper pretending to be ransomware.

Petya: A Test for Bigger Cyberwarfare?

There is considerable chatter that Petya is a test. It may be designed to see how well this specific malware-distribution methodology works, with a nasty but limited malicious payload primarily intended to harm Ukraine. It’s clear that the methodology works, and as long as administrators put off patching their servers, these sorts of attacks will succeed. The next one might be a lot nastier. First WannaCry, then Petya. What’s next?

CNN didn’t get the memo. After all the progress that’s been made to eliminate the requirement for Adobe’s Flash player on so many streaming-media websites, CNNgo still requires the problematic plug-in, as you can see from the screen I saw just a few minutes ago.


Have you not heard of HTML5, oh, CNN programmers? Perhaps the techies at CNN should read “Why Adobe Flash is a Security Risk and Why Media Companies Still Use it.” After that, “Gone in a Flash: Top 10 Vulnerabilities Used by Exploit Kits.”

Yes, Adobe keeps patching Flash to make it less insecure. Lots and lots of patches, says the story “Patch Tuesday: Adobe Flash Player receives updates for 13 security issues,” published in January. That comes on the heels of 17 security flaws patched in December 2016.

And yes, there were more critical patches issued on June 13, 2017. Flash. Just say no. Goodbye, CNNgo, until you stop requiring that prospective customers utilize such a buggy, flawed media player.

And no, I didn’t enable the use of Flash. Guess I’ll never see what CNN wanted to show me. No great loss.

An organization’s Chief Information Security Officer’s job isn’t ones and zeros. It’s not about unmasking cybercriminals. It’s about reducing risk for the organization, and enabling executives and line-of-business managers to innovate and compete safely and securely. While the CISO is often seen as the person who loves to say “No,” in reality the CISO wants to say “Yes” — the job, after all, is to make the company thrive.

Meanwhile, the CISO has a small staff, tight budget, and the need to demonstrate performance metrics and ROI. What’s it like in the real world? What are the biggest challenges? We asked two former CISOs (it’s hard to get current CISOs to speak on the record), both of whom worked in the trenches and now advise CISOs on a daily basis.

To Jack Miller, a huge challenge is the speed of decision-making in today’s hypercompetitive world. Miller, currently Executive in Residence at Norwest Venture Partners, conducts due diligence and provides expertise on companies in the cyber security space. Most recently he served as chief security strategy officer at ZitoVault Software, a startup focused on safeguarding the Internet of Things.

Before his time at ZitoVault, Miller was the head of information protection for Auto Club Enterprises. That’s the largest AAA conglomerate with 15 million members in 22 states. Previously, he served as the CISO of the 5th and 11th largest counties in the United States, and as a security executive for Pacific Life Insurance.

“Big decisions are made in the blink of an eye,” says Miller. “Executives know security is important, but don’t understand how any business change can introduce security risks to the environment. As a CISO, you try to get in front of those changes – but more often, you have to clean up the mess afterwards.”

Another CISO, Ed Amoroso, is frustrated by the business challenge of justifying a security ROI. Amoroso is the CEO of TAG Cyber LLC, which provides advanced cybersecurity training and consulting for global enterprise and U.S. Federal government CISO teams. Previously, he was Senior Vice President and Chief Security Officer for AT&T, and managed computer and network security for AT&T Bell Laboratories. Amoroso is also an Adjunct Professor of Computer Science at the Stevens Institute of Technology.

Amoroso explains, “Security is an invisible thing. I say that I’m going to spend money to prevent something bad from happening. After spending the money, I say, ta-da, look, I prevented that bad thing from happening. There’s no demonstration. There’s no way to prove that the investment actually prevented anything. It’s like putting a “This House is Guarded by a Security Company” sign in front of your house. Maybe a serial killer came up the street, saw the sign, and moved on. Maybe not. You can’t put in security and say, here’s what didn’t happen. If you ask, 10 out of 10 CISOs will say demonstrating ROI is a huge problem.”

Read more in my article for Global Banking & Finance Magazine, “Be Prepared to Get Fired! And Other Business Advice for CISOs.”

Have you ever suffered through the application process for cybersecurity insurance? You know that “suffered” is the right word because of a triple whammy.

  • First, the general risk factors involved in cybersecurity are constantly changing. Consider the rapid rise in ransomware, for example.
  • Second, it is extremely labor-intensive for businesses to document how “safe” they are, in terms of their security maturity, policies, practices and technology.
  • Third, it’s hard for the insurers’ underwriters and actuaries to feel confident that they truly understand how risky a potential customer is — information and knowledge that’s required for quoting a policy that offers sufficient coverage at reasonable rates.

That is, of course, assuming that everyone is on the same page and agrees that cybersecurity insurance is important to consider for the organization. Is cybersecurity insurance a necessary evil for every company to consider? Or, is it only a viable option for a small few? That’s a topic for a separate conversation. For now, let’s assume that you’re applying for insurance.

For their part, insurance carriers aren’t equipped to go into your business and examine your IT infrastructure. They won’t examine firewall settings or audit your employee anti-phishing training materials. Instead, they rely upon your answers to questionnaires developed and interpreted by their own engineers. Unfortunately, those questionnaires may not get into the nuances, especially if you’re in a vertical where the risks are especially high, and so are the rewards for successful hackers.

According to InformationAge, 77% of ransomware attacks occur in four industries: business & professional services (28%), government (19%), healthcare (15%) and retail (15%). In 2016 and 2017, healthcare organizations like hospitals and medical practices were repeatedly hit by ransomware. Give that data to the actuaries, and they might ask those types of organizations to fill out even more questionnaires.

About those questionnaires? “Applications tend to have a lot of yes/no answers… so that doesn’t give the entire picture of what the IT framework actually looks like,” says Michelle Chia, Vice President, Zurich North America. She explained that an insurance company’s internal assessment engineers have to dig deeper to understand what is really going on: “They interview the more complex clients to get a robust picture of what the combination of processes and controls actually looks like and how secure the network and the IT infrastructure are.”

Read more in my latest for ITSP Magazine, “How to Streamline the Cybersecurity Insurance Process.”

Hacking can kill. Consider the most obvious example: ransomware. One might argue that hackers demanding about US$300 (£230) to unlock some files is simply petty crime – unless those files are crucial to hospitals. If doctors can’t access medical files because of the WannaCry ransomware, or must postpone surgery, people can die.

It gets worse: Two Indian Air Force pilots are dead, possibly because of a cyberattack on their Sukhoi 30 fighter jet. According to the Economic Times of India,

Squadron leader D Pankaj and Flight Lieutenant S Achudev, the pilots of the Su-30 aircraft, had sustained fatal injuries when the aircraft crashed approximately 60 km from Tezpur Airbase on May 23. A court of Inquiry has already been ordered to investigate the cause of the accident.

According to defence spokesperson S Ghosh, analysis of the Flight Data Recorder of the aircraft and certain other articles recovered from the crash site revealed that the pilots could not initiate ejection before crash. The wreckage of the aircraft was located on May 26.

What does that have to do with hackers? Well, the aircraft was flying close to India’s border with China, and according to reports, the Sukhoi’s two pilots were possibly victims of cyberwarfare. Says the Indian Defense News,

Analysts based in the vicinity of New York and St Petersburg warn that the loss, days ago, of an advanced and mechanically certified as safe, Sukhoi 30 fighter aircraft, close to the border with China may be the result of “cyber-interference with the onboard computers” in the cockpit. This may explain why even the pilots may have found it difficult to activate safety ejection mechanisms, once it became obvious that the aircraft was in serious trouble, as such mechanisms too could have been crippled by computer malfunctions induced from an outside source.

Trouble in the Middle East

The political situation unfolding this week in Qatar might lead to a shooting war. In mid-May, stories were published on the Qatar News Agency that outraged its Arab neighbors. According to CNN,

The Qatari government has said a May 23 news report on its Qatar News Agency attributed false remarks to the nation’s ruler that appeared friendly to Iran and Israel and questioned whether President Donald Trump would last in office.

Soon thereafter, three Arab countries cut off ties and boycotted the country, which borders Saudi Arabia on the Persian Gulf. It’s now believed that those stories were “fake news” planted by hackers. Were they state-sponsored agents? It’s too soon to tell. However, given how quickly Bahrain, Saudi Arabia, and the United Arab Emirates reacted — and given how hard Saudi Arabia is fighting in Yemen — this is troubling. Could keystrokes from hackers lead to the drumbeat of war?

As a possibly related follow-up, Qatar-based Al-Jazeera reported on June 8 it was under cyberattack:

The websites and digital platforms of Al Jazeera Media Network are undergoing systematic and continual hacking attempts.

These attempts are gaining intensity and taking various forms. However, the platforms have not been compromised.

In the First World War, the feared new weapon was the unstoppable main battle tank. In the Second World War, it was the powerful aircraft carrier. During the Cold War, we worried about ICBMs raining destruction from the skies. Today… it’s cyberwarfare that keeps us awake at night. Sadly, we can’t hide under our desks in the event of a malware attack.

Doing business in China has always been a rollercoaster. For Internet businesses, the ride just became scarier.

The Chinese government has rolled out new cybersecurity laws, which begin affecting foreign companies today, June 1, 2017. The new rules give the Chinese government more control over Internet companies. The government says that the rules are designed to help address threats caused by terrorists and hackers – but the terms are broad enough to confuse anyone doing business in China.

Two of the biggest requirements of the new legislation:

  • Companies that do business in China must store all data related to that business, including customer data, within China.
  • Consumers must register with their real names on retail sites, community sites, news sites, and social media, including messaging services.

According to many accounts, the wording of the new law is too ambiguous to assure compliance. Perhaps the drafters were careless, or lacked understanding of technical issues. However, it’s possible that the ambiguity is intentional, to give Chinese regulators room to selectively apply the new laws based on political or business objectives. To quote coverage in The New York Times,

One instance cited by Mats Harborn, president of the European Union Chamber of Commerce in China, in a round-table discussion with journalists, was that the government said it wanted to regulate “critical information infrastructure,” but had not defined what that meant.

“The way it’s enforced and implemented today and the way it might be enforced and implemented in a year is a big question mark,” added Lance Noble, the chamber’s policy and communications manager. He warned that uncertainty surrounding the law could make foreign technology firms reluctant to bring their best innovations to China.

The government organization behind these laws, the Cyberspace Administration of China, offers an English-language website.

Keep Local Data Local

The rules state that companies that store data about Chinese customers overseas without approval can have their businesses shut down. All businesses operating in China must provide technical support to the country’s security agencies to investigate anything that the authorities claim threatens national security or might constitute a crime. According to the South China Morning Post, the new rules can affect nearly any company that moves data:

For example, rules limiting the transfer of data outside China’s borders originally applied only to “critical information infrastructure operators”. But that was changed mid-April to “network operators,” which could mean just about any business.

“Even a small e-business or email system could be considered a network,” said Richard Zhang, director of KPMG Advisory in Shanghai.

Another provision requires IT hardware and services to undergo inspection and verification as “secure and controllable” before companies can deploy them in China. That appears to be already tilting purchasing decisions at state-owned enterprises.

Compliance Will Be Tricky

According to a report on CNBC,

The American Chamber of Commerce in Shanghai has called the data localization and data transfer regulations “unnecessarily onerous,” with a potential impact on cross-border trade worth billions of dollars.

Multinationals may be better equipped to take on the cost of compliance, but “a lot of the small and medium sized companies may not be able to afford to put in the control that the Chinese government is asking for, and if they can’t put in those controls, it may actually push them out of that country and that market,” said James Carder, vice president of cybersecurity firm LogRhythm Labs.

It’s clear that, well, it’s not clear. There do seem to be legitimate concerns about the privacy of Chinese citizens, and of the ability of the Chinese government to examine data relevant to crime or terrorism. It’s also true, however, that these rules will help Chinese firms, which have a home-court advantage – and which don’t face similar rules when they expand to the rest of Asia, Europe or North America. To quote again from CNBC:

While Chinese firms are also subject to the same data localization and transfer requirements — a potential challenge as many domestic companies are going global — experts said the regulation could help China bolster its domestic tech sector as more companies are forced to store data onshore. But that could mean continued uneven market access for foreign versus Chinese companies, which is also a long-time challenge.

“The asymmetry between the access that Chinese companies enjoy in other markets and the access foreign companies have in China has been growing for some time,” said Kenneth Jarrett, the president of the American Chamber in Shanghai.

One example is that Chinese firms usually can fully own and control data centers and cloud-related services around the world without foreign equity restrictions or technology transfer requirements, but foreign cloud companies in China don’t enjoy the same environment.

The opportunities are huge, so Internet firms have no choice but to ride that Chinese rollercoaster. 

Movie subtitles — those are the latest attack vector for malware. According to Check Point Software, by crafting malicious subtitle files, which are then downloaded by a victim’s media player, attackers can take complete control over any type of device via vulnerabilities found in many popular streaming platforms. Those media players include VLC, Kodi (XBMC), Popcorn-Time and strem.io.

I was surprised to see that this would work, because I thought that text subtitles were just that – text. Silly me. Subtitles embedded into media files (like mp4 movies) can be encoded in dozens of different formats, each with unique features, capabilities, metadata, and payloads. The data and metadata in those subtitles can be hard to analyze, in part because of the many ways the subtitles are stored in a repository. To quote Check Point:

These subtitles repositories are, in practice, treated as a trusted source by the user or media player; our research also reveals that those repositories can be manipulated and be made to award the attacker’s malicious subtitles a high score, which results in those specific subtitles being served to the user. This method requires little or no deliberate action on the part of the user, making it all the more dangerous.

Unlike traditional attack vectors, which security firms and users are widely aware of, movie subtitles are perceived as nothing more than benign text files. This means users, Anti-Virus software, and other security solutions vet them without trying to assess their real nature, leaving millions of users exposed to this risk.
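To see why these “benign text files” deserve scrutiny, here is a minimal sketch of a defensive parser for the common SubRip (.srt) subtitle format. This is my illustration, not Check Point’s proof-of-concept; the point is that a hardened player validates each field against the expected shape instead of trusting the file blindly:

```python
import re

# One SubRip (.srt) entry: an index line, a timing line of the form
# "HH:MM:SS,mmm --> HH:MM:SS,mmm", then one or more lines of text.
TIMING = re.compile(
    r"^(\d{2}:\d{2}:\d{2},\d{3}) --> (\d{2}:\d{2}:\d{2},\d{3})$"
)

def parse_srt(data: str):
    """Parse SRT text into (start, end, text) tuples, dropping malformed
    entries instead of passing them along -- the opposite of a naive player."""
    cues = []
    for block in data.strip().split("\n\n"):
        lines = block.splitlines()
        if len(lines) < 3 or not lines[0].strip().isdigit():
            continue  # malformed index: reject rather than render
        m = TIMING.match(lines[1].strip())
        if not m:
            continue  # malformed timing: reject rather than render
        cues.append((m.group(1), m.group(2), "\n".join(lines[2:])))
    return cues

sample = """1
00:00:01,000 --> 00:00:03,000
Hello, world.

2
bad-timing-line
<script>evil()</script>"""

print(parse_srt(sample))  # only the well-formed first cue survives
```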

According to Check Point, more than 200 million users (or devices) are potentially vulnerable to this exploit. The risk?

Damage: By conducting attacks through subtitles, hackers can take complete control over any device running them. From this point on, the attacker can do whatever he wants with the victim’s machine, whether it is a PC, a smart TV, or a mobile device. The potential damage the attacker can inflict is endless, ranging anywhere from stealing sensitive information, installing ransomware, mass Denial of Service attacks, and much more.

Here’s an infographic from Check Point:

This type of vulnerability is reminiscent of steganography, where secret data is hidden inside image files. We have all become familiar with malicious macros, such as those hidden inside Microsoft Word .doc/.docx or Microsoft Excel .xls/.xlsx files. Those continue to become more sophisticated, even as antivirus and anti-malware scanners become more adept at detecting them. Similarly, executables and other malware can be hidden inside Adobe .pdf documents, or even inside image files.

Interestingly, sometimes that malware can be manually destroyed by format conversions. For example, you can turn a metadata-rich format into a dumb format. Turn a Word doc into rich text or plain text, and good-bye, malicious macro. Similarly, converting a malicious JPEG into a bitmap could wipe out any malware in the JPEG file’s header or footer. Of course, you’d lose other benefits as well, especially if there are benign or useful macros or metadata. That’s just how it goes.
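As a toy illustration of how conversion or truncation can destroy a hidden payload: one common trick is appending data after a JPEG’s end-of-image marker, where most viewers never look. Cutting the file at that marker (or, better, fully re-encoding the pixels) discards the payload. This simplified sketch is mine, not a production sanitizer:

```python
JPEG_SOI = b"\xff\xd8"  # start-of-image marker
JPEG_EOI = b"\xff\xd9"  # end-of-image marker

def strip_trailing_payload(jpeg: bytes) -> bytes:
    """Drop any bytes appended after the first end-of-image marker.

    Simplified: the EOI byte pair can legally occur inside compressed
    image data, so a real sanitizer would re-encode the image instead.
    """
    if not jpeg.startswith(JPEG_SOI):
        raise ValueError("not a JPEG")
    end = jpeg.find(JPEG_EOI)
    if end == -1:
        raise ValueError("no end-of-image marker")
    return jpeg[:end + len(JPEG_EOI)]

# A fake "infected" file: image bytes followed by an appended payload.
tainted = JPEG_SOI + b"...image data..." + JPEG_EOI + b"APPENDED PAYLOAD"
clean = strip_trailing_payload(tainted)
```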

See you at the movies!

March 2003: The U.S. International Trade Commission released a 32-page paper called, “Protecting U.S. Intellectual Property Rights and the Challenge of Digital Piracy.” The authors, Christopher Johnson and Daniel J. Walworth, cited an article I wrote for the Red Herring in 1999.

Here’s the abstract of the ITC’s paper:

ABSTRACT: According to U.S. industry and government officials, intellectual property rights (IPR) infringement has reached critical levels in the United States as well as abroad. The speed and ease with which the duplication of products protected by IPR can occur has created an urgent need for industries and governments alike to address the protection of IPR in order to keep markets open to trade in the affected goods. Copyrighted products such as software, movies, music and video recordings, and other media products have been particularly affected by inadequate IPR protection. New tools, such as writable compact discs (CDs) and, of course, the Internet have made duplication not only effortless and low-cost, but anonymous as well. This paper discusses the merits of IPR protection and its importance to the U.S. economy. It then provides background on various technical, legal, and trade policy methods that have been employed to control the infringement of IPR domestically and internationally. This is followed by an analysis of current and future challenges facing U.S. industry with regard to IPR protection, particularly the challenges presented by the Internet and digital piracy.

Here’s where they cited yours truly:

To improve upon the basic encryption strategy, several methods have evolved that fall under the classification of “watermarks” and “digital fingerprints” (also known as steganography). Watermarks have been considered extensively by record labels in order to protect their content.44 However, some argue that “watermarking” is better suited to tracking content than it is to protecting against reproduction. This technology is based on a set of rules embedded in the content itself that define the conditions under which one can legally access the data. For example, a digital music file can be manipulated to have a secret pattern of noise, undetectable to the ear, but recorded such that different versions of the file distributed along different channels can be uniquely identified.45 Unlike encryption, which scrambles a file unless someone has a ‘key’ to unlock the process, watermarking does not intrinsically prevent use of a file. Instead it requires a player–a DVD machine or MP3 player, for example–to have instructions built in that can read watermarks and accept only correctly marked files.”46

Reference 45 goes to

Alan Zeichick, “Digital Watermarks Explained,” Red Herring, Dec. 1999

Another paper that referenced that Red Herring article is “Information Technology and the Increasing Efficacy of Non-Legal Sanctions in Financing Transactions.” It was written by Ronald J. Mann of the University of Michigan Law School.

Sadly, my digital watermarks article is no longer available online.
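The “secret pattern of noise” technique described in the quote above can be sketched as a least-significant-bit watermark over audio samples. This is a deliberately naive illustration of the idea; real watermarking schemes are far more robust to compression, resampling, and deliberate removal:

```python
def embed_watermark(samples, bits):
    """Hide watermark bits in the least significant bit of each sample --
    inaudible as noise, but recoverable by anyone who knows the scheme."""
    out = list(samples)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit
    return out

def extract_watermark(samples, n_bits):
    """Read back the low bit of the first n_bits samples."""
    return [s & 1 for s in samples[:n_bits]]

audio = [1000, 1001, 998, 1003, 997, 1002]  # toy PCM samples
mark = [1, 0, 1, 1]                         # the hidden identifier

marked = embed_watermark(audio, mark)
assert extract_watermark(marked, len(mark)) == mark
```

A distributor could embed a different bit pattern in each distribution channel, uniquely identifying the source of a leaked copy, which is exactly the tracking use the quoted paper describes.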

From eWeek’s story, “Proposed Laptop Travel Ban Would Wreak Havoc on Business Travelers,” by Wayne Rash:

A current proposal from the Department of Homeland Security to mandate that large electronic devices be relegated to checked luggage is facing stiff resistance from airlines and business travelers.

Under the proposal, travelers with electronic devices larger than a cell phone would be required to carry them as checked luggage. Depending on the airline, those devices may either be placed in each passenger’s luggage, or the airline may offer secure containers at the gate.

While the proposed ban is still in the proposal stage, it could go into effect at any time. U.S. officials have begun meeting with European Union representatives in Brussels on May 17, and will continue their meetings in Washington the following week.

The proposed ban is similar to one that began in March that prohibited laptops and other large electronics from passenger cabins between certain airports in the Middle East and North Africa.

That ban has resulted in a significant reduction in travel between those countries and the U.S., according to a report by Emirates Airlines. That airline has already cut back on its flights to the U.S. because of the laptop ban.

The new laptop ban would work like the current one from the Middle East, except that it would affect all flights from Europe to the U.S.

The ban raises a series of concerns that so far have not been addressed by the Department of Homeland Security, most notably large lithium-ion batteries that are currently not allowed in cargo holds by many airlines because of their propensity to catch fire.

The story continues going into detail about the pros and cons – and includes some thoughtful analysis by yours truly.

The endpoint is vulnerable. That’s where many enterprise cyber breaches begin: An employee clicks on a phishing link and installs malware, such as ransomware, or is tricked into providing login credentials. A browser can open a webpage that installs malware. An infected USB flash drive is another source of attacks. Servers can be subverted with SQL injection or other attacks; even cloud-based servers are not immune from being probed and subverted by hackers. As the number of endpoints proliferates — think Internet of Things — the odds of an endpoint being compromised, and then used to gain access to the enterprise network and its assets, only increase.

Which are the most vulnerable endpoints? Which need extra protection? All of them, especially devices running some flavor of Windows, according to Mike Spanbauer, Vice President of Security at testing firm NSS Labs. “All of them. So the reality is that Windows is where most targets attack, where the majority of malware and exploits ultimately target. So protecting your Windows environment, your Windows users, both inside your businesses as well as when they’re remote is the core feature, the core component.”

Roy Abutbul, Co-Founder and CEO of security firm Javelin Networks, agreed. “The main endpoints that need the extra protection are those endpoints that are connected to the [Windows] domain environment, as literally they are the gateway for attackers to get the most sensitive information about the entire organization.” He continued, “From one compromised machine, attackers can get 100 per cent visibility of the entire corporate, just from one single endpoint. Therefore, a machine that’s connected to the domain must get extra protection.”

Scott Scheferman, Director of Consulting at endpoint security company Cylance, is concerned about non-PC devices, as well as traditional computers. That might include the Internet of Things, or unprotected routers, switches, or even air-conditioning controllers. “In any organization, every endpoint is really important, now more than ever with the internet of Things. There are a lot of devices on the network that are open holes for an attacker to gain a foothold. The problem is, once a foothold is gained, it’s very easy to move laterally and also elevate your privileges to carry out further attacks into the network.”

At the other end of the spectrum is cloud computing. Think about enterprise-controlled virtual servers, containers, and other resources configured as Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS). Anything connected to the corporate network is an attack vector, explained Roark Pollock, Vice President at security firm Ziften.

Microsoft, too, takes a broad view of endpoint security. “I think every endpoint can be a target of an attack. So usually companies start first with high privilege boxes, like administrator consoles onboard to service, but everybody can be a victim,” said Heike Ritter, a Product Manager for Security and Networking at Microsoft.

I’ve written a long, detailed article on this subject for NetEvents, “From Raw Data to Actionable Intelligence: The Art and Science of Endpoint Security.”

You can also watch my 10-minute video interview with these people here.

Many IT professionals were caught by surprise by last week’s huge cyberattack. Why? They didn’t expect ransomware to spread across their networks on its own.

The reports came swiftly on Friday morning, May 12. The first I saw were that dozens of hospitals in England were affected by ransomware, denying physicians access to patient medical records and causing surgery and other treatments to be delayed.

The infections spread quickly, reportedly hitting as many as 100 countries, with Russian systems affected apparently more than others. What was going on? The details came out quickly: This was a relatively unknown ransomware variant, dubbed WannaCry or WCry. WannaCry had been “discovered” by hackers who stole information from the U.S. National Security Agency (NSA); affected machines were Windows desktops, notebooks and servers that were not up to date on security patches.

Most alarming, WannaCry did not spread across networks in the usual way, through people clicking on email attachments. Rather, once one Windows system was affected on a Windows network, WannaCry managed to propagate itself and infect other unpatched machines without any human interaction. The industry term for this type of super-vigorous ransomware: Ransomworm.

I turned to one of the experts on malware that can spread across Windows networks, Roi Abutbul. A former cybersecurity researcher with the Israeli Air Force’s famous OFEK Unit, he is founder and CEO of Javelin Networks, a security company that uses artificial intelligence to fight against malware.

Abutbul told me, “The WannaCry/Wcry ransomware—the largest ransomware infection in history—is a next-gen ransomware. Opposed to the regular ransomware that encrypts just the local machine it lands on, this type spreads throughout the organization’s network from within, without having users open an email or malicious attachment. This is why they call it ransomworm.”

He continued, “This ransomworm moves laterally inside the network and encrypts every PC and server, including the organization’s backup.” Read more about this, and my suggestions for coping with the situation, in my story for Network World, “Self-propagating ransomware: What the WannaCry ransomworm means for you.”

If you’re in London in a couple weeks, look for me. I’ll be at the NetEvents European Media Spotlight on Innovators in Cloud, IoT, AI and Security, on June 5.

At NetEvents, I’ll be doing lots of things:

  • Acting as the Master of Ceremonies for the day-long conference.
  • Introducing the keynote speaker, Brian Lord, OBE, who is former GCHQ Deputy Director for Intelligence and Cyber Operations
  • Conducting an on-stage interview with Mr. Lord, Arthur Snell, formerly of the British Foreign and Commonwealth Office, and Guy Franco, formerly with the Israeli Defense Forces.
  • Giving a brief talk on the state of endpoint cybersecurity risks and technologies.
  • Moderating a panel discussion about endpoint security.

The one-day conference will be at the Chelsea Harbour Hotel. I’m looking forward to it; maybe I’ll see you there?

The reports came quickly on Friday morning, May 12. The first alert I read referenced dozens of hospitals in England affected by ransomware (without realizing it was a ransomworm), denying physicians access to their patients’ medical records and delaying surgeries and ongoing treatments, said the BBC.

The malware spread rapidly on Friday, with medical staff in the U.K. reportedly watching computers fall out of service “one by one.”

NHS staff shared screenshots of the WannaCry program, which demanded a payment of $300 (£230) in the virtual currency Bitcoin to unlock each computer’s files.

Throughout the day, other countries, mainly in Europe, reported infections.

Some reports said Russia had seen the largest number of infections on the planet. National banks, the interior and health ministries, the state-owned Russian railway company, and the country’s second-largest mobile network were all reported as affected.

The infections spread quickly, reportedly hitting as many as 150 countries, with Russian systems apparently affected more than others.

Read the rest of my article, “Ransomworm golpea a más de 150 Países,” in IT Connect Latam.

Ping! chimes the email software. There are 15 new messages. One is from your boss, calling you by name and asking you to provide feedback ASAP on a new budget for your department. There’s an attachment. You click on it. Hmm, the file appears to be corrupted. That’s weird. An email from the CEO suggests you read a newspaper article. You click the link, the browser seems to go somewhere else, and then redirects to the newspaper. You think nothing of it. However, you’ve been spearphished. Your computer is now infected by malware. And you have no idea that it even happened.

That’s the reality today: Innocent and unsuspecting people are being fooled by malicious emails. Some of them are obvious spammy sorts of messages that nearly everyone would delete — but a few folks will click the link or open the attachment anyway. That’s phishing. More dangerous are spearphishing messages targeting individuals in your organization, customized to make the email look legitimate. It’s crafted with a real executive’s name and a forged return address, with details that match your company, your family, your job, your personal interests. There’s the hook… there’s the worm… got you! And another computer is infected with malware, or another user is tricked into providing account names, passwords, bank account information or worse.

Phishing and spearphishing are the delivery method of choice for identity theft and corporate espionage. If the user falls for the malicious message, the user’s computer is potentially compromised – and can be encrypted and held for ransom (ransomware), turned into a member of a botnet, or used to gain a foothold on a corporate network to steal intellectual property.
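One defensive heuristic is to compare the display name on an incoming message against the domain it actually came from. The sketch below is a minimal illustration, not a real mail filter; the executive names and trusted domain are hypothetical, and production filters combine many such signals.

```python
# A minimal sketch of one anti-spearphishing heuristic: flag messages whose
# display name claims an executive identity while the actual address comes
# from an outside domain. Names and domains here are hypothetical examples.
from email.utils import parseaddr

EXECUTIVES = {"pat smith", "dana jones"}        # hypothetical protected names
CORPORATE_DOMAINS = {"example.com"}             # hypothetical trusted domains

def looks_like_spoof(from_header):
    display, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower()
    impersonates_exec = display.strip().lower() in EXECUTIVES
    outside_domain = domain not in CORPORATE_DOMAINS
    return impersonates_exec and outside_domain

print(looks_like_spoof('Pat Smith <pat.smith@example.com>'))   # False: real domain
print(looks_like_spoof('Pat Smith <ceo.updates@gmail.com>'))   # True: forged name
```

A check like this catches only the crudest forgeries; attackers who compromise a real mailbox sail straight past it, which is why layered defenses matter.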

Yet we’ve had email for decades. Why is phishing still a problem? What does the worst-case scenario look like? Why can’t training solve the problem? What can we do about it?

Read my story for NetEvents, “Blunting the Tip of the Spear by Blocking Phishing and Spearphishing.” It’s a long-form feature – quite in depth.

Also watch a video that I recorded on the same subject. Yes, it’s Alan on a video!

I have a new research paper in Elsevier’s technical journal, Network Security. Here’s the abstract:

Lock it down! Button it up tight! That’s the default reaction of many computer security professionals to anything and everything that’s perceived as introducing risk. Given the rapid growth of cybercrime such as ransomware and the non-stop media coverage of data theft of everything from customer payment card information through pre-release movies to sensitive political email databases, this is hardly surprising.

In attempting to lower risk, however, they also exclude technologies and approaches that could contribute significantly to the profitability and agility of the organisation. Alan Zeichick of Camden Associates explains how to make the most of technology by opening up networks and embracing innovation – but safely.

You can read the whole article, “Enabling innovation by opening up the network,” here.

To those who run or serve on corporate, local government or non-profit boards:

Your board members are at risk, and this places your organizations at risk. Your board members could be targeted by spearphishing (that is, directed personalized attacks) or other hacking because

  • They are often not technologically sophisticated
  • They have access to valuable information
  • If they are breached, you may not know
  • Their email accounts and devices are not locked down using the enterprise-grade cybersecurity technology used to protect employees

In other words, they have a lot of the same information and access as executive employees, but don’t share in their protections. Even if you give them a corporate email address, their laptops, desktops, phones, and tablets are not covered by your IT cybersecurity systems.

Here’s an overview article I read today. It’s a bit vague, but it does raise the alarm (and prompted this post). For the sake of the organization, it might be worth spending a little time at a board meeting on this topic, to raise the issue. But that’s not enough.

What can you do, beyond raising the issue?

  • Provide offline resources and training to board members about how to protect themselves from spearphishing
  • Teach them to use unique strong passwords on all their devices
  • Encourage them to use anti-malware solutions on their devices
  • Provide resources for them to call if they suspect they’ve been hacked

Perhaps your IT provider can prepare a presentation, and make themselves available to assist. Consider this issue in the same light as board liability insurance: Protecting your board members is good for the organization.

Did you know that last year, 75% of data breaches were perpetrated by outsiders, and fully 25% involved internal actors? Did you know that 18% were conducted by state-affiliated actors, and 51% involved organized criminal groups?

That’s according to the newly released 2017 Data Breach Investigations Report from Verizon. It’s the 10th edition of the DBIR, and as always, it’s fascinating – and frightening at the same time.

The most successful tactic, if you want to call it that, used by hackers: stolen or weak (i.e., easily guessed) passwords. They were used in 81% of breaches. The report says that 62% of breaches featured hacking of some sort, and 51% involved malware.
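A toy checker makes the “weak password” half of that statistic concrete. The common-password list below is a tiny illustrative sample, not a real breach corpus; production systems screen against millions of leaked credentials.

```python
# Toy check for the "weak password" problem the DBIR highlights: reject
# anything short or on a common-password list. This five-entry list is
# illustrative only; real checks use large breach corpora.
COMMON_PASSWORDS = {"123456", "password", "qwerty", "letmein", "admin"}

def is_weak(password, min_length=12):
    """Return True if the password is easily guessed by length or commonness."""
    if len(password) < min_length:
        return True
    if password.lower() in COMMON_PASSWORDS:
        return True
    return False

assert is_weak("letmein")                        # common and too short
assert not is_weak("correct-horse-battery-staple")
```

Even a crude gate like this would have blocked many of the credentials behind that 81% figure; the harder problem is stolen passwords, which only multi-factor authentication addresses.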

More disturbing is that fully 66% of malware was installed by malicious email attachments. This means we’re doing a poor job of training our employees not to click links and open documents. We teach, we train, we test, we yell, we scream, and workers open documents anyway. Sigh. According to the report,

People are still falling for phishing—yes still. This year’s DBIR found that around 1 in 14 users were tricked into following a link or opening an attachment — and a quarter of those went on to be duped more than once. Where phishing successfully opened the door, malware was then typically put to work to capture and export data—or take control of systems.

Ransomware is big

We should not be surprised that the DBIR fingers ransomware as a major tool in the hacker’s toolbox:

Ransomware is the latest scourge of the internet, extorting millions of dollars from people and organizations after infecting and encrypting their systems. It has moved from the 22nd most common variety of malware in the 2014 DBIR to the fifth most common in this year’s data.

The Verizon report spends a lot of time on ransomware, saying,

Encouraged by the profitability of ransomware, criminals began offering ransomware-as-a-service, enabling anyone to extort their favorite targets, while taking a cut of the action. This approach was followed by a variety of experiments in ransom demands. Criminals introduced time limits after which files would be deleted, ransoms that increased over time, ransoms calculated based on the estimated sensitivity of filenames, and even options to decrypt files for free if the victims became attackers themselves and infected two or more other people. Multi-level marketing at its finest!

And this, showing another alarming year-on-year increase:

Perhaps the most significant change to ransomware in 2016 was the swing away from infecting individual consumer systems toward targeting vulnerable organizations. Overall, ransomware is still very opportunistic, relying on infected websites and traditional malware delivery for most attacks. Looking again through the lens of DBIR data, web drive-by downloads were the number one malware vector in the 2016 report, but were supplanted by email this year. Social actions, notably phishing, were found in 21% of incidents, up from just 8% in the 2016 DBIR. These emails are often targeted at specific job functions, such as HR and accounting—whose employees are most likely to open attachments or click on links—or even specific individuals.

Read the report

The DBIR covers everything from cyber-espionage to the dangers caused by failing to keep up with patches, fixes, and updates. There are also industry-specific breakouts, covering healthcare, finance, and so on. It’s a big report, but worth reading. And sharing.

Every company should have formal processes for implementing cybersecurity. That includes evaluating systems, describing activities, testing those policies, and authorizing action. After all, in this area, businesses can’t afford to wing it, thinking, “if something happens, we’ll figure out what to do.” In many cases, without the proper technology, a breach may not be discovered for months or years – or ever. At least not until the lawsuits begin.

Indeed, running without cybersecurity accreditations is like riding a bicycle in a rainstorm. Without a helmet. In heavy traffic. At night. A disaster is bound to happen sooner or later: That’s especially true when businesses are facing off against professional hackers, and when script-kiddies who can launch a thousand variations of Ransomware-as-a-Service with a single keystroke stumble across them as juicy victims.

Yet, according to the British Chambers of Commerce (BCC), small and very small businesses are extremely deficient in terms of having cybersecurity plans. According to the BCC, in the U.K. only 10% of one-person businesses and 15% of those with 1-4 employees have any formal cybersecurity accreditations. Contrast that with larger companies: 47% of businesses with more than 100 employees have formal plans.

The BCC surveyed 1,285 business people in the U.K. in January 2017. Of the businesses surveyed, 96% were small or mid-sized businesses. About 22% operate in the manufacturing sector, and 78% operate in the services sector.

And all are woefully unprepared to defend themselves against directly targeted attacks – and against those which are totally generic. It’s like a car thief walking through a parking lot looking to see which vehicles are unlocked: There’s nothing personal, but if your door is open, your car belongs to the crook. Similarly, if some small business’s employees click on a phishing email and end up victims of ransomware, well, their Bitcoins are as good as gold.

What can be done? Training, of course, to help ensure that employees (including executives) don’t welcome cybercriminals in by responding to phishing emails, malicious website ads, and social-media scams. Technology, which could be products like anti-malware software installed on endpoints, as well as services offered by internet service providers and security specialty firms. Indeed, the BCC survey indicated that 63% of businesses are reliant on IT providers to resolve issues after an attack.

Needed: A formal process for cybersecurity

While a CEO may want to focus on his/her primary business, in reality, it’s irresponsible to neglect cybersecurity planning. Indeed, it’s also not good for long-term business success. According to the BCC study, 21% of businesses believe the threat of cyber-crime is preventing their company from growing. And of the businesses that do have cybersecurity accreditations, half (49%) believe it gives their business a competitive advantage over rival companies, and a third (33%) consider it important in creating a more secure environment when trading with other businesses.

One in five businesses in the United Kingdom has fallen victim to cyber-attacks in the past year. That number is probably comparable around the world. There are leading-edge service providers and software companies ready to help reduce that terrible statistic. With more and more hackers, including state-sponsored agents, becoming involved, the stakes are high. Fortunately, the tech industry is up to the challenge.

Some large percentage of IT and security tasks and alerts require simple responses. On a small network, there aren’t many alerts, so administrators can easily handle them: fixing a connection here, approving external VPN access there, updating router firmware on one side, giving users the latest Microsoft Office patches on the other, evaluating a security warning, dismissing a security warning, making sure that a newly spun-up virtual machine has the proper agents and firewall settings, reviewing log activity. That sort of thing.

On a large network, those tasks become tedious… and on a very large network, they can escalate unmanageably. As networks scale to hundreds, thousands, and hundreds of thousands of devices, thanks to mobility and the Internet of Things, the load expands exponentially – and so do routine IT tasks and alerts, especially when the network, its devices, users and applications are in constant flux.

Most tasks can be automated, yes, but it’s not easy to spell out in a standard policy-based system exactly what to do. Similarly, the proper way of handling alerts can be automated, but given the tremendous variety of situations, variables, combinations and permutations, that too can be challenging. Merely programming a large number of possible situations, and their responses, would be a tremendous task — and not even worth the effort, since the scripts would be brittle and would themselves require constant review and maintenance.

That’s why in many organizations, only responses to the very simplest of tasks and alert responses are programmed in rule-based systems. The rest are shunted over to IT and security professionals, whose highly trained brains can rapidly decide what to do and execute the proper response.
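That triage split can be sketched in a few lines: a small rule table handles the simplest alerts automatically, and everything else is escalated to a person. The rule names and alert fields below are hypothetical.

```python
# Sketch of the triage split described above: a small rule table auto-handles
# the simplest alerts; anything unrecognized goes to a human analyst.
# The rules and alert fields here are hypothetical examples.
AUTO_RULES = {
    "disk_nearly_full": "expand volume and notify owner",
    "cert_expiring_soon": "renew certificate",
    "agent_missing": "reinstall monitoring agent",
}

def triage(alert):
    """Route an alert to an automated action or to the on-call analyst."""
    action = AUTO_RULES.get(alert["type"])
    if action is not None:
        return ("automated", action)
    return ("human", "escalate to on-call analyst")

print(triage({"type": "cert_expiring_soon"}))   # handled by a rule
print(triage({"type": "lateral_movement?"}))    # novel: goes to a person
```

The table works fine at this size; the brittleness appears when the table needs thousands of entries that must themselves be reviewed and maintained, which is the opening for machine learning.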

At the same time, those highly trained brains turn into mush because handling routine, easy-to-solve problems is mind-numbing and not intellectually challenging. Solving a problem once is exciting. Solving nearly the same problem a hundred times every day, five days a week, 52 weeks a year (not counting holidays) is inspiration for updating the C.V… and finding a more interesting job.

Enter Artificial Intelligence

AI has already proven itself in computer management and security. Consider the high-profile role that AI pattern recognition plays in Cylance’s endpoint security software. The Cylance solution trains itself to recognize good files (like executables, images and documents) and malicious ones – and can spot the bad ones without using signatures. It can even spot files that have never been seen before, because it’s not trained on specific viruses or trojans, but rather on “good” vs. “bad.”
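Cylance’s actual model is proprietary, but the idea of classifying by learned features instead of signatures can be hinted at with a single crude feature: byte entropy, since packed or encrypted payloads tend toward random-looking bytes while ordinary documents do not. This is a stand-in illustration, not their method.

```python
# Crude stand-in for "classify by features, not signatures": score a file by
# its byte entropy. Packed or encrypted payloads approach 8 bits/byte; plain
# English text sits far lower. Real products learn from many such features.
import math
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of the byte distribution, in bits per byte."""
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def suspicious(data: bytes, threshold=7.2) -> bool:
    return byte_entropy(data) > threshold

text = b"This quarterly report summarizes endpoint alerts." * 20
noise = bytes(range(256)) * 20      # stand-in for a packed payload
print(suspicious(text))    # False: low-entropy English text
print(suspicious(noise))   # True: near-maximal entropy
```

One feature alone misfires constantly (compressed archives are high-entropy too); the point of the machine-learning approach is to weigh hundreds of such features together.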

Torsten George is a believer, as he writes in “The Role of Artificial Intelligence in Cyber Security,”

Last year, the IT security community started to buzz about AI and machine learning as the Holy Grail for improving an organization’s detection and response capabilities. Leveraging algorithms that iteratively learn from data, promises to uncover threats without requiring headcounts or the need to know “what to look for”.

He continues,

Enlisting machine learning to do the heavy lifting in first line security data assessment enables analysts to focus on more advanced investigations of threats rather than performing tactical data crunching. This meeting of the minds, whereby AI is applied using a human-interactive approach holds a lot of promise for fighting, detecting, and responding to cyber risks.

Menlo Security is one of many network-protection companies that uses artificial intelligence. The Menlo Security Isolation Platform uses AI to prevent Internet-based malware from ever reaching an endpoint, such as a desktop or mobile device, because email and websites are accessed inside the cloud – not on the client’s computer. Only safe, malware-free rendering information is sent to the user’s endpoint, eliminating the possibility of malware reaching the user’s device. An artificial intelligence engine constantly scans the Internet session to provide protection against spear-phishing and other email attacks.

What if a machine does become compromised? It’s unlikely, but it can happen – and the price of a single breach can be incredible, especially if a hacker can take full control of the compromised device and use it to attack other assets within the enterprise, such as servers, routers or executives’ computers.

If a breach does occur, that’s when the AI technology like that of Javelin Networks leaps into action. The AI detects that the attack is in progress, alerts security teams, and isolates the device from the network. Simultaneously, the AI tricks the attackers into believing they’ve succeeded in their attack, therefore keeping them “on the line” while real-time forensics tools gather information needed to identify the attacker and help shut them down for good.

Manage the Network, Hal

Of course, AI can serve a vital purpose in managing a key element of modern networks beyond security. As Ajay Malik recently wrote in “Artificial intelligence will revolutionize Wi-Fi,”

The problem is that the data source in a wireless network is huge. The data varies at every transmission level. There is a “data rate” of each message transmitted. There are “retries” for each message transmitted.

The reason for not being able to “construct” the received message is specific for each message. The manual classification and analysis of this data is infeasible and uneconomic. Hence, all data available by different vendors is plagued by averages. This is where I believe artificial intelligence has a role to play.

Deep neural nets can automate the analysis and make it possible to analyze every trend of wireless. Machine learning and algorithms can ensure the end user experience. Only the use of AI can change the center of focus from the evolution of wireless or adding value to wireless networks to automatically ensuring the experience.

We will see AI at every level of the network operations center. There are too many devices, too many users, and too many rapid changes, for human and normal rule-based automation systems to keep up. Self-learning systems that adapt and solve real problems quickly and correctly will be essential in every IT organization.

“Alexa! Unlock the front door!” No, that won’t work, even if you have an intelligent lock designed to work with the Amazon Echo. That’s because Amazon is smart enough to know that someone could shout those five words into an open window, and gain entry to your house.

Presumably Amazon doesn’t allow voice control of “Alexa! Turn off the security system!” but that’s purely conjecture. It’s not something I’ve tried. And certainly it’s possible to use programming or a clever workaround to enable voice-activated door unlocking or force-field deactivation. That’s why, while our home contains a fair amount of cutting-edge AI-based automation, perimeter security is not hooked up to any of it. We’ll rely upon old-fashioned locks and keys and alarm keypads, thank you very much.

And sorry, no voice-enabled safes for me either. It didn’t work so well to protect the CIA against Jason Bourne, did it?

Unlike the fictional CIA safe and the equally fictional computer on the Starship Enterprise, Echo, Google Home, Siri, Android, and their friends can’t identify specific voices with any degree of accuracy. In most cases, they can’t do so at all. So don’t expect to be able to train Alexa to set up access control lists (ACLs) based on voiceprints. That’ll have to wait for the 23rd century, or at least for another couple of years.

The inability of today’s AI-based assistants to discriminate allows for some foolishness – and some shenanigans. We have an Echo in our family room, and every so often, while watching a movie, Alexa will suddenly proclaim, “Sorry, I didn’t understand that command,” or some such. What set the system off? No idea. But it’s amusing.

Less amusing was Burger King’s advertising prank which intentionally tried to get Google Home to help sell more hamburgers. As Fast Company explains:

A new Whopper ad from Burger King turns Google’s voice-activated speaker into an unwitting shill. In the 15-second spot, a store employee utters the words “OK Google, what is the Whopper burger?” This should wake up any Google Home speakers present, and trigger a partial readout of the Whopper’s Wikipedia page. (Android phones also support “OK Google” commands, but use voice training to block out unauthorized speakers.)

Fortunately, Google was as annoyed as everyone else, and took swift action, said the story:

Update: Google has stopped the commercial from working – presumably by blacklisting the specific audio clip from the ad – though Google Home users can still inquire about the Whopper in their own words.

Burger King wasn’t the first to try this stunt. Other similar tricks have succeeded against Home and Echo, and sometimes, the devices are activated accidentally by TV shows and news reports. Look forward to more of this.

It reminds me of the very first time I saw a prototype Echo. What did I say? “Alexa, Format See Colon.” Darn. It didn’t erase anything. But at least it’s better than a cat running around on your laptop keyboard, erasing your term paper. Or a TV show unlocking your doors. Right?

It’s a bad idea to intentionally weaken the security that protects hardware, software, and data. Why? Many reasons, including the basic right (in many societies) of individuals to engage in legal activities anonymously. An additional reason: Because knowledge about weakened encryption, back doors and secret keys could be leaked or stolen, leading to unintended consequences and breaches by bad actors.

Sir Tim Berners-Lee, the inventor of the World Wide Web, is worried. Some officials in the United States and the United Kingdom want to force technology companies to weaken encryption and/or provide back doors to government investigators.

In comments to the BBC, Sir Tim said that there could be serious consequences to giving keys to unlock coded messages and forcing carriers to help with espionage. The BBC story said:

“Now I know that if you’re trying to catch terrorists it’s really tempting to demand to be able to break all that encryption but if you break that encryption then guess what – so could other people and guess what – they may end up getting better at it than you are,” he said.

Sir Tim also criticized moves by legislators on both sides of the Atlantic, which he sees as an assault on the privacy of web users. He attacked the UK’s recent Investigatory Powers Act, which he had criticized when it went through Parliament: “The idea that all ISPs should be required to spy on citizens and hold the data for six months is appalling.”

The Investigatory Powers Act 2016, which became U.K. law last November, gives broad powers to the government to intercept communications. It requires telecommunications providers to cooperate with government requests for assistance with such interception.

It Started with Government

Sir Tim’s comments appear to be motivated by statements from his own government. U.K. Home Secretary Amber Rudd said it is “unacceptable” that terrorists were using apps like WhatsApp to conceal their communications, and that there “should be no place for terrorists to hide.”

In the United States, there have been many calls for officials to be given back doors into secure hardware, software, or data repositories. One that received widespread attention came in 2016, when the FBI tried to compel Apple to unlock the San Bernardino attacker’s iPhone. Apple refused, and this sparked a widespread public debate about the government’s power to pursue terrorists or suspected criminals – and whether companies can be forced to break into their own products, or to create intentional weaknesses in encryption.

Ultimately, of course, the FBI got its data by using third-party tools to break into the iPhone. That didn’t settle the question, and indeed, the debate continues to rage. So why not provide a back door? Why not use crippled encryption algorithms that can be easily broken by those who know the flaw? Why not give law-enforcement officials a “master key” to encryption algorithms?

Aside from legal and moral issues, weakening encryption puts everyone at risk. Someone like Edward Snowden, or a spy, might steal information about the weakness, and offer it to criminals, a state-sponsored organization, or the dark web. And now, everyone – not just the FBI, not only MI5 – can break into systems, potentially without even leaving a fingerprint or a log entry.

Stolen Keys

Consider the widely distributed Content Scramble System (CSS) used to secure commercial movies on DVD discs. In theory, the DVDs were encoded so that they could be played only on authorized devices (like DVD players) whose manufacturers had licensed the code. The 40-bit cipher, introduced around 1996, was compromised in 1999. It’s essentially worthless.
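Back-of-the-envelope arithmetic shows why a 40-bit key is hopeless today. Here is a short Python sketch; the keys-per-second rate is an assumed round number for illustration, and the real CSS breaks exploited weaknesses in the cipher itself rather than raw brute force:

```python
# Rough keyspace comparison: a 40-bit key versus a modern 128-bit key.
# RATE is a hypothetical figure -- one billion keys tested per second.

RATE = 1_000_000_000  # assumed keys tested per second

def brute_force_seconds(bits: int, rate: int = RATE) -> float:
    """Worst-case time to try every key of the given length."""
    return (2 ** bits) / rate

css_40 = brute_force_seconds(40)    # about 1,100 seconds (~18 minutes)
aes_128 = brute_force_seconds(128)  # astronomically long

print(f"40-bit keyspace: ~{css_40 / 60:.0f} minutes")
print(f"128-bit keyspace: ~{aes_128 / 3.15e7:.2e} years")
```

At an assumed billion keys per second, the entire 40-bit keyspace falls in under twenty minutes, while a 128-bit keyspace would take on the order of 10^22 years — which is why key length alone made CSS a soft target.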

Or consider “TSA-approved” luggage locks, which are nominally secured by a key or combination. However, there are seven master keys that allow airport security staff to open any “TSA-approved” lock without cutting it off – and all seven have been compromised. One famous breach of that system: The Washington Post published a photograph of the master keys, and based on that photo, hackers could easily reproduce them. Whoops!

Speaking of WhatsApp, the software had a flaw in its end-to-end encryption, as was revealed this January. The flaw could let others listen in. The story was first reported by the Guardian, which wrote:

WhatsApp’s end-to-end encryption relies on the generation of unique security keys, using the acclaimed Signal protocol, developed by Open Whisper Systems, that are traded and verified between users to guarantee communications are secure and cannot be intercepted by a middleman.

However, WhatsApp has the ability to force the generation of new encryption keys for offline users, unbeknown to the sender and recipient of the messages, and to make the sender re-encrypt messages with new keys and send them again for any messages that have not been marked as delivered.

The recipient is not made aware of this change in encryption, while the sender is only notified if they have opted-in to encryption warnings in settings, and only after the messages have been re-sent. This re-encryption and rebroadcasting of previously undelivered messages effectively allows WhatsApp to intercept and read some users’ messages.
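To see why a silent key swap matters, here is a toy model in Python. The `toy_encrypt` XOR function is a deliberately trivial stand-in — not the Signal protocol — and the keys and message are invented; the point is only that whoever issues the replacement key can read the re-encrypted message:

```python
# Toy illustration (NOT real WhatsApp/Signal code) of the reported flaw:
# if a server can silently swap a recipient's key and make the sender
# re-encrypt undelivered messages, the party holding the new key can
# decrypt them.

def toy_encrypt(msg: bytes, key: int) -> bytes:
    # Trivial single-byte XOR stand-in for a real cipher.
    return bytes(b ^ key for b in msg)

toy_decrypt = toy_encrypt  # XOR is its own inverse

original_key = 0x2A  # key the real recipient holds
attacker_key = 0x55  # key actually held by an interceptor

# Sender encrypts for the recipient's current key; message sits undelivered.
pending = toy_encrypt(b"meet at noon", original_key)

# Server announces a "new key" for the offline recipient; the sender's
# client automatically re-encrypts the undelivered message with it,
# without warning either party.
resent = toy_encrypt(b"meet at noon", attacker_key)

# Whoever controls the replacement key can now read the message.
assert toy_decrypt(resent, attacker_key) == b"meet at noon"
```

The real protocol is vastly more sophisticated, but the structural problem is the same: automatic, unannounced re-encryption hands an interception opportunity to whoever controls key distribution.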

Just Say No

Most (or all) secure systems have their flaws. Yes, they can be broken, but the goal is that if a defect or vulnerability is found, the system will be patched and upgraded. In other words, we expect those secure systems to be indeed secure. Therefore, let’s say “no” to intentional loopholes, back doors, master keys and encryption compromises. We’ve all seen that government secrets don’t stay secret — and even if we believe that government spy agencies should have the right to unlock devices or decrypt communications, none of us want those abilities to fall into the wrong hands.

Can’t we fix injection already? It’s been nearly four years since the most recent iteration of the OWASP Top 10 came out — that’s June 12, 2013. The OWASP Top 10 are the most critical web application security flaws, as determined by a large group of experts. The list doesn’t change much, or change often, because the fundamentals of web application security are consistent.

The 2013 OWASP Top 10 were:

  1. Injection
  2. Broken Authentication and Session Management
  3. Cross-Site Scripting (XSS)
  4. Insecure Direct Object References
  5. Security Misconfiguration
  6. Sensitive Data Exposure
  7. Missing Function Level Access Control
  8. Cross-Site Request Forgery (CSRF)
  9. Using Components with Known Vulnerabilities
  10. Unvalidated Redirects and Forwards

The previous list came out on April 19, 2010:

  1. Injection
  2. Cross-Site Scripting (XSS)
  3. Broken Authentication and Session Management
  4. Insecure Direct Object References
  5. Cross-Site Request Forgery (CSRF)
  6. Security Misconfiguration
  7. Insecure Cryptographic Storage
  8. Failure to Restrict URL Access
  9. Insufficient Transport Layer Protection
  10. Unvalidated Redirects and Forwards

Looks pretty familiar. If you go back further, to the Open Web Application Security Project’s inaugural 2004 list and then the 2007 list, the pattern of flaws stays the same. That’s because programmers, testers, and designers keep making the same mistakes, over and over again.

Take #1, Injection (often written as SQL Injection, but it’s broader than SQL alone). It’s described as:

Injection flaws occur when an application sends untrusted data to an interpreter. Injection flaws are very prevalent, particularly in legacy code. They are often found in SQL, LDAP, Xpath, or NoSQL queries; OS commands; XML parsers, SMTP Headers, program arguments, etc. Injection flaws are easy to discover when examining code, but frequently hard to discover via testing. Scanners and fuzzers can help attackers find injection flaws.

The technical impact?

Injection can result in data loss or corruption, lack of accountability, or denial of access. Injection can sometimes lead to complete host takeover.

And the business impact?

Consider the business value of the affected data and the platform running the interpreter. All data could be stolen, modified, or deleted. Could your reputation be harmed?

Eliminating the vulnerability to injection attacks is not rocket science. OWASP summarizes three approaches:

Preventing injection requires keeping untrusted data separate from commands and queries.

The preferred option is to use a safe API which avoids the use of the interpreter entirely or provides a parameterized interface. Be careful with APIs, such as stored procedures, that are parameterized, but can still introduce injection under the hood.

If a parameterized API is not available, you should carefully escape special characters using the specific escape syntax for that interpreter. OWASP’s ESAPI provides many of these escaping routines.

Positive or “white list” input validation is also recommended, but is not a complete defense as many applications require special characters in their input. If special characters are required, only approaches 1. and 2. above will make their use safe. OWASP’s ESAPI has an extensible library of white list input validation routines.
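As a concrete illustration of the first (preferred) option — a parameterized API — here is a minimal Python sketch using the standard library’s sqlite3 driver. The table, column, and attack payload are invented for the example:

```python
# Parameterized queries vs. string concatenation, sketched with sqlite3.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "' OR '1'='1"  # classic injection payload

# UNSAFE: concatenation lets the payload rewrite the query itself,
# turning the WHERE clause into a condition that matches every row.
unsafe = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'"
).fetchall()

# SAFE: the driver binds user_input strictly as data, never as SQL.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()

print(unsafe)  # [('alice', 'admin')] -- payload matched every row
print(safe)    # [] -- no user is literally named "' OR '1'='1"
```

The same pattern — placeholders bound by the driver instead of strings glued together — applies to virtually every database API, which is exactly why OWASP lists it as the preferred defense.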

Not rocket science, not brain surgery — and the same is true of the other vulnerabilities. There’s no excuse for still getting these wrong, folks. Cut down on these top 10, and our web applications will be much safer, and our organizational risk much reduced.

Do you know how often your web developers make the OWASP Top 10 mistakes? The answer should be “never.” They’ve had plenty of time to figure this out.