On February 7, 2018, the carrier Swisscom admitted that a security lapse had exposed sensitive information about 800,000 customers. The security failure occurred at one of Swisscom’s sales partners.

This is what can happen when a business gives its partners access to critical data. The security chain is only as good as the weakest link – and it can be difficult to ensure that partners are taking sufficient care, even if they pass an onboarding audit. Swisscom says,

In autumn of 2017, unknown parties misappropriated the access rights of a sales partner, gaining unauthorised access to customers’ name, address, telephone number and date of birth.

That’s pretty bad, but what came next was even worse, in my opinion. “Under data protection law this data is classed as ‘non-sensitive’,” said Swisscom. That’s distressing, because that’s exactly the sort of data needed for identity theft. But we digress.

Partners and Trust

Partners can be the way into an organization. Swisscom claims that new restrictions, such as preventing high-volume queries and using two-factor authentication, mean such an event can never occur again, which seems optimistic: “Swisscom also made a number of changes to better protect access to such non-sensitive personal data by third-party companies… These measures mean that there is no chance of such a breach happening again in the future.”
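One of the restrictions Swisscom mentions – preventing high-volume queries – can be sketched as a sliding-window rate limiter on partner API access. This is a minimal illustration; the class and parameter names are mine, not Swisscom’s actual implementation:

```python
import time
from collections import deque

class QueryRateLimiter:
    """Reject a partner's queries once they exceed max_queries per window_seconds."""

    def __init__(self, max_queries, window_seconds):
        self.max_queries = max_queries
        self.window = window_seconds
        self.history = {}  # partner_id -> deque of request timestamps

    def allow(self, partner_id, now=None):
        now = time.monotonic() if now is None else now
        q = self.history.setdefault(partner_id, deque())
        # Drop timestamps that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_queries:
            return False  # over quota: deny, and ideally alert on the spike
        q.append(now)
        return True
```

A bulk scrape of customer records shows up as a burst of queries, so a limiter like this denies (and can alert on) exactly the access pattern the attackers used.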

Let’s hope they are correct. But in the meantime, what can organizations do? First, ensure that all third parties with access to sensitive data – such as intellectual property, financial information, and customer information – go through a rigorous security audit.

Tricia C. Bailey’s article, “Managing Third-Party Vendor Risk,” makes good recommendations for how to vet vendors – and also how to prepare at your end. For example, do you know what (and where) your sensitive data is? Do vendor contracts spell out your rights and responsibilities for security and data protection – and your vendor’s rights and responsibilities? Do you have a strong internal security policy? If your own house isn’t in order, you can’t expect a vendor to improve your security. After all, you might be the weakest link.

Unaccustomed to performing security audits on partners? Organizations like CA Veracode offer audit-as-a-service, such as with their Vendor Application Security Testing service. There are also vertical industry services: the HITRUST Alliance, for example, offers a standardized security audit process for vendors serving the U.S. healthcare industry with its Third Party Assurance Program.

Check the Back Door

Many vendors and partners require back doors into enterprise data systems. Those back doors, or remote access APIs, can be essential for the vendors’ performing their line-of-business function. Take the Swisscom sales partner: It needs to be able to query Swisscom customers and add/update customer information, in order to serve effectively as a sales organization.

Yet if the partner is breached, that back door can fall under the control of hackers, using the partner’s systems or credentials. In its 2017 Data Breach Investigations Report, Verizon reported that in regard to Point-of-Sale (POS) systems, “Almost 65% of breaches involved the use of stolen credentials as the hacking variety, while a little over a third employed brute force to compromise POS systems. Following the same trend as last year, 95% of breaches featuring the use of stolen credentials leveraged vendor remote access to hack into their customer’s POS environments.”

A Handshake Isn’t Good Enough

How secure is your business partner, your vendor, your contractor? If you don’t know, then you don’t know. If something goes wrong at your partners’ end, never forget that it may be your IP, your financials, and your customers’ data that is exposed. After all, whether or not you can recover damages from the partner in a lawsuit, your organization is the one that will pay the long-term price in the marketplace.

Savvy businesses have policies that prevent on-site viewing of pornography, in part to avoid creating a hostile work environment — and to avoid sexual harassment lawsuits. For security professionals, porn sites are also a dangerous source of malware.

That’s why human-resources policies should be backed up with technological measures, including blocking porn sites at the firewall and using on-device controls to stop browsers from accessing such sites.
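The firewall- and proxy-level blocking described here usually comes down to matching hostnames against a blocklist, including all subdomains of a blocked domain. A minimal sketch (the blocklist entries are made-up placeholders):

```python
def is_blocked(hostname, blocklist):
    """Return True if hostname, or any parent domain of it, is on the blocklist."""
    hostname = hostname.lower().rstrip(".")
    labels = hostname.split(".")
    # Check "a.b.example.com", then "b.example.com", then "example.com"
    # (but never the bare TLD).
    for i in range(len(labels) - 1):
        if ".".join(labels[i:]) in blocklist:
            return True
    return False
```

Real deployments do this in the DNS resolver or proxy rather than in application code, but the matching logic is the same: blocking `badsite.example` must also catch `cdn.badsite.example`.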

Even that may not be enough, says Kaspersky Labs, in its report, “Naked online: cyberthreats facing users of adult websites and applications.” Why? Because naughty content and videos have gone mainstream, says the report:

Today, porn can be found not only on specialist websites, but also in social media networks and on social platforms like Twitter. Meanwhile, the ‘classic’ porn websites are turning into content-sharing platforms, creating loyal communities willing to share their videos with others in order to get ‘likes’ and ‘shares’.

This problem is not new, but it’s increasingly dangerous, thanks to the criminal elements on the Dark Web, which are advertising tools for weaponizing porn content. Says Kaspersky, “While observing underground and semi-underground market places on the dark web, looking for information on the types of legal and illegal goods sold there, we found that among the drugs, weapons, malware and more, credentials to porn websites were often offered for sale.”

So, what’s the danger? There are concerns about attacks on both desktop/notebook and mobile users. In the latter case, says Kaspersky,

  • In 2017, at least 1.2 million users encountered malware with adult content at least once. That is 25.4% of all users who encountered any type of Android malware.
  • Mobile malware is making extensive use of porn to attract users: Kaspersky Lab researchers identified 23 families of mobile malware that use porn content to hide their real functionality.
  • Malicious clickers, rooting malware, and banking Trojans are the types of malware that are most often found inside porn apps for Android.

That’s the type of malware that’s dangerous on a home network. It’s potentially ruinous if it provides a foothold onto an enterprise network not protected by intrusion detection/prevention systems or other anti-malware tech. The Kaspersky report goes into a lot of detail, and you should read it.

For another take on the magnitude of the problem: The Nielsen Company reported that more than 21 million Americans accessed adult websites on work computers – that is, 29% of working adults. Bosses are in on it too. In 2013, Time Magazine reported that a survey of 200 U.S.-based data security analysts revealed that 40 percent had removed malware from a senior executive’s computer, phone, or tablet after the executive visited a porn website.

What Can You Do?

Getting rid of pornography isn’t easy, but it’s not rocket science either. Start with a strong policy. Work with your legal team to make sure the policy is both legal and comprehensive. Get employee feedback on the policy, to help generate buy-in from executives and the rank-and-file.

Once the policy is finalized, communicate it clearly. Train employees on what to do, what not to do… and the employment ramifications for violating the policy. Explain that this policy is not just about harassment, but also about information security.

Block, block, block. Block at the firewall, block at proxy servers, block on company-owned devices. Block on social media. Make sure that antivirus is up to date. Review log files.

Finally, take this seriously. This isn’t a case of giggling (or eye-rolling) about boys-being-boys, or harmless diversions comparable to work-time shopping on eBay. Porn isn’t only offensive in the workplace, but it’s also a gateway to the Dark Web, criminals, and hackers. Going after porn isn’t only about being Victorian about naughty content. It’s about protecting your business from hackers.

Not a connected car.

Nobody wants bad guys to be able to hack connected cars. Equally importantly, they shouldn’t be able to hack any part of the multi-step communications path that leads from the connected car to the Internet to cloud services – and back again. Fortunately, companies are working across the automotive and security industries to make sure that doesn’t happen.

The consequences of cyberattacks against cars range from the bad to the horrific: Hackers might be able to determine that a driver is not home, and sell that information to robbers. Hackers could access accounts and passwords, and be able to leverage that information for identity theft, or steal information from bank accounts. Hackers might be able to immobilize vehicles, or modify/degrade the functionality of key safety features like brakes or steering. Hackers might even be able to seize control of the vehicle, and cause accidents or terrorist incidents.

Horrific. Thankfully, companies like semiconductor leader Micron Technology, along with communication security experts NetFoundry, have a plan – and are partnering with vehicle manufacturers to embed secure, trustworthy hardware into connected cars. The result: Safety. Security. Trust. Vroom.

It starts with the Internet of Things

The IoT consists of autonomous computing units, connected to back-end services via the Internet. Those back-end services are often in the cloud, and in the case of connected cars, might offer anything from navigation to infotainment to preventive maintenance to firmware upgrades for built-in automotive features. Often, the back-end services would be offered through the automobile’s manufacturer, though they may be provisioned through third-party providers.

The communications chain for connected cars is lengthy. On the car side, it begins with an embedded component (think stereo head unit, predictive front-facing radar used for adaptive cruise control, or anti-lock brake monitoring system). The component will likely contain or be connected to an ECU – an embedded control unit, a circuit board with a microprocessor, firmware, RAM, and a network connection. The ECU, in turn, is connected via an in-vehicle network, which connects to a communications gateway.

That communications gateway talks to a telecommunications provider, which could change as the vehicle crosses service provider or national boundaries. The telco links to the Internet, the Internet links to a cloud provider (such as Amazon Web Services), and from there, there are services that talk to the automotive systems.

Trust is required at all stages of the communications. The vehicle must be certain that its embedded devices, ECUs, and firmware are not corrupted or hacked. The gateway needs to know that it’s talking to the real car and its embedded systems – not fakes or duplicates offered by hackers. It also needs to know that the cloud services are the genuine article, and not fakes. And of course, the cloud services must be assured that they are talking to the real, authenticated automotive gateway and in-vehicle components.
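One common building block for this kind of mutual trust is challenge-response authentication against a key provisioned into the hardware at the factory. The sketch below is a generic illustration using an HMAC over a random challenge; it is not the specific protocol any automaker or Micron/NetFoundry product uses:

```python
import hashlib
import hmac
import secrets

def make_challenge():
    """Gateway generates a fresh random challenge per session (prevents replay)."""
    return secrets.token_bytes(16)

def device_response(shared_key, challenge):
    """Device proves it holds the provisioned key without ever revealing it."""
    return hmac.new(shared_key, challenge, hashlib.sha256).digest()

def gateway_verify(shared_key, challenge, response):
    """Gateway recomputes the expected response and compares in constant time."""
    expected = hmac.new(shared_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```

A hacker offering a fake ECU or fake cloud endpoint can’t produce a valid response without the key, which is the property each link in the chain needs.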

Read more about this in my feature for Business Continuity, “Building Cybertrust into the Connected Car.”

Farewell, Prius

A sad footnote to this blog post. Our faithful Prius, pictured above, was totaled in a collision. Nobody was injured, which is the most important, but the car is gone. May it rust in piece.

From January 1, 2005 through December 27, 2017, the Identity Theft Resource Center (ITRC) reported 8,190 breaches, with 1,057,771,011 records exposed. That’s more than a billion records. Billion with a B. That’s not a problem. That’s an epidemic.

That horrendous number compiles data breaches in the United States confirmed by media sources or government agencies. Breaches may have exposed information that could potentially lead to identity theft, including Social Security numbers, financial account information, medical information, and even email addresses and passwords.

Of course, some people may be included in multiple breaches – and given today’s highly interconnected world, that’s very likely. There’s no good way to know how many individuals were affected.

What defines a breach? The organization says,

Security breaches can be broken down into a number of additional sub-categories by what happened and what information (data) was exposed. What they all have in common is they usually contain personal identifying information (PII) in a format easily read by thieves, in other words, not encrypted.

The ITRC tracks seven categories of breaches:

  • Insider Theft
  • Hacking / Computer Intrusion (includes Phishing, Ransomware/Malware and Skimming)
  • Data on the Move
  • Physical Theft
  • Employee Error / Negligence / Improper Disposal / Lost
  • Accidental Web/Internet Exposure
  • Unauthorized Access

As we’ve seen, data loss has occurred when employees store data files on a cloud service without encryption, without passwords, without access controls. It’s like leaving a luxury car unlocked, windows down, keys on the seat: If someone sees this and steals the car, it’s theft – but it was easily preventable theft abetted by negligence.

The rate of breaches is increasing, says the ITRC. The number of U.S. data breach incidents tracked in 2017 hit a record high of 1,579 breaches exposing 178,955,069 records. This is a 44.7% increase over the record high figures reported for 2016, says the ITRC.

It’s mostly but not entirely about hacking. The ITRC says in its “2017 Annual Data Breach Year-End Review,”

Hacking continues to rank highest in the type of attack, at 59.4% of the breaches, an increase of 3.2 percent over 2016 figures: Of the 940 breaches attributed to hacking, 21.4% involved phishing and 12.4% involved ransomware/malware.

In addition,

Nearly 20% of breaches included credit and debit card information, a nearly 6% increase from last year. The actual number of records included in these breaches grew by a dramatic 88% over the figures we reported in 2016. Despite efforts from all stakeholders to lessen the value of compromised credit/debit credentials, this information continues to be attractive and lucrative to thieves and hackers.

Data theft truly is becoming epidemic. And it’s getting worse.

At least, I think it’s Swedish! Just stumbled across this. I hope they bought the foreign rights to one of my articles…


Wireless Ethernet connections aren’t necessarily secure. The authentication methods used to permit access between a device and a wireless router aren’t very strong. The encryption methods used to handle that authentication, and then the data traffic after authorization, aren’t very strong. And the rules that enforce the use of authentication and encryption aren’t always enabled, especially on public hotspots in hotels, airports, and coffee shops; there, authentication is handled by a web browser application, not by the Wi-Fi protocols embedded in a local router.

Helping to solve those problems will be WPA3, an update to decades-old wireless security protocols. Announced by the Wi-Fi Alliance at CES in January 2018, the new standard promises the following:

Four new capabilities for personal and enterprise Wi-Fi networks will emerge in 2018 as part of Wi-Fi CERTIFIED WPA3™. Two of the features will deliver robust protections even when users choose passwords that fall short of typical complexity recommendations, and will simplify the process of configuring security for devices that have limited or no display interface. Another feature will strengthen user privacy in open networks through individualized data encryption. Finally, a 192-bit security suite, aligned with the Commercial National Security Algorithm (CNSA) Suite from the Committee on National Security Systems, will further protect Wi-Fi networks with higher security requirements such as government, defense, and industrial.

This is all good news. According to Zack Whittaker writing for ZDNet,

One of the key improvements in WPA3 will aim to solve a common security problem: open Wi-Fi networks. Seen in coffee shops and airports, open Wi-Fi networks are convenient but unencrypted, allowing anyone on the same network to intercept data sent from other devices.

WPA3 employs individualized data encryption, which scramble the connection between each device on the network and the router, ensuring secrets are kept safe and sites that you visit haven’t been manipulated.

Another key improvement in WPA3 will protect against brute-force dictionary attacks, making it tougher for attackers near your Wi-Fi network to guess a list of possible passwords.

The new wireless security protocol will also block an attacker after too many failed password guesses.
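That lockout behavior – block an account or client after too many consecutive failed guesses – is simple to state and simple to sketch. This toy version tracks failures per user; real implementations also add time-based backoff and per-client tracking:

```python
class LoginGuard:
    """Lock out further attempts after max_failures consecutive bad guesses."""

    def __init__(self, max_failures=5):
        self.max_failures = max_failures
        self.failures = {}  # user -> consecutive failure count

    def attempt(self, user, password_ok):
        if self.failures.get(user, 0) >= self.max_failures:
            return "locked"  # even a correct password is refused once locked
        if password_ok:
            self.failures[user] = 0
            return "ok"
        self.failures[user] = self.failures.get(user, 0) + 1
        return "denied"
```

The point is that a dictionary attack, which depends on making thousands of guesses, dies after a handful of them.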

What About KRACK?

A challenge for the use of WPA2 is that a defect, called KRACK, was discovered and published in October 2017. To quote my dear friend John Romkey, founder of FTP Software:

The KRACK vulnerability allows malicious actors to access a Wi-Fi network without the password or key, observe what connected devices are doing, modify the traffic amongst them, and tamper with the responses the network’s users receive. Everyone and anything using Wi-Fi is at risk. Computers, phones, tablets, gadgets, things. All of it. This isn’t just a flaw in the way vendors have implemented Wi-Fi. No. It’s a bug in the specification itself.

The timing of the WPA3 release couldn’t be better. But what about older devices? I have no idea how many of my devices — including desktops, phones, tablets, and routers — will be able to run WPA3. I don’t know whether firmware updates will be applied automatically, or whether I will need to seek them out.

What’s more, what about the millions of devices out there? Presumably new hotspots will downgrade to WPA2 if a device can’t support WPA3. (And the other way around: A new mobile device will downgrade to talk to an older or unpatched hotel room’s Wi-Fi router.) It could take ages before we reach a critical mass of new devices that can handle WPA3 end-to-end.
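That downgrade dance is just capability negotiation: each side advertises what it supports, and the connection uses the strongest protocol in common. A sketch (protocol names only; the real negotiation happens in the Wi-Fi association handshake, not in application code):

```python
def negotiate(client_protocols, ap_protocols, preference=("WPA3", "WPA2", "WPA")):
    """Pick the strongest security protocol both sides support, or None."""
    for proto in preference:  # ordered strongest-first
        if proto in client_protocols and proto in ap_protocols:
            return proto
    return None
```

This also shows why critical mass matters: as long as either side only speaks WPA2, WPA2 is what you get, no matter how new the other device is.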

The Wi-Fi Alliance says that it “will continue enhancing WPA2 to ensure it delivers strong security protections to Wi-Fi users as the security landscape evolves.” Let’s hope that is indeed the case, and that those enhancements can be pushed down to existing devices. If not, well, the huge installed base of existing Wi-Fi devices will continue to lack real security for years to come.

It’s all about the tradeoffs! You can have the chicken or the fish, but not both. You can have the big engine in your new car, but that means a stick shift—you can’t have the V8 and an automatic. Same for that cake you want to have and eat. Your business applications can be easy to use or secure—not both.

But some of those are false dichotomies, especially when it comes to security for data center and cloud applications. You can have it both ways. The systems can be easy to use and maintain, and they can be secure.

On the consumer side, consider two-factor authentication (2FA), whereby users receive a code number, often by text message to their phones, which they must type into a webpage to confirm their identity. There’s no doubt that 2FA makes systems more secure. The problem is that 2FA is a nuisance for the individual end user, because it slows down access to a desired resource or application. Unless you’re protecting your personal bank account, there’s little incentive for you to use 2FA. Thus, services that require 2FA frequently aren’t used, get phased out, are subverted, or are simply loathed.
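Many 2FA services have moved from texted codes to time-based one-time passwords (TOTP, RFC 6238), where an authenticator app derives a short-lived code from a shared secret and the clock. A minimal standard-library sketch:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, step=30, digits=6):
    """RFC 6238-style time-based one-time password (illustrative sketch)."""
    key = base64.b32decode(secret_b32)
    # Counter = number of 30-second steps since the Unix epoch.
    counter = int((time.time() if t is None else t) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation per RFC 4226: low nibble of last byte picks an offset.
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

The server runs the same computation, so a code is only valid for one 30-second window, which is what makes an intercepted code nearly worthless.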

Likewise, security measures specified by corporate policies can be seen as a nuisance or an impediment. Consider dividing an enterprise network into small “trusted” networks, such as by using virtual LANs or other forms of authenticating users, applications, or API calls. This setup can require considerable effort for internal developers to create, and even more effort to modify or update.

When IT decides to migrate an application from a data center to the cloud, the steps required to create API-level authentication across such a hybrid deployment can be substantial. The effort required to debug that security scheme can be horrific. As for audits to ensure adherence to the policy? Forget it. How about we just bypass it, or change the policy instead?

Multiply that simple scenario by 1,000 for all the interlinked applications and users at a typical midsize company. Or 10,000 or 100,000 at big ones. That’s why post-mortem examinations of so many security breaches show what appears to be an obvious lack of “basic” security. However, my guess is that in many of those incidents, the chief information security officer or IT staffers were under pressure to make systems, including applications and data sources, extremely easy for employees to access, and there was no appetite for creating, maintaining, and enforcing strong security measures.

Read more about these tradeoffs in my article on Forbes for Oracle Voice: “You Can Have Your Security Cake And Eat It, Too.”

Software can affect the performance of hardware. Under the right (or wrong) circumstances, malware can cause the hardware to become physically damaged – as the cyberattack on Iran’s centrifuges proved in 2010, and as an errant coin-mining malware is demonstrating right now. Will intentional or unintentional damage to IoT devices be next?

Back in late 2009 and early 2010, a computer worm labeled Stuxnet targeted the centrifuges used by Iran to refine low-grade nuclear material into weapons-class materials. The Stuxnet worm, which affected more than 200,000 machines, was estimated to physically damage 1,000 centrifuges.

How did it work? The Stuxnet worm checked to see if it was running on the right type of machine (i.e., a centrifuge of the specific type used by Iran), and if so, says Wikipedia:

The worm worked by first causing an infected Iranian IR-1 centrifuge to increase from its normal operating speed of 1,064 hertz to 1,410 hertz for 15 minutes before returning to its normal frequency. Twenty-seven days later, the worm went back into action, slowing the infected centrifuges down to a few hundred hertz for a full 50 minutes. The stresses from the excessive, then slower, speeds caused the aluminum centrifugal tubes to expand, often forcing parts of the centrifuges into sufficient contact with each other to destroy the machine.
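One defensive lesson from the numbers above: excursions from 1,064 Hz to 1,410 Hz or down to a few hundred hertz are enormous deviations that independent monitoring could flag. A toy anomaly detector (nominal frequency and tolerance are illustrative values):

```python
def flag_anomalies(readings_hz, nominal=1064.0, tolerance=0.05):
    """Return indices of readings deviating more than `tolerance`
    (as a fraction) from the nominal operating frequency."""
    return [
        i for i, hz in enumerate(readings_hz)
        if abs(hz - nominal) / nominal > tolerance
    ]
```

Part of what made Stuxnet effective was that it also spoofed the readings operators saw, which is why such checks need sensors the compromised controller can’t falsify.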

From centrifuges to coin mining

The Stuxnet attacks were subtle, specific, and intentional. By contrast, the Loapi malware, which appeared in December 2017, appears to cause its damage inadvertently. Loapi, discovered by Kaspersky Labs, installs itself on Android devices using administrator privileges, and then does several nasty things, including displaying ads, acting as a zombie for distributed denial-of-service (DDoS) attacks, and mining Monero crypto-coin tokens.

The problem is that Loapi is a little too enthusiastic. When mining coins, Loapi works so hard that the phone overheats – and cooks the device. Whoops. Says Neowin.net:

In its test, the firm found that after just two days, the constant load from mining caused its test phone’s battery to bulge, which also deformed the phone’s outer shell. This last detail is quite alarming, as it has the potential to cause serious physical harm to affected handset owners.

Damaging the Internet of Things

If malware gets onto an IoT device… who knows what it could do? Depending on the processor, memory, and network connectivity, some IoT devices could be turned into effective DDoS zombies or digital coin miners. Network security cameras have already been infected by spyware, so why not zombieware or miningware? This could be a significant threat for plug-in devices that are not monitored closely, and which contain considerable CPU power. Imagine a point-of-sale kiosk that also mined Bitcoin.

The possibility of damage is a reality, as is shown by Loapi. It’s possible that malware could somehow damage the device inadvertently, perhaps by messing up the firmware and bricking the machine, or by overloading the processor and memory to the point where it overwhelms on-board cooling mechanisms.

Then there’s the potential for intentional damage of IoT devices, either in a large scale or targeting a specific organization. This could be leveraged for extortion by criminal gangs, or for the destruction of public infrastructure or private enterprise by cyberterrorists or state-sponsored actors. If the creators of Stuxnet could damage centrifuges nearly a decade ago, it’s a sure bet that researchers are working on other attacks of that sort. It’s a sobering thought.

Man-in-the-Middle (MITM or MitM) attacks are about to become famous. Famous, in the way that ransomware, Petya, Distributed Denial of Service (DDoS), and credit-card skimmers have become well-known.

MITM attacks go back thousands of years. A merchant writes a parchment offering to buy spices, and hands it to a courier to deliver to his supplier in a far-away land. The local courier hands the parchment to another courier, who in turn hands it to another courier, and so on, until the final courier gives the parchment to the supplier. Unbeknownst to anyone, however, one of the couriers was a swindler who might change the parchment to set up a fraud, or who might sell details of the merchant’s purchase offer to a competitor, who could then negotiate a better deal.

In modern times, MITM takes advantage of a weakness in the use of cryptography. Are you completely sure whom you’ve set up that encrypted end-to-end message session with? Perhaps it’s your bank… or perhaps it’s a scammer who, to you, looks like your bank – but to your bank, looks like you. Everyone thinks it’s a secure communications link, but the man-in-the-middle sees everything, and might be able to change things too.
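The tampering half of the parchment problem has a standard cryptographic answer: a message authentication code (MAC). Given a key the two parties already share, any alteration by a courier-in-the-middle is detectable. A sketch (note that a MAC alone doesn’t solve the harder problem of knowing who you shared the key with in the first place; that’s what certificates and TLS are for):

```python
import hashlib
import hmac

TAG_LEN = 32  # SHA-256 digest size

def seal(message: bytes, key: bytes) -> bytes:
    """Append a MAC so any in-transit modification is detectable."""
    return message + hmac.new(key, message, hashlib.sha256).digest()

def open_sealed(sealed: bytes, key: bytes) -> bytes:
    """Verify the MAC; raise if the message was altered."""
    message, tag = sealed[:-TAG_LEN], sealed[-TAG_LEN:]
    expected = hmac.new(key, message, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("message was altered in transit")
    return message
```

The swindler-courier can still read the parchment (a MAC provides integrity, not confidentiality), but he can no longer change “10 sacks” to “99 sacks” without detection.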

According to Wikipedia,

In cryptography and computer security, a man-in-the-middle attack (MITM; also Janus attack) is an attack where the attacker secretly relays and possibly alters the communication between two parties who believe they are directly communicating with each other.

We haven’t heard much about MITM attacks, because, quite frankly, they’ve not been in the news associated with breaches. That changed recently, when Fox-IT, a cybersecurity firm in Holland, was nailed with one. Writing on their blog on Dec. 14, 2017, the company said:

In the early morning of September 19 2017, an attacker accessed the DNS records for the Fox-IT.com domain at our third party domain registrar. The attacker initially modified a DNS record for one particular server to point to a server in their possession and to intercept and forward the traffic to the original server that belongs to Fox-IT. This type of attack is called a Man-in-the-Middle (MitM) attack. The attack was specifically aimed at ClientPortal, Fox-IT’s document exchange web application, which we use for secure exchange of files with customers, suppliers and other organizations. We believe that the attacker’s goal was to carry out a sustained MitM attack.

The company pointed to several weaknesses in their security setup that allowed the attack to succeed. The DNS provider’s password hadn’t been changed since 2013; two-factor authentication (2FA) wasn’t used or even supported by the DNS provider; and heavier-than-usual scans from the Internet, while detected by Fox-IT, weren’t flagged for investigation or even extra vigilance.
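The first of those weaknesses – a password untouched since 2013 – is exactly the kind of thing a periodic audit script can surface. A minimal sketch (account names and the one-year threshold are illustrative):

```python
from datetime import date, timedelta

def stale_passwords(last_changed, max_age_days=365, today=None):
    """Return account names whose password is older than max_age_days.

    last_changed maps account name -> date of last password change.
    """
    today = today or date.today()
    cutoff = timedelta(days=max_age_days)
    return sorted(
        name for name, changed in last_changed.items()
        if today - changed > cutoff
    )
```

Run against Fox-IT’s situation on the day of the attack, a registrar credential last rotated in 2013 would have been flagged years earlier.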

How To Prevent MITM Attacks

After a timeline discussion and more technical analysis, Fox-IT offered suggestions on how to handle such incidents, and I quote:

  • Choose a DNS provider that doesn’t allow changes through a control panel but requires a more manual process, considering that name servers are very stable and hardly ever change. If you do require more frequent changes, use 2FA.
  • Ensure that all system access passwords are reviewed regularly and changed, even those which are used rarely.
  • Deploy certificate transparency monitoring in order to detect, track and respond to fraudulent certificates.
  • Deploy full packet capture capabilities with good retention in crucial points of your infrastructure, such as the DMZ and border gateways.
  • Always inform Law Enforcement at an early stage, so that they can help you with your investigations, as we did in our case.
  • Make it a management decision to first understand an attack before taking specific actions to mitigate it. This may include letting an attack continue for a short period of time. We consciously made that decision.

It’s a shame about Fox-IT’s breach, but the company responded correctly and promptly once the breach was detected. This is the first serious instance of a successful MITM attack I’ve heard about in some time – but probably won’t be the last.

Criminals steal money from banks. Nothing new there: As Willie Sutton famously said, “I rob banks because that’s where the money is.”

Criminals steal money from other places too. While many cybercriminals target banks, the reality is that there are better places to steal money, or at least, steal information that can be used to steal money. That’s because banks are generally well-protected – and gas stations, convenience stores, smaller on-line retailers, and even payment processors are likely to have inadequate defenses — or make stupid mistakes that aren’t caught by security professionals.

Take TIO Networks, a bill-payment service purchased by PayPal for US$233 million in July 2017. TIO processed more than $7 billion in bill payments last year, serving more than 10,000 vendors and 16 million consumers.

Hackers now know critical information about all 16 million TIO customers. According to PYMNTS.com, “… the data that may have been impacted included names, addresses, bank account details, Social Security numbers and login information. How much of those details fell into the hands of cybercriminals depends on how many of TIO’s services the consumers used.”

PayPal has said,

“The ongoing investigation has uncovered evidence of unauthorized access to TIO’s network, including locations that stored personal information of some of TIO’s customers and customers of TIO billers. TIO has begun working with the companies it services to notify potentially affected individuals. We are working with a consumer credit reporting agency to provide free credit monitoring memberships. Individuals who are affected will be contacted directly and receive instructions to sign up for monitoring.”

Card Skimmers and EMV Chips

Another common place where money changes hands: The point-of-purchase device. Consider payment-card skimmers – that is, a hardware device secretly installed into a retail location’s card reader, often at an unattended location like a gasoline pump.

The amount of fraud caused by skimmers copying information on payment cards is expected to rise from $3.1 billion in 2015 to $6.4 billion in 2018, affecting about 16 million cardholders. Those are for payment cards that don’t have the integrated EMV chip, or for transactions that don’t use the EMV system.

EMV chips, also known as chip-and-PIN or chip-and-signature, are named for the three companies behind the technology standards – Europay, MasterCard, and Visa. Chip technology, which is seen as a nuisance by consumers, has dramatically reduced the amount of fraud by generating a unique, non-repeatable transaction code for each purchase.
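The anti-fraud idea behind those unique transaction codes can be illustrated with a toy model: derive each code from a card-held secret plus a counter that never repeats. This is a simplification for illustration only, not the actual EMV cryptogram algorithm (which uses issuer-derived keys and additional transaction data):

```python
import hashlib
import hmac
import struct

def transaction_code(card_key: bytes, counter: int, amount_cents: int) -> str:
    """Derive a one-time transaction code (toy model of an EMV cryptogram).

    The counter increments on every purchase, so a skimmed code is
    useless for replay: the same code is never accepted twice.
    """
    msg = struct.pack(">QQ", counter, amount_cents)
    return hmac.new(card_key, msg, hashlib.sha256).hexdigest()[:16]
```

Contrast this with a mag stripe, which hands the skimmer a static secret – the card number – that works for every future transaction.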

The rollout of EMV, especially in the United States, has been painfully slow. Many merchants still haven’t upgraded to the new card-reader devices or the back-end financial services needed to handle those transactions. For example, very few fuel stations use chips to validate transactions, so pay-at-the-pump in the U.S. still depends almost universally on the mag-stripe reader. That presents numerous opportunities for thieves to install skimmers on that stripe reader and steal payment card information.

For an excellent, well-illustrated primer on skimmers and skimmer-related fraud at gas stations, see “As gas station skimmer card fraud increases, here’s how to cut your risk.” Theft at the point of purchase, or at payment processors, will continue as long as companies fail to execute solid security practices – and continue to accept non-EMV payment card transactions, including allowing customers to type their credit- or debit-card numbers onto websites. Those are both threats for the foreseeable future, especially since desktops, notebooks, and mobile devices don’t have built-in EMV chip readers.

Crooks are clever, and they are everywhere. They always have been. No matter how secure the banks are, money theft and fraud aren’t going away any time soon.

Law enforcement officials play a vital role in tracking down and neutralizing cyber criminals. Theirs is a complex, often thankless, mission. Here are some insights shared by two current and one former high-level officials from U.S. law enforcement, who spoke at the NetEvents Global Press & Analyst Summit in San Jose, Calif., in late September.

Based in San Francisco, M.K. Palmore is a senior manager for the Federal Bureau of Investigation’s Cyber Branch. As an FBI Security Risk Management Executive, Palmore leads teams that help identify threat actors, define attribution and carry out arrests.

Palmore says financially-motivated threat actors account for much of the current level of malicious cyber activity. Nation-state sponsored hackers, ideologically-motivated hacktivists, and insider intruders also are causing significant damage and disruption.

“We’re talking about a global landscape, and the barrier to entry for most financially-motivated cyber-threat actors is extremely low,” Palmore says. “In terms of who is on the other end of the keyboard, we’re typically talking about mostly male threat actors, between the ages of, say, 14 and 32 years old.”

Dr. Ronald Layton is Deputy Assistant Director of the U.S. Secret Service. Layton observes that the technological sophistication and capabilities of threat actors has increased. “The toolsets that you see today that are widely available would have been highly classified 20 years ago,” Layton says. “Sophistication has gone up exponentially.”

The rapid escalation of ransomware is a telling marker, Layton says; ransomware rose from the 22nd most popular crime-ware application in 2014, to number five in 2017. Says Layton: “In 2014, the bad guys would say, ‘I’m going to encrypt your file unless you pay me X amount of dollars in Bitcoin.’ End-users got smarter, and just said, ‘Well, I’m going to back my systems up.’  Now ransomware concentrates on partial or full hard-disk encryption, so backup doesn’t help as much. Sophistication by the threat actors has gone up, and the ability to more quickly adjust, on both sides, quite frankly, has gone up.”

Read more – and watch the video – in “Law enforcement’s view of cyber criminals — and what it takes to stop them,” published on The Last Watchdog.

SysSecOps is a new phrase, still unfamiliar to many IT and security administrators – but it’s already being discussed within the industry, by analysts, and at technical conferences. SysSecOps, or Systems & Security Operations, describes the practice of combining security teams and IT operations teams so they can ensure the health of enterprise technology – and have the tools to respond effectively when issues arise.

SysSecOps focuses on tearing down the information walls – disrupting the silos – that stand between security teams and IT administrators. IT operations personnel are there to make sure that end users can access applications, and that critical infrastructure is running at all times. They want to optimize access and availability, and they need data to do that job – such as knowing that a new employee needs to be provisioned, that a hard drive in a RAID array has failed, that a new partner needs access to a secure document repository, or that an Oracle database is ready to be migrated to the cloud. It’s all about technology that drives the business.

Same Data, Different Use Cases

Endpoint and network monitoring data and analytics are clearly customized to fit the diverse needs of IT and security. However, the underlying raw data is in fact exactly the same. The IT and security teams are simply looking at their own domain’s issues and scenarios – and taking action based on those use cases.

Yet in some cases the IT and security teams have to work together. Like provisioning that brand-new business partner: it must touch all the right systems, and be done securely. Or if there is a problem with a remote endpoint, such as a mobile phone or a device on the Industrial Internet of Things, IT and security may have to work together to determine exactly what’s going on. When IT and security share the same data sources, and have access to the same tools, the job becomes a lot easier – and hence SysSecOps.

Imagine that an IT administrator spots that a server hard drive is nearing full capacity – and this was not anticipated. Perhaps the network has been breached, and the server is now being used to stream pirated films across the Web. It happens, and finding and resolving that issue is a task for both IT and security. The data gathered by endpoint instrumentation, and displayed through a SysSecOps-ready monitoring platform, can help both sides work together more effectively than they would with conventional, separate IT and security tools.
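That shared-signal idea can be made concrete: both teams could watch the same daily disk-usage telemetry and flag growth that breaks from the recent baseline. A minimal sketch, where the numbers and the three-sigma threshold are illustrative:

```python
from statistics import mean, stdev

def flag_anomalous_growth(daily_gb: list, threshold_sigma: float = 3.0) -> bool:
    """Return True when the most recent day's disk-usage growth is far
    outside the historical day-over-day norm."""
    deltas = [b - a for a, b in zip(daily_gb, daily_gb[1:])]
    baseline, spread = mean(deltas[:-1]), stdev(deltas[:-1])
    return deltas[-1] > baseline + threshold_sigma * spread

# Steady ~2 GB/day growth, then a sudden 60 GB jump (say, pirated media)
usage = [100, 102, 104, 105, 107, 109, 111, 171]
print(flag_anomalous_growth(usage))   # True
```

The same alert is useful to both sides: IT sees a capacity problem, security sees a possible compromise.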

SysSecOps: it’s a new term, and a new idea, and it’s resonating with both IT and security teams. You can learn more in a brief nine-minute video, in which I talk with several industry experts about the subject: “Exactly what is SysSecOps?”

Ransomware is real, and it’s threatening individuals, businesses, schools, hospitals, and governments – and there’s no sign that ransomware is stopping. In fact, it’s probably increasing. Why? Let’s be honest: ransomware is probably the single most effective attack that hackers have ever created. Anybody can build ransomware using readily available tools; any money received is likely in hard-to-trace Bitcoin; and if something goes wrong with decrypting someone’s hard drive, the hacker isn’t affected.

A business is hit with ransomware every 40 seconds, according to some sources, and 60% of all malware was ransomware. It strikes all sectors. No industry is safe. And with the rise of RaaS (Ransomware-as-a-Service), it’s going to get worse.

Fortunately, we can fight back. Here’s a four-step battle plan.

Four steps to good fundamental hygiene

  1. Train employees to handle malicious emails. There are forged messages from business partners. There’s phishing and targeted spearphishing. Some will get past email spam/malware filters; employees need to be taught not to click links in those messages and, of course, not to give permission for plugins or apps to be installed. Even so, some malware, like ransomware, will get through, typically exploiting obsolete software or unpatched systems, as in the Equifax breach.
  2. Patch everything. Ensure that endpoints are fully patched and fully updated with the latest, most secure OS, applications, utilities, device drivers, and code libraries. That way, if there is an attack, the endpoint is healthy and best able to fight off the infection.
  3. Remember that ransomware isn’t really a technology or security problem. It’s a business problem. And it’s about much more than the ransom demanded, which is peanuts compared to the productivity lost to downtime, the bad public relations, the angry customers if service is disrupted, and the cost of rebuilding lost data. (And that assumes that valuable intellectual property, or protected financial or consumer health data, isn’t stolen.)
  4. Back up, back up, back up – and safeguard those backups. If you don’t have safe, protected backups, you can’t restore data and core infrastructure in a timely fashion. That includes making daily snapshots of virtual machines, databases, applications, source code, and configuration files.

Beyond those four steps, businesses need tools to detect, identify, and stop malware like ransomware from spreading. That requires continuous visibility into, and reporting on, what’s happening in the environment – including “zero day” attacks that haven’t been seen before. Part of that is monitoring endpoints, from the smartphone to the PC to the server to the cloud, to ensure that endpoints are up to date and secure, and that no unexpected changes have been made to their underlying configuration. That way, if a machine is infected by ransomware or other malware, the breach can be detected quickly, and the device isolated and shut down pending forensics and recovery. If an endpoint is breached, rapid containment is critical.
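One building block for catching unexpected changes to an endpoint’s underlying configuration is a simple integrity baseline: hash the files you care about, then compare later. A minimal sketch (the file name is illustrative; real endpoint agents track far more than one file):

```python
import hashlib
import tempfile
from pathlib import Path

def baseline(paths):
    """Map each file path to the SHA-256 digest of its contents."""
    return {p: hashlib.sha256(Path(p).read_bytes()).hexdigest() for p in paths}

def changed_files(old, new):
    """Files that were added, removed, or modified since the baseline."""
    modified = {p for p in old.keys() & new.keys() if old[p] != new[p]}
    return sorted((set(old) ^ set(new)) | modified)

# Demo: take a baseline, tamper with a "config file," detect the change
d = tempfile.mkdtemp()
cfg = str(Path(d, "sshd_config"))
Path(cfg).write_text("PermitRootLogin no\n")
before = baseline([cfg])
Path(cfg).write_text("PermitRootLogin yes\n")   # simulated tampering
after = baseline([cfg])
print(changed_files(before, after))             # the tampered file is flagged
```

Any drift from the baseline becomes a concrete, investigable event rather than a silent change.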

Read more in my guest story for Chuck Leaver’s blog, “Prevent And Manage Ransomware With These 4 Steps.”

I unlock my smartphone with a fingerprint, which is pretty secure. Owners of the new Apple iPhone X unlock theirs with their faces – which is reported to be hackable with a mask. My tablet is unlocked with a six-digit numerical code, which is better than four digits or a pattern. I log into my laptop with an alphanumeric password. Many online services, including banks and SaaS applications, require their own passwords.

It’s a mess! Not the least because lazy humans tend to reuse passwords, so that if a username and password for one service is stolen, criminals can try using that same combination on other services. Hackers steal your email and password from some insecure e-commerce site’s breach? They’ll try that same ID and password on Facebook, LinkedIn, eBay, Amazon, Walmart.com, Gmail, Office 365, Citibank, Fidelity, Schwab… you get the idea.
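One defensive sketch of why this matters: since stolen combinations get replayed everywhere, a service can refuse any password already seen in a breach. The breached set below is a tiny stand-in; real deployments check against large corpora such as the Have I Been Pwned password lists.

```python
import hashlib

# Hashes of passwords exposed in prior breaches (an illustrative, made-up set)
BREACHED = {hashlib.sha1(p).hexdigest()
            for p in (b"hunter2", b"ABCD1234", b"letmein")}

def is_breached(password: str) -> bool:
    """Reject any password seen in a known breach, blunting credential stuffing."""
    return hashlib.sha1(password.encode()).hexdigest() in BREACHED

print(is_breached("hunter2"))                        # True — reused from a breach
print(is_breached("correct horse battery staple"))   # False
```

Storing only hashes of breached passwords means the check itself doesn’t create another trove of plaintext secrets.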

Two more weaknesses: Most people don’t change their passwords frequently, and the passwords that they choose are barely more secure than ABCD?1234. And while biometrics are good, they’re not always sufficient. Yes, my smartphone has a fingerprint sensor, but my laptop doesn’t. Sure, companies can add on such technology, but it’s a kludge. It’s not a standard, and certainly I can’t log into my Amazon.com account with a fingerprint swipe.

Passwords Spell Out Trouble

The 2017 Verizon Data Breach Report reports that 81% of hacking-related breaches leverage either stolen or weak passwords. That’s the single biggest tactic used in breaches – followed by actual hacking, at 62%, and malware, at 51%.

To quote from the report: “… if you are relying on username/email address and password, you are rolling the dice as far as password re-usage from other breaches or malware on your customers’ devices are concerned.” About retailers specifically — which is where we see a lot of breaches — Verizon writes: “Their business is their web presence and thus the web application is the prime target of compromise to harvest data, frequently some combination of usernames, passwords (sometimes encrypted, sometimes not), and email addresses.”

By the way, I am dismayed by the common use of a person’s email address instead of a unique login name by many retailers and online services. That reduces the bits of data that hackers or criminals need. It’s pretty easy to figure out my email address, which means that to get into my bank account, all you need is to guess or steal my password. But if my login name were a separate thing, like WeinerDogFancier, you’d have to know that and find my password. On the other hand, using the email address makes things easier for programmers, and presumably for users as well. As usual, convenience beats security.

Too Much Hanging on a Single Identity

The Deloitte breach, which was discovered in March 2017, succeeded because an administrator account had basically unfettered access to everything. And that account wasn’t secured by two-factor authentication. There was apparently no secondary password protecting critical assets, even from an authenticated user.

As the Guardian wrote in “Deloitte hit by cyber-attack revealing clients’ secret emails,”

The hacker compromised the firm’s global email server through an “administrator’s account” that, in theory, gave them privileged, unrestricted “access to all areas”. The account required only a single password and did not have “two-step“ verification, sources said. Emails to and from Deloitte’s 244,000 staff were stored in the Azure cloud service, which was provided by Microsoft. This is Microsoft’s equivalent to Amazon Web Service and Google’s Cloud Platform. In addition to emails, the Guardian understands the hackers had potential access to usernames, passwords, IP addresses, architectural diagrams for businesses and health information. Some emails had attachments with sensitive security and design details.

There are no universal solutions to the password scourge. However, there are some best practices:

  • Don’t trust any common single-factor authentication scheme completely; they can all be bypassed or hacked.
  • Require two-factor authentication from any new device, for access outside of normal working hours or geographies, or potentially even a new IP address.
  • Look into schemes that require removable hardware, such as a USB dongle, as a third factor.
  • Secure valuable assets, such as identity databases, with additional protections. They should be encrypted and blocked from download.
  • Consider disabling remote access to such assets, and certainly disable the ability to download the results of identity or customer database searches.
  • If it’s possible to use biometrics or other hardware-based authentication, do so.
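To make the two-factor bullet concrete, here is a minimal time-based one-time password (TOTP, RFC 6238) generator using only the Python standard library. A real deployment should use a vetted library and allow for clock drift; this sketch just shows that the second factor is derived from a shared secret plus the current time, so a stolen password alone is not enough.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, t=None, digits: int = 6, step: int = 30) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the number of 30-second intervals."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if t is None else t) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", time = 59 seconds
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, t=59, digits=8))   # 94287082
```

This is the same mechanism behind authenticator apps: the server verifies the code for the current time window against its copy of the secret.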

Passwords are B.S.

You might enjoy this riff on passwords by Jeff Atwood in his blog, Coding Horror. Be sure to read the comments.

The secret sauce is AI-based zero packet inspection. That’s how to secure mobile users, and their personal data and employers’ data.

Let’s back up a step. Mobile devices are increasingly under attack, from malicious apps, from rogue emails, from adware, and from network traffic. Worse, that network traffic can come from any number of sources, including cellular data, WiFi, even Bluetooth. Users want their devices to be safe and secure. But how, if the network traffic can’t be trusted?

The best approach around is AI-based zero packet inspection (ZPI). It all starts with data. Tons of training data, used to train a machine learning algorithm to recognize patterns that indicate whether a device is performing normally – or if it’s under attack. Machine learning refers to a number of advanced AI algorithms that can study streams of data, rapidly and accurately detect patterns in that data, and from those patterns, sort the data into different categories.

The Zimperium z9 engine, as an example, uses machine learning to train against a number of test cases (on both iOS and Android devices) that represent known patterns of safe and not-safe traffic. We call it zero-packet inspection because the objective is not to look at the contents of the network packets, but to scan the lower-level underlying traffic patterns at the network level, such as IP, TCP, UDP and ARP scans.

If you’re not familiar with those terms, suffice it to say that at the network level, the traffic is focused on delivering data to a specific device, and then within that device, making sure it gets to the right application. Think of it as being like an envelope going to a big business – it has the business name, street address, and department/mail stop. The machine learning algorithms look at patterns at that level, rather than examining the contents of the envelope. This makes the scans very fast and accurate.
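As a toy version of that header-level approach, consider detecting a port scan purely from (source, destination-port) pairs, with no payload inspection at all. The threshold, addresses, and events below are invented for the sketch; a production engine like z9 uses trained models rather than a fixed cutoff:

```python
from collections import defaultdict

def detect_port_scans(events, port_threshold: int = 20):
    """events: (src_ip, dst_port) tuples taken from packet headers only.
    A source probing many distinct ports in a window looks like a TCP/UDP scan."""
    ports_by_src = defaultdict(set)
    for src, port in events:
        ports_by_src[src].add(port)
    return {src for src, ports in ports_by_src.items()
            if len(ports) >= port_threshold}

normal = [("10.0.0.5", 443)] * 100                   # one host, one service: fine
scan = [("10.0.0.9", p) for p in range(1, 60)]       # one host probing 59 ports
print(detect_port_scans(normal + scan))              # {'10.0.0.9'}
```

Because only the “envelope” is examined, the check stays fast and works even when payloads are encrypted.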

Read more in my new essay for Security Brief Europe, “Opinion: Mobile security starts with a powerful AI-based scanning engine.”

In The Terminator, the Skynet artificial intelligence was turned on to track down a hacker attacking a military computer network. Turns out the hacker was Skynet itself. Is there a lesson there? Could AI turn against us, especially as it relates to the security domain?

That was one of the points I made while moderating a discussion of cybersecurity and AI back in October 2017. Here’s the start of a blog post written by my friend Tami Casey about the panel:

Mention artificial intelligence (AI) and security and a lot of people think of Skynet from The Terminator movies. Sure enough, at a recent Bay Area Cyber Security Meetup group panel on AI and machine learning, it was moderator Alan Zeichick – technology analyst, journalist and speaker – who first brought it up. But that wasn’t the only lively discussion during the panel, which focused on AI and cybersecurity.

I found two areas of discussion particularly interesting, which drew varying opinions from the panelists. One, around the topic of AI eliminating jobs and thoughts on how AI may change a security practitioner’s job, and two, about the possibility that AI could be misused or perhaps used by malicious actors with unintended negative consequences.

It was a great panel. I enjoyed working with the Meetup folks, and the participants: Allison Miller (Google), Ali Mesdaq (Proofpoint), Terry Ray (Imperva), Randy Dean (Launchpad.ai & Fellowship.ai).

You can read the rest of Tami’s blog here, and also watch a video of the panel.

Let’s talk about hackers, not through the eyes of the tech industry but through the eyes of current and former U.S. law enforcement officials. It’s their job to run those people down and throw them in jail.

The Federal Bureau of Investigation

MK Palmore is an Information Security Risk Management Executive with the FBI’s Cyber Branch in San Francisco. He runs the cyber-security teams assigned to the San Francisco division of the FBI. “My teams here in San Francisco typically play some part in the investigations, where our role is to identify, define attribution, and get those folks into the U.S. Justice system.”

“The FBI is 35,000-plus personnel, U.S.-based, and part of the Federal law enforcement community,” says Palmore. “There are 56 different field offices throughout the United States of America, but we also have an international presence in more than 62 cities throughout the world. A large majority of those cities contain personnel that are assigned there specifically for responsibilities in the cyber-security realm, and often-times are there to establish relationships with our counterparts in those countries, but also to establish relationships with some of the international companies, and folks that are raising their profile as it relates to international cyber-security issues.”

The U.S. Secret Service

It’s not really a secret: In 1865, the Secret Service was created by Congress to primarily suppress counterfeit currency. “Counterfeit currency represented greater than 50% of all the currency in the United States at that time, and that was why the Agency was created,” explained Dr. Ronald Layton, Deputy Assistant Director U.S. Secret Service. “The Secret Service has gone from suppressing counterfeit currency, or economic, or what we used to refer to as paper crimes, to plastic, meaning credit cards. So, we’ve had a progression, from paper, to plastic, to digital crimes, which is where we are today,” he continued.

Protecting Data, Personal and Business

“I found a giant hole in the way that private sector businesses are handling their security,” said Michael Levin. “They forgot one very important thing. They forgot to train their people what to do. I work with organizations to try to educate people — we’re not doing a very good job of protecting ourselves.”

A leading expert in cyber-security, Levin is the former Deputy Director of the U.S. Department of Homeland Security’s National Cyber-Security Division. He retired from the government a few years ago, and is now CEO & Founder of the Center for Information Security Awareness.

“When I retired from the government, I discovered something,” he continued. “We’re not protecting our own personal data – so, everybody has a role to play in protecting their personal data, and their family’s data. We’re not protecting our business data. Then, we’re not protecting our country’s data, and there’s nation states, and organized crime groups, and activists, that are coming after us on a daily basis.”

The Modern Hacker: Who They Are, What They Want

There are essentially four groups of cyber-threat actors that we need to be concerned with, explained the FBI’s Palmore. “I break them down as financially-motivated criminal intrusion threat actors; nation-states; hacktivists; and then those security incidents caused by what we call the insider threat. The most prevalent of the four groups, and the most impactful, typically, are those motivated by financial concerns.”

“We’re talking about a global landscape, and the barrier to entry for most financially-motivated cyber-threat actors is extremely low,” Palmore continued. “In terms of looking at who these folks are, and in terms of who’s on the other end of the keyboard, we’re typically talking about mostly male threat actors, sometimes between the ages of, say, 14 and 32 years old. We’ve seen them as young as 14.”

Criminals? Nation states? Hacktivists? Insiders? While that matters to law enforcement, it shouldn’t to individuals and enterprise, said CIFSA’s Levin. “For most people, they don’t care if it’s a nation state. They just want to stop the bleeding. They don’t care if it’s a hacktivist, they just want to get their site back up. They don’t care who it is. They just start trying to fix the problem, because it means their business is being attacked, or they’re having some sort of a failure, or they’re losing data. They’re worried about it. So, from a private sector company’s business, they may not care.”

However, “Law enforcement cares, because they want to try to catch the bad guy. But for the private sector, the goal is to harden the target,” points out Levin. “Many of these attacks are, you know, no different from a car break-in. A guy breaking into cars is going to try the handle first before he breaks the window, and that’s what we see with a lot of these hackers. Doesn’t matter if they’re nation states, it doesn’t matter if they’re script kiddies. It doesn’t matter what level of sophistication. They’re going to look for the open doors first.”

The Secret Service focuses almost exclusively on folks trying to steal money. “Several decades ago, there was a famous United States bank robber named Willie Sutton,” said Layton. “Willie Sutton was asked, why do you rob banks? ‘Because that’s where the money is.’ Those are the people that we deal with.”

Layton explained that the Secret Service has about a 25-year history of investigating electronic crimes. The first electronic crimes taskforce was established in New York City 25 years ago. “What has changed in the last five or 10 years? The groups worked in isolation. What’s different? It’s one thing: They all know each other. They all are collaborative. They all use Russian as a communications modality to talk to one another in an encrypted fashion. That’s what’s different, and that represents a challenge for all of us.”

Work with Law Enforcement

Palmore, Levin, and Layton have excellent, practical advice on how businesses and individuals can protect themselves from cybercrime. They also explain how law enforcement can help. Read more in my article for Upgrade Magazine, “The new hacker — Who are they, what they want, how to defeat them.”

Critical information about 46 million Malaysians was leaked onto the Dark Web. The stolen data included mobile phone numbers from telcos and mobile virtual network operators (MVNOs), prepaid phone numbers, customer details including physical addresses – and even the unique IMEI and IMSI registration numbers associated with SIM cards.

Isolated instance from one rogue carrier? No. The carriers included Altel, Celcom, DiGi, Enabling Asia, Friendimobile, Maxis, MerchantTradeAsia, PLDT, RedTone, TuneTalk, Umobile and XOX; news about the breach was first published 19 October 2017 by a Malaysian online community.

When did the breach occur? According to lowyat.net, “Time stamps on the files we downloaded indicate the leaked data was last updated between May and July 2014 between the various telcos.”

That’s more than three years between theft of the information and its discovery. We have no idea if the carriers had already discovered the losses, and chose not to disclose the breaches.

A huge delay between a breach and its disclosure is not unusual. Perhaps things will change once the General Data Protection Regulation (GDPR) kicks in next year, when organizations must reveal a breach within three days of discovery. That still leaves the question of discovery. It simply takes too long!

According to Mandiant, the global average dwell time (time between compromise and detection) is 146 days. In some areas, it’s far worse: the EMEA region has a dwell time of 469 days. Research from the Ponemon Institute says that it takes an average of 98 days for financial services companies to detect intrusion on their networks, and 197 days in retail. It’s not surprising that the financial services folks do a better job – but three months seems like a very long time.

An article headline from InfoSecurity Magazine says it all: “Hackers Spend 200+ Days Inside Systems Before Discovery.” Verizon’s Data Breach Investigations Report for 2017 has some depressing news: “Breach timelines continue to paint a rather dismal picture — with time-to-compromise being only seconds, time-to-exfiltration taking days, and times to discovery and containment staying firmly in the months camp. Not surprisingly, fraud detection was the most prominent discovery method, accounting for 85% of all breaches, followed by law enforcement which was seen in 4% of cases.”

What Can You Do?

There are two relevant statistics. The first is time-to-discovery, and the other is time-to-disclosure, whether to regulators or customers.

  • Time-to-disclosure is a matter of policy, not technology. There are legal aspects, public-relations aspects, financial ones (what if the breach happens during a “quiet period” prior to announcing results?), regulatory ones, and even law-enforcement concerns (what if investigators are laying a trap, and don’t want to tip off that the breach has been discovered?).
  • Time-to-discovery, on the other hand, is a matter of technology (and the willingness to use it). What doesn’t work? Scanning log files using manual or semi-automated methods. Excel spreadsheets won’t save you here!

What’s needed are comprehensive endpoint monitoring capabilities, coupled with excellent threat intelligence and real-time analytics driven by machine learning. Nothing else can correlate huge quantities of data from such widely disparate sources, and hope to discover outliers based on patterns.
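A drastically simplified stand-in for that kind of correlation: compare every endpoint’s telemetry against the fleet’s norm and flag the outliers. Real platforms use trained models over many signals at once; this sketch uses a single signal and a three-sigma rule, with made-up numbers:

```python
from statistics import mean, stdev

def outlier_hosts(bytes_out, sigma: float = 3.0):
    """Flag hosts whose nightly outbound traffic (GB) sits far outside the
    fleet-wide norm — a toy version of ML-driven outlier discovery."""
    vals = list(bytes_out.values())
    mu, sd = mean(vals), stdev(vals)
    return [h for h, v in bytes_out.items() if abs(v - mu) > sigma * sd]

fleet = {f"host-{i:02d}": 1.0 for i in range(20)}
fleet["host-20"] = 50.0          # one machine quietly moving a lot of data
print(outlier_hosts(fleet))      # ['host-20']
```

Run continuously over live telemetry, this is the difference between discovery in minutes and discovery in months.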

Discovery and containment take months, says Verizon. You can’t have containment without discovery. With current methods, we’ve seen that discovery takes months or years, if the breach is ever detected at all. Endpoint monitoring technology, when coupled with machine learning – and with 24×7 managed security service providers – can reduce that to seconds or minutes.

There is no excuse for breaches staying hidden for three years or longer. None. That’s no way to run a business.

It’s always nice when a friend is quoted in an article. In this case, it’s one of my dearest and closest, John Romkey, founder of FTP Software. The story is, “The Internet Of Things Just Got Even More Unsafe To Use,” by Harold Stark, and published on Forbes.com.

The story talks about a serious vulnerability in the Internet of Things:

Mathy Vanhoef, Security Researcher at KU Leuven, made headlines last week with a blog where he described this strange new vulnerability that had the potential to affect every device that has ever been on a wi-fi network all at once. The vulnerability, dubbed KRACK or Key Reinstallation Attack, has a simple way of functioning. WPA2-PSK, the most widely used security protocol used to secure devices and routers connected to a wi-fi network, had a glaring flaw. This flaw, which allows a third-party hacker to trick their way into a device as it connects to a wi-fi network using a password, allows said hacker to access and modify all information available to this device without even being on the network. By interfering with the authorization process that allows a device to connect to a closed wi-fi network, the hacker can do things such as intercept traffic, access stored data and even modify information accessed by the device at the time. So this hacker could tell which websites you like to visit, play that video from your friend’s wedding last month or even infect your device with an unknown malware to cause further damage. Just to be clear, this vulnerability affects any and all devices that can connect to wi-fi networks, regardless of which software it is running.

You should read the whole story, which includes a quote from my friend John, here.

Humans can’t keep up. At least, not when it comes to meeting the rapidly expanding challenges inherent to enterprise cybersecurity. There are too many devices, too many applications, too many users, and too many megabytes of log files for humans to make sense of it all. Moving forward, effective cybersecurity is going to be a “Battle of the Bots,” or to put it less dramatically, machine versus machine.

Consider the 2015 breach at the U.S. Government’s Office of Personnel Management (OPM). According to a story in Wired, “The Office of Personnel Management repels 10 million attempted digital intrusions per month—mostly the kinds of port scans and phishing attacks that plague every large-scale Internet presence.” Yet despite sophisticated security mechanisms, hackers managed to steal millions of records on applications for security clearances, personnel files, and even 5.6 million digital images of government employee fingerprints. (In August 2017, the FBI arrested a Chinese national in connection with that breach.)

Traditional security measures are often slow, and potentially ineffective. Take the practice of applying patches and updates to address newly found software vulnerabilities. Companies now have too many systems in play for the process of finding and installing patches to be handled effectively by hand.

Another practice that can’t be handled manually: scanning log files to identify abnormalities and outliers in data traffic. While there are many excellent tools for reviewing those files, they are often slow and aren’t good at aggregating logs across disparate silos (such as a firewall, a web application server, and an Active Directory user authentication system). Thus, results may not be comprehensive, patterns may be missed, and the results of deep analysis may not be returned in real time.
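The cross-silo problem can be illustrated with a tiny join: pair firewall events with authentication events from the same IP address within a short window, something manual spreadsheet review rarely manages at scale. All records, formats, and the two-minute window here are invented for the sketch:

```python
from datetime import datetime, timedelta

# Toy records from two silos (timestamp, ip, event)
firewall = [("2017-11-01T02:14:03", "203.0.113.7", "BLOCKED"),
            ("2017-11-01T02:14:55", "203.0.113.7", "ALLOWED")]
auth     = [("2017-11-01T02:15:10", "203.0.113.7", "LOGIN_FAIL"),
            ("2017-11-01T09:30:00", "198.51.100.2", "LOGIN_OK")]

def correlate(fw, auth_log, window_s: int = 120):
    """Pair firewall and auth events from the same IP within a time window —
    the kind of cross-silo join that isolated tools never perform."""
    parse = datetime.fromisoformat
    hits = []
    for ft, fip, fact in fw:
        for at, aip, aact in auth_log:
            if fip == aip and abs(parse(at) - parse(ft)) <= timedelta(seconds=window_s):
                hits.append((fip, fact, aact))
    return hits

print(correlate(firewall, auth))   # both 203.0.113.7 events pair with the failed login
```

A nested loop obviously doesn’t scale; real systems do this join continuously, over indexed streams, which is exactly where machine-scale analytics beat human review.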

Read much more about this in my new essay, “Machine Versus Machine: The New Battle For Enterprise Cybersecurity.”

Software developers and testers must be sick of hearing security nuts rant, “Beware SQL injection! Monitor for cross-site scripting! Watch for hijacked session credentials!” I suspect the developers tune us out. Why? Because we’ve been raving about the same defects for most of their careers. Truth is, though, the same set of major security vulnerabilities persists year after year, decade after decade.

The industry has generated newer tools, better testing suites, Agile methodologies, and other advances in writing and testing software. Despite all that, coders keep making the same dumb mistakes, peer reviews keep missing those mistakes, test tools fail to catch those mistakes, and hackers keep finding ways to exploit those mistakes.
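Those same dumb mistakes are very concrete. The oldest of them, SQL injection, fits in a few lines, along with its decades-old fix, the parameterized query – shown here with Python’s built-in sqlite3 as a stand-in for any database:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
db.execute("INSERT INTO users VALUES ('alice', 0)")

attacker_input = "nobody' OR '1'='1"

# Vulnerable: string concatenation lets the input rewrite the query itself
leaky = db.execute(
    "SELECT * FROM users WHERE name = '" + attacker_input + "'").fetchall()
print(leaky)    # [('alice', 0)] — the injected OR clause matched every row

# Safe: a parameterized query treats the input as data, never as SQL
safe = db.execute(
    "SELECT * FROM users WHERE name = ?", (attacker_input,)).fetchall()
print(safe)     # [] — no user is literally named "nobody' OR '1'='1"
```

The fix has been known for decades and costs one character of syntax, which is exactly what makes the vulnerability’s persistence so maddening.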

One way to see the repeat offenders is to look at the OWASP Top 10. That’s a sometimes controversial ranking of the 10 primary vulnerabilities, published every three or four years by the Open Web Application Security Project.

The OWASP Top 10 list is not controversial because it’s flawed. Rather, some believe that the list is too limited. By focusing only on the top 10 web code vulnerabilities, they assert, it causes neglect of the long tail. What’s more, there’s often jockeying in the OWASP community about the Top 10 ranking, and whether the 11th or 12th item belongs on the list instead of something else. There’s merit to those arguments, but for now, the OWASP Top 10 is an excellent common ground for discussing security-aware coding and testing practices.

Note that the top 10 list doesn’t directly represent the 10 most common attacks. Rather, it’s a ranking of risk. There are four factors used for this calculation. One is the likelihood that applications would have specific vulnerabilities; that’s based on data provided by companies. That’s the only “hard” metric in the OWASP Top 10. The other three risk factors are based on professional judgement.
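A rough sketch of that calculation, using illustrative 1–3 ratings rather than OWASP’s published figures: average the three likelihood factors, then weight the result by technical impact.

```python
def owasp_style_risk(exploitability, prevalence, detectability, impact):
    """Average the three likelihood factors (rated 1-3), then weight by
    technical impact (also 1-3), in the spirit of the OWASP methodology."""
    likelihood = (exploitability + prevalence + detectability) / 3
    return round(likelihood * impact, 2)

# Illustrative ratings only, not OWASP's numbers for any real category.
print(owasp_style_risk(exploitability=3, prevalence=2, detectability=3, impact=3))  # 8.0
```

Only prevalence comes from hard data; the rest is professional judgment, which is why the ranking invites debate.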

It boggles the mind that a majority of top 10 issues appear across the 2007, 2010, 2013, and draft 2017 OWASP lists. That doesn’t mean that these application security vulnerabilities have to remain on your organization’s list of top problems, though—you can swat those flaws.

Read more in my essay, “The OWASP Top 10 is killing me, and killing you!”

Apply patches. Apply updates. Those are considered among the lowest-hanging of the low-hanging fruit of IT cybersecurity. When commercial vendors release patches, download and install the code right away. When open-source projects disclose a vulnerability, apply the appropriate update as soon as you can – or so everyone says.

The problem is that there are so many patches and updates, covering everything from device firmware and operating systems to back-end server software and mobile apps. Even discovering all the applicable patches is a huge effort. You have to know:

  • All the hardware and software in your organization — so you can scan the vendors’ websites or emails for update notices. This may include the data center, the main office, remote offices, and employees’ homes. Oh, and rogue software installed without IT’s knowledge.
  • The versions of all the hardware and software instances — so you can tell which updates apply to you, and which don’t. Sometimes there may be an old version somewhere that’s never been patched.
  • The dependencies. Installing a new operating system may break some software. Installing a new version of a database may require changes on a web application server.
  • The location of each of those instances — so you can know which ones need patching. Sometimes this can be done remotely, but other times may require a truck roll.
  • The administrator access links, usernames, and passwords — hopefully, those are not set to “admin/admin.” The downside of changing default admin passwords is that you have to remember the new ones. Sure, sometimes you can make changes with, say, any Active Directory user account with the proper privileges. That won’t help you, though, with most firmware or mobile devices.

The above steps cover merely discovering the issue and the existence of a patch. You haven’t protected anything until you’ve installed the patch, which often (but not always) requires taking the hardware, software, or service offline for minutes or hours. That requires scheduling. And inconvenience. Even if you have patch-management tools (and there are many available), too much of that low-hanging fruit can be overlooked.
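At its simplest, the discovery step boils down to comparing your inventory against vendor advisories. A minimal sketch, with hypothetical assets and fixed-in versions; real audits pull these from asset-management systems and vendor feeds:

```python
# Hypothetical inventory and advisory data, for illustration only.
installed = {
    "router-fw":  "1.0.2",
    "web-server": "2.4.27",
    "crm-app":    "9.1.0",
}
fixed_in = {
    "router-fw":  "1.0.4",   # the advisory says 1.0.4 carries the fix
    "web-server": "2.4.27",
}

def version_tuple(v):
    return tuple(int(part) for part in v.split("."))

def needs_patching(installed, fixed_in):
    """Flag assets running something older than the advisory's fixed version."""
    return sorted(
        name for name, ver in installed.items()
        if name in fixed_in and version_tuple(ver) < version_tuple(fixed_in[name])
    )

print(needs_patching(installed, fixed_in))  # ['router-fw']
```

Notice what the sketch can’t do: it says nothing about crm-app, for which no advisory was ever collected. That’s the discovery gap in miniature.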

You Can’t Wait for That Downtime Window

Oracle Executive Chairman and CTO Larry Ellison made important points about patching during his keynote at OpenWorld 2017:

Our data centers are enormously complicated. There are lots of servers and storage and operating systems, virtual machines, containers and databases, data stores, file systems. And there are thousands of them, tens of thousands, hundreds of thousands of them. It’s hard for people to locate all these things and patch them. They have to be aware there’s a vulnerability. It’s got to be an automated process.

You can’t wait for a downtime window, where you say, “Oh, I can’t take the system down. I know I’ve got to patch this, but we have scheduled downtime middle of next month.” Well, that’s wrong thinking and that’s kind of lack of priority for security.

All that said, patching and updating must be a priority. Dr. Ron Layton, Deputy Assistant Director of the U.S. Secret Service, said at the NetEvents Global Press Summit, September 2017:

Most successful hacks and breaches – most of them – were because low-level controls were not in place. That’s it. That’s it. Patch management. It’s the low-level stuff that will get you to the extent that the bad guys will say, I’m not going to go here. I’m going to go somewhere else. That’s it.

The Scale of Security Issues Is Huge

I receive regular emails from various defect-tracking and patch-awareness lists. Here’s one weekly sample from the CERT teams at the U.S. Dept. of Homeland Security. IT pros won’t be surprised at how large it is: https://www.us-cert.gov/ncas/bulletins/SB17-296

There are 25 high-severity vulnerabilities on this list, most from Microsoft, some from Oracle. Lots of medium-severity vulnerabilities from Microsoft, OpenText, Oracle, and WPA – the latter being the widely reported bug in Wi-Fi Protected Access. In addition, there are a few low-severity vulnerabilities, and then page after page of those labeled “severity not yet assigned.” The list goes on and on, even hitting infrastructure products from Cisco and F5. And lots more WiFi issues.
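Triage at that scale starts with a simple tally. A sketch, using a few made-up entries in the shape of a CERT bulletin:

```python
from collections import Counter

# A handful of entries shaped like a CERT weekly bulletin; real bulletins
# run to hundreds, many still "severity not yet assigned."
bulletin = [
    {"product": "microsoft -- edge",  "severity": "high"},
    {"product": "oracle -- database", "severity": "high"},
    {"product": "wpa -- wpa2",        "severity": "medium"},
    {"product": "cisco -- ios",       "severity": "not yet assigned"},
    {"product": "f5 -- big-ip",       "severity": "not yet assigned"},
]

def triage(entries):
    """Count vulnerabilities per severity so the worst get patched first."""
    return Counter(e["severity"] for e in entries)

print(triage(bulletin))
```

The unassigned bucket is the troublesome one: those items can’t even be prioritized yet.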

This is a typical week – and not all the vulnerabilities in the CERT report have patches yet. CERT is only one source, by the way. Want more? Here’s a list of security-related updates from Apple. Here’s a list of security updates from Juniper Networks. A list from Microsoft. And Red Hat too.

So: When security analysts say that enterprises merely need to keep up with patches and fixes, well, yes, that’s the low-hanging fruit. However, nobody talks about how much of that low-hanging fruit there is. In an enterprise, the amount is overwhelming. No wonder some rotten fruit slips through the cracks.

Loose cyber-lips can sink real ships. According to separate reports published by the British government and the cruise ship industry, large cargo and passenger vessels could be damaged by cyberattacks – and potentially even sent to the bottom of the ocean.

The foreword pulls no punches. “Code of Practice: Cyber Security for Ships” was commissioned by the U.K. Department for Transport and published by the Institution of Engineering and Technology (IET) in London.

Poor security could lead to significant loss of customer and/or industry confidence, reputational damage, potentially severe financial losses or penalties, and litigation affecting the companies involved. The compromise of ship systems may also lead to unwanted outcomes, for example:

(a) physical harm to the system or the shipboard personnel or cargo – in the worst case scenario this could lead to a risk to life and/or the loss of the ship;

(b) disruptions caused by the ship no longer functioning or sailing as intended;

(c) loss of sensitive information, including commercially sensitive or personal data;

and

(d) permitting criminal activity, including kidnap, piracy, fraud, theft of cargo, imposition of ransomware.

The above scenarios may occur at an individual ship level or at fleet level; the latter is likely to be much worse and could severely disrupt fleet operations.

Cargo and Passenger Systems

The report goes into considerable detail about the need to protect confidential information, including intellectual property, cargo manifests, passenger lists, and financial documents. Beyond that, the document warns about dangers from activist groups (“hacktivism”) whose actors might work to prevent the handling of specific cargoes, or even disrupt the operation of the ship. The target may be the ship itself, the ship’s owner or operator, or the supplier or recipient of the cargo.

The types of damage could be as simple as the disruption of ship-to-shore communications through a DDoS attack. Or as dangerous as the corruption or spoofing of sensor data, which could cause the vessel to founder or head off course. What can be done? The report lists several important steps to maintain the security of critical systems, including:

(a) Confidentiality – the control of access and prevention of unauthorised access to ship data, which might be sensitive in isolation or in aggregate. The ship systems and associated processes should be designed, implemented, operated and maintained so as to prevent unauthorised access to, for example, sensitive financial, security, commercial or personal data. All personal data should be handled in accordance with the Data Protection Act and additional measures may be required to protect privacy due to the aggregation of data, information or metadata.

(b) Possession and/or control – the design, implementation, operation and maintenance of ship systems and associated processes so as to prevent unauthorised control, manipulation or interference. The ship systems and associated processes should be designed, implemented, operated and maintained so as to prevent unauthorised control, manipulation or interference. An example would be the loss of an encrypted storage device – there is no loss of confidentiality as the information is inaccessible without the encryption key, but the owner or user is deprived of its contents.

(c) Integrity – maintaining the consistency, coherence and configuration of information and systems, and preventing unauthorised changes to them. The ship systems and associated processes should be designed, implemented, operated and maintained so as to prevent unauthorised changes being made to assets, processes, system state or the configuration of the system itself. A loss of system integrity could occur through physical changes to a system, such as the unauthorised connection of a Wi-Fi access point to a secure network, or through a fault such as the corruption of a database or file due to media storage errors.

(d) Authenticity – ensuring that inputs to, and outputs from, ship systems, the state of the systems and any associated processes and ship data, are genuine and have not been tampered with or modified. It should also be possible to verify the authenticity of components, software and data within the systems and any associated processes. Authenticity issues could relate to data such as a forged security certificate or to hardware such as a cloned device.
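The authenticity and integrity controls above can be as simple as a keyed hash on the data in transit. A minimal sketch using Python’s standard hmac module; the key and waypoint message are invented for illustration:

```python
import hashlib
import hmac

# A made-up shared key and navigation message.
key = b"shared-secret-key"
message = b"waypoint: 1.2833N 103.8500E"

# The sender tags the data with an HMAC.
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(key, message, tag):
    """The receiver recomputes the tag; any tampering changes it."""
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking information through comparison timing.
    return hmac.compare_digest(expected, tag)

print(verify(key, message, tag))                       # True: genuine
print(verify(key, b"waypoint: 0.0000N 0.0000E", tag))  # False: tampered
```

A forged waypoint fails verification, which is exactly the property a navigation system needs before trusting sensor input.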

With passenger vessels, the report points to the need for modular controls and hardened IT infrastructure. Those stop unauthorized people from gaining access to online booking, point-of-sale, passenger management, and other critical ship systems by tapping into wiring cabinets, cable junctions, and maintenance areas. Like we said, scary stuff.

The Industry Weighs In

A similar report was produced for the shipping industry by seven organizations, including the International Maritime Organization and the International Chamber of Shipping. The “Guidelines on Cyber Security Onboard Ships” warns that incidents can arise as the result of:

  • A cyber security incident, which affects the availability and integrity of OT, for example corruption of chart data held in an Electronic Chart Display and Information System (ECDIS)
  • A failure occurring during software maintenance and patching
  • Loss of or manipulation of external sensor data, critical for the operation of a ship. This includes but is not limited to Global Navigation Satellite Systems (GNSS).

This report discusses the role of activists (including disgruntled employees), as well as criminals, opportunists, terrorists, and state-sponsored organizations. There are many potentially vulnerable areas, including cargo management systems, bridge systems, propulsion and other machinery, access control, passenger management systems — and communications. As the report says,

Modern technologies can add vulnerabilities to the ships especially if there are insecure designs of networks and uncontrolled access to the internet. Additionally, shoreside and onboard personnel may be unaware how some equipment producers maintain remote access to shipboard equipment and its network system. The risks of misunderstood, unknown, and uncoordinated remote access to an operating ship should be taken into consideration as an important part of the risk assessment.

The stakes are high. The loss of operational technology (OT) systems “may have a significant and immediate impact on the safe operation of the ship. Should a cyber incident result in the loss or malfunctioning of OT systems, it will be essential that effective actions are taken to ensure the immediate safety of the crew, ship and protection of the marine environment.”

Sobering words for any maritime operator.

“One of these things is not like the others,” the television show Sesame Street taught generations of children. Easy. Let’s move to the next level: “One or more of these things may or may not be like the others, and those variances may or may not represent systems vulnerabilities, failed patches, configuration errors, compliance nightmares, or imminent hardware crashes.” That’s a lot harder than distinguishing cookies from brownies.

Looking through gigabytes of log files and transactions records to spot patterns or anomalies is hard for humans: it’s slow, tedious, error-prone, and doesn’t scale. Fortunately, it’s easy for artificial intelligence (AI) software, such as the machine learning algorithms built into Oracle Management Cloud. What’s more, the machine learning algorithms can be used to direct manual or automated remediation efforts to improve security, compliance, and performance.

Consider how large-scale systems gradually drift away from their required (or desired) configuration, a key area of concern in the large enterprise. In his Monday, October 2 Oracle OpenWorld session on managing and securing systems at scale using AI, Prakash Ramamurthy, senior vice president of systems management at Oracle, talked about how drift happens. Imagine that you’ve applied a patch, but then later you spool up a virtual server that is running an old version of a critical service or contains an obsolete library with a known vulnerability. That server is out of compliance, Ramamurthy said. Drift.

Drift is bad, said Ramamurthy, and detecting and stopping drift is a core competency of Oracle Management Cloud. It starts with monitoring cloud and on-premises servers, services, applications, and logs, using machine learning to automatically understand normal behavior and identify anomalies. No training necessary here: A variety of machine learning algorithms teach themselves how to play the “one of these things is not like the others” game with your data, your systems, and your configuration, and also to classify the systems in ways that are operationally relevant. Even if those logs contain gigabytes of information on hundreds of thousands of transactions each second.
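Stripped of the machine learning, the core of a drift check is a comparison against a golden baseline. A bare-bones sketch, with invented hostnames and package versions (Oracle Management Cloud, of course, does this at vastly larger scale and with self-taught models):

```python
# Invented golden configuration and fleet inventory, for illustration.
baseline = {"openssl": "1.1.0g", "kernel": "4.14.2"}

fleet = {
    "web-01": {"openssl": "1.1.0g", "kernel": "4.14.2"},
    "web-02": {"openssl": "1.0.2k", "kernel": "4.14.2"},  # drifted
}

def find_drift(fleet, baseline):
    """Report every host whose packages differ from the golden baseline."""
    drifted = {}
    for host, pkgs in fleet.items():
        diffs = {p: v for p, v in pkgs.items() if baseline.get(p) != v}
        if diffs:
            drifted[host] = diffs
    return drifted

print(find_drift(fleet, baseline))  # {'web-02': {'openssl': '1.0.2k'}}
```

Here web-02 is the virtual server spooled up with an obsolete library: out of compliance the moment it boots.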

Learn more in my article for Forbes, “Catch The Drift With Machine Learning — Before The Drift Catches You.”

Despite Elon Musk’s warnings this summer, there’s not a whole lot of reason to lose sleep worrying about Skynet and the Terminator. Artificial intelligence (AI) is far from becoming a maleficent, all-knowing force. The only “apocalypse” on the horizon right now is an over-reliance by humans on machine learning and expert systems, as demonstrated by the deaths of Tesla owners who took their hands off the wheel.

The technologies that currently pass for “artificial intelligence” — expert systems and machine learning — are excellent for creating useful software. AI software is truly valuable in contexts that involve pattern recognition, automated decision-making, and human-to-machine conversations. Both types of AI have been around for decades, and both are only as good as the source information they are based on. For that reason, it’s unlikely that AI will replace human judgment on important tasks requiring decisions more complex than “yes or no” any time soon.

In expert systems, also known as rule-based or knowledge-based systems, computers are programmed with explicit rules written down by human experts. The computers can then apply those same rules, but much faster and 24×7, to come up with the same conclusions as the human experts. Imagine asking an oncologist how she diagnoses cancer and then programming medical software to follow those same steps. For a particular diagnosis, an oncologist can study which of those rules were activated to validate that the expert system is working correctly.

However, it takes a lot of time and specialized knowledge to create and maintain those rules, and extremely complex rule systems can be difficult to validate. Needless to say, expert systems can’t function beyond their rules.

By contrast, machine learning allows computers to come to a decision—but without being explicitly programmed. Instead, they are shown hundreds or thousands of sample data sets and told how they should be categorized, such as “cancer | no cancer,” or “stage 1 | stage 2 | stage 3 cancer.”
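The contrast fits in a few lines of code. The rule, the toy “training” procedure, and the tumor-size samples below are all contrived, not real medical practice; real machine learning uses far richer models:

```python
# Toy tumor-size "data" in millimeters; invented for illustration.

def expert_system(tumor_size_mm):
    # An explicit rule written down by a human expert.
    return "cancer" if tumor_size_mm > 20 else "no cancer"

def train_threshold(samples):
    """'Learn' a decision threshold: the midpoint between class means."""
    pos = [size for size, label in samples if label == "cancer"]
    neg = [size for size, label in samples if label == "no cancer"]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

samples = [(35, "cancer"), (29, "cancer"), (8, "no cancer"), (12, "no cancer")]
threshold = train_threshold(samples)  # 21.0, derived from the data

print(expert_system(25))                            # cancer (rule fired)
print("cancer" if 25 > threshold else "no cancer")  # cancer (learned)
```

The expert system’s rule can be inspected and validated; the learned threshold shifts whenever the training samples do. That difference in auditability is the heart of the debate.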

Read more about this, including my thoughts on machine learning, pattern recognition, expert systems, and comparisons to human intelligence, in my story for Ars Technica, “Never mind the Elon—the forecast isn’t that spooky for AI in business.”

Long after intruders are removed and public scrutiny has faded, the impacts from a cyberattack can reverberate over a multi-year timeline. Legal costs can cascade as stolen data is leveraged in various ways over time; it can take years to recover pre-incident growth and profitability levels; and brand impact can play out in multiple ways.

That’s from a Deloitte report, “Beneath the surface of a cyberattack: A deeper look at business impacts,” released in late 2016. The report’s contents, and other statements on cyber security from Deloitte, are ironic given the company’s huge breach reported this week.

The big breach

The Deloitte breach was reported on Monday, Sept. 25. It appears to have leaked confidential emails and financial documents of some of its clients. According to the Guardian,

The Guardian understands Deloitte clients across all of these sectors had material in the company email system that was breached. The companies include household names as well as US government departments. So far, six of Deloitte’s clients have been told their information was “impacted” by the hack. Deloitte’s internal review into the incident is ongoing. The Guardian understands Deloitte discovered the hack in March this year, but it is believed the attackers may have had access to its systems since October or November 2016.

The Guardian asserts that hackers gained access to Deloitte’s global email server via an administrator’s account protected by only a single password. Without two-factor authentication, hackers could gain entry from any computer, as long as they guessed the right password (or obtained it via hacking, malware, or social engineering). The story continues,

In addition to emails, the Guardian understands the hackers had potential access to usernames, passwords, IP addresses, architectural diagrams for businesses and health information. Some emails had attachments with sensitive security and design details.
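The missing second factor can be as simple as a time-based one-time password (RFC 6238). A minimal sketch using only Python’s standard library; the secret and timestamp are the RFC’s published SHA-1 test values, not anything from the Deloitte incident:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, timestamp: int, step: int = 30, digits: int = 6):
    """Time-based one-time password per RFC 6238 (HMAC-SHA1 variant)."""
    counter = timestamp // step
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation, per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238's test secret and time of 59 seconds yield a known code.
print(totp(b"12345678901234567890", timestamp=59))  # 287082
```

A stolen password alone is useless against this: the code changes every 30 seconds and requires possession of the secret.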

Okay, the breach was bad. What did Deloitte have to say about these sorts of incidents? Lots.

The Deloitte Cybersecurity Report

In its 2016 report, Deloitte’s researchers pointed to 14 cyberattack impact factors. Half are the directly visible costs of breach incidents; the others can be more subtle or hidden, and potentially never fully understood.

  • The “Above the Surface” incident costs include the expenses of technical investigations, consumer breach notifications, regulatory compliance, attorney’s fees and litigation, post-breach customer protection, public relations, and cybersecurity protections.
  • Harder to tally are the “Below the Surface” costs: insurance premium increases, increased cost to raise debt, the impact of operational disruption or destruction, the value of lost contract revenue, devaluation of the trade name, loss of intellectual property, and the lost value of customer relationships.

As the report says,

Common perceptions about the impact of a cyberattack are typically shaped by what companies are required to report publicly—primarily theft of personally identifiable information (PII), payment data, and personal health information (PHI). Discussions often focus on costs related to customer notification, credit monitoring, and the possibility of legal judgments or regulatory penalties. But especially when PII theft isn’t an attacker’s only objective, the impacts can be even more far-reaching.

Recovery can take a long time, as Deloitte says:

Beyond the initial incident triage, there are impact management and business recovery stages. These stages involve a wide range of business functions in efforts to rebuild operations, improve cybersecurity, and manage customer and third-party relationships, legal matters, investment decisions, and changes in strategic course.

Indeed, asserts Deloitte in the 2016 report, it can take months or years to repair the damage to the business. That includes redesigning processes and assets, and investing in cyber programs to emerge stronger after the incident. But wait, there’s more.

Intellectual Property and Lawsuits

A big part of the newly reported breach is the loss of intellectual property – not necessarily only Deloitte’s IP, but also the IP of its biggest blue-chip customers. About the loss of IP, the 2016 report says:

Loss of IP is an intangible cost associated with loss of exclusive control over trade secrets, copyrights, investment plans, and other proprietary and confidential information, which can lead to loss of competitive advantage, loss of revenue, and lasting and potentially irreparable economic damage to the company. Types of IP include, but are not limited to, patents, designs, copyrights, trademarks, and trade secrets.

We’ll see some of those phrases in lawsuits filed by Deloitte’s customers as they try to get a handle on what hackers may have stolen. Oh, about lawsuits, here’s what the Deloitte report says:

Attorney fees and litigation costs can encompass a wide range of legal advisory fees and settlement costs externally imposed and costs associated with legal actions the company may take to defend its interests. Such fees could potentially be offset through the recovery of damages as a result of assertive litigation pursued against an attacker, especially in regards to the theft of IP. However, the recovery could take years to pursue through litigation and may not be ultimately recoverable, even after a positive verdict in favor of the company. Based on our analysis of publicly available data pertaining to recent consumer settlement cases and other legal costs relating to cyber incidents, we observed that, on average, it could cost companies approximately $10 million in attorney fees, potential settlement of loss claims, and other legal matters.

Who wants to bet that the legal costs from this breach will be significantly higher than $10 million?

Stay Vigilant

The back page of Deloitte’s 2016 report says something important:

To grow, streamline, and innovate, many organizations have difficulty keeping pace with the evolution of cyber threats. The traditional discipline of IT security, isolated from a more comprehensive risk-based approach, may no longer be enough to protect you. Through the lens of what’s most important to your organization, you must invest in cost-justified security controls to protect your most important assets, and focus equal or greater effort on gaining more insight into threats, and responding more effectively to reduce their impact. A Secure. Vigilant. Resilient. cyber risk program can help you become more confident in your ability to reap the value of your strategic investments.

Wise words — too bad Deloitte’s email administrators, SOC teams, and risk auditors didn’t heed them. Or read their own report.

Stupidity. Incompetence. Negligence. The huge, unprecedented data breach at Equifax has dominated the news cycle, infuriating IT managers, security experts, legislators, and attorneys — and scaring consumers. It appears that sensitive personally identifiable information (PII) on 143 million Americans was exfiltrated, as well as PII on some non-US nationals.

There are many troubling aspects. Reports say the tools that consumers can use to see if they are affected by the breach are inaccurate. Other articles say that by using those tools, consumers are waiving their rights to sue Equifax. Some worry that Equifax will actually make money off this by selling affected consumers its credit-monitoring services.

Let’s look at the technical aspects, though. While details about the breach are still scarce, two bits of information are making the rounds. One is that Equifax followed bad password practices, allowing hackers to easily gain access to at least one server. Another is that there was a flaw in a piece of open-source software – a patch had been available for months, yet Equifax didn’t apply it.

The veracity of those two possible causes is unclear. Even so, they point to a troubling pattern of utter irresponsibility by Equifax’s IT and security operations teams.

Bad Password Practices

Username “admin.” Password “admin.” That’s often the default for hardware, like a home WiFi router. The first thing any owner should do is change both the username and the password. Every IT professional knows that. Yet the fine techies at Equifax – or at least those in its Argentina office – didn’t. As well-known security writer Brian Krebs reported earlier this week,

Earlier today, this author was contacted by Alex Holden, founder of Milwaukee, Wisc.-based Hold Security LLC. Holden’s team of nearly 30 employees includes two native Argentinians who spent some time examining Equifax’s South American operations online after the company disclosed the breach involving its business units in North America.

It took almost no time for them to discover that an online portal designed to let Equifax employees in Argentina manage credit report disputes from consumers in that country was wide open, protected by perhaps the most easy-to-guess password combination ever: “admin/admin.”

What’s more, writes Krebs,

Once inside the portal, the researchers found they could view the names of more than 100 Equifax employees in Argentina, as well as their employee ID and email address. The “list of users” page also featured a clickable button that anyone authenticated with the “admin/admin” username and password could use to add, modify or delete user accounts on the system.

and

A review of those accounts shows all employee passwords were the same as each user’s username. Worse still, each employee’s username appears to be nothing more than their last name, or a combination of their first initial and last name. In other words, if you knew an Equifax Argentina employee’s last name, you also could work out their password for this credit dispute portal quite easily.

Idiots.
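Auditing for exactly these failures is trivial to automate. A hedged sketch with an invented account list (a real audit would read the user store directly): flag any password that matches the username or a known default.

```python
# Invented accounts and defaults, for illustration only.
DEFAULTS = {"admin", "password", "changeme"}

accounts = [
    {"user": "admin",   "password": "admin"},
    {"user": "ballou",  "password": "ballou"},
    {"user": "cgarcia", "password": "T7!rv9#qLm"},
]

def weak_accounts(accounts):
    """Flag passwords equal to the username or on the default list."""
    flagged = []
    for a in accounts:
        pw = a["password"].lower()
        if pw == a["user"].lower() or pw in DEFAULTS:
            flagged.append(a["user"])
    return flagged

print(weak_accounts(accounts))  # ['admin', 'ballou']
```

Twenty lines of script would have caught what the researchers found in minutes.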

Patches Are Important, Kids

Apache’s Struts is a well-regarded open source framework for creating Web applications. It’s excellent — I’ve used it myself — but like all software, it can have bugs. One such defect was discovered in March 2017, and was given the name “CVE-2017-5638.” A patch was issued within days by the Struts team. Yet Equifax never installed that patch.

Even so, the company is blaming the U.S. breach on that defect:

Equifax has been intensely investigating the scope of the intrusion with the assistance of a leading, independent cybersecurity firm to determine what information was accessed and who has been impacted. We know that criminals exploited a U.S. website application vulnerability. The vulnerability was Apache Struts CVE-2017-5638. We continue to work with law enforcement as part of our criminal investigation, and have shared indicators of compromise with law enforcement.

Keeping up with vulnerability reports, and applying patches right away, is essential for good security. Everyone knows this. Including, I’m sure, Equifax’s IT team. There is no excuse. Idiots.
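Checking exposure is a one-liner once you know the affected ranges. Per the Struts S2-045 advisory for CVE-2017-5638, versions 2.3.5 through 2.3.31 and 2.5 through 2.5.10 are vulnerable; this sketch simply gates on those ranges:

```python
def version_tuple(v):
    return tuple(int(x) for x in v.split("."))

def struts_vulnerable(version):
    """True if the version falls in an S2-045 (CVE-2017-5638) affected range."""
    v = version_tuple(version)
    return (version_tuple("2.3.5") <= v <= version_tuple("2.3.31")
            or version_tuple("2.5") <= v <= version_tuple("2.5.10"))

print(struts_vulnerable("2.3.31"))  # True: patch immediately
print(struts_vulnerable("2.3.32"))  # False: carries the fix
```

The hard part was never the check; it was knowing every place Struts was deployed, which brings us back to inventory.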

When was the last time most organizations discussed the security of their Oracle E-Business Suite? How about SAP S/4HANA? Microsoft Dynamics? IBM’s DB2? Discussions about on-prem server software security too often begin and end with ensuring that operating systems are at the latest level, and are current with patches.

That’s not good enough. Just as clicking on a phishing email or opening a malicious document in Microsoft Word can corrupt a desktop, so too server applications can be vulnerable. When those server applications are involved with customer records, billing systems, inventory, transactions, financials, or human resources, a hack into ERP or CRM systems can threaten an entire organization. Worse, if that hack leveraged stolen credentials, the business may never realize that competitors or criminals are stealing its data, and potentially even corrupting its records.

A new study from the Ponemon Institute points to the potential severity of the problem. Sixty percent of the respondents to the “Cybersecurity Risks to Oracle E-Business Suite” say that information theft, modification of data, and disruption of business processes on their company’s Oracle E-Business Suite applications would be catastrophic. While 70% of respondents said a material security or data breach due to insecure Oracle E-Business Suite applications is likely, 67% believe their top executives are not aware of this risk. (The research was sponsored by Onapsis, which sells security solutions for ERP suites, so apply a little sodium chloride to your interpretation of the study’s results.)

The study’s audience was businesses that rely upon Oracle E-Business Suite. About 24% of respondents said that it was the most critical application they ran, and altogether, 93% said it was one of their top 10 critical applications. Bearing in mind that large businesses run thousands of server applications, that’s saying something.

Yet more than half of respondents – 53% – said that it was Oracle’s responsibility to ensure that its applications and platforms are safe and secure. Unless they’ve contracted with Oracle to manage their on-prem applications, and to proactively apply patches and fixes, well, they are delusional.

Another area of delusion: That software must be connected to the Internet to pose a risk. In this study, 52% of respondents agree or strongly agree that “Oracle E-Business applications that are not connected to the Internet are not a security threat.” They’ve never heard of insider threats? Credentials theft? Penetrations of enterprise networks?

What About Non-Oracle Packages?

This Ponemon/Onapsis study represents only one data point. It does not adequately discuss the role of vendors in this space, including ERP/CRM value-added resellers, consultants and MSSPs (managed security service providers). It also doesn’t differentiate between Oracle instances running on-prem compared to the Oracle ERP Cloud – where Oracle does manage all the security.

Surprisingly, packaged software isn’t talked about very often. Given the amount of chatter at most security conferences, on bulletin boards, and the like, packaged applications like these on-prem ERP and CRM suites are rarely a factor in conversations about security. Instead, everyone is seemingly focused on the endpoint, firewalls, and operating systems. Sometimes we’ll see discussions of the various tiers in an n-tier architecture, such as databases, application servers, and presentation systems (like web servers or mobile app back ends).

Another company that offers ERP security, ERPScan, conducted a study with Crowd Research Partners focused on SAP. The “ERP Cybersecurity Study 2017” said that (and I quote from the report on these bullet points):

  • 89% of respondents expect that the number of cyber-attacks against ERP systems will grow in next 12 months.
  • An average cost of a security breach in SAP is estimated at $5m with fraud considered as the costliest risk. A third of organizations assesses the damage of fraudulent actions at more than 10m USD.
  • There is a lack of awareness towards ERP Security, worryingly, even among people who are engaged in ERP Security. One-third of them haven’t even heard about any SAP Security incident. Only 4% know about the episode with the direst consequences – USIS data breach started with an SAP vulnerability, which resulted in the company’s bankruptcy.
  • One of three respondents hasn’t taken any ERP Security initiative yet and is going to do so this year.
  • Cybersecurity professionals are most concerned about protecting customer data (72%), employee data (66%), and emails (54%). Due to this information being stored in different SAP systems (e.g. ERP, HR, or others), they are one of the most important assets to protect.
  • It is still unclear who is in charge of ERP Security: 43% of responders suppose that CIO takes responsibilities, while 28% consider it CISO’s duty.

Of course, we still must secure our operating systems, network perimeters, endpoints, mobile applications, WiFi networks, and so-on. Let’s not forget, however, the crucial applications our organizations depend upon. Breaches into those systems could be invisible – and ruinous to any business.

The more advanced the military technology, the greater the opportunities for intentional or unintentional failure in a cyberwar. As Scotty says in Star Trek III: The Search for Spock, “The more they overthink the plumbing, the easier it is to stop up the drain.”

In the case of a couple of recent accidents involving the U.S. Navy, the plumbing might actually be the computer systems that control navigation. In mid-August, the destroyer U.S.S. John S. McCain rammed into an oil tanker near Singapore. A month or so earlier, a container ship hit the nearly identical U.S.S. Fitzgerald off Japan. Why didn’t those hugely sophisticated ships see the much-larger merchant vessels, and move out of the way?

There has been speculation, and only speculation, that both ships might have been victims of cyber foul play, perhaps as a test of offensive capabilities by a hostile state actor. The U.S. Navy considers that possibility unlikely, and let’s admit, the odds are against it.

Even so, the military hasn’t dismissed the idea, writes Bill Gertz in the Washington Free Beacon:

On the possibility that China may have triggered the collision, Chinese military writings indicate there are plans to use cyber attacks to “weaken, sabotage, or destroy enemy computer network systems or to degrade their operating effectiveness.” The Chinese military intends to use electronic, cyber, and military influence operations for attacks against military computer systems and networks, and for jamming American precision-guided munitions and the GPS satellites that guide them, according to one Chinese military report.

The data centers of those ships are hardened and well protected. Still, given the sophistication of today’s warfare, what if systems are hacked?

Imagine what would happen if, say, foreign powers were able to break into drones or cruise missiles. This might cause them to crash prematurely, self-destruct, or hit a friendly target, or perhaps even “land” and become captured. What about disruptions to fighter aircraft, such as jets or helicopters? Radar systems? Gear carried by troops?

It’s a chilling thought. It reminds me that many gun owners in the United States, including law enforcement officers, don’t like so-called “smart” pistols that require fingerprint matching before they can fire – because those systems might fail in a crisis, or if the weapon is dropped or becomes wet, leaving the police officer effectively unarmed.

The Council on Foreign Relations published a blog post by David P. Fidler, “A Cyber Norms Hypothetical: What If the USS John S. McCain Was Hacked?” In the post, Fidler says, “The Fitzgerald and McCain accidents resulted in significant damage to naval vessels and deaths and injuries to sailors. If done by a foreign nation, then hacking the navigation systems would be an illegal use of force under international law.”

Fidler believes this could lead to a real shooting war:

In this scenario, the targets were naval vessels not merchant ships, which means the hacking threatened and damaged core national security interests and military assets of the United States. In the peacetime circumstances of these incidents, no nation could argue that such a use of force had a plausible justification under international law. And every country knows the United States reserves the right to use force in self-defense if it is the victim of an illegal use of force.

There is precedent. In May and June 2017, two Sukhoi 30 fighter jets belonging to the Indian Air Force crashed – and there was speculation that these were caused by China. In one case, reports Naveen Goud in Cybersecurity Insiders,

The inquiry made by IAF led to the discovery of a fact that the flying aircraft was cyber attacked when it was airborne which led to the death of the two IAF officers- squadron leader D Pankaj and Flight Lieutenant Achudev who were flying the aircraft. The death was caused due to the failure in initiating the ejection process of the pilot’s seat due to a cyber interference caused in the air.

Let us hope that we’re not entering a hot phase of active cyberwarfare.

No organization likes to reveal that its network has been breached, or that its data has been stolen by hackers or disclosed through human error. Yet under the European Union’s new General Data Protection Regulation (GDPR), breaches must be disclosed.

The GDPR is a broad set of regulations designed to protect citizens of the European Union. The rules apply to every organization and business that collects or stores information about people in Europe. It doesn’t matter whether the company has offices in Europe: If data is collected about Europeans, the GDPR applies.

Traditionally, most organizations hide all information about security incidents, especially if data is compromised. That makes sense: If a business is seen to be careless with people’s data, its reputation can suffer, competitors can attack, and there can be lawsuits or government penalties.

We tend to hear about security incidents only if there’s a breach sufficiently massive that the company must disclose to regulators, or if there’s a leak to the media. Even then, the delay between the breach and its disclosure can stretch to weeks or months – meaning that affected individuals aren’t given enough time to engage identity theft protection companies, monitor their credit/debit payments, or even change their passwords.

Thanks to GDPR, organizations must now disclose all incidents where personal data may have been compromised – and make that disclosure quickly. Not only that, but the GDPR says that the disclosure must be to the general public, or at least to those people affected; the disclosure can’t be buried in a regulatory filing.

Important note: The GDPR says absolutely nothing about disclosing successful cyberattacks where personal data is not stolen or placed at risk. That includes distributed denial-of-service (DDoS) attacks, ransomware, theft of financial data, or espionage of intellectual property. That doesn’t mean that such cyberattacks can be kept secret, but in reality, good luck finding out about them, unless the company has other reasons to disclose. For example, after some big ransomware attacks earlier this year, some publicly traded companies revealed to investors that those attacks could materially affect their quarterly profits. This type of disclosure is mandated by financial regulation – not by the GDPR, which is focused on protecting individuals’ personal data.

The Clock Is Ticking

How long does the organization have to disclose the breach? Seventy-two hours – three days – from when the breach was discovered. That’s pretty quick, though of course, sometimes breaches themselves can take weeks or months to be discovered, especially if the hackers are extremely skilled, or if human error was involved. (An example of human error: Storing unencrypted data in a public cloud without strong password protection. It’s been happening more and more often.)

Here’s what the GDPR says about such breaches — and the language is pretty clear. The first step is to disclose to authorities within three days:

A personal data breach may, if not addressed in an appropriate and timely manner, result in physical, material or non-material damage to natural persons such as loss of control over their personal data or limitation of their rights, discrimination, identity theft or fraud, financial loss, unauthorised reversal of pseudonymisation, damage to reputation, loss of confidentiality of personal data protected by professional secrecy or any other significant economic or social disadvantage to the natural person concerned. Therefore, as soon as the controller becomes aware that a personal data breach has occurred, the controller should notify the personal data breach to the supervisory authority without undue delay and, where feasible, not later than 72 hours after having become aware of it, unless the controller is able to demonstrate, in accordance with the accountability principle, that the personal data breach is unlikely to result in a risk to the rights and freedoms of natural persons. Where such notification cannot be achieved within 72 hours, the reasons for the delay should accompany the notification and information may be provided in phases without undue further delay.
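The 72-hour window is simple enough to operationalize in an incident-response workflow. Here’s a minimal sketch of the deadline arithmetic – the function names and the workflow around them are illustrative, not anything specified by the regulation itself:

```python
from datetime import datetime, timedelta, timezone

# GDPR Recital 85: notify the supervisory authority without undue delay
# and, where feasible, within 72 hours of becoming aware of the breach.
NOTIFICATION_WINDOW = timedelta(hours=72)

def notification_deadline(became_aware_at: datetime) -> datetime:
    """Latest time to notify the supervisory authority."""
    return became_aware_at + NOTIFICATION_WINDOW

def is_late(became_aware_at: datetime, notified_at: datetime) -> bool:
    """True if notification missed the 72-hour window; a late
    notification must be accompanied by reasons for the delay."""
    return notified_at > notification_deadline(became_aware_at)

aware = datetime(2018, 5, 25, 9, 0, tzinfo=timezone.utc)
print(notification_deadline(aware))                  # 2018-05-28 09:00:00+00:00
print(is_late(aware, aware + timedelta(hours=80)))   # True
```

Note that the clock starts when the controller *becomes aware* of the breach, not when the breach actually occurred – which is why detection speed matters as much as response speed.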

The GDPR does not specify how quickly the organization must notify the individuals whose data was compromised, beyond “as soon as reasonably feasible”:

The controller should communicate to the data subject a personal data breach, without undue delay, where that personal data breach is likely to result in a high risk to the rights and freedoms of the natural person in order to allow him or her to take the necessary precautions. The communication should describe the nature of the personal data breach as well as recommendations for the natural person concerned to mitigate potential adverse effects. Such communications to data subjects should be made as soon as reasonably feasible and in close cooperation with the supervisory authority, respecting guidance provided by it or by other relevant authorities such as law-enforcement authorities. For example, the need to mitigate an immediate risk of damage would call for prompt communication with data subjects whereas the need to implement appropriate measures against continuing or similar personal data breaches may justify more time for communication.

The phrase “personal data breach” doesn’t only mean theft or accidental disclosure of a person’s private information. The GDPR defines the phrase as “a breach of security leading to the accidental or unlawful destruction, loss, alteration, unauthorised disclosure of, or access to, personal data transmitted, stored or otherwise processed.” So the loss of important data (think health records) would qualify as a personal data breach.

Big Fines and Penalties

What happens if an organization does not disclose? It can be fined up to €20 million or 4% of annual global turnover – whichever is higher.
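To make the exposure concrete: under Article 83(5) of the GDPR, the ceiling for the most serious infringements is the greater of €20 million and 4% of total worldwide annual turnover. A rough sketch of that arithmetic (the company figures are hypothetical):

```python
# GDPR Article 83(5): fines of up to EUR 20 million, or 4% of total
# worldwide annual turnover of the preceding year, whichever is higher.
FLAT_CEILING_EUR = 20_000_000
TURNOVER_RATE = 0.04

def max_gdpr_fine(annual_turnover_eur: float) -> float:
    """Maximum possible fine for the most serious infringements."""
    return max(FLAT_CEILING_EUR, TURNOVER_RATE * annual_turnover_eur)

# A company with EUR 2 billion turnover: 4% (EUR 80m) exceeds the EUR 20m floor.
print(max_gdpr_fine(2_000_000_000))  # 80000000.0
# A small firm with EUR 50 million turnover falls back to the EUR 20m figure.
print(max_gdpr_fine(50_000_000))     # 20000000
```

In other words, the €20 million figure is a floor for large companies, not a cap – for a multinational, 4% of global turnover can be far larger.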

These GDPR rules about breaches are good, and so are the penalties. Too many organizations prefer to hide this type of information, or dribble out disclosures as slowly and quietly as possible, to protect the company’s reputation and share prices. The new EU regulation recognizes that individuals have a vested interest in data that organizations collect or store about them – and need to be told if that data is stolen or compromised.