
Payment cards and bill payment services are great for criminals

Criminals like to steal money from banks. Nothing new there: As Willie Sutton famously said, “I rob banks because that’s where the money is.” While many cybercriminals target banks, the reality is that there are better places to steal money, or at least, steal information that can be used to steal money. That’s because banks are generally well-protected – and gas stations, convenience stores, smaller on-line retailers, and even payment processors are likely to have inadequate defenses — or make stupid mistakes that aren’t caught by security professionals.

Take TIO Networks, a bill-payment service purchased by PayPal for US$233 million in July 2017. TIO processed more than $7 billion in bill payments last year, serving more than 10,000 vendors and 16 million consumers.

Hackers now know critical information about all 16 million TIO customers. According to PYMNTS.com, “… the data that may have been impacted included names, addresses, bank account details, Social Security numbers and login information. How much of those details fell into the hands of cybercriminals depends on how many of TIO’s services the consumers used.”

PayPal has said,

“The ongoing investigation has uncovered evidence of unauthorized access to TIO’s network, including locations that stored personal information of some of TIO’s customers and customers of TIO billers. TIO has begun working with the companies it services to notify potentially affected individuals. We are working with a consumer credit reporting agency to provide free credit monitoring memberships. Individuals who are affected will be contacted directly and receive instructions to sign up for monitoring.”

Card Skimmers and EMV Chips

Another common place where money changes hands: The point-of-purchase device. Consider payment-card skimmers – that is, a hardware device secretly installed into a retail location’s card reader, often at an unattended location like a gasoline pump.

The amount of fraud caused by skimmers copying information on payment cards is expected to rise from $3.1 billion in 2015 to $6.4 billion in 2018, affecting about 16 million cardholders. Those figures are for payment cards that don’t have an integrated EMV chip, or for transactions that don’t use the EMV system.

EMV chips, also known as chip-and-PIN or chip-and-signature, are named for the three companies behind the technology standards – Europay, MasterCard, and Visa. Chip technology, which is seen as a nuisance by consumers, has dramatically reduced the amount of fraud by generating a unique, non-repeatable transaction code for each purchase.
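The mechanism behind that unique transaction code can be sketched in a few lines. This is a simplified illustration only, not the actual EMV cryptogram algorithm (real cards derive per-session keys as defined in the EMV specifications); the key, counter width, and field sizes here are invented for demonstration:

```python
import hashlib
import hmac

def transaction_cryptogram(card_key: bytes, atc: int, amount_cents: int) -> str:
    """Derive a one-time authorization code from a card-unique key, an
    application transaction counter (ATC), and transaction data. The ATC
    increments on every purchase, so the code never repeats."""
    message = atc.to_bytes(2, "big") + amount_cents.to_bytes(6, "big")
    return hmac.new(card_key, message, hashlib.sha256).hexdigest()[:16]

key = b"per-card-secret-issued-at-personalization"  # invented example key
code1 = transaction_cryptogram(key, atc=41, amount_cents=1999)
code2 = transaction_cryptogram(key, atc=42, amount_cents=1999)
assert code1 != code2  # same amount, different counter -> different code
```

Because the counter advances with every purchase, a skimmer that copies one code cannot replay it, which is why chip transactions resist cloning while magnetic stripes do not.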

Read more in my essay, “Payment processors and point-of-sale are opportunities for hackers.”


Passwords? Fingerprints? Face recognition? It’s all questionable.

I unlock my smartphone with a fingerprint, which is pretty secure. Owners of the new Apple iPhone X unlock theirs with their faces – which is reported to be hackable with a mask. My tablet is unlocked with a six-digit numerical code, which is better than four digits or a pattern. I log into my laptop with an alphanumeric password. Many online services, including banks and SaaS applications, require their own passwords.

It’s a mess! Not the least because humans tend to reuse passwords, so that if a username and password for one service is stolen, criminals can try using that same combination on other services. They get your email and password for some insecure e-commerce site? They’ll try it on Facebook, LinkedIn, eBay, Amazon, Walmart.com, Gmail, Office 365, Citibank, Fidelity, Schwab… you get the idea.

Two more weaknesses: Most people don’t change their passwords frequently, and the passwords that they choose are barely more secure than ABCD?1234. And while biometrics are good, they’re not always sufficient. Yes, my smartphone has a fingerprint sensor, but my laptop doesn’t. Sure, companies can add on such technology, but it’s a kludge. It’s not a standard, and certainly I can’t log into my Amazon.com account with a fingerprint swipe.
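One partial defense is to screen new passwords against corpora of already-breached credentials (services such as Have I Been Pwned expose this via hashed lookups). A minimal offline sketch, with a tiny invented breach set standing in for the real corpora of billions of entries:

```python
import hashlib

# Tiny stand-in for a breached-password corpus (real ones hold billions).
BREACHED = {hashlib.sha1(p).hexdigest()
            for p in (b"ABCD1234", b"password", b"letmein")}

def is_breached(password: str) -> bool:
    """True if the candidate password appears in the known-breached set."""
    return hashlib.sha1(password.encode()).hexdigest() in BREACHED

assert is_breached("ABCD1234")
assert not is_breached("correct horse battery staple")
```

Rejecting known-breached passwords at signup blunts exactly the reuse attack described above: the stolen combination stops working anywhere that checks the list.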

Passwords Spell Out Trouble

The 2017 Verizon Data Breach Report reports that 81% of hacking-related breaches leverage either stolen or weak passwords. That’s the single biggest tactic used in breaches – followed by actual hacking, at 62%, and malware, at 51%.

To quote from the report: “… if you are relying on username/email address and password, you are rolling the dice as far as password re-usage from other breaches or malware on your customers’ devices are concerned.” About retailers specifically — which is where we see a lot of breaches — Verizon writes: “Their business is their web presence and thus the web application is the prime target of compromise to harvest data, frequently some combination of usernames, passwords (sometimes encrypted, sometimes not), and email addresses.”

(I am dismayed by the common use of email address instead of a unique login name by many retailers and online services. That reduces the bits of data that hackers or criminals need. It’s pretty easy to figure out my email address, which means that to get into my bank account, all you need is to guess or steal my password. But if my login name were a separate thing, like WeinerDogFancier, you’d have to know that and find my password. On the other hand, using the email address makes things easier for programmers, and presumably for users as well. As usual, convenience beats security.)

Read more in my essay, “The Problem with Passwords.”


AI-driven network scanning is the secret to effective mobile security

The secret sauce is AI-based zero packet inspection. That’s how to secure mobile users, and their personal data and employers’ data.

Let’s back up a step. Mobile devices are increasingly under attack, from malicious apps, from rogue emails, from adware, and from network traffic. Worse, that network traffic can come from any number of sources, including cellular data, WiFi, even Bluetooth. Users want their devices to be safe and secure. But how, if the network traffic can’t be trusted?

The best approach around is AI-based zero packet inspection (ZPI). It all starts with data. Tons of training data, used to train a machine learning algorithm to recognize patterns that indicate whether a device is performing normally – or if it’s under attack. Machine learning refers to a number of advanced AI algorithms that can study streams of data, rapidly and accurately detect patterns in that data, and from those patterns, sort the data into different categories.

The Zimperium z9 engine, as an example, works with machine learning to train against a number of test cases (on both iOS and Android devices) that represent known patterns of safe and not-safe traffic. We call those patterns zero-packet inspection in that the objective is not to look at the contents of the network packets but to scan the lower-level underlying traffic patterns at the network level, such as IP, TCP, UDP and ARP scans.

If you’re not familiar with those terms, suffice it to say that at the network level, the traffic is focused on delivering data to a specific device, and then within that device, making sure it gets to the right application. Think of it as being like an envelope going to a big business – it has the business name, street address, and department/mail stop. The machine learning algorithms look at patterns at that level, rather than examining the contents of the envelope. This makes the scans very fast and accurate.
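As a rough illustration of header-level (rather than payload-level) analysis, the sketch below extracts a few features from a window of packet metadata and applies a toy classifier. The feature names, thresholds, and packet-dictionary format are all invented for this example; Zimperium’s actual z9 engine uses trained machine-learning models, not hand-set thresholds:

```python
def extract_features(window):
    """Header-level features from a window of packet metadata:
    distinct destination ports, ARP requests, and TCP SYN packets.
    No payload contents are examined."""
    return (
        len({pkt["dst_port"] for pkt in window if "dst_port" in pkt}),
        sum(1 for pkt in window if pkt.get("type") == "arp_request"),
        sum(1 for pkt in window if pkt.get("flags") == "SYN"),
    )

def classify(features, port_threshold=50, syn_threshold=100):
    """Toy stand-in for a trained model: flag scan-like patterns."""
    distinct_ports, arp_requests, syns = features
    if distinct_ports > port_threshold or syns > syn_threshold:
        return "scan_suspected"
    return "normal"

# A simulated port sweep: SYN packets across 200 distinct ports.
sweep = [{"dst_port": p, "flags": "SYN"} for p in range(200)]
assert classify(extract_features(sweep)) == "scan_suspected"
assert classify(extract_features([])) == "normal"
```

The point of the sketch is the division of labor: feature extraction touches only the “envelope,” and the classifier decides from those patterns alone, which is what keeps the scan fast.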

Read more in my new essay for Security Brief Europe, “Opinion: Mobile security starts with a powerful AI-based scanning engine.”


How AI is changing the role of cybersecurity – and of cybersecurity experts

In The Terminator, the Skynet artificial intelligence was turned on to track down a hacker who had compromised a military computer network. It turned out the hacker was Skynet itself. Is there a lesson there? Could AI turn against us, especially as it relates to the security domain?

That was one of the points I made while moderating a discussion of cybersecurity and AI back in October 2017. Here’s the start of a blog post written by my friend Tami Casey about the panel:

Mention artificial intelligence (AI) and security and a lot of people think of Skynet from The Terminator movies. Sure enough, at a recent Bay Area Cyber Security Meetup group panel on AI and machine learning, it was moderator Alan Zeichick – technology analyst, journalist and speaker – who first brought it up. But that wasn’t the only lively discussion during the panel, which focused on AI and cybersecurity.

I found two areas of discussion particularly interesting, which drew varying opinions from the panelists. One, around the topic of AI eliminating jobs and thoughts on how AI may change a security practitioner’s job, and two, about the possibility that AI could be misused or perhaps used by malicious actors with unintended negative consequences.

It was a great panel. I enjoyed working with the Meetup folks, and the participants: Allison Miller (Google), Ali Mesdaq (Proofpoint), Terry Ray (Imperva), Randy Dean (Launchpad.ai & Fellowship.ai).

You can read the rest of Tami’s blog here, and also watch a video of the panel.


Too long: The delays between cyberattacks and their discovery and disclosure

Critical information about 46 million Malaysians was leaked onto the Dark Web. The stolen data included mobile phone numbers from telcos and mobile virtual network operators (MVNOs), prepaid phone numbers, customer details including physical addresses – and even the unique IMEI and IMSI registration numbers associated with SIM cards.

That’s pretty bad, right? The carriers included Altel, Celcom, DiGi, Enabling Asia, Friendimobile, Maxis, MerchantTradeAsia, PLDT, RedTone, TuneTalk, Umobile and XOX; news about the breach was first published 19 October 2017 by a Malaysian online community.

When did the breach occur? According to lowyat.net, “Time stamps on the files we downloaded indicate the leaked data was last updated between May and July 2014 between the various telcos.”

That’s more than three years between theft of the information and its discovery. We have no idea if the carriers had already discovered the losses, and chose not to disclose the breaches.

A huge delay between a breach and its disclosure is not unusual. Perhaps things will change once the General Data Protection Regulation (GDPR) kicks in next year, when organizations must reveal a breach within three days of discovery. That still leaves the question of discovery. It simply takes too long!
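The arithmetic of the Malaysian case is easy to check with a few lines of Python, using the dates reported above:

```python
from datetime import date, timedelta

stolen = date(2014, 7, 1)       # data last updated May-July 2014, per lowyat.net
disclosed = date(2017, 10, 19)  # first public report
delay = disclosed - stolen
assert delay.days > 3 * 365     # more than three years in the dark

# Under GDPR, notification is due within 72 hours of *discovery*:
discovered = date(2017, 10, 19)         # hypothetical discovery date
gdpr_deadline = discovered + timedelta(hours=72)
assert gdpr_deadline == date(2017, 10, 22)
```

Note what the deadline is anchored to: discovery, not intrusion. GDPR compresses the disclosure window, but does nothing by itself about the multi-year detection gap.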

Verizon’s Data Breach Investigations Report for 2017 has some depressing news: “Breach timelines continue to paint a rather dismal picture — with time-to-compromise being only seconds, time-to-exfiltration taking days, and times to discovery and containment staying firmly in the months camp. Not surprisingly, fraud detection was the most prominent discovery method, accounting for 85% of all breaches, followed by law enforcement which was seen in 4% of cases.”

Read more in my essay, “Months, Years Go By Before Cyberattacks Are Discovered And Revealed.”


It’s a bot, bot, bot world: The new battle for enterprise cybersecurity

Humans can’t keep up. At least, not when it comes to meeting the rapidly expanding challenges inherent to enterprise cybersecurity. There are too many devices, too many applications, too many users, and too many megabytes of log files for humans to make sense of it all. Moving forward, effective cybersecurity is going to be a “Battle of the Bots,” or to put it less dramatically, machine versus machine.

Consider the 2015 breach at the U.S. Government’s Office of Personnel Management (OPM). According to a story in Wired, “The Office of Personnel Management repels 10 million attempted digital intrusions per month—mostly the kinds of port scans and phishing attacks that plague every large-scale Internet presence.” Yet despite sophisticated security mechanisms, hackers managed to steal millions of records on applications for security clearances, personnel files, and even 5.6 million digital images of government employee fingerprints. (In August 2017, the FBI arrested a Chinese national in connection with that breach.)

Traditional security measures are often slow, and potentially ineffective. Take the practice of applying patches and updates to address newfound software vulnerabilities. Companies now have too many systems in play for the process of finding and installing patches to be effectively handled manually.

Another practice that can’t be handled manually: Scanning log files to identify abnormalities and outliers in data traffic. While there are many excellent tools for reviewing those files, they are often slow and aren’t good at aggregating logs across disparate silos (such as a firewall, a web application server, and an Active Directory user authentication system). Thus, results may not be comprehensive, patterns may be missed, and results of deep analysis may not be returned in real time.
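A toy version of that aggregate-then-analyze step, using invented event records and a simple standard-deviation test in place of a real analytics engine:

```python
from collections import Counter
from statistics import mean, stdev

def outlier_ips(events, z=3.0):
    """Flag source IPs whose event volume sits far above the norm."""
    counts = Counter(e["ip"] for e in events)
    if len(counts) < 2:
        return set()
    mu, sigma = mean(counts.values()), stdev(counts.values())
    return {ip for ip, n in counts.items() if sigma and n > mu + z * sigma}

# Merge events from disparate silos (firewall, web app, auth) into one
# stream, then scan the aggregate -- patterns invisible per-silo emerge.
firewall = [{"ip": "10.0.0.99"}] * 500
webapp = [{"ip": f"10.0.0.{i}"} for i in range(20)] * 5
assert outlier_ips(firewall + webapp) == {"10.0.0.99"}
```

The aggregation is the key step: an attacker who stays under the radar in each individual log can still stand out in the merged stream.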

Read much more about this in my new essay, “Machine Versus Machine: The New Battle For Enterprise Cybersecurity.”


The same coding bugs cause the same security vulnerabilities, year after year

Software developers and testers must be sick of hearing security nuts rant, “Beware SQL injection! Monitor for cross-site scripting! Watch for hijacked session credentials!” I suspect the developers tune us out. Why? Because we’ve been raving about the same defects for most of their careers. Truth is, though, the same set of major security vulnerabilities persists year after year, decade after decade.
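The oldest of those defects is still the easiest to demonstrate. Using Python’s built-in sqlite3 module, with made-up table data:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
db.execute("INSERT INTO users VALUES ('alice', 1), ('bob', 0)")

hostile = "bob' OR '1'='1"

# Vulnerable: string concatenation lets the input rewrite the query.
rows = db.execute("SELECT * FROM users WHERE name = '" + hostile + "'").fetchall()
assert len(rows) == 2  # the injected OR clause matched every user

# Safe: a bound parameter is treated as data, never as SQL.
rows = db.execute("SELECT * FROM users WHERE name = ?", (hostile,)).fetchall()
assert rows == []      # no user literally named "bob' OR '1'='1"
```

One bound parameter is the entire fix, which is precisely why the persistence of SQL injection is so maddening.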

The industry has generated newer tools, better testing suites, Agile methodologies, and other advances in writing and testing software. Despite all that, coders keep making the same dumb mistakes, peer reviews keep missing those mistakes, test tools fail to catch those mistakes, and hackers keep finding ways to exploit those mistakes.

One way to see the repeat offenders is to look at the OWASP Top 10. That’s a sometimes controversial ranking of the 10 primary vulnerabilities, published every three or four years by the Open Web Application Security Project.

The OWASP Top 10 list is not controversial because it’s flawed. Rather, some believe that the list is too limited: by focusing only on the top 10 web code vulnerabilities, they assert, it encourages neglect of the long tail. What’s more, there’s often jockeying in the OWASP community about the Top 10 ranking and whether the 11th or 12th items belong in the list instead of something else. There’s merit to those arguments, but for now, the OWASP Top 10 is an excellent common ground for discussing security-aware coding and testing practices.

Note that the top 10 list doesn’t directly represent the 10 most common attacks. Rather, it’s a ranking of risk. There are four factors used for this calculation. One is the likelihood that applications would have specific vulnerabilities; that’s based on data provided by companies. That’s the only “hard” metric in the OWASP Top 10. The other three risk factors are based on professional judgement.
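Roughly, that scoring can be sketched like this (the factor names follow OWASP’s published methodology, but the ratings below are invented for illustration):

```python
def owasp_style_risk(exploitability, prevalence, detectability, impact):
    """Average the three likelihood factors (each rated 1-3),
    then scale by technical impact (1-3). Only prevalence is
    backed by hard data; the rest reflect professional judgment."""
    likelihood = (exploitability + prevalence + detectability) / 3
    return likelihood * impact

# Invented ratings: a widespread, easily exploited flaw vs. a niche one.
assert owasp_style_risk(3, 2, 3, 3) > owasp_style_risk(1, 1, 2, 2)
```

This is why a less common but devastating flaw can outrank a more frequent nuisance: impact multiplies the whole likelihood term.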

It boggles the mind that a majority of top 10 issues appear across the 2007, 2010, 2013, and draft 2017 OWASP lists. That doesn’t mean that these application security vulnerabilities have to remain on your organization’s list of top problems, though—you can swat those flaws.

Read more in my essay, “The OWASP Top 10 is killing me, and killing you!”


Patches are security low-hanging fruit — but there’s too much of it

Apply patches. Apply updates. Those are considered to be among the lowest-hanging of the low-hanging fruit for IT cybersecurity. When commercial products release patches, download and install the code right away. When open-source projects disclose a vulnerability, do the appropriate update as soon as you can, everyone says.

A problem is that there are so many patches and updates. They’re found in everything from device firmware to operating systems, to back-end server software to mobile apps. To be able to even discover all the patches is a huge effort. You have to know:

  • All the hardware and software in your organization — so you can scan the vendors’ websites or emails for update notices. This may include the data center, the main office, remote offices, and employees’ homes. Oh, and rogue software installed without the knowledge of IT.
  • The versions of all the hardware and software instances — so you can tell which updates apply to you, and which don’t. Sometimes there may be an old version somewhere that’s never been patched.
  • The dependencies. Installing a new operating system may break some software. Installing a new version of a database may require changes on a web application server.
  • The location of each of those instances — so you can know which ones need patching. Sometimes this can be done remotely, but other times may require a truck roll.
  • The administrator access links, usernames, and passwords — hopefully, those are not set to “admin/admin.” The downside of changing default admin passwords is that you have to remember the new ones. Sure, sometimes you can make changes with, say, any Active Directory user account with the proper privileges. That won’t help you, though, with most firmware or mobile devices.

The above steps are just for discovery. The real challenge is to actually install the patch, which often (but not always) requires taking the hardware, software, or service offline for minutes or hours. This requires scheduling. And inconvenience. Even if you have patch-management tools (and there are many available), too much low-hanging fruit can be overlooked.
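Even the simplest piece of the discovery problem — matching inventory against advisories — presupposes an accurate inventory in the first place. A minimal sketch, with invented hosts and a placeholder CVE identifier:

```python
# Hypothetical asset inventory and vendor advisories.
inventory = [
    {"host": "web-01", "product": "nginx", "version": "1.20.1"},
    {"host": "db-02", "product": "postgres", "version": "13.2"},
]
advisories = [
    {"product": "nginx", "fixed_in": "1.20.2", "cve": "CVE-XXXX-0001"},
]

def parse(version: str):
    """Turn '1.20.1' into (1, 20, 1) so versions compare numerically."""
    return tuple(int(x) for x in version.split("."))

def needs_patch(inventory, advisories):
    """List (host, CVE) pairs where the installed version predates the fix."""
    return [
        (asset["host"], adv["cve"])
        for asset in inventory
        for adv in advisories
        if asset["product"] == adv["product"]
        and parse(asset["version"]) < parse(adv["fixed_in"])
    ]

assert needs_patch(inventory, advisories) == [("web-01", "CVE-XXXX-0001")]
```

Real patch-management tools do this continuously across tens of thousands of assets, plus everything the sketch ignores: dependencies, locations, credentials, and scheduling.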

As Oracle CEO Larry Ellison said in October at his keynote at OpenWorld 2017,

Our data centers are enormously complicated. There are lots of servers and storage and operating systems, virtual machines, containers and databases, data stores, file systems. And there are thousands of them, tens of thousands, hundreds of thousands of them. It’s hard for people to locate all these things and patch them. They have to be aware there’s a vulnerability. It’s got to be an automated process.

You can’t wait for a downtime window, where you say, “Oh, I can’t take the system down. I know I’ve got to patch this, but we have scheduled downtime middle of next month.” Well, that’s wrong thinking and that’s kind of lack of priority for security.

All that said, patching and updating must be a priority. Dr. Ron Layton, Deputy Assistant Director of the U.S. Secret Service, said at the NetEvents Global Press Summit, September 2017:

Most successful hacks and breaches – most of them – were because low-level controls were not in place. That’s it. That’s it. Patch management. It’s the low-level stuff that will get you to the extent that the bad guys will say, I’m not going to go here. I’m going to go somewhere else. That’s it.

Read more in my essay, “Too many low-hanging patches and updates to easily manage.”


Sinking sensation: Protecting the Internet of Ships from cyberattack

This is scary stuff: According to separate reports published by the British government and the cruise ship industry, large cargo and passenger vessels could be damaged by cyberattacks – and potentially even sent to the bottom of the ocean.

The foreword pulls no punches. “Code of Practice: Cyber Security for Ships” was commissioned by the U.K. Department of Transport, and published by the Institution of Engineering and Technology (IET) in London.

Poor security could lead to significant loss of customer and/or industry confidence, reputational damage, potentially severe financial losses or penalties, and litigation affecting the companies involved. The compromise of ship systems may also lead to unwanted outcomes, for example:

(a) physical harm to the system or the shipboard personnel or cargo – in the worst case scenario this could lead to a risk to life and/or the loss of the ship;

(b) disruptions caused by the ship no longer functioning or sailing as intended;

(c) loss of sensitive information, including commercially sensitive or personal data;

and

(d) permitting criminal activity, including kidnap, piracy, fraud, theft of cargo, imposition of ransomware.

The above scenarios may occur at an individual ship level or at fleet level; the latter is likely to be much worse and could severely disrupt fleet operations.

Cargo and Passenger Systems

The report goes into considerable detail about the need to protect confidential information, including intellectual property, cargo manifests, passenger lists, and financial documents. Beyond that, the document warns about dangers from activist groups (or “hacktivism”) whose members might work to prevent the handling of specific cargoes, or even disrupt the operation of the ship. The target may be the ship itself, the ship’s owner or operator, or the supplier or recipient of the cargo.

The types of damage could be as simple as the disruption of ship-to-shore communications through a DDoS attack. It might be as dangerous as the corruption of sensor data, or the feeding of false sensor data, which could cause the vessel to founder or head off course. What can be done? The report lists several important steps to maintain the security of critical systems, including:

(a) Confidentiality – the control of access and prevention of unauthorised access to ship data, which might be sensitive in isolation or in aggregate. The ship systems and associated processes should be designed, implemented, operated and maintained so as to prevent unauthorised access to, for example, sensitive financial, security, commercial or personal data. All personal data should be handled in accordance with the Data Protection Act and additional measures may be required to protect privacy due to the aggregation of data, information or metadata.

(b) Possession and/or control – the design, implementation, operation and maintenance of ship systems and associated processes so as to prevent unauthorised control, manipulation or interference. The ship systems and associated processes should be designed, implemented, operated and maintained so as to prevent unauthorised control, manipulation or interference. An example would be the loss of an encrypted storage device – there is no loss of confidentiality as the information is inaccessible without the encryption key, but the owner or user is deprived of its contents.

(c) Integrity – maintaining the consistency, coherence and configuration of information and systems, and preventing unauthorised changes to them. The ship systems and associated processes should be designed, implemented, operated and maintained so as to prevent unauthorised changes being made to assets, processes, system state or the configuration of the system itself. A loss of system integrity could occur through physical changes to a system, such as the unauthorised connection of a Wi-Fi access point to a secure network, or through a fault such as the corruption of a database or file due to media storage errors.

(d) Authenticity – ensuring that inputs to, and outputs from, ship systems, the state of the systems and any associated processes and ship data, are genuine and have not been tampered with or modified. It should also be possible to verify the authenticity of components, software and data within the systems and any associated processes. Authenticity issues could relate to data such as a forged security certificate or to hardware such as a cloned device.
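The integrity property in (c) is commonly enforced with cryptographic digests: record a baseline hash when a configuration is approved, then compare against it before trusting the running system. A minimal sketch, with an invented shipboard configuration file:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """SHA-256 digest recorded when a configuration is approved."""
    return hashlib.sha256(data).hexdigest()

approved_config = b"gps_source=primary\nautopilot=enabled\n"  # invented example
baseline = fingerprint(approved_config)

# Later, verify integrity before trusting the running configuration:
tampered = approved_config.replace(b"primary", b"spoofed")
assert fingerprint(approved_config) == baseline
assert fingerprint(tampered) != baseline
```

The same comparison supports the authenticity property in (d) when the baseline digest is itself signed, so a forger cannot simply substitute a new hash along with the new data.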

With passenger vessels, the report points to the need for modular controls and hardened IT infrastructure. That stops unauthorized people from gaining access to online booking, point-of-sale, passenger management, and other critical ship systems by tapping into wiring cabinets, cable junctions, and maintenance areas. Like we said, scary stuff.

Read more in my essay, “Loose Cybersecurity Lips Can Sink Cargo Or Passenger Ships.”


Elon Musk is wrong about the dangers of machine learning and artificial intelligence

Despite Elon Musk’s warnings this summer, there’s not a whole lot of reason to lose any sleep worrying about Skynet and the Terminator. Artificial Intelligence (AI) is far from becoming a maleficent, all-knowing force. The only “Apocalypse” on the horizon right now is an over-reliance by humans on machine learning and expert systems, as demonstrated by the deaths of Tesla owners who took their hands off the wheel.

Examples of what currently passes for “Artificial Intelligence” — technologies such as expert systems and machine learning — are excellent for creating useful software. AI software provides truly valuable help in contexts that involve pattern recognition, automated decision-making, and human-to-machine conversations. Both types of AI have been around for decades, and both are only as good as the source information they are based on. For that reason, it’s unlikely that AI will replace human beings’ judgment on important tasks requiring decisions more complex than “yes or no” any time soon.

Expert systems, also known as rule-based or knowledge-based systems, are when computers are programmed with explicit rules, written down by human experts. The computers can then run the same rules but much faster, 24×7, to come up with the same conclusions as the human experts. Imagine asking an oncologist how she diagnoses cancer and then programming medical software to follow those same steps. For a particular diagnosis, an oncologist can study which of those rules was activated to validate that the expert system is working correctly.

However, it takes a lot of time and specialized knowledge to create and maintain those rules, and extremely complex rule systems can be difficult to validate. Needless to say, expert systems can’t function beyond their rules.
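A toy expert system makes the auditability point concrete. The two rules below are invented stand-ins for real clinical criteria; note that the output records exactly which rules fired, which is what lets a human expert validate the conclusion:

```python
# Each rule pairs a name with a predicate over the observed facts.
RULES = [
    ("elevated_marker", lambda f: f["marker_level"] > 4.0),
    ("abnormal_scan", lambda f: f["scan_result"] == "abnormal"),
]

def evaluate(facts):
    """Run every rule; report the conclusion plus which rules fired,
    so the reasoning can be audited step by step."""
    fired = [name for name, predicate in RULES if predicate(facts)]
    return {"flag_for_review": bool(fired), "fired_rules": fired}

result = evaluate({"marker_level": 6.2, "scan_result": "normal"})
assert result == {"flag_for_review": True, "fired_rules": ["elevated_marker"]}
```

The limitation described above is also visible: facts outside the rule set — a novel symptom, an unanticipated input — simply never fire anything.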

By contrast, machine learning allows computers to come to a decision—but without being explicitly programmed. Instead, they are shown hundreds or thousands of sample data sets and told how they should be categorized, such as “cancer | no cancer,” or “stage 1 | stage 2 | stage 3 cancer.”
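A nearest-centroid classifier is about the smallest possible demonstration of learning categories from labeled samples rather than explicit rules. The two-feature vectors and labels below are invented:

```python
from statistics import mean

def train(samples):
    """samples: list of (feature_vector, label) -> one centroid per label."""
    by_label = {}
    for vec, label in samples:
        by_label.setdefault(label, []).append(vec)
    return {label: tuple(map(mean, zip(*vecs)))
            for label, vecs in by_label.items()}

def predict(centroids, vec):
    """Assign vec the label of the nearest centroid (squared distance)."""
    dist = lambda c: sum((x - y) ** 2 for x, y in zip(c, vec))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Invented two-feature samples, categorized by a human beforehand:
labeled = [((1.0, 1.2), "benign"), ((0.8, 1.0), "benign"),
           ((6.0, 5.5), "suspect"), ((5.5, 6.1), "suspect")]
model = train(labeled)
assert predict(model, (0.9, 1.1)) == "benign"
assert predict(model, (6.2, 5.8)) == "suspect"
```

No rule here says what “benign” means; the boundary emerges entirely from the labeled examples, which is also why the model is only as good as its training data.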

Read more about this, including my thoughts on machine learning, pattern recognition, expert systems, and comparisons to human intelligence, in my story for Ars Technica, “Never mind the Elon—the forecast isn’t that spooky for AI in business.”


Breached Deloitte Talks About the Cost of Cyber Breaches

Long after intruders are removed and public scrutiny has faded, the impacts from a cyberattack can reverberate over a multi-year timeline. Legal costs can cascade as stolen data is leveraged in various ways over time; it can take years to recover pre-incident growth and profitability levels; and brand impact can play out in multiple ways.

That’s from a Deloitte report, “Beneath the surface of a cyberattack: A deeper look at business impacts,” released in late 2016. The report’s contents, and other statements on cyber security from Deloitte, are ironic given the company’s huge breach reported this week.

The breach was reported on Monday, Sept. 25, and appears to have leaked confidential emails and financial documents of some of its clients. According to the Guardian,

The Guardian understands Deloitte clients across all of these sectors had material in the company email system that was breached. The companies include household names as well as US government departments. So far, six of Deloitte’s clients have been told their information was “impacted” by the hack. Deloitte’s internal review into the incident is ongoing. The Guardian understands Deloitte discovered the hack in March this year, but it is believed the attackers may have had access to its systems since October or November 2016.

The Guardian asserts that hackers gained access to the Deloitte’s global email server via an administrator’s account that was protected by only a single password. Without two-factor authentication, hackers could gain entry via any computer, as long as they guessed the right password (or obtained it via hacking, malware, or social engineering). The story continues,

In addition to emails, the Guardian understands the hackers had potential access to usernames, passwords, IP addresses, architectural diagrams for businesses and health information. Some emails had attachments with sensitive security and design details.
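A second factor closes exactly the gap the Guardian describes. The sketch below implements an RFC 6238-style time-based one-time password: an attacker who guesses or steals the administrator password still lacks the shared secret that generates the rotating code. (The secret here is invented; real deployments provision it via QR code or hardware token.)

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, at: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238-style time-based one-time password (HMAC-SHA1 variant)."""
    counter = struct.pack(">Q", at // step)          # 30-second time window
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                          # dynamic truncation
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

secret = b"shared-between-server-and-device"  # invented example secret
now = int(time.time())
code = totp(secret, now)
assert code == totp(secret, now)        # stable within the 30-second window
assert len(code) == 6 and code.isdigit()
```

With this in place, a password alone gets an attacker nothing: the code changes every 30 seconds and is never reused.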

Okay, the breach was bad. What did Deloitte have to say about these sorts of incidents? Lots. In the 2016 report, Deloitte’s researchers pointed to 14 cyberattack impact factors – half of which are the directly visible costs of breach incidents, the others which can be more subtle or hidden, and potentially never fully understood.

The “Above the Surface” incident costs include the expenses of technical investigations, consumer breach notifications, regulatory compliance, attorneys’ fees and litigation, post-breach customer protection, public relations, and cybersecurity protections. Harder to tally are the “Below the Surface” costs of insurance premium increases, increased cost to raise debt, impact of operational disruption or destruction, value of lost contract revenue, devaluation of trade name, loss of intellectual property, and lost value of customer relationships.

As the report says,

Common perceptions about the impact of a cyberattack are typically shaped by what companies are required to report publicly—primarily theft of personally identifiable information (PII), payment data, and personal health information (PHI). Discussions often focus on costs related to customer notification, credit monitoring, and the possibility of legal judgments or regulatory penalties. But especially when PII theft isn’t an attacker’s only objective, the impacts can be even more far-reaching.

Read more in my essay, “Hacked and Breached: Let’s Hear Deloitte In Its Own Words.”


The cause of the Equifax breach: Sheer human incompetence

Stupidity. Incompetence. Negligence. The unprecedented data breach at Equifax has dominated the news cycle, infuriating IT managers, security experts, legislators, and attorneys — and scaring consumers. It appears that sensitive personally identifiable information (PII) on 143 million Americans was exfiltrated, as well as PII on some non-US nationals.

There are many troubling aspects. Reports say the tools that consumers can use to see if they are affected by the breach are inaccurate. Other articles say that by using those tools, consumers waive their rights to sue Equifax. Some worry that Equifax will actually make money off this by selling affected consumers its credit-monitoring services.

Let’s look at the technical aspects, though. While details about the breach are still widely lacking, two bits of information are making the rounds. One is that Equifax followed bad password practices, allowing hackers to easily gain access to at least one server. The other is that there was a flaw in a piece of open-source software – the patch had been available for months, yet Equifax didn’t apply it.

The veracity of those two possible causes is still unclear. Even so, they point to a troubling pattern of utter irresponsibility by Equifax’s IT and security operations teams.

Bad Equifax Password Practices

Username “admin.” Password “admin.” That’s often the default for hardware, like a home WiFi router. The first thing any owner should do is change both the username and password. Every IT professional knows that. Yet the fine techies at Equifax, or at least their Argentina office, didn’t know that. According to well-known security writer Brian Krebs, earlier this week,

Earlier today, this author was contacted by Alex Holden, founder of Milwaukee, Wisc.-based Hold Security LLC. Holden’s team of nearly 30 employees includes two native Argentinians who spent some time examining Equifax’s South American operations online after the company disclosed the breach involving its business units in North America.

It took almost no time for them to discover that an online portal designed to let Equifax employees in Argentina manage credit report disputes from consumers in that country was wide open, protected by perhaps the most easy-to-guess password combination ever: “admin/admin.”

What’s more, writes Krebs,

Once inside the portal, the researchers found they could view the names of more than 100 Equifax employees in Argentina, as well as their employee ID and email address. The “list of users” page also featured a clickable button that anyone authenticated with the “admin/admin” username and password could use to add, modify or delete user accounts on the system.

and

A review of those accounts shows all employee passwords were the same as each user’s username. Worse still, each employee’s username appears to be nothing more than their last name, or a combination of their first initial and last name. In other words, if you knew an Equifax Argentina employee’s last name, you also could work out their password for this credit dispute portal quite easily.
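The pattern Krebs describes, passwords identical to usernames, is trivial to screen for. Here is a minimal Python sketch of such an audit; the account data and function name are hypothetical, for illustration only:

```python
# Hypothetical audit: flag accounts using default credentials or a
# password that merely repeats the username -- the exact weaknesses
# found in the Equifax Argentina portal.
DEFAULT_PAIRS = {("admin", "admin"), ("root", "root"), ("guest", "guest")}

def weak_accounts(accounts):
    """accounts: iterable of (username, password) tuples."""
    flagged = []
    for user, pw in accounts:
        if (user, pw) in DEFAULT_PAIRS or pw.lower() == user.lower():
            flagged.append(user)
    return flagged

# Two of these three accounts should be flagged.
print(weak_accounts([("admin", "admin"), ("jsmith", "jsmith"), ("alice", "x9!kQ")]))
```

Any organization running an internal portal could fold a check like this into routine account reviews; it takes minutes, not months.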

Incompetence. Stupidity. Appalling. Amazing. Read more about the Equifax breach in my essay, “Initial Analysis Of The Equifax Breach.”


Many on-prem ERP and CRM packages are not sufficiently secured

When was the last time most organizations discussed the security of their Oracle E-Business Suite? How about SAP S/4HANA? Microsoft Dynamics? IBM’s DB2? Discussions about on-prem server software security too often begin and end with ensuring that operating systems are at the latest level, and are current with patches.

That’s not good enough. Just as clicking on a phishing email or opening a malicious document in Microsoft Word can corrupt a desktop, so too server applications can be vulnerable. When those server applications are involved with customer records, billing systems, inventory, transactions, financials, or human resources, a hack into ERP or CRM systems can threaten an entire organization. Worse, if that hack leveraged stolen credentials, the business may never realize that competitors or criminals are stealing its data, and potentially even corrupting its records.

A new study from the Ponemon Institute points to the potential severity of the problem. Sixty percent of the respondents to the “Cybersecurity Risks to Oracle E-Business Suite” survey say that information theft, modification of data, and disruption of business processes on their company’s Oracle E-Business Suite applications would be catastrophic. While 70% of respondents said a material security or data breach due to insecure Oracle E-Business Suite applications is likely, 67% of respondents believe their top executives are not aware of this risk. (The research was sponsored by Onapsis, which sells security solutions for ERP suites, so apply a little sodium chloride to your interpretation of the study’s results.)

The study’s audience was businesses that rely upon Oracle E-Business Suite. About 24% of respondents said that it was the most critical application they ran, and altogether, 93% said it was one of their top 10 critical applications. Bearing in mind that large businesses run thousands of server applications, that’s saying something.

Yet more than half of respondents (53%) said that it was Oracle’s responsibility to ensure that its applications and platforms are safe and secure. Unless they’ve contracted with Oracle to manage their on-prem applications, and to proactively apply patches and fixes, well, they are delusional.

Another area of delusion: That software must be connected to the Internet to pose a risk. In this study, 52% of respondents agree or strongly agree that “Oracle E-Business applications that are not connected to the Internet are not a security threat.” They’ve never heard of insider threats? Credentials theft? Penetrations of enterprise networks?

What about securing other ERP/CRM packages, like those from IBM, Microsoft, and SAP? Read all about that, and more, in my story, “Organizations Must Secure Their Business-Critical ERP And CRM Server Applications.”


Cyberwar: Can ships like the USS John S. McCain be hacked?

The more advanced the military technology, the greater the opportunities for intentional or unintentional failure in a cyberwar. As Scotty says in Star Trek III: The Search for Spock, “The more they overthink the plumbing, the easier it is to stop up the drain.”

In the case of a couple of recent accidents involving the U.S. Navy, the plumbing might actually be the computer systems that control navigation. In mid-August, the destroyer U.S.S. John S. McCain rammed into an oil tanker near Singapore. A month or so earlier, a container ship hit the nearly identical U.S.S. Fitzgerald off Japan. Why didn’t those hugely sophisticated ships see the much-larger merchant vessels, and move out of the way?

There has been speculation, and only speculation, that both ships might have been victims of cyber foul play, perhaps as a test of offensive capabilities by a hostile state actor. The U.S. Navy does not rate that possibility highly, and admittedly, the odds are against it.

Even so, the military hasn’t dismissed the idea, writes Bill Gertz in the Washington Free Beacon:

On the possibility that China may have triggered the collision, Chinese military writings indicate there are plans to use cyber attacks to “weaken, sabotage, or destroy enemy computer network systems or to degrade their operating effectiveness.” The Chinese military intends to use electronic, cyber, and military influence operations for attacks against military computer systems and networks, and for jamming American precision-guided munitions and the GPS satellites that guide them, according to one Chinese military report.

The data centers of those ships are hardened and well protected. Still, given the sophistication of today’s warfare, what if systems are hacked?

Imagine what would happen if, say, foreign powers were able to break into drones or cruise missiles. This might cause them to crash prematurely, self-destruct, or hit a friendly target, or perhaps even “land” and become captured. What about disruptions to fighter aircraft, such as jets or helicopters? Radar systems? Gear carried by troops?

To learn more about these unsettling ideas, read my article, “Can Warships Like the U.S.S. John S. McCain Be Hacked?”


The GDPR says you must reveal personal data breaches

No organization likes to reveal that its network has been breached, or that its data has been stolen by hackers or disclosed through human error. Yet under the European Union’s new General Data Protection Regulation (GDPR), breaches must be disclosed.

The GDPR is a broad set of regulations designed to protect citizens of the European Union. The rules apply to every organization and business that collects or stores information about people in Europe. It doesn’t matter if the company has offices in Europe: If data is collected about Europeans, the GDPR applies.

Traditionally, most organizations hide all information about security incidents, especially if data is compromised. That makes sense: If a business is seen to be careless with people’s data, its reputation can suffer, competitors can attack, and there can be lawsuits or government penalties.

We tend to hear about security incidents only if there’s a breach sufficiently massive that the company must disclose it to regulators, or if there’s a leak to the media. Even then, the delay between the breach and its disclosure can be weeks or months, meaning that folks aren’t given enough time to engage identity theft protection companies, monitor their credit/debit payments, or even change their passwords.

Thanks to GDPR, organizations must now disclose all incidents where personal data may have been compromised – and make that disclosure quickly. Not only that, but the GDPR says that the disclosure must be to the general public, or at least to those people affected; the disclosure can’t be buried in a regulatory filing.

Important note: The GDPR says absolutely nothing about disclosing successful cyberattacks where personal data is not stolen or placed at risk. That includes distributed denial-of-service (DDoS) attacks, ransomware, theft of financial data, or espionage of intellectual property. That doesn’t mean that such cyberattacks can be kept secret, but in reality, good luck finding out about them, unless the company has other reasons to disclose. For example, after some big ransomware attacks earlier this year, some publicly traded companies revealed to investors that those attacks could materially affect their quarterly profits. This type of disclosure is mandated by financial regulation – not by the GDPR, which is focused on protecting individuals’ personal data.

The clock is ticking. To see what you must do, read my article, “With the GDPR, You Must Reveal the Personal Data Breach.”


Get ready for huge fines if you don’t comply with the GDPR

The European Union is taking computer security, data breaches, and individual privacy seriously. The EU’s General Data Protection Regulation (GDPR) will take effect on May 25, 2018 – but it’s not only a regulation for companies based in Europe.

The GDPR is designed to protect European consumers. Every business that stores information about European residents will be affected, no matter where that business operates or is headquartered. That includes the United States, and also a post-Brexit United Kingdom.

There’s a hefty penalty for non-compliance: Businesses can be fined up to 4% of their worldwide top-line revenue or €20 million, whichever is greater. No matter how you slice it, that’s going to hurt.
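To see why that formula stings at any scale, a quick back-of-the-envelope calculation helps; this Python sketch uses illustrative revenue figures, not real company data:

```python
# GDPR's maximum administrative fine is the greater of EUR 20 million
# or 4% of worldwide annual revenue (Article 83).
def max_gdpr_fine(annual_revenue_eur: float) -> float:
    return max(20_000_000, 0.04 * annual_revenue_eur)

# Mid-size firm with EUR 100M revenue: the EUR 20 million floor applies.
print(max_gdpr_fine(100_000_000))
# Hypothetical tech giant with EUR 90B revenue: 4% works out to EUR 3.6 billion.
print(max_gdpr_fine(90_000_000_000))
```

For smaller companies the €20 million floor dominates; for the giants, the 4% clause keeps the penalty from becoming a rounding error.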

A big topic within the GDPR is “data portability.” That is the notion that individuals have the right to see the information they have shared with an organization (or have given permission to be collected), in a commonly used machine-readable format. Details need to be worked out to make that effective.
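What might “a commonly used machine-readable format” look like in practice? One plausible interpretation is a JSON export of everything held about the individual. This is only a sketch; the field names and function are hypothetical, and a real export would cover far more data:

```python
# Hypothetical data-portability export: serialize an individual's
# record to JSON, a commonly used machine-readable format.
import json

def export_subject_data(record: dict) -> str:
    # ensure_ascii=False preserves accented names as-is.
    return json.dumps(record, indent=2, ensure_ascii=False)

profile = {
    "name": "María Gómez",
    "email": "maria@example.com",
    "consents": ["marketing"],
    "collected": {"age": 34, "college": "Universidad de Sevilla"},
}
print(export_subject_data(profile))
```

The harder engineering problem isn’t serialization, it’s gathering a person’s data from every system that holds a copy of it.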

Another topic is that individuals have the right to make changes to some of their information, or to delete all or part of their information. No, customers can’t delete their transaction history, for example, or delete that they owe the organization money. However, they may choose to delete information that the organization may have collected, such as their age, where they went to college, or the names of their children. They also have the right to request corrections to the data, such as a misspelled name or an incorrect address.

That’s not as trivial as it may seem. It is not uncommon for organizations to have multiple versions of, say, the spelling of a person’s name, or to store the same information with different formatting. This has implications when records don’t match. In some countries, there have been problems with a traveler’s passport information not exactly matching the information on a driver’s license, airline ticket, or frequent-traveler program. While the variations might appear trivial to a human (a missing middle name, a missing accent mark, an extra space), they can be enough to throw off automated data-processing systems, which therefore can’t match the traveler to a ticket. Without rules like the GDPR, organizations haven’t been required to make it easy, or even possible, for customers to make corrections.
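A missing accent or extra space defeats naive string equality. One common mitigation is to normalize both strings before comparing; this is a minimal sketch, not a full record-matching system, and the function name is my own:

```python
# Normalize a name before comparison: strip accents, collapse
# whitespace, and ignore case.
import unicodedata

def normalize(name: str) -> str:
    # NFKD decomposition splits accented letters into base + combining mark.
    decomposed = unicodedata.normalize("NFKD", name)
    no_accents = "".join(c for c in decomposed if not unicodedata.combining(c))
    return " ".join(no_accents.lower().split())

# An accent and a double space no longer block the match...
assert normalize("José  Pérez") == normalize("Jose Perez")
# ...but a genuinely different name (missing middle name) still differs.
assert normalize("Ann Marie Smith") != normalize("Ann Smith")
```

Real matching systems go further (transliteration, nickname tables, fuzzy matching), but even this small step eliminates a large class of spurious mismatches.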

For more about this, read my article, “The GDPR is coming.”


Cybersecurity pros are hard to get — here’s how to find and keep them

It’s difficult to recruit qualified security staff because there are more openings than humans to fill them. It’s also difficult to retain IT security professionals because someone else is always hiring. But don’t worry: Unless you work for an organization that refuses to pay the going wage, you’ve got this.

Two recent studies present dire, but somewhat conflicting, views of the availability of qualified cybersecurity professionals over the next four or five years. The first study is the Global Information Security Workforce Study from the Center for Cyber Safety and Education, which predicts a shortfall of 1.8 million cybersecurity workers by 2022. Among the highlights from that research, which drew on data from 19,000 cybersecurity professionals:

  • The cybersecurity workforce gap will hit 1.8 million by 2022. That’s a 20 percent increase since 2015.
  • Sixty-eight percent of workers in North America believe this workforce shortage is due to a lack of qualified personnel.
  • A third of hiring managers globally are planning to increase the size of their departments by 15 percent or more.
  • There aren’t enough workers to address current threats, according to 66 percent of respondents.
  • Around the globe, 70 percent of employers are looking to increase the size of their cybersecurity staff this year.
  • Nine in ten security specialists are male. The majority have technical backgrounds, suggesting that recruitment channels and tactics need to change.
  • While 87 percent of cybersecurity workers globally did not start in cybersecurity, 94 percent of hiring managers indicate that security experience in the field is an important consideration.

The second study is the Cybersecurity Jobs Report, created by the editors of Cybersecurity Ventures. Here are some highlights:

  • There will be 3.5 million cybersecurity job openings by 2021.
  • Cybercrime will more than triple the number of job openings over the next five years. India alone will need 1 million security professionals by 2020 to meet the demands of its rapidly growing economy.
  • Today, the U.S. employs nearly 780,000 people in cybersecurity positions. But a lot more are needed: There are approximately 350,000 current cybersecurity job openings, up from 209,000 in 2015.

So, whether you’re hiring a chief information security officer or a cybersecurity operations specialist, expect a lot of competition. What can you do about it? How can you beat the staffing shortage? Read my suggestion in “How to beat the cybersecurity staffing shortage.”


Ransomware dominates the Black Hat 2017 conference

“Ransomware! Ransomware! Ransomware!” Those words may lack the timeless resonance of Steve Ballmer’s epic “Developers! Developers! Developers!” scream in 2000, but ransomware was seemingly an obsession at Black Hat USA 2017, happening this week in Las Vegas.

There are good reasons for attendees and vendors to be focused on ransomware. For one thing, ransomware is real. Rates of ransomware attacks have exploded off the charts in 2017, helped in part by the disclosures of top-secret vulnerabilities and hacking tools allegedly stolen from the United States’ three-letter-initial agencies.

For another, the costs of ransomware are significant. Looking only at a few attacks in 2017, including WannaCry, Petya, and NotPetya, corporations have been forced to revise their earnings downward to account for IT downtime and lost productivity. Those include Reckitt, Nuance, and FedEx. Those types of impact grab the attention of every CFO and every CEO.

Another analyst I spoke with at Black Hat observed that just about every vendor on the expo floor had managed to incorporate ransomware into its magic show. My quip: “I wouldn’t be surprised to see a company marketing network cables as specially designed to protect against ransomware.” His quick retort: “The queue would be half a mile long for samples. They’d make a fortune.”

Read my article, “A Singular Message about Malware,” to learn what organizations can and should do to handle ransomware. It’s not rocket science, and it’s not brain surgery.


The billion-dollar cost of extreme cyberattacks

A major global cyberattack could cause US$53 billion in economic losses. That’s on the scale of a catastrophic disaster like 2012’s Hurricane Sandy.

Lloyds of London, the famous insurance company, partnered with Cyence, a risk analysis firm specializing in cybersecurity. The result is a fascinating report, “Counting the Cost: Cyber Exposure Decoded.” This partnership makes sense: Lloyds needs to understand the risk before deciding whether to underwrite a venture — and when it comes to cybersecurity, this is an emerging science. Traditional actuarial methods used to calculate the risk of a cargo ship falling prey to pirates, or an office block to a devastating flood, simply don’t apply.

Lloyds says that in 2016, cyberattacks cost businesses as much as $450 billion. While insurers can help organizations manage that risk, the risk is increasing. The report points to those risks covering “everything from individual breaches caused by malicious insiders and hackers, to wider losses such as breaches of retail point-of-sale devices, ransomware attacks such as BitLocker, WannaCry and distributed denial-of-service attacks such as Mirai.”

The worry? Despite writing $1.35 billion in cyberinsurance in 2016, “insurers’ understanding of cyber liability and risk aggregation is an evolving process as experience and knowledge of cyber-attacks grows. Insureds’ use of the internet is also changing, causing cyber-risk accumulation to change rapidly over time in a way that other perils do not.”

And that is why the lack of time-tested actuarial tables can cause disaster, says Lloyds. “Traditional insurance risk modelling relies on authoritative information sources such as national or industry data, but there are no equivalent sources for cyber-risk and the data for modelling accumulations must be collected at scale from the internet. This makes data collection, and the regular update of it, key components of building a better understanding of the evolving risk.”

Huge Liability Costs

The “Counting the Cost” report makes for some depressing reading. Here are three of the key findings, quoted verbatim. Read the 56-page report to dig deeply into the scenarios, and the damages.

  • The direct economic impacts of cyber events lead to a wide range of potential economic losses. For the cloud service disruption scenario in the report, these losses range from US$4.6 billion for a large event to US$53.1 billion for an extreme event; in the mass software vulnerability scenario, the losses range from US$9.7 billion for a large event to US$28.7 billion for an extreme event.
  • Economic losses could be much lower or higher than the average in the scenarios because of the uncertainty around cyber aggregation. For example, while average losses in the cloud service disruption scenario are US$53 billion for an extreme event, they could be as high as US$121.4 billion or as low as US$15.6 billion, depending on factors such as the different organisations involved and how long the cloud-service disruption lasts for.
  • Cyber-attacks have the potential to trigger billions of dollars of insured losses. For example, in the cloud- services scenario insured losses range from US$620 million for a large loss to US$8.1 billion for an extreme loss. For the mass software vulnerability scenario, the insured losses range from US$762 million (large loss) to US$2.1 billion (extreme loss).

Read more in my article for Zonic News, “Lloyds Of London Estimates The Billion-Dollar Cost Of Extreme Cyberattacks.”


Learn datacenter principles from ISO 26262 standards for automotive safety engineering

Automotive ECU (engine control unit)


In my everyday life, I trust that if I make a panic stop, my car’s antilock brake system will work. The hardware, software, and servos will work together to ensure that my wheels don’t lock up—helping me avoid an accident. If that’s not sufficient, I trust that the impact sensors embedded behind the front bumper will fire the airbag actuators with the correct force to protect me from harm, even though they’ve never been tested. I trust that the bolts holding the seat in its proper place won’t shear. I trust the seat belts will hold me tight, and that cargo in the trunk won’t smash through the rear seats into the passenger cabin.

Engineers working on nearly every automobile sold worldwide ensure that their work practices conform to ISO 26262. That standard describes how to manage the functional safety of the electrical and electronic systems in passenger cars. A significant portion of ISO 26262 involves ensuring that software embedded into cars—whether in the emissions system, the antilock braking systems, the security systems, or the entertainment system—is architected, coded, and tested to be as reliable as possible.

I’ve worked with ISO 26262 and related standards on a variety of automotive software security projects. Don’t worry, we’re not going to get into the hairy bits of those standards because unless you are personally designing embedded real-time software for use in automobile components, they don’t really apply. Also, ISO 26262 is focused on the real-world safety of two-ton machines hurtling at 60-plus miles per hour—that is, things that will kill or hurt people if they don’t work as expected.

Instead, here are five IT systems management ideas that are inspired by ISO 26262. We’ll help you ensure your systems are designed to be Reliable, with a capital R, and Safe, with a capital S.

Read the list, and more, in my article for HP Enterprise Insights, “5 lessons for data center pros, inspired by automotive engineering standards.”


Cybersecurity has a problem with women — and many opportunities

MacKenzie Brown has nailed the problem — and has good ideas for the solution. As she points out in her three-part blog series, “The Unicorn Extinction” (links in a moment):

  • Overall, [only] 25% of women hold occupations in technology alone.
  • Women’s Society of Cyberjutsu (WSC), a nonprofit for empowering women in cybersecurity, states that females make up 11% of the cybersecurity workforce while (ISC)2, a non-profit specializing in education and certification, reports a whopping estimation of 10%.
  • Lastly, put those current numbers against the 1 million employment opportunities predicted for 2017, with a global demand of up to 6 million by 2019.

While many would decry the systemic sexism and misogyny in cybersecurity, Ms. Brown sees opportunity:

…the cybersecurity industry, a market predicted to have global expenditure exceeding $1 trillion between now and 2021(4), will have plenty of demand for not only information security professionals. How can we proceed to find solutions and a fixed approach towards resolving this gender gap and optimizing this employment fluctuation? Well, we promote unicorn extinction.

The problem of a lack of technically developed and specifically qualified women in Cybersecurity is not unique to this industry alone; however the proliferation of women in tangential roles associated with our industry shows that there is a barrier to entry, whatever that barrier may be. In the next part of this series we will examine the ideas and conclusions of senior leadership and technical women in the industry in order to gain a woman’s point of view.

She continues to write about analyzing the problem from a woman’s point of view:

Innovating solutions to improve this scarcity of female representation, requires breaking “the first rule about Fight Club; don’t talk about Fight Club!” The “Unicorn Law”, this anecdote, survives by the circling routine of the “few women in Cybersecurity” invoking a conversation about the “few women in Cybersecurity” on an informal basis. Yet, driving the topic continuously and identifying the values will ensure more involvement from the entirety of the Cybersecurity community. Most importantly, the executive members of Fortune 500 companies who apply a hiring strategy which includes diversity, can begin to fill those empty chairs with passionate professionals ready to impact the future of cyber.

Within any tale of triumph, obstacles are inevitable. Therefore, a comparative analysis of successful women may be the key to balancing employment supply and demand. I had the pleasure of interviewing a group of women; all successful, eclectic in roles, backgrounds of technical proficiency, and amongst the same wavelength of empowerment. These interviews identified commonalities and distinct perspectives on the current gender gap within the technical community.

What’s the Unicorn thing?

Ms. Brown writes,

During hours of research and writing, I kept coming across a peculiar yet comically exact tokenism deemed, The Unicorn Law. I had heard this in my industry before, attributed to me, “unicorn,” which is described (even in the cybersecurity industry) as: a woman-in-tech, eventually noticed for their rarity and the assemblage toward other females within the industry. In technology and cybersecurity, this is a leading observation many come across based upon the current metrics. When applied to the predicted demand of employment openings for years to come, we can see an enormous opportunity for women.

Where’s the opportunity?

She concludes,

There may be a notable gender gap within cybersecurity, but there also lies great opportunity as well. Organizations can help narrow the gap, but there is also tremendous opportunity in women helping each other as well.

Some things that companies can do to help, include:

  • Providing continuous education, empowering and encouraging women to acquire new skills through additional training and certifications.
  • Using this development training to promote from within.
  • Reaching out to communities to encourage young women from junior to high school levels to consider cybersecurity as a career.
  • Seeking out women candidates for jobs, both independently and utilizing outsourced recruitment if need be.
  • At events, refusing to field all-male panels.
  • And most importantly, encouraging the discussion about the benefits of a diverse team.

If you care about the subject of gender opportunity in cybersecurity, I urge you to read these three essays.

The Unicorn Extinction Series: An Introspective Analysis of Women in Cybersecurity, Part 1

The Unicorn Extinction Series: An Introspective Analysis of Women in Cybersecurity, Part 2

The Unicorn Extinction Series: An Introspective Analysis of Women in Cybersecurity, Part 3


Tell your customers about your data breaches!

When AA — a large automobile club and insurer in the United Kingdom — was hacked in April, did it tell customers that their data was stolen? No, not right away. The company was completely mum for months, in part because it didn’t believe the stolen data was sensitive. AA’s customers only learned about it when information about the breach was publicly disclosed in late June.

There are no global laws that require companies to disclose information about data thefts to customers. There are similarly no global laws that require companies to disclose defects in their software or hardware products, including those that might introduce security vulnerabilities.

It’s obvious why companies wouldn’t want to disclose problems with their products (such as bugs or vulnerabilities) or with their back-end operations (such as system breaches or data exfiltration). If customers think you’re insecure, they’ll leave. If investors think you’re insecure, they’ll leave. If competitors think you’re insecure, they’ll pounce on it. And if lawyers or regulators think you’re insecure, they might file lawsuits.

No matter how you slice it, disclosures about problems are not good for business. Far better to share information about new products, exciting features, customer wins, market share increases, additional platforms, and pricing promotions.

That’s not to say that all companies hide bad news. Microsoft, for example, is considered to be very proactive in disclosing flaws in its products and platforms, including those that affect security. When Microsoft learned about the Server Message Block (SMB) flaw that enabled malware like WannaCry and Petya, it quickly issued a Security Bulletin in March that explained the problem — and supplied the necessary patches. If customers had read the bulletin and applied the patches, those ransomware outbreaks wouldn’t have occurred.

When you get outside the domain of large software companies, such disclosures are rare. Automobile manufacturers do share information about vehicle defects with regulators, as per national laws, but resist recalls because of the expense and bad publicity. Beyond that, companies share information about problems with products, services, and operations unwillingly – and with delays.

In the AA case, as SC Magazine wrote,

The leaky database was first discovered by the AA on April 22 and fixed by April 25. In the time that it had been exposed, it had reportedly been accessed by several unauthorised parties. An investigation by the AA deemed the leaky data to be not sensitive, meaning that the organisation did not feel it necessary to tell customers.

Read more about this in my piece for Zonic News, “Tell Customers about Vulnerabilities – And Data Breaches.”


Watch out for threatening emails from Anonymous or Lizard Squad

The Federal Bureau of Investigation is warning about potential attacks from a hacking group called Lizard Squad. This information, released today, was labeled “TLP:Green” by the FBI and CERT, which means that it shouldn’t be publicly shared – but I am sharing it because this information was published on a publicly accessible blog run by the New York State Bar Association. I do not know why distribution of this information was restricted.

The FBI said:

Summary

An individual or group claiming to be “Anonymous” or “Lizard Squad” sent extortion emails to private-sector companies threatening to conduct distributed denial of service (DDoS) attacks on their network unless they received an identified amount of Bitcoin. No victims to date have reported DDoS activity as a penalty for non-payment.

Threat

In April and May 2017, at least six companies received emails claiming to be from “Anonymous” and “Lizard Squad” threatening their companies with DDoS attacks within 24 hours unless the company sent an identified amount of Bitcoin to the email sender. The email stated the demanded amount of Bitcoin would increase each day the amount went unpaid. No victims to date have reported DDoS activity as a penalty for nonpayment.

Reporting on schemes of this nature go back at least three years.

In 2016, a group identifying itself as “Lizard Squad” sent extortion demands to at least twenty businesses in the United Kingdom, threatening DDoS attacks if they were not paid five Bitcoins (as of 14 June, each Bitcoin was valued at 2,698 USD). No victims reported actual DDoS activity as a penalty for non-payment.

Between 2014 and 2015, a cyber extortion group known as “DDoS ‘4’ Bitcoin” (DD4BC) victimized hundreds of individuals and businesses globally. DD4BC would conduct an initial, demonstrative low-level DDoS attack on the victim company, followed by an email message introducing themselves, demanding a ransom paid in Bitcoins, and threatening a higher level attack if the ransom was not paid within the stated time limit. While no significant disruption or DDoS activity was noted, it is probable companies paid the ransom to avoid the threat of DDoS activity.

Background

Lizard Squad is a hacking group known for their DDoS attacks primarily targeting gaming-related services. On 25 December 2014, Lizard Squad was responsible for taking down the Xbox Live and PlayStation networks. Lizard Squad also successfully conducted DDoS attacks on the UK’s National Crime Agency’s (NCA) website in 2015.

Anonymous is a hacking collective known for several significant DDoS attacks on government, religious, and corporate websites conducted for ideological reasons.

Recommendations

The FBI suggests precautionary measures to mitigate DDoS threats, including but not limited to:
  • Have a DDoS mitigation strategy ready ahead of time.
  • Implement an incident response plan that includes DDoS mitigation and practice this plan before an actual incident occurs. This plan may involve external organizations such as your Internet Service Provider, technology companies that offer DDoS mitigation services, and law enforcement.
  • Ensure your plan includes the appropriate contacts within these external organizations. Test activating your incident response team and third party contacts.
  • Implement a data back-up and recovery plan to maintain copies of sensitive or proprietary data in a separate and secure location. Backup copies of sensitive data should not be readily accessible from local networks.
  • Ensure upstream firewalls are in place to block unsolicited incoming User Datagram Protocol (UDP) packets.
  • Ensure software or firmware updates are applied as soon as the device manufacturer releases them.
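As a concrete illustration of the firewall recommendation above, here is a minimal sketch of iptables rules for a Linux border host. The interface name and the DNS exception are assumptions for the example; this is not a drop-in policy and should be adapted to your own network.

```shell
# Sketch only: example rules illustrating "block incoming UDP packets."
# Interface name (eth0) is an assumption -- substitute your own.

# Allow UDP replies to connections this host initiated (e.g., its own DNS lookups).
iptables -A INPUT -i eth0 -p udp -m state --state ESTABLISHED,RELATED -j ACCEPT

# If this host runs a public UDP service (for example, a DNS server),
# explicitly allow that one port; otherwise leave this commented out.
# iptables -A INPUT -i eth0 -p udp --dport 53 -j ACCEPT

# Drop all other unsolicited inbound UDP, a common DDoS amplification vector.
iptables -A INPUT -i eth0 -p udp -j DROP
```

In practice, blocking at the upstream (ISP or edge) firewall is preferable, since packets dropped on the host itself have already consumed your link’s bandwidth.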

If you have received one of these demands:

  • Do not make the demanded payment.
  • Retain the original emails with headers.
  • If applicable, maintain a timeline of the attack, recording all times and content of the attack.

The FBI encourages recipients of this document to report information concerning suspicious or criminal activity to their local FBI field office or the FBI’s 24/7 Cyber Watch (CyWatch). Field office contacts can be identified at www.fbi.gov/contact-us/field. CyWatch can be contacted by phone at (855) 292-3937 or by e-mail at email hidden; JavaScript is required. When available, each report submitted should include the date, time, location, type of activity, number of people, type of equipment used for the activity, the name of the submitting company or organization, and a designated point of contact. Press inquiries should be directed to the FBI’s National Press Office at email hidden; JavaScript is required or (202) 324-3691.

With Petya, Malware Means Cyberwar

Analysts believe that Petya is something new: This malware pretends to be plain old ransomware that asks for $300 to unlock encrypted data – but is actually intended to steal passwords and destroy data. In other words, it’s a true weaponized cyberattack.


Petya appears to have been modified specifically to make the encryption of user data irreversible by overwriting the master boot record. The attackers’ email address also appears to have been taken offline, preventing ransoms from being paid.


Nicholas Weaver, a security researcher at the International Computer Science Institute and a lecturer at UC Berkeley, said Petya appears to have been well engineered to be destructive while masquerading as a ransomware strain.

Weaver noted that Petya’s ransom note includes the same Bitcoin address for every victim, whereas most ransomware strains create a custom Bitcoin payment address for each victim.

Also, he said, Petya urges victims to communicate with the extortionists via an email address, while the majority of ransomware strains require victims who wish to pay or communicate with the attackers to use Tor, a global anonymity network that can be used to host Web sites which can be very difficult to take down.

“I’m willing to say with at least moderate confidence that this was a deliberate, malicious, destructive attack or perhaps a test disguised as ransomware,” Weaver said. “The best way to put it is that Petya’s payment infrastructure is a fecal theater.”


After an analysis of the encryption routine of the malware used in the Petya/ExPetr attacks, we have concluded that the threat actor cannot decrypt victims’ disks, even if a payment was made.

This supports the theory that this malware campaign was not designed as a ransomware attack for financial gain. Instead, it appears it was designed as a wiper pretending to be ransomware.

Different than WannaCry

Both Petya and WannaCry exploit a flaw present in many versions of Windows. Microsoft learned about the flaw after NSA data was stolen, and quickly issued an effective patch. However, many customers have not installed the patch, and therefore their systems remain vulnerable. Complicating the situation, many of those Windows systems run pirated copies of the operating system, which means the system owners may never have been notified about the vulnerability and patch – and not all could install the patch in any case, because Microsoft verifies the Windows license during upgrades.

Learn more in my article for Zonic News, “Petya Isn’t Ransomware — It’s Cyberwar.”


Just say No to Flash, CNN

CNN didn’t get the memo. After all the progress that’s been made to eliminate the requirement for Adobe’s Flash player on so many streaming-media websites, CNNgo still requires the problematic plug-in, as you can see from the screen I saw just a few minutes ago.


Have you not heard of HTML5, oh, CNN programmers? Perhaps the techies at CNN should read “Why Adobe Flash is a Security Risk and Why Media Companies Still Use it.” After that, “Gone in a Flash: Top 10 Vulnerabilities Used by Exploit Kits.”

Yes, Adobe keeps patching Flash to make it less insecure. Lots and lots of patches, says the story “Patch Tuesday: Adobe Flash Player receives updates for 13 security issues,” published in January. That comes on the heels of 17 security flaws patched in December 2016.

And yes, there were more critical patches issued on June 13, 2017. Flash. Just say no. Goodbye, CNNgo, until you stop requiring that prospective customers utilize such a buggy, flawed media player.

And no, I didn’t enable the use of Flash. Guess I’ll never see what CNN wanted to show me. No great loss.


Business advice for chief information security officers (CISOs)

An organization’s Chief Information Security Officer’s job isn’t about ones and zeros. It’s not about unmasking cybercriminals. It’s about reducing risk for the organization, and enabling executives and line-of-business managers to innovate and compete safely and securely. While the CISO is often seen as the person who loves to say “No,” in reality the CISO wants to say “Yes” — the job, after all, is to make the company thrive.

Meanwhile, the CISO has a small staff, a tight budget, and the need to demonstrate performance metrics and ROI. What’s it like in the real world? What are the biggest challenges? We asked two former CISOs (it’s hard to get current CISOs to speak on the record), both of whom worked in the trenches and now advise CISOs on a daily basis.

To Jack Miller, a huge challenge is the speed of decision-making in today’s hypercompetitive world. Miller, currently Executive in Residence at Norwest Venture Partners, conducts due diligence and provides expertise on companies in the cyber security space. Most recently he served as chief security strategy officer at ZitoVault Software, a startup focused on safeguarding the Internet of Things.

Before his time at ZitoVault, Miller was the head of information protection for Auto Club Enterprises. That’s the largest AAA conglomerate with 15 million members in 22 states. Previously, he served as the CISO of the 5th and 11th largest counties in the United States, and as a security executive for Pacific Life Insurance.

“Big decisions are made in the blink of an eye,” says Miller. “Executives know security is important, but don’t understand how any business change can introduce security risks to the environment. As a CISO, you try to get in front of those changes – but more often, you have to clean up the mess afterwards.”

Another CISO, Ed Amoroso, is frustrated by the business challenge of justifying a security ROI. Amoroso is the CEO of TAG Cyber LLC, which provides advanced cybersecurity training and consulting for global enterprise and U.S. Federal government CISO teams. Previously, he was Senior Vice President and Chief Security Officer for AT&T, and managed computer and network security for AT&T Bell Laboratories. Amoroso is also an Adjunct Professor of Computer Science at the Stevens Institute of Technology.

Amoroso explains, “Security is an invisible thing. I say that I’m going to spend money to prevent something bad from happening. After spending the money, I say, ta-da, look, I prevented that bad thing from happening. There’s no demonstration. There’s no way to prove that the investment actually prevented anything. It’s like putting a “This House is Guarded by a Security Company” sign in front of your house. Maybe a serial killer came up the street, saw the sign, and moved on. Maybe not. You can’t put in security and say, here’s what didn’t happen. If you ask, 10 out of 10 CISOs will say demonstrating ROI is a huge problem.”

Read more in my article for Global Banking & Finance Magazine, “Be Prepared to Get Fired! And Other Business Advice for CISOs.”


Streamlining the cybersecurity insurance application process

Have you ever suffered through the application process for cybersecurity insurance? You know that “suffered” is the right word because of a triple whammy.

  • First, the general risk factors involved in cybersecurity are constantly changing. Consider the rapid rise in ransomware, for example.
  • Second, it is extremely labor-intensive for businesses to document how “safe” they are, in terms of their security maturity, policies, practices and technology.
  • Third, it’s hard for insurers (the underwriters and their actuaries) to feel confident that they truly understand how risky a potential customer is — information and knowledge that’s required for quoting a policy that offers sufficient coverage at reasonable rates.

That is, of course, assuming that everyone is on the same page and agrees that cybersecurity insurance is important to consider for the organization. Is cybersecurity insurance a necessary evil for every company to consider? Or, is it only a viable option for a small few? That’s a topic for a separate conversation. For now, let’s assume that you’re applying for insurance.

For their part, insurance carriers aren’t equipped to go into your business and examine your IT infrastructure. They won’t examine firewall settings or audit your employee anti-phishing training materials. Instead, they rely on your answers to questionnaires developed and interpreted by their own engineers. Unfortunately, those questionnaires may not capture the nuances, especially if you’re in a vertical where the risks are especially high, and so are the rewards for successful hackers.

According to InformationAge, 77% of ransomware attacks occur in four industries: business & professional services (28%), government (19%), healthcare (15%) and retail (15%). In 2016 and 2017, healthcare organizations such as hospitals and medical practices were repeatedly hit by ransomware. Give that data to the actuaries, and they might require those types of organizations to fill out even more questionnaires.

About those questionnaires? “Applications tend to have a lot of yes/no answers… so that doesn’t give the entire picture of what the IT framework actually looks like,” says Michelle Chia, Vice President, Zurich North America. She explained that an insurance company’s internal assessment engineers have to dig deeper to understand what is really going on: “They interview the more complex clients to get a robust picture of what the combination of processes and controls actually looks like and how secure the network and the IT infrastructure are.”

Read more in my latest for ITSP Magazine, “How to Streamline the Cybersecurity Insurance Process.”


Hacking can kill — and cyberattacks can lead to warfare

Two Indian Air Force pilots are dead, possibly because of a cyberattack on their Sukhoi 30 fighter jet. According to the Economic Times of India,

Squadron leader D Pankaj and Flight Lieutenant S Achudev, the pilots of the Su-30 aircraft, had sustained fatal injuries when the aircraft crashed approximately 60 km from Tezpur Airbase on May 23. A court of Inquiry has already been ordered to investigate the cause of the accident.

According to defence spokesperson S Ghosh, analysis of the Flight Data Recorder of the aircraft and certain other articles recovered from the crash site revealed that the pilots could not initiate ejection before crash. The wreckage of the aircraft was located on May 26.

What does that have to do with hackers? Well, the aircraft was flying close to India’s border with China, and according to reports, the Sukhoi’s two pilots were possibly victims of cyberwarfare. Says the Indian Defense News,

Analysts based in the vicinity of New York and St Petersburg warn that the loss, days ago, of an advanced and mechanically certified as safe, Sukhoi 30 fighter aircraft, close to the border with China may be the result of “cyber-interference with the onboard computers” in the cockpit. This may explain why even the pilots may have found it difficult to activate safety ejection mechanisms, once it became obvious that the aircraft was in serious trouble, as such mechanisms too could have been crippled by computer malfunctions induced from an outside source.

You’ve undoubtedly heard about the troubles going on with Qatar in the Middle East, and it might lead to a shooting war. In mid-May, stories were published on the Qatar News Agency that outraged its Arab neighbors. According to CNN,

The Qatari government has said a May 23 news report on its Qatar News Agency attributed false remarks to the nation’s ruler that appeared friendly to Iran and Israel and questioned whether President Donald Trump would last in office.

Soon thereafter, three Arab countries cut off ties and boycotted the country, which borders Saudi Arabia on the Persian Gulf. It’s now believed that those stories were “fake news” planted by hackers. Were they state-sponsored agents? It’s too soon to tell. However, given how quickly Bahrain, Saudi Arabia, and the United Arab Emirates reacted — and given how hard Saudi Arabia is fighting in Yemen — this is troubling. Could keystrokes from hackers lead to the drumbeat of war?

Read more in my latest piece for Zonic News, “Cyberattacks Can Lead to Real Warfare, and to Real Deaths.”


It’s suddenly harder to do tech business in China

Doing business in China is always a rollercoaster. For Internet businesses, the ride just became more thrilling.

The Chinese government has rolled out new cybersecurity laws, which begin affecting foreign companies today, June 1, 2017. The new rules give the Chinese government more control over Internet companies. The government says the rules are designed to help address threats caused by terrorists and hackers – but the terms are broad enough to confuse anyone doing business in China.

Two of the biggest requirements of the new legislation:

  • Companies that do business in China must store all data related to that business, including customer data, within China.
  • Consumers must register with their real names on retail sites, community sites, news sites, and social media, including messaging services.

According to many accounts, the wording of the new law is too ambiguous to assure compliance. Perhaps the drafters were careless, or lacked an understanding of technical issues. However, it’s possible that the ambiguity is intentional, giving Chinese regulators room to selectively apply the new laws based on political or business objectives. To quote coverage in The New York Times,

One instance cited by Mats Harborn, president of the European Union Chamber of Commerce in China, in a round-table discussion with journalists, was that the government said it wanted to regulate “critical information infrastructure,” but had not defined what that meant.

“The way it’s enforced and implemented today and the way it might be enforced and implemented in a year is a big question mark,” added Lance Noble, the chamber’s policy and communications manager. He warned that uncertainty surrounding the law could make foreign technology firms reluctant to bring their best innovations to China.

Learn more about the new rules facing tech companies in “The New Cybersecurity Requirement of Doing Business in China,” published today on Zonic News.


Malware in movie subtitles is coming to a mobile near you

Movie subtitles — those are the latest attack vector for malware. According to Check Point Software, by crafting malicious subtitle files, which are then downloaded by a victim’s media player, attackers can take complete control over any type of device via vulnerabilities found in many popular streaming platforms. Those media players include VLC, Kodi (XBMC), Popcorn-Time and strem.io.

I was surprised to see that this would work, because I thought that text subtitles were just that – text. Silly me. Subtitles embedded into media files (like mp4 movies) can be encoded in dozens of different formats, each with unique features, capabilities, metadata, and payloads. The data and metadata in those subtitles can be hard to analyze, in part because of the many ways the subtitles are stored in a repository. To quote Check Point:

These subtitles repositories are, in practice, treated as a trusted source by the user or media player; our research also reveals that those repositories can be manipulated and be made to award the attacker’s malicious subtitles a high score, which results in those specific subtitles being served to the user. This method requires little or no deliberate action on the part of the user, making it all the more dangerous.

Unlike traditional attack vectors, which security firms and users are widely aware of, movie subtitles are perceived as nothing more than benign text files. This means users, Anti-Virus software, and other security solutions vet them without trying to assess their real nature, leaving millions of users exposed to this risk.

According to Check Point, more than 200 million users (or devices) are potentially vulnerable to this exploit. The risk?

Damage: By conducting attacks through subtitles, hackers can take complete control over any device running them. From this point on, the attacker can do whatever he wants with the victim’s machine, whether it is a PC, a smart TV, or a mobile device. The potential damage the attacker can inflict is endless, ranging anywhere from stealing sensitive information, installing ransomware, mass Denial of Service attacks, and much more.

Here’s an infographic from Check Point:

Read more about this vulnerability in my latest for Zonic News, “Malware Hides in Plain Sight on the Small Screen.”