Oracle Database is the world’s most popular enterprise database. This year’s addition of autonomous operating capabilities to the cloud version of Oracle Database is one of the most important advances in the database’s history. What does it mean for a database to be “autonomous”? Let’s look under the covers of Oracle Autonomous Database at just a few of the ways it lives up to that name.

Oracle Autonomous Database is a fully managed cloud service. Like all cloud services, the database runs on servers in cloud data centers—in this case, on hardware called Oracle Exadata Database Machine that’s specifically designed and tuned for high-performance, high-availability workloads. The tightly controlled and optimized hardware enables some of the autonomous functionality we’ll discuss shortly.

While the autonomous capability of Oracle Autonomous Database is new, it builds on scores of automation features that Oracle has been building into its Oracle database software and the Exadata database hardware for years. The goals of the autonomous functions are twofold: First, to lower operating costs by reducing costly and tedious manual administration, and second, to improve service levels through automation and fewer human errors.

My essay in Forbes, “What Makes Oracle Autonomous Database Truly ‘Autonomous,’” shows how the capabilities in Oracle Autonomous Database change the game for database administrators (DBAs). The benefit: DBAs are freed from mundane tasks and can focus on higher-value work.

At too many government agencies and companies, the security mindset, even though it’s never spoken, is that “We’re not a prime target, our data isn’t super-sensitive.” Wrong. The reality is that every piece of personal data adds to the picture that potential criminals or state-sponsored actors are painting of individuals.

And that makes your data a target. “Just because you think your data isn’t useful, don’t assume it’s not valuable to someone, because they’re looking for columns, not rows,” says Hayri Tarhan, Oracle regional vice president for public sector security.

Here’s what Tarhan means by columns not rows: Imagine that the bad actors are storing information in a database (which they probably are). What hackers want in many data breaches is more information about people already in that database. They correlate new data with the old, using big data techniques to fill in the columns, matching up data stolen from different sources to form a more-complete picture.

That picture is potentially much more important and more lucrative than finding out about new people and creating new, sparsely populated data rows. So, every bit of data, no matter how trivial it might seem, is important when it comes to filling the empty squares.
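To make the column-filling concrete, here is a minimal Python sketch with invented breach data and invented field names (this is an illustration of the correlation idea, not any real attack tool): records from separate sources, merged on one shared identifier, each contribute new columns to the same profile.

```python
# Two hypothetical breach dumps, each knowing different things
# about the same person.
breach_a = [{"email": "pat@example.com", "dob": "1970-01-01"}]
breach_b = [{"email": "pat@example.com", "employer": "Acme Corp"}]

profiles = {}
for record in breach_a + breach_b:
    # Merge every record into one profile per email address,
    # filling in whatever columns were previously empty.
    profiles.setdefault(record["email"], {}).update(record)

print(profiles["pat@example.com"])
# {'email': 'pat@example.com', 'dob': '1970-01-01', 'employer': 'Acme Corp'}
```

Neither source alone reveals much; the merged profile reveals more than either, which is exactly why "trivial" data has value.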

Read more about this – and how machine learning can help – in my article in Forbes, “Data Thieves Want Your Columns—Not Your Rows.”

Blockchain and the cloud go together like organic macaroni and cheese. What’s the connection? Choosy shoppers would like to know that their organic food is tracked from farm to shelf, to make sure they’re getting what’s promised on the label. Blockchain provides an immutable ledger perfect for tracking cheese, for example, as it goes from dairy to cheesemaker to distributor to grocer.

Oracle’s new Blockchain Cloud Service provides a platform for each participant in a supply chain to register transactions. Within that blockchain, each participant—and regulators, if appropriate—can review those transactions to ensure that promises are being kept, and that data has not been tampered with. Use cases range from supply chains and financial transactions to data sharing inside a company.

Launched this month, Oracle Blockchain Cloud Service has the features that an enterprise needs to move from experimenting with blockchain to creating production applications. It addresses some of the biggest challenges facing developers and administrators, such as mastering the peer-to-peer protocols used to link blockchain servers, ensuring resiliency and high availability, and ensuring that security is solid. For example, developers previously had to code one-off integrations using complex APIs; Oracle’s Blockchain Cloud Service provides integration accelerators with sample templates and design patterns for many Oracle and third-party applications in the cloud and running on-premises in the data center.

Oracle Blockchain Cloud Service provides the kind of resilience, recoverability, security, and global reach that enterprises require before they’d trust their supply chain and customer experience to blockchain. With blockchain implemented as a managed cloud service, organizations also get a system that’s ready to be integrated with other enterprise applications, and where Oracle handles the back end to ensure availability and security.

Read more about this in my story for Forbes, “Oracle Helps You Put Blockchain Into Real-World Use With New Cloud Service.”

If you saw the 2013 Sandra Bullock-George Clooney science-fiction movie Gravity, then you know about the silent but deadly damage that even a small object can do if it hits something like the Hubble telescope, a satellite, or even the International Space Station as it hurtles through space. If you didn’t see Gravity, a non-spoiler, one-word summary would be “disaster.” Given the thousands of satellites and pieces of man-made debris circling our planet, plus new, emerging threats from potentially hostile satellites, you don’t need to be a rocket scientist to know that it’s important to keep track of what’s around you up there.

It all starts with the basic physics of motion and managing the tens of thousands of data points associated with those objects, says Paul Graziani, CEO and cofounder of Analytical Graphics Inc. (AGI). The Exton, Pennsylvania-based software company develops four-dimensional software that analyzes and visualizes objects based on their physical location and their time and position relative to each other or to other known locations. AGI has leveraged its software models to build the ComSpOC – its Commercial Space Operations Center. ComSpOC is the first and only commercial space situational awareness center, and since 2014 it has helped space agencies and satellite operators keep track of space objects, including satellites and spacecraft.

ComSpOC uses data from sensors that AGI owns around the globe, plus data from other organizations, to track objects in space. These sensors include optical telescopes, radar systems, and passive RF (radio frequency) sensors. “A telescope gathers reflections of sunlight that come off of objects in space,” Graziani says. “And a radar broadcasts radio signals that reflect off of those objects and then times how long it takes for those signals to get back to the antenna.”

The combination of these measurements helps pinpoint the position of each object. The optical measurements of the telescopes provide directional accuracy, while the time measurements of the radar systems provide the distance of that object from the surface of the Earth. Passive RF sensors, meanwhile, use communications antennas that receive the broadcast information from operational satellites to measure satellite position and velocity.
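As a rough sketch of how direction and range combine, assuming a simplified local East-North-Up frame and ignoring Earth curvature and atmospheric effects: the telescope supplies the direction (azimuth and elevation), the radar supplies the range, and together they fix a point.

```python
import math

def position_from_sensors(azimuth_deg, elevation_deg, range_km):
    """Combine a telescope's direction fix (azimuth, elevation) with a
    radar's range measurement into local East-North-Up coordinates."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    east = range_km * math.cos(el) * math.sin(az)
    north = range_km * math.cos(el) * math.cos(az)
    up = range_km * math.sin(el)
    return east, north, up

# An object due east, 30 degrees above the horizon, 800 km away:
east, north, up = position_from_sensors(90.0, 30.0, 800.0)
print(round(east, 1), round(north, 1), round(up, 1))  # 692.8 0.0 400.0
```

Real orbit determination fuses many such measurements over time to estimate velocity as well as position, but the geometry starts here.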

Read more in my story for Forbes, “How Satellites Avoid Attacks And Space Junk While Circling The Earth.”

You wouldn’t enjoy paying a fine of 4 percent of your company’s total revenue. But that’s the potential penalty if your company is found in violation of the European Union’s new General Data Protection Regulation (GDPR), which goes into effect May 25, 2018. As you’ve probably read, organizations anywhere in the world are subject to GDPR if they have customers in the EU and are storing any of their personal data.

GDPR compliance is a complex topic. It’s too much for one article — heck, books galore are being written about it, seminars abound, and GDPR consultants are on every street corner.

One challenge is that GDPR is a regulation, not a how-to guide. It’s big on explaining penalties for failing to detect and report a data breach in a sufficiently timely manner. It’s not big on telling you how to detect that breach. Rather than tell you what to do, let’s see what could go wrong with your GDPR plans—to help you avoid that 4 percent penalty.

First, the ground rules: GDPR’s overarching goal is to protect citizens’ privacy. In particular, the regulation pertains to anything that can be used to directly or indirectly identify a person. Such data can be anything: a name, a photo, an email address, bank details, social network posts, medical information, or even a computer IP address. To that end, data breaches that may pose a risk to individuals must be disclosed to the authorities within 72 hours and to the affected individuals soon thereafter.

What does that mean? As part of the regulations, individuals must have the ability to see what data you have about them, correct that data if appropriate, or have that data deleted, again if appropriate. (If someone owes you money, they can’t ask you to delete that record.)

Enough preamble. Let’s get into ten common problems.

First: Your privacy and data retention policies aren’t compliant with GDPR

There’s no specific policy wording required by GDPR. However, the policies must meet the overall objectives of GDPR, as well as the requirements in any other jurisdictions in which you operate (such as the United States). What would Alan do? Look at policies from big multinationals that do business in Europe and copy what they do, working with your legal team. You’ve got to get it right.

Second: Your actual practices don’t match your privacy policy

It’s easy to create a compliant privacy policy but hard to ensure your company actually is following it. Do you claim that you don’t store IP addresses? Make sure you’re not. Do you claim that data about a European customer is never stored in a server in the United States? Make sure that’s truly the case.

For example, let’s say you store information about German customers in Frankfurt. Great. But if that data is backed up to a server in Toronto, maybe not great.
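A hypothetical residency check along those lines (the region names and data structure are invented for illustration) would flag any replica or backup that leaves an approved set of EU regions:

```python
# Approved EU regions for storing EU customer data (hypothetical names).
ALLOWED_EU_REGIONS = {"eu-frankfurt", "eu-paris", "eu-amsterdam"}

def residency_violations(data_copies):
    """data_copies: list of (description, region) pairs covering every
    replica and backup of the data, not just the primary store."""
    return [desc for desc, region in data_copies
            if region not in ALLOWED_EU_REGIONS]

copies = [
    ("primary customer DB", "eu-frankfurt"),
    ("nightly backup", "ca-toronto"),   # the backup quietly leaves the EU
]
print(residency_violations(copies))  # ['nightly backup']
```

The hard part in practice is not the check itself but building that complete list of copies, including the backups nobody remembers.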

Third: Your third-party providers aren’t honoring your GDPR responsibilities

Let’s take that customer data in Frankfurt. Perhaps you have a third-party provider in San Francisco that does data analytics for you, or that runs credit reports or handles image resizing. In those processes, does your customer data ever leave the EU? Even if it stays within the EU, is it protected in ways that are compliant with GDPR and other regulations? It’s your responsibility to make sure: While you might sue a supplier for a breach, that won’t cancel out your own primary responsibility to protect your customers’ privacy.

A place to start with compliance: Do you have an accurate, up-to-date listing of all third-party providers that ever touch your data? You can’t verify compliance if you don’t know where your data is.

But wait, there’s more

You can read the entire list of common GDPR failures in my story for HPE Enterprise.nxt, “10 ways to fail at GDPR compliance.”

The public cloud is part of your network. But it’s also not part of your network. That can make security tricky, and sometimes become a nightmare.

The cloud represents resources that your business rents. Computational resources, like CPU and memory; infrastructure resources, like Internet bandwidth and internal networks; storage resources; and management platforms, like the tools needed to provision and configure services.

Whether it’s Amazon Web Services, Microsoft Azure or Google Cloud Platform, it’s like an empty apartment that you rent for a year or maybe a few months. You start out with empty space, put whatever you want in there and use it however you want. Is such a short-term rental apartment your home? That’s a big question, especially when it comes to security. By the way, let’s focus on platform-as-a-service (PaaS) and infrastructure-as-a-service (IaaS), where your business has a great deal of control over how the resource is used — like an empty rental apartment.

We are not talking about software-as-a-service (SaaS), like Office 365 or Salesforce.com. That’s where you show up, pay your bill and use the resources as configured. That’s more like a hotel room: you sleep there, but you can’t change the furniture. Security is almost entirely the responsibility of the hotel; your security responsibility is to ensure that you don’t lose your key, and to refuse to open the door for strangers. The SaaS equivalent: Protect your user accounts and passwords, and ensure users only have the least necessary access privileges.

Why PaaS/IaaS are part of your network

As Peter Parker knows, Spider-Man’s great powers require great responsibility. That’s true in the enterprise data center — and it’s true in PaaS/IaaS networks. The customer is responsible for provisioning servers, storage and virtual machines. Not only that, but the customer also is responsible for creating connections between the cloud service and other resources, such as an enterprise data center — in a hybrid cloud architecture — and other cloud providers — in a multi-cloud architecture.

The cloud provider sets terms for use of the PaaS/IaaS, and allows inbound and outbound connections. There are service level guarantees for availability of the cloud, and of servers that the cloud provider owns. Otherwise, everything is on the enterprise. Think of the PaaS/IaaS cloud as being a remote data center that the enterprise rents, but where you can’t physically visit and see your rented servers and infrastructure.

Why PaaS/IaaS are not part of your network

In short, except for the few areas that the cloud provider handles — availability, cabling, power supplies, connections to carrier networks, physical security — you own it. That means installing patches and fixes. That means instrumenting servers and virtual machines. That means protecting them with software-based firewalls. That means doing backups, whether using the cloud provider’s value-added services or someone else’s. That means anti-malware.

That’s not to minimize the benefits the cloud provider offers you. Power and cooling are a big deal. So are racks and cabling. So is that physical security, and having 24×7 on-site staffing in the event of hardware failures. Also, there’s click-of-a-button ability to provision and spool up new servers to handle demand, and then shut them down again when not needed. Cloud providers can also provide firewall services, communications encryption, and of course, consulting on security.

The word elastic is often used for cloud services. That’s what makes the cloud much more agile than an on-premises data center, or renting an equipment cage in a colocation center. It’s like renting an apartment where if you need a couple extra bedrooms for a few months, you can upsize.

For many businesses, that’s huge. Read more about how great cloud power requires great responsibility in my essay for SecurityNow, “Public Cloud, Part of the Network or Not, Remains a Security Concern.”

It’s standard practice for a company to ask its tech suppliers to fill out detailed questionnaires about their security practices. Companies use that information when choosing a supplier. Too much is at stake, in terms of company reputation and customer trust, to be anything but thorough with information security.

But how can a company’s IT security teams be most effective in that technology buying process? How do they get all the information they need, while also staying focused on what really matters and not wasting their time? Oracle Chief Security Officer Mary Ann Davidson at the recent RSA Conference offered her tips on this IT security risk assessment process. Drawing on her extensive experience as both supplier and buyer of technology and cloud services in her role at Oracle, Davidson shared advice from both points of view.

Advice on business risk assessments

It’s time to put out an RFP to engage new technology providers or to conduct an annual assessment of existing service providers. What do you ask in such a vendor security assessment questionnaire? There are many existing documents and templates, some focused on specific industries, others on regulated sectors or regulated information. Those should guide any assessment process, but aren’t the only factors, says Davidson. Consider these practical tips to get the crucial data you need, and avoid gathering a lot of information that will only distract you from issues that are important for keeping your data secure.

  1. Have a clear objective in mind. The purpose of the vendor security assessment questionnaire should be to assess the security performance of the vendor in light of the organization’s tolerance for risk on a given project.
  2. Limit the scope of an assessment to the potential security risks for services that the supplier is offering you. Those services are obviously critical, because they could affect your data, operations, and security. There is no value in focusing on a supplier’s purely internal systems if they don’t contain or connect to your data. By analogy, “you care about the security of a childcare provider’s facility,” says Davidson. “It’s not relevant to ask about the security of the facility owner’s vacation home in Lake Tahoe.”
  3. When possible, align the questions with internationally recognized, relevant, independently developed standards. It’s reasonable to expect service providers to offer open services that conform to true industry standards. Be wary of faux standards, which are the opposite of open—they could be designed to encourage tech buyers to trust what they think are specifications designed around industry consensus, but which are really pushing one tech supplier’s agenda or that of a third-party certification business.

There are a lot more tips in my story for Forbes, “IT Security Risk Assessments: Tips For Streamlining Supplier-Customer Communication.”

No more pizza boxes: Traditional hardware firewalls can’t adequately protect a modern corporate network and its users. Why? Because while there still may be physical servers inside an on-premises data center or in a wiring closet somewhere, an increasing number of essential resources are virtualized or off-site. And off-site includes servers in infrastructure-as-a-service (IaaS) and platform-as-a-service (PaaS) clouds.

It’s the enterprise’s responsibility to protect each of those assets, the communications paths to and from them, and the public Internet connections. So, no, a pizza-box appliance next to the router can’t protect virtual servers, IaaS or PaaS. What’s needed are the poorly named “next-generation firewalls” (NGFW) — poorly named because the term isn’t at all descriptive, and will seem really stupid in a few years, when the software-based NGFW magically becomes an OPGFW (obsolete previous-generation firewall).

Still, the industry loves the “next generation” phrase, so let’s stick with NGFW here. If you have a range of assets that must be protected, including some combination of on-premises servers, virtual servers and cloud servers, you need an NGFW to unify protection and ensure consistent coverage and policy compliance across all those assets.

Cobbling together a variety of different technologies may not suffice, and could leave coverage gaps. Also, only an NGFW can detect attacks or threats against multiple assets; discrete protection for, say, on-premises servers and cloud servers won’t be able to correlate incidents and raise the alarm when an attack is detected.

Here’s how Gartner defines NGFW:

Next-generation firewalls (NGFWs) are deep-packet inspection firewalls that move beyond port/protocol inspection and blocking to add application-level inspection, intrusion prevention, and bringing intelligence from outside the firewall.

What this means is that an NGFW does an excellent job of detecting when traffic is benign or malicious, and can be configured to analyze traffic and detect anomalies in a variety of situations. A true NGFW looks at northbound/southbound traffic, that is, data entering and leaving the network. It also doesn’t trust anything: The firewall software also examines eastbound/westbound traffic, that is, packets flowing from one asset inside the network to another.
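The north-south versus east-west distinction can be sketched in a few lines of Python; the internal address range here is an assumption for illustration, and a real NGFW would of course inspect far more than addresses.

```python
import ipaddress

# Assumed internal range for this sketch; a real deployment would
# enumerate all of its internal networks.
INTERNAL = ipaddress.ip_network("10.0.0.0/8")

def traffic_direction(src, dst):
    """Classify a flow the way an NGFW does before applying policy."""
    src_in = ipaddress.ip_address(src) in INTERNAL
    dst_in = ipaddress.ip_address(dst) in INTERNAL
    if src_in and dst_in:
        return "east-west"    # asset-to-asset traffic inside the network
    return "north-south"      # traffic entering or leaving the network

print(traffic_direction("10.1.2.3", "10.4.5.6"))       # east-west
print(traffic_direction("10.1.2.3", "93.184.216.34"))  # north-south
```

The point of the sketch: a perimeter-only firewall never even sees the first flow, which is exactly the traffic a compromised internal asset would generate.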

After all, an intrusion might compromise one asset, and use that as an entry point to compromise other assets, install malware, exfiltrate data, or cause other mischief. Where does the cloud come in? Read my essay for SecurityNow, “Next-Generation Firewalls: Poorly Named but Essential to the Enterprise Network.”

Nine takeaways from the RSA Conference 2018 can give business leaders some perspective on how to think about the latest threats and information security trends. I attended the conference in April, along with more than 42,000 corporate security executives and practitioners, tech vendors, consultants, researchers and law enforcement experts.

In my many conversations, over way too much coffee, these nine topics below kept coming up. Consider these as real-world takeaways from the field:

1. Ransomware presents a real threat to operations

The RSA Conference took place shortly after a big ransomware event shut down some of Atlanta’s online services. The general consensus from practitioners at RSA was that such an attack could happen to any municipality, large or small, and the more that government services are interconnected, the greater the likelihood that a breach in one part of an organization could spill over and affect other systems. Thus, IT must be eternally vigilant to ensure that systems are patched and anti-malware measures are up to date to prevent a breach from spreading horizontally through the organization.

2. Spearphishing is getting more sophisticated

One would think that a CFO would know better than to respond to a midnight email from the CEO saying, “Please wire a million dollars to this overseas account immediately.” One would think that employees would know not to respond to requests from their IT department for a “password audit” and supply their login credentials. Yet those types of scenarios are happening with alarming frequency—enough that when asked what they lose sleep over, many practitioners responded by saying “spearphishing” right after they said “ransomware.”

Spearphishing works because it arrives via carefully written emails. It is sometimes customized to a company or even a person’s role, and capable at times of evading spam filters and other email security software. Spearphishing tricks consumers into logging into fake banking websites, and it tricks employees into giving away money or revealing credentials.

Continuous employee training is the most common solution offered. Another option: strong monitoring that can use machine learning to learn what “normal” is and flag out-of-the-norm behaviors or data access by a person or system.
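As a toy illustration of that kind of baseline monitoring (a simple statistical stand-in, not any vendor's actual product), a z-score test over a user's historical activity can flag a wildly out-of-norm day:

```python
import statistics

def is_anomalous(history, new_value, threshold=3.0):
    """Flag a value far outside the learned baseline (simple z-score test)."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(new_value - mean) > threshold * stdev

# Files accessed per day by one employee over two weeks (invented numbers):
baseline = [12, 9, 14, 11, 10, 13, 12, 11, 9, 14, 12, 10, 13, 11]
print(is_anomalous(baseline, 12))    # False: an ordinary day
print(is_anomalous(baseline, 400))   # True: worth a second look
```

Production systems learn richer baselines (time of day, peer group, resource type) with machine learning, but the underlying idea is the same: model normal, then alert on deviation.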

3. Cryptomining is a growing concern

Cryptomining occurs when hackers manage to install software onto enterprise computers that surreptitiously use processor and memory resources to mine cryptocurrencies. Unlike many other types of malware, cryptomining doesn’t try to disrupt operations or steal data. Instead, the malware wants to stay hidden, invisibly making money (literally) for the hacker for days, weeks, months or years. Again, effective system monitoring could help raise a flag when a company’s computing resources are being abused this way.
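A minimal sketch of that kind of monitoring, with invented thresholds chosen purely for illustration: flag a server whose CPU stays pegged far longer than its normal workload would explain.

```python
def sustained_high_cpu(samples, threshold=90, min_run=6):
    """Return True if CPU utilization stayed above `threshold` percent for
    at least `min_run` consecutive samples, a cryptomining tell on a server
    with no scheduled heavy workload."""
    run = 0
    for pct in samples:
        run = run + 1 if pct > threshold else 0
        if run >= min_run:
            return True
    return False

# Hourly CPU readings: a normally quiet server suddenly pegged for hours.
print(sustained_high_cpu([15, 20, 12, 97, 98, 99, 97, 98, 99]))  # True
print(sustained_high_cpu([15, 20, 95, 30, 10, 11, 12, 13, 14]))  # False
```

Miners that throttle themselves to stay under such thresholds are harder to catch, which is why correlating CPU with network connections to known mining pools helps.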

Interestingly, while many at RSA were talking about cryptomining, none of the people I talked to had experienced it first-hand. And while everyone agreed that such malware should be blocked, detected and eradicated, some treated cryptomining as a nuisance that is lower in security priority than other threats, like ransomware, spearphishing or other attacks that would steal corporate data.

What about 4-9?

Read the entire list, including thoughts about insider threats and the split between prevention and detection, in my essay for the Wall Street Journal, “9 Practical Takeaways From a Huge Data Security Conference.”

No doubt you’ve heard about blockchain. It’s a distributed digital ledger technology that lets participants add and view blocks of transaction records, but not delete or change them without being detected.

Most of us know blockchain as the foundation of Bitcoin and other digital currencies. But blockchain is starting to enter the business mainstream as the trusted ledger for farm-to-table vegetable tracking, real estate transfers, digital identity management, financial transactions and all manner of contracts. Blockchain can be used for public transactions as well as for private business, inside a company or within an industry group.

What makes the technology so powerful is that there’s no central repository for this ever-growing sequential chain of transaction records, clumped together into blocks. Because that repository is replicated in each participant’s blockchain node, there is no single point of failure, and no insider threat within a single organization can impact its integrity.
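A toy hash chain, written here purely for illustration rather than taken from any product, shows why that integrity property holds: each block's hash covers the previous block's hash, so editing history anywhere breaks every later link.

```python
import hashlib
import json

def block_hash(body):
    """Deterministically hash a block body (which includes prev_hash)."""
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def append_block(chain, transactions):
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = {"prev_hash": prev, "transactions": transactions}
    chain.append({**body, "hash": block_hash(body)})

def chain_is_valid(chain):
    """Any edit to an earlier block invalidates the chain."""
    for i, block in enumerate(chain):
        body = {"prev_hash": block["prev_hash"],
                "transactions": block["transactions"]}
        if block["hash"] != block_hash(body):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

ledger = []
append_block(ledger, ["dairy ships lot 42 to cheesemaker"])
append_block(ledger, ["cheesemaker ships lot 42 to grocer"])
print(chain_is_valid(ledger))            # True
ledger[0]["transactions"] = ["forged entry"]
print(chain_is_valid(ledger))            # False: tampering is detectable
```

Real blockchains add consensus protocols and digital signatures on top of this, but the replicated, hash-linked structure is what makes quiet rewriting of history impractical.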

“Blockchain lets you conduct transactions securely without requiring an intermediary, and records are secure and immutable,” says Mark Rakhmilevich, product management director at Oracle. “It also can eliminate offline reconciliations that can take hours, days or even weeks.”

And while the chain itself should be open for validation by any participant, some chains can be implemented with some form of access control to limit viewing of specific data fields. That way, participants can be permitted to view relevant data, but not everything in the chain.

A customer, for instance, might be able to verify that a contractor has a valid business license. The customer might also see the firm’s registered address and list of complaints—but not see the names of other customers. The state licensing board, on the other hand, may be allowed to access the customer list or see which jobs are currently in progress.

Business models and use cases

Blockchain is well-suited for managing transactions between companies or organizations that may not know each other well and where there’s no implicit or explicit trust. “Blockchain works because it’s peer-to-peer…and it provides an easy-to-track history, which can serve as an audit trail,” Rakhmilevich explains.

What’s more, blockchain smart contracts are ideal for automating manual or semi-automated processes prone to errors or fraud. “Blockchain can help when there might be challenges in proving that the data has not been tampered with or when verifying the source of a particular update or transaction is important,” Rakhmilevich says.

Blockchain has uses in many industries, including banking, securities, government, retail, healthcare, manufacturing and transportation. Take healthcare: Blockchain can provide immutable records on clinical trials. Think about all the data being collected and flowing to the pharmaceutical companies and regulators, all available instantly and from verified participants.

Read more about blockchain in my article for the Wall Street Journal, “Blockchain: It’s All About Business—and Trust.”

Get ready for insomnia. Attackers are finding new techniques, and here are five that will give you nightmares worse than after you watched the slasher film everyone warned you about when you were a kid.

At a panel at the 2018 RSA Conference in San Francisco last week, we learned that these new attack techniques aren’t merely theoretically possible. They’re here, they’re real, and they’re hurting companies today. The speakers on the panel laid out the biggest attack vectors we’re seeing — and some of them are either different than in the past, or are becoming more common.

Here’s the list:

1. Repositories and cloud storage data leakage

People have been grabbing data from unsecured cloud storage for as long as cloud storage existed. Now that the cloud is nearly ubiquitous, so are the instances of non-encrypted, non-password-protected repositories on Amazon S3, Microsoft Azure, or Google Cloud Storage.

Ed Skoudis, the Penetration Testing Curriculum Director at the SANS Institute, a security training organization, points to three major flaws here. First, private repositories are accidentally opened to the public. Second, these public repositories are allowed to hold sensitive information, such as encryption keys, user names, and passwords. Third, source code and behind-the-scenes application data can be stored in the wrong cloud repository.

The result? Leakage, if someone happens to find it. And “Hackers are constantly searching for repositories that don’t have the appropriate security,” Skoudis said.

2. Data de-anonymization and correlation

Lots of medical and financial data is shared between businesses. Often that data is anonymized: that is, scrubbed of all personally identifiable information (PII) so it’s impossible to figure out which human a particular data record belongs to.

Well, that’s the theory, said Skoudis. In reality, if you beg, borrow or steal enough data from many sources (including breaches), you can often correlate the data and figure out which person is described by financial or health data. It’s not easy, because a lot of data and computation resources are required, but de-anonymization can be done, and used for identity theft or worse.
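A hypothetical sketch of such a correlation, with entirely invented records: an "anonymized" health record joined to a public roster on quasi-identifiers (ZIP code, birth date, sex) can re-identify its subject whenever that combination is unique.

```python
# Invented records for illustration; no real data or real people.
health_records = [
    {"zip": "02139", "dob": "1945-07-22", "sex": "F",
     "diagnosis": "hypertension"},
]
public_roster = [
    {"name": "Jane Doe", "zip": "02139", "dob": "1945-07-22", "sex": "F"},
    {"name": "John Roe", "zip": "02139", "dob": "1971-03-05", "sex": "M"},
]

QUASI_IDS = ("zip", "dob", "sex")

def reidentify(record, roster):
    """Return the unique roster entry matching all quasi-identifiers, if any."""
    matches = [p for p in roster
               if all(p[k] == record[k] for k in QUASI_IDS)]
    return matches[0] if len(matches) == 1 else None

hit = reidentify(health_records[0], public_roster)
print(hit["name"], "->", health_records[0]["diagnosis"])
```

The record contained no name, yet the combination of three "harmless" fields pointed to exactly one person, which is the whole de-anonymization problem in miniature.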

3. Monetizing compromised systems using cryptominers

Johannes Ullrich, who runs the SANS Internet Storm Center, said that hackers care about selling your stuff, like all other criminals. Some want to steal your data, including bank accounts, and sell that to other people, say on the Dark Web. A few years ago, hackers learned how to steal your data and sell it back to you, in the form of ransomware. And now, they’re stealing your computer’s processing power.

What’s the processing power used for? “They’re using your system for crypto-coin mining,” Ullrich said. This became obvious earlier this year, he said, with a PeopleSoft breach where hackers installed a coin miner on thousands of servers – and never touched the PeopleSoft data. Meanwhile, since no data is touched or stolen, the hack could stay undetected for months, maybe years.

Two more

Read the full story, including the two biggest sleep-inhibiting worries, in my story for SecurityNow: “5 New Network Attack Techniques That Will Keep You Awake at Night.”

Blame people for the SOC scalability challenge. On the other hand, don’t blame your people. It’s not their fault.

The security operations center (SOC) team is frequently overwhelmed, particularly the Tier 1 security analysts tasked with triage. As companies grow and add more technology — including the Internet of Things (IoT) — that means more alerts.

As the enterprise adds more sophisticated security tools, such as Endpoint Detection and Response (EDR), that means more alerts. And more complex alerts. You’re not going to see a blinking red light that says: “You’re being hacked.” Or if you do see such an alert, it’s not very helpful.

The problem is people, say experts at the 2018 RSA Conference, which wrapped up last week. Your SOC team — or teams — simply can’t scale fast enough to keep up with the ever-increasing demand. Let’s talk about the five biggest problems challenging SOC scalability.

Reason #1: You can’t afford to hire enough analysts

You certainly can’t afford to hire enough Tier 2 analysts who respond to real — or almost certainly real — incidents. According to sites like Glassdoor and Indeed, be prepared to pay over $100,000 per year, per person.

Reason #2: You can’t even find enough analysts

“We’ve created a growing demand for labor, and thus, we’ve created this labor shortage,” said Malcolm Harkins, chief security and trust officer of Cylance. There are huge numbers of open positions at all levels of information security, and that includes in-enterprise SOC team members. Sure, you could pay more, or do competitive recruiting, but go back to the previous point: You can’t afford that. Perhaps a managed security service provider can afford to keep raising salaries, because an MSSP can monetize that expense. An ordinary enterprise can’t, because security is an expense.

Reason #3: You can’t train the analysts

Even with the best security tools, analysts require constant training on threats and techniques — which is expensive to offer, especially for a smaller organization. And wouldn’t you know it, as soon as you get a group of triage specialists or incident responders trained up nicely, off they go for a better job.

Read more, including two more reasons, in my essay for SecurityNow, “It’s the People: 5 Reasons Why SOC Can’t Scale.”

Got Terminator? Microsoft is putting artificial intelligence in charge of automatically responding to detected threats, with a forthcoming update to Windows Defender ATP.

Microsoft is expanding its use of artificial intelligence and big data analytics beyond the current levels of machine learning in its security platform. Today, AI is used for incident detection and investigation, filtering out false positives and making it easier for humans in the security operations center (SOC) team to determine the correct response to an incident.

Soon, customers will be able to allow the AI to respond to some incidents automatically. Redmond claims this will cut time-to-remediation down to minutes. In a blog post released April 17, Moti Gindi, general manager for Windows Cyber Defense, wrote: “Threat investigation and remediation decisions can be taken automatically by Windows Defender ATP based on extensive historical data collected, stored and analyzed in our cloud (‘time travel’).”

What type of remediation? No, robots won’t teleport from the future and shoot lasers at the cybercriminals. At least, that’s not an announced capability. Rather, Windows Defender ATP will signal the Azure Active Directory user management and Microsoft Intune mobile device management platforms to temporarily revoke access privileges to cloud storage and enterprise applications, such as Office 365.

After the risk has been evaluated — or after the CEO has yelled at the CISO from her sales trip overseas — the access revocation can be reversed. Another significant part of the Windows Defender ATP announcements: Threat signal sharing between Microsoft’s various cloud platforms, which up until now have operated pretty much autonomously in terms of security.

In the example Microsoft offered, threats coming via a phishing email detected by Outlook 365 will be correlated with malware blocked by OneDrive for Business. In this incarnation, signal sharing will bring together Office 365, Azure, and Windows Defender ATP.

Read more, including about Microsoft’s Mac support for security, in my essay for SecurityNow, “Microsoft Security Is Channeling the Terminator.”

Is the cloud ready for sensitive data? You bet it is. Some 90% of businesses in a new survey say that at least half of their cloud-based data is indeed sensitive, the kind that cybercriminals would love to get their hands on.

The migration to the cloud can’t come soon enough. About two-thirds of companies in the study say at least one cybersecurity incident has disrupted their operations within the past two years, and 80% say they’re concerned about the threat that cybercriminals pose to their data.

The good news is that 62% of organizations consider the security of cloud-based enterprise applications to be better than the security of their on-premises applications. Another 21% consider it as good. The caveat: Companies must be proactive about their cloud-based data and can’t naively assume that “someone else” is taking care of that security.

Those insights come from a brand-new threat report, the first ever jointly conducted by Oracle and KPMG. The “Oracle and KPMG Cloud Threat Report 2018,” to be released this month at the RSA Conference, fills a unique niche among the vast number of existing threat and security reports, including the well-respected Verizon Data Breach Investigations Report produced annually since 2008.

The difference is the Cloud Threat Report’s emphasis on hybrid cloud, and on organizations lifting and shifting workloads and data into the cloud. “In the threat landscape, you have a wide variety of reports around infrastructure, threat analytics, malware, penetrations, data breaches, and patch management,” says one of the designers of the study, Greg Jensen, senior principal director of Oracle’s Cloud Security Business. “What’s missing is pulling this all together for the journey to the cloud.”

Indeed, 87% of the 450 businesses surveyed say they have a cloud-first orientation. “That’s the kind of trust these organizations have in cloud-based technology,” Jensen says.

Here are data points that break that idea down into more detail:

  • 20% of respondents to the survey say the cloud is much more secure than their on-premises environments; 42% say the cloud is somewhat more secure; and 21% say the cloud is equally secure. Only 21% think the cloud is less secure.
  • 14% say that more than half of their data is in the cloud already, and 46% say that between a quarter and half of their data is in the cloud.

That cloud-based data is increasingly “sensitive,” the survey respondents say. That data includes information collected from customer relationship management systems, personally identifiable information (PII), payment card data, legal documents, product designs, source code, and other types of intellectual property.

Read more, including what cyberattacks say about the “pace gap,” in my essay in Forbes, “Threat Report: Companies Trust Cloud Security.”

Ransomware rules the cybercrime world – perhaps because ransomware attacks are often successful and financially remunerative for criminals. Ransomware features prominently in Verizon’s fresh-off-the-press 2018 Data Breach Investigations Report (DBIR). As the report says, although ransomware is still a relatively new type of attack, it’s growing fast:

Ransomware was first mentioned in the 2013 DBIR and we referenced that these schemes could “blossom as an effective tool of choice for online criminals”. And blossom they did! Now we have seen this style of malware overtake all others to be the most prevalent variety of malicious code for this year’s dataset. Ransomware is an interesting phenomenon that, when viewed through the mind of an attacker, makes perfect sense.

The DBIR explains that ransomware can be attempted with little risk or cost to the attacker. It can be successful because the attacker doesn’t need to monetize stolen data, only ransom the return of that data; and can be deployed across numerous devices in organizations to inflict more damage, and potentially justify bigger ransoms.

Botnets Are Also Hot

Ransomware wasn’t the only prominent attack; the 2018 DBIR also talks extensively about botnet-based infections. Verizon cites more than 43,000 breaches using customer credentials stolen from botnet-infected clients. It’s a global problem, says the DBIR, and can affect organizations in two primary ways:

In the first, you never even see the bot. Instead, your users download the bot; it steals their credentials, which are then used to log in to your systems. This attack primarily targeted banking organizations (91%), though information (5%) and professional services (2%) organizations were victims as well.

The second way organizations are affected involves compromised hosts within your network acting as foot soldiers in a botnet. The data shows that most organizations clear most bots in the first month (give or take a couple of days).

However, the report says, some bots may be missed during the disinfection process. This could result in a re-infection later.

Insiders Are Still Significant Threats

Overall, says Verizon, outsiders perpetrated most breaches: 73%. But don’t get too complacent about employees or contractors: 28% of breaches involved internal actors. Yes, these add up to more than 100%, because some outside attacks had inside help. Here’s who Verizon says is behind breaches:

  • 73% perpetrated by outsiders
  • 28% involved internal actors
  • 2% involved partners
  • 2% featured multiple parties
  • 50% of breaches were carried out by organized criminal groups
  • 12% of breaches involved actors identified as nation-state or state-affiliated

Email is still the delivery vector of choice for malware and other attacks. Many of those attacks were financially motivated, says the DBIR. Most worrying, a significant number of breaches took a long time to discover.

  • 49% of non-point-of-sale malware was installed via malicious email
  • 76% of breaches were financially motivated
  • 13% of breaches were motivated by the gain of strategic advantage (espionage)
  • 68% of breaches took months or longer to discover

Taking Months to Discover the Breach

To that previous point: Attackers can move fast, but defenders can take a while. To use a terrible analogy: If someone breaks into your car and steals your designer sunglasses, the time from their initial penetration (picking the lock or smashing the window) to compromising the asset (grabbing the glasses) might be a minute or less. The time to discovery (when you see the broken window or realize your glasses are gone) could be minutes if you parked at the mall – or days, if the car was left at the airport parking garage. The DBIR makes the same point about enterprise data breaches:

When breaches are successful, the time to compromise continues to be very short. While we cannot determine how much time is spent in intelligence gathering or other adversary preparations, the time from first action in an event chain to initial compromise of an asset is most often measured in seconds or minutes. The discovery time is likelier to be weeks or months. The discovery time is also very dependent on the type of attack, with payment card compromises often discovered based on the fraudulent use of the stolen data (typically weeks or months) as opposed to a stolen laptop which is discovered when the victim realizes they have been burglarized.

Good News, Bad News on Phishing

Let’s end on a positive note, or a sort of positive note. The 2018 DBIR notes that most people never click phishing emails: “When analyzing results from phishing simulations the data showed that in the normal (median) organization, 78% of people don’t click a single phish all year.”

The less good news: “On average 4% of people in any given phishing campaign will click it.” The DBIR notes that the more phishing emails someone has clicked, the more likely they are to click on phishing emails in the future. The report’s advice: “Part of your overall strategy to combat phishing could be that you can try and find those 4% of people ahead of time and plan for them to click.”

Good luck with that.
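That 4% figure compounds quickly. As a rough illustration (my arithmetic, not the DBIR’s), if each recipient clicks independently with 4% probability, the chance that at least one person in a campaign clicks rises sharply with headcount:

```python
def p_at_least_one_click(recipients: int, click_rate: float = 0.04) -> float:
    """Chance that at least one recipient clicks, assuming independent clicks."""
    return 1 - (1 - click_rate) ** recipients

# Even a modest campaign is very likely to land at least one click.
assert p_at_least_one_click(10) > 0.33    # better than 1 in 3 for just ten recipients
assert p_at_least_one_click(100) > 0.98   # near-certain at a hundred
```

Which is why attackers only need to send enough emails, and defenders need to plan for the click, not just hope to prevent it.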

The purchase order looks legitimate, but does it have all the proper approvals? Many lawyers reviewed this draft contract, but is this the latest version? Can we prove that this essential document hasn’t been tampered with before I sign it? Can we prove that these two versions of a document are absolutely identical?

Blockchain might be able to help solve these kinds of everyday trust issues related to documents, especially when they are PDFs—data files created using the Portable Document Format. Blockchain technology is best known for securing financial transactions, including powering new financial instruments such as Bitcoin. But blockchain’s ability to increase trust will likely find enterprise use cases in common, non-financial information exchanges like these document workflows.

Joris Schellekens, a software engineer and PDF expert at iText Software in Ghent, Belgium, recently presented his ideas for blockchain-supported documents at Oracle Code Los Angeles. Oracle Code is a series of free events around the world created to bring developers together to share fresh thinking and collaborate on ideas like these.

PDF’s Power and Limitations

The PDF file format was created in the early 1990s by Adobe Systems. PDF was a way to share richly formatted documents whose visual layout, text, and graphics would look the same, no matter which software created them or where they were viewed or printed. The PDF specification became an international standard in 2008.

Early on, Adobe and other companies implemented security features into PDF files. That included password protection, encryption, and digital signatures. In theory, the digital signatures should be able to prove who created, or at least who encrypted, a PDF document. However, depending on the hashing algorithm used, it’s not so difficult to subvert those protections to, for example, change a date/time stamp, or even the document content, says Schellekens. His company, iText Software, markets a software development kit and APIs for creating and manipulating PDFs.

“The PDF specification contains the concept of an ID tuple,” or an immutable sequence of data, says Schellekens. “This ID tuple contains timestamps for when the file was created and when it was revised. However, the PDF spec is vague about how to implement these when creating the PDF.”
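The tamper-detection idea underlying these protections is a cryptographic digest: hash the document’s bytes, and any later change to those bytes changes the digest. Here’s a minimal Python sketch; the document bytes and filenames are illustrative, not from a real PDF:

```python
import hashlib

def digest(data: bytes, algorithm: str = "sha256") -> str:
    """Return the hex digest of a document's raw bytes."""
    return hashlib.new(algorithm, data).hexdigest()

# Hypothetical document bytes, standing in for a real PDF file.
pdf_bytes = b"%PDF-1.7 ... original contract text, signed 2018-04-01 ..."
original = digest(pdf_bytes)

# Any change to the bytes -- even a one-character date edit --
# produces a completely different digest.
tampered = pdf_bytes.replace(b"2018-04-01", b"2018-04-02")
assert digest(tampered) != original

# An unmodified copy verifies cleanly.
assert digest(pdf_bytes) == original
```

Schellekens’s caution applies here: the guarantee is only as strong as the hash algorithm. An obsolete choice like MD5 admits engineered collisions, which is one way the date stamp or content of a “signed” PDF can be subverted.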

Even in the case of an unaltered PDF, the protections apply to the entire document, not to various parts of it. Consider a document that must be signed by multiple parties. Since not all certificate authorities store their private keys with equal vigilance, you might lack confidence about who really modified the document (e.g. signed it), at which times, and in which order. Or, you might not be confident that there were no modifications before or after someone signed it.

A related challenge: Signatures to a digital document generally must be made serially, one at a time. The PDF specification doesn’t allow for a document to be signed in parallel by several people (as is common with contract reviews and signatures) and then merged together.

Blockchain has the potential to solve such document problems, and several others besides. Read more in my story for Forbes, “Can Blockchain Solve Your Document And Digital Signature Headaches?”

“We estimate that malicious cyber activity cost the U.S. economy between $57 billion and $109 billion in 2016.” That’s from a February 2018 report, “The Cost of Malicious Cyber Activity to the U.S. Economy,” by the Council of Economic Advisers – part of the Executive Office of the President. It’s a big deal.

The White House is concerned about a number of sources of cyber threats. Those include attacks from nation-states, corporate competitors, hacktivists, organized criminal groups, opportunists, and company insiders.

It’s not always easy to tell exactly who is behind a given incident, or even how to categorize those incidents. Still, the report says that incidents break down as roughly 25% insiders, 75% outsiders. “Overall, 18 percent of threat actors were state-affiliated groups, and 51 percent involved organized criminal groups,” it says.

It’s More Than Stolen Valuables

The report points out that the economic cost includes many factors: the stolen property, the costs of repairs – and opportunity costs. For example, the report says, “Consider potential costs of a DDoS attack. A DDoS attack interferes with a firm’s online operations, causing a loss of sales during the period of disruption. Some of the firm’s customers may permanently switch to a competing firm due to their inability to access online services, imposing additional costs in the form of the firm’s lost future revenue. Furthermore, a high-visibility attack may tarnish the firm’s brand name, reducing its future revenues and business opportunities.”

However, it’s not always that cut-and-dried. Consider intellectual property (IP) theft:

The costs incurred by a firm in the wake of IP theft are somewhat different. As the result of IP theft, the firm no longer has a monopoly on its proprietary findings because the stolen IP may now potentially be held and utilized by a competing firm. If the firm discovers that its IP has been stolen (and there is no guarantee of such discovery), attempting to identify the perpetrator or obtain relief via legal process could result in sizable costs without being successful, especially if the IP was stolen by a foreign actor. Hence, expected future revenues of the firm could decline. The cost of capital is likely to increase because investors will conclude that the firm’s IP is both sought-after and not sufficiently protected.

Indeed, this last example is particularly worrisome. Why? “IP theft is the costliest type of malicious cyber activity. Moreover, security breaches that enable IP theft via cyber may go undetected for years, allowing the periodic pilfering of corporate IP.”

Affecting the Economy

Do investors worry about cyber incidents? You bet. And it hits the share price of companies. According to the White House report, “We find that the stock price reaction to the news of an adverse cyber event is significantly negative. Firms on average lost about 0.8 percent of their market value in the seven days following news of an adverse cyber event.”

How much is that? Given that the study looked at large companies, “We estimate that, on average, the firms in our sample lost $498 million per adverse cyber event. The distribution of losses is highly right-skewed. When we trim the sample of estimated losses at 1 percent on each side of the distribution, the average loss declines to $338 million per event.” That’s significant.
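Those two figures are consistent with a sample of very large firms. A rough back-of-envelope check (my arithmetic, not from the report itself):

```python
# If a 0.8% drop in market value corresponds to an average loss of
# $498 million, the implied average market capitalization of the
# sampled firms is about $62 billion -- very large companies indeed.
avg_loss = 498e6          # dollars lost per adverse cyber event
drop_fraction = 0.008     # 0.8% of market value
implied_market_cap = avg_loss / drop_fraction
assert 60e9 < implied_market_cap < 65e9   # roughly $62 billion
```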

Small and mid-sized companies can be harder hit by incidents, because they are less resilient. “Smaller firms, and especially those with few product lines, can easily go out of business if they are attacked or breached.”

Overall, cyber incidents cost the U.S. economy between $57 billion and $109 billion in 2016. That’s between 0.31% and 0.58% of that year’s gross domestic product (GDP), says the report. That’s a lot, but it could be worse. Let’s hope this amount isn’t increased by, say, a full-fledged cyberwar or a significant terrorist incident.

“What type of dog are you?” “I scored 9 out of 10 on this vocabulary test! Can you beat me? Take the quiz!” “Are you a true New Yorker?”

If you use Facebook (or other social media sites) you undoubtedly see quizzes like this nearly every day. Sometimes the quizzes appear in Facebook advertisements. Sometimes they appear because one of your friends took the quiz, and the quiz appeared as a post by your friend.

Is it safe to take those quizzes? As with many security topics, the answer is a somewhat vague “yes and no.” There are two areas to think about. The first is privacy – are you giving away information that should be kept confidential? The second is, by interacting with the quiz, are you giving permission for future interactions? Let’s talk about both those aspects, and then you can make an informed decision.

Bear in mind, however, that quizzes like this were likely used by Cambridge Analytica to harvest personal details about millions of Facebook users. Those details were allegedly used to target voters with political advertising.

Personal Dossier

Let’s start with content. When you take a quiz, you may not realize the extent of the personal information you are providing. Does the quiz ask you for your favorite color? For the year you graduated secondary school? For the type of car you drive? All of that information could potentially be aggregated into a profile. That’s especially true if you take multiple quizzes from the same company.

You don’t know, and you can’t realistically learn, if the organization behind the quiz is storing the information — and what it’s doing with it. Certainly, they can tag you as someone who likes quizzes, and show you more of them. However, are they using that information to profile you for their advertisements? Are they depositing cookies or other tracking mechanisms on your computer? Are they selling that information to other organizations?

A quiz about your favorite color is probably benign. A quiz about “What type of dog are you?” might indicate that you are a dog owner – and ads for dog food might well be in your future!

Be wary of quizzes that ask for any information that might be used for identity theft, like your home town or the year you were born. While you might sometimes post information like that on Facebook, that information may not be readily accessible to third parties, like the company that offers up those fun quizzes. If you provide such info to the quiz company, you are handing it to them on a silver platter.

Consider the “Is My Dog Fat Quiz,” hosted on the site GoToQuiz. It asks for your age range and your gender – totally unnecessary for assessing your dog’s weight and dietary habits. (You can see the lack of professionalism in misspellings like, “How much excersize does your dog get?”) This quiz isn’t about you or your dog; it’s about gathering information for Internet marketers.

Permission Granted

Second, you’re giving implicit permission for future interactions. Sometimes when you click on a Facebook quiz, you take the quiz right inside Facebook. When you do, you are interacting with the quiz giver – which means that future posts or quizzes from that quiz giver will show up in your news feed. You may be totally fine with that; it’s not particularly harmful. However, you should be aware that it happens. (Those posts and quizzes may show up in your friends’ news feeds too, spreading the marketer’s reach.)

What concerns me more is when clicking the quiz opens up an external website. When you are on an external website, whatever happens is outside of Facebook’s privacy protections and security protocols. You have no idea what the quiz site will do with your information.

Well, perhaps now you do.

Has Russia hacked the U.S. energy grid? This could be bigger than Stuxnet, the cyberattack that damaged uranium-enriching centrifuges in Iran back in 2010 – and demonstrated, to the public at least, that cyberattacks could do more than erase hard drives and steal peoples’ banking passwords.

For the first time, the United States has officially accused Russia of breaking into critical infrastructure. That’s not only a shocking admission of vulnerability, but also a direct pointing of the finger at a specific country.

While there may be geopolitical reasons for the timing of the accusation, let’s look at what’s going on from the tech perspective. On March 15, the U.S. Computer Emergency Readiness Team (US-CERT) put out an alert entitled, “Russian Government Cyber Activity Targeting Energy and Other Critical Infrastructure Sectors.” It’s not blaming hackers, or hackers based in Russia; it’s blaming the Russian government.

The danger couldn’t be clearer. “Since at least March 2016, Russian government cyber actors—hereafter referred to as “threat actors”—targeted government entities and multiple U.S. critical infrastructure sectors, including the energy, nuclear, commercial facilities, water, aviation, and critical manufacturing sectors.”

The Targets: System Controllers

What were the attackers doing? Reconnaissance – looking for information on the critical controllers in energy facilities, known as SCADA systems. The US-CERT alert explains:

In multiple instances, the threat actors accessed workstations and servers on a corporate network that contained data output from control systems within energy generation facilities. The threat actors accessed files pertaining to ICS or supervisory control and data acquisition (SCADA) systems. Based on DHS analysis of existing compromises, these files were named containing ICS vendor names and ICS reference documents pertaining to the organization (e.g., “SCADA WIRING DIAGRAM.pdf” or “SCADA PANEL LAYOUTS.xlsx”)

The threat actors targeted and copied profile and configuration information for accessing ICS systems on the network. DHS observed the threat actors copying Virtual Network Connection (VNC) profiles that contained configuration information on accessing ICS systems.

The Attack Vector: User Accounts

How did the attackers manage to get into these energy systems? First, they carefully chose which companies or facilities to target, says US-CERT: “The threat actors appear to have deliberately chosen the organizations they targeted, rather than pursuing them as targets of opportunity.” The attackers then used spear phishing (custom-crafted malicious emails) and watering holes (hacks into trusted websites that employees of those energy sites would visit). For example, says the report:

One of the threat actors’ primary uses for staging targets was to develop watering holes. Threat actors compromised the infrastructure of trusted organizations to reach intended targets. Approximately half of the known watering holes are trade publications and informational websites related to process control, ICS, or critical infrastructure. Although these watering holes may host legitimate content developed by reputable organizations, the threat actors altered websites to contain and reference malicious content.

These attacks on user accounts were delivered via malicious .docx files that energy employees opened – and which captured user credentials. The attackers then used those credentials to get into the energy systems, create new accounts, and begin their work. US-CERT reports that the attackers weren’t able to get into systems that require multi-factor authentication, by the way.

A History of Targeting Energy

We don’t know what Russia was doing, or why – assuming that it was Russia, of course. Dustin Volz and Timothy Gardner, writing for Reuters, say:

It was not clear what Russia’s motive was. Many cyber security experts and former U.S. officials say such behavior is generally espionage-oriented with the potential, if needed, for sabotage.

Russia has shown a willingness to leverage access into energy networks for damaging effect in the past. Kremlin-linked hackers were widely blamed for two attacks on the Ukrainian energy grid in 2015 and 2016, that caused temporary blackouts for hundreds of thousands of customers and were considered first-of-their-kind assaults.

As political issues escalate between Russia and the West, these types of reports and unanswered questions are indeed troubling.

Go ahead, blame the user. You can’t expect end users to protect their Internet of Things devices from hacks or breaches. They can’t. They won’t. Security must be baked in. Security must be totally automatic. And security shouldn’t allow end users to mess anything up, especially if the device has some sort of Web browser.

Case in point: medical devices with some sort of network connection, which thus qualify as IoT. In some cases, those connections might be very busy, connecting to a cloud service to report back telemetry and diagnostics, with the ability for a doctor to adjust functionality. In other cases, the connections might be quiet, used only for firmware updates. Either way, any connection might lead to a vulnerability.

According to the Annual Threat Report: Connected Medical Devices, from Zingbox, the most common connected medical IoT devices are infusion pumps, followed by imaging systems. Despite their #2 status, the study says, those imaging systems have the most security issues:

They account for 51% of all security issues across tens of thousands devices included in this study. Several characteristics of imaging systems attribute to it being the most risky device in an organization’s inventory. Imaging systems are often designed on commercial-off-the-shelf (COTS) OS, they are expected to have long lifespan (15-20 years), very expensive to replace, and often outlive the service agreement from the vendors as well as the COTS provider.

This is not good. For all devices, the study says that, “Most notably, user practice issues make up 41% of all security issues. The user practice issues consist of rogue applications and browser usage including risky internet sites.” In addition, Zingbox says, “Unfortunately, outdated OS/SW (representing 33% of security issues) is the reality of connected medical devices. Legacy OS, obsolete applications, and unpatched firmware makes up one-third of all security issues.”

Need to Restrict IoT Device Access to Websites

Many devices contain embedded web browsers. Not infusion pumps, of course, but other devices, such as those imaging systems. Network access for such devices should be severely restricted – the embedded browser on a medical device shouldn’t be able to access eBay or Amazon or the New York Times, or anything else other than the device’s approved services. As the study explains, “Context-aware policy enforcement should be put in place to restrict download of rogue applications and enable URL access specific to the operation of the device.”
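The core of that recommendation is an allowlist: the device may reach only the hosts its operation requires, and nothing else. A minimal sketch in Python (the host names are hypothetical examples, not a vendor’s real endpoints):

```python
from urllib.parse import urlsplit

# Hypothetical allowlist: the only hosts this imaging device may reach.
ALLOWED_HOSTS = {"updates.example-vendor.com", "telemetry.example-vendor.com"}

def is_permitted(url: str) -> bool:
    """Permit only HTTPS requests to hosts on the device's allowlist."""
    parts = urlsplit(url)
    return parts.scheme == "https" and parts.hostname in ALLOWED_HOSTS

assert is_permitted("https://updates.example-vendor.com/firmware/v2.bin")
assert not is_permitted("https://www.ebay.com/")                # not allowlisted
assert not is_permitted("http://updates.example-vendor.com/")   # not HTTPS
```

In practice this policy belongs on the network (a firewall or proxy in front of the device), not in the device’s own code, so a compromised device can’t simply bypass it.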

Even if the device operator’s intentions are good, you don’t want the device used to access, say, Gmail. And then get a virus. Remember, many of the larger IoT medical devices run Windows, and may not have up-to-date malware protection. Or any malware protection whatsoever.

When planning IoT security, remember that the device must be protected from the user as well as from hackers. “IoT Security: How To Make The World Safe When Everything’s Connected,” published in Forbes, quoted Gerry Kane, Cyber Security Segment Director for Risk Engineering at The Zurich Services Corporation:

Information security must evolve with the times, Kane believes. “It’s not just about data anymore,” he said. “It’s an accumulation of the bad things that could happen when there’s a security breach. And consider the number of threat vectors that are brought into play by the Internet of Things.”

Human error poses another risk. Although these devices are supposed to operate on their own, they still need to receive instructions from people. The wrong commands could result in mistakes.

“Human error is always a big part of security breaches, even if it’s not always done with malicious intent,” Kane said.

Indeed, the IoT world is pretty dangerous… thanks to those darned end users.

Blockchain is a distributed digital ledger technology in which blocks of transaction records can be added and viewed—but can’t be deleted or changed without detection. Here’s where the name comes from: a blockchain is an ever-growing sequential chain of transaction records, clumped together into blocks. There’s no central repository of the chain, which is replicated in each participant’s blockchain node, and that’s what makes the technology so powerful. Yes, blockchain was originally developed to underpin Bitcoin and is essential to the trust required for users to trade digital currencies, but that is only the beginning of its potential.

Blockchain neatly solves the problem of ensuring the validity of all kinds of digital records. What’s more, blockchain can be used for public transactions as well as for private business, inside a company or within an industry group. “Blockchain lets you conduct transactions securely without requiring an intermediary, and records are secure and immutable,” says Mark Rakhmilevich, product management director at Oracle. “It also can eliminate offline reconciliations that can take hours, days, or even weeks.”

That’s the power of blockchain: an immutable digital ledger for recording transactions. It can be used to power anonymous digital currencies—or farm-to-table vegetable tracking, business contracts, contractor licensing, real estate transfers, digital identity management, and financial transactions between companies or even within a single company.

“Blockchain doesn’t have to just be used for accounting ledgers,” says Rakhmilevich. “It can store any data, and you can use programmable smart contracts to evaluate and operate on this data. It provides nonrepudiation through digitally signed transactions, and the stored results are tamper proof. Because the ledger is replicated, there is no single source of failure, and no insider threat within a single organization can impact its integrity.”

It’s All About Distributed Ledgers

Several simple concepts underpin any blockchain system. The first is the block, which is a batch of one or more transactions, grouped together and hashed. The hashing process produces an error-checking and tamper-resistant code that will let anyone viewing the block see if it has been altered. The block also contains the hash of the previous block, which ties them together in a chain. The backward hashing makes it extremely difficult for anyone to modify a single block without detection.
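The block-plus-backward-hash idea above can be sketched in a few lines. This is a toy illustration of the concept, not any production blockchain’s actual format; transaction strings and field names are invented:

```python
import hashlib
import json

def make_block(transactions, prev_hash):
    """Bundle transactions with the previous block's hash, then hash the bundle."""
    body = json.dumps({"tx": transactions, "prev": prev_hash}, sort_keys=True)
    block_hash = hashlib.sha256(body.encode()).hexdigest()
    return {"tx": transactions, "prev": prev_hash, "hash": block_hash}

genesis = make_block(["alice pays bob 5"], prev_hash="0" * 64)
block2 = make_block(["bob pays carol 2"], prev_hash=genesis["hash"])

# Tamper with the first block: its recomputed hash no longer matches the
# "prev" pointer stored in block2, so the alteration is detectable.
tampered = make_block(["alice pays bob 500"], prev_hash="0" * 64)
assert tampered["hash"] != block2["prev"]
```

Because every block embeds its predecessor’s hash, rewriting one block would force rewriting every block after it, on every replica, which is what makes undetected modification so difficult.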

A chain contains collections of blocks, which are stored on decentralized, distributed servers. The more the better, with every server containing the same set of blocks and the latest values of information, such as account balances. Multiple transactions are handled within a single block using an algorithm called a Merkle tree, or hash tree, which provides fault and fraud tolerance: if a server goes down, or if a block or chain is corrupted, the missing data can be reconstructed by polling other servers’ chains.
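Here is a minimal sketch of the Merkle-tree idea, assuming simple string transactions and SHA-256: hash the leaves, then pair-and-hash upward until a single root digest remains. A verifier holding only the root can detect any altered transaction.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    # Hash each transaction, then repeatedly pair adjacent digests and
    # hash them together until a single root digest remains.
    level = [sha256(tx.encode()) for tx in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])  # duplicate the last node on odd levels
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0].hex()

txs = ["tx1", "tx2", "tx3", "tx4"]
root = merkle_root(txs)

# Any altered transaction produces a different root, so tampering
# anywhere in the batch is detectable from the root alone.
assert merkle_root(["tx1", "tx2", "tx3", "tx4"]) == root
assert merkle_root(["tx1", "tx2", "tx3", "TAMPERED"]) != root
```

Real blockchains add details this sketch omits (serialization rules, double hashing, proof paths), but the pair-and-hash structure is what lets a node verify one transaction, or reconstruct missing data, without trusting any single server.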

And while the chain itself should be open for validation by any participant, some chains can be implemented with some form of access control to limit viewing of specific data fields. That way, participants can view relevant data, but not everything in the chain. A customer might be able to verify that a contractor has a valid business license and see the firm’s registered address and list of complaints—but not see the names of other customers. The state licensing board, on the other hand, may be allowed to access the customer list or see which jobs are currently in progress.

When originally conceived, blockchain had a narrow set of protocols. They were designed to govern the creation of blocks, the grouping of hashes into the Merkle tree, the viewing of data encapsulated into the chain, and the validation that data has not been corrupted or tampered with. Over time, creators of blockchain applications (such as the many competing digital currencies) innovated and created their own protocols—which, due to their independent evolutionary processes, weren’t necessarily interoperable. By contrast, the success of general-purpose blockchain services, which might encompass computing services from many technology, government, and business players, created the need for industry standards—such as Hyperledger, a Linux Foundation project.

Read more in my feature article in Oracle Magazine, March/April 2018, “It’s All About Trust.”

Far too many companies fail to learn anything from security breaches. According to CyberArk, cyber-security inertia is putting organizations at risk. Nearly half — 46% — of enterprises say their security strategy rarely changes substantially, even after a cyberattack.

That data comes from the organization’s new Global Advanced Threat Landscape Report 2018. The researchers surveyed 1,300 IT security decision-makers, DevOps and app developer professionals, and line-of-business owners in seven countries.

The Cloud is Unsecured

Cloud computing is a major focus of this report, and the study results are scary. CyberArk says, “Automated processes inherent in cloud environments are responsible for prolific creation of privileged credentials and secrets. These credentials, if compromised, can give attackers a crucial jumping-off point to achieve lateral access across networks, data and applications — whether in the cloud or on-premises.”

The study shows that:

  • 50% of IT professionals say their organization stores business-critical information in the cloud, including revenue-generating customer-facing applications
  • 43% say they commit regulated customer data to the cloud
  • 49% of respondents have no privileged account security strategy for the cloud

While we haven’t yet seen major breaches caused by tech failures of cloud vendors, we have seen many, many examples of customer errors with the cloud. Those errors, such as posting customer information to public cloud storage services without encryption or proper password control, have allowed open access to private information.

CyberArk’s view is dead right: “There are still gaps in the understanding of who is responsible for security in the cloud, even though the public cloud vendors are very clear that the enterprise is responsible for securing cloud workloads. Additionally, few understand the full impact of the unsecured secrets that proliferate in dynamic cloud environments and automated processes.”

In other words, nobody is stepping up to the plate. (Perhaps cloud vendors should scan their customers’ files and warn them if they are uploading unsecured files. Nah. That’ll never happen – because if there’s a failure of that monitoring system, the cloud vendor could be held liable for the breach.)

Endpoint Security Is Neglected

I was astonished that the CyberArk study shows only 52% of respondents keep their operating system and patches current. Yikes. It’s conventional wisdom that maintaining patches is about the lowest-hanging of the low-hanging fruit. Unpatched servers have been easy pickings for hackers over the past few years.

CyberArk’s analysis appears accurate here: “End users deploy a lot of technologies to protect endpoints, and they face many competing factors. These include compliance drivers, end-user usability, endpoint configuration management and an increasingly highly mobile and remote user base, all of which make visibility and control harder. With advanced malware attacks over the past year including WannaCry and NotPetya, there is certainly room for greater prioritization around blocking credential theft as a critical step to preventing attackers from gaining access to the network and initiating lateral movement.”

Many Threats, Poor Planning

According to the study, the greatest cyber security threats expected by IT professionals are:

  • Targeted phishing attacks (56%)
  • Insider threats (51%)
  • Ransomware or malware (48%)
  • Unsecured privileged accounts (42%)
  • Unsecured data stored in the cloud (41%)

Meanwhile, 37% of respondents say they store user passwords in Excel spreadsheets or in Word docs (hopefully not on the cloud).

Back to the cloud for a moment. The study says that “Almost all (94%) security respondents say their organizations store and serve data using public cloud services. And they are increasingly likely to entrust cloud providers with much more sensitive data than in the past. For instance, half (50%) of IT professionals say their organization stores business-critical information in the cloud, including revenue-generating customer-facing applications, and 43% say they commit regulated customer data to the cloud.”

And all that, with far too many companies reporting poor security practices when it comes to the cloud. Expect more breaches. Lots more.

Spectre and Meltdown are two separate computer security problems. They are often lumped together because they were revealed around the same time – and both exploit vulnerabilities in many modern microprocessors. The website MeltdownAttack, from the Graz University of Technology, explains both Spectre and Meltdown very succinctly – and also links to official security advisories from the industry:

Meltdown breaks the most fundamental isolation between user applications and the operating system. This attack allows a program to access the memory, and thus also the secrets, of other programs and the operating system. If your computer has a vulnerable processor and runs an unpatched operating system, it is not safe to work with sensitive information without the chance of leaking the information. This applies both to personal computers as well as cloud infrastructure. Luckily, there are software patches against Meltdown.

Spectre breaks the isolation between different applications. It allows an attacker to trick error-free programs, which follow best practices, into leaking their secrets. In fact, the safety checks of said best practices actually increase the attack surface and may make applications more susceptible to Spectre. Spectre is harder to exploit than Meltdown, but it is also harder to mitigate. However, it is possible to prevent specific known exploits based on Spectre through software patches.

For now, nearly everyone is dependent on microprocessor makers and operating system vendors to develop, test, and distribute patches to mitigate both flaws. In the future, new microprocessors should be immune to these exploits – but given the long development cycle for new processors, we are unlikely to see computers using such next-generation chips for several years.

So, expect Spectre and Meltdown to be around for many years to come. Some devices will remain unpatched — because some devices always remain unpatched. Even after new computers become available, it will take years to replace all the old machines.

Wide-Ranging Effects

Just about everything is affected by these flaws. Says the Graz University website:

Which systems are affected by Meltdown? Desktop, Laptop, and Cloud computers may be affected by Meltdown. More technically, every Intel processor which implements out-of-order execution is potentially affected, which is effectively every processor since 1995 (except Intel Itanium and Intel Atom before 2013). We successfully tested Meltdown on Intel processor generations released as early as 2011. Currently, we have only verified Meltdown on Intel processors. At the moment, it is unclear whether AMD processors are also affected by Meltdown. According to ARM, some of their processors are also affected.

 Which systems are affected by Spectre? Almost every system is affected by Spectre: Desktops, Laptops, Cloud Servers, as well as Smartphones. More specifically, all modern processors capable of keeping many instructions in flight are potentially vulnerable. In particular, we have verified Spectre on Intel, AMD, and ARM processors.

Ignore Spectre and Meltdown at your peril.

Patch. Sue. Repeat.

Many techies are involved in trying to handle the Spectre and Meltdown issues. So are attorneys. Intel alone has disclosed dozens of lawsuits in its annual report filing with the U.S. Securities and Exchange Commission:

As of February 15, 2018, 30 customer class action lawsuits and two securities class action lawsuits have been filed. The customer class action plaintiffs, who purport to represent various classes of end users of our products, generally claim to have been harmed by Intel’s actions and/or omissions in connection with the security vulnerabilities and assert a variety of common law and statutory claims seeking monetary damages and equitable relief.

Given that there are many microprocessor makers involved (it’s not only Intel, remember), expect lots more patches. And lots more lawsuits.

The VPN model of extending security through enterprise firewalls is dead, and the future now belongs to the Software Defined Perimeter (SDP). Firewalls imply that there’s an inside to the enterprise, a place where devices can communicate in a trusted manner. This being so, there must also be an outside where communications aren’t trusted. Between the two sits the firewall, which decides, after deep inspection based on scans and policies, which traffic may leave and which may enter.

What about trusted applications requiring direct access to corporate resources from outside the firewall? That’s where Virtual Private Networks came in, offering a way to punch a hole in the firewall. VPNs are a complex mechanism that uses encryption and secure tunnels to bridge multiple networks, such as a head-office network and a regional office network. They can also temporarily allow remote users to become part of the network.

VPNs are well established but perceived as difficult to configure on the endpoints, hard for IT to manage and challenging to scale for large deployments. There are also issues of software compatibility: not everything works through a VPN. Putting it bluntly, almost nobody likes VPNs and there is now a better way to securely connect mobile applications and Industrial Internet of Things (IIoT) devices into the world of datacenter servers and cloud-based applications.

Authenticate Then Connect

The Software Defined Perimeter depends on a rigorous process of identity verification of both client and server using a secure control channel, thereby replacing the VPN. The negotiation for trustworthy identification is based on cryptographic protocols like Transport Layer Security (TLS) which succeeds the old Secure Sockets Layer (SSL).

With identification and trust established by both parties, a secure data channel can be provisioned with specified bandwidth and quality. For example, the data channel might require very low latency and minimal jitter for voice messaging or it might need high bandwidth for streaming video, or alternatively be low-bandwidth and low-cost for data backups.

On the client side, the trust negotiation and data channel can be tied to a specific mobile application on, say, an employee’s phone or tablet. The corporate customer account management app needs trusted access to the corporate database server, but no other app on that phone should be granted access.

SDP is based on the notion of authenticate-before-connect, which reminds me of the reverse-charge phone calls of the distant past. Bob would ask the operator to place a reverse-charge call to his aunt Sally at a specified number. The operator would chat with Sally over the equivalent of the control channel. Only if the operator believed she was talking to Sally, and provided that Sally accepted the charges, would the operator establish the Bob-to-Sally connection, which is the equivalent of the SDP data channel.
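As a rough sketch of what authenticate-before-connect looks like in code, here is a minimal Python example of a TLS client context configured so that no data channel opens unless the peer’s identity verifies first. The function name and the commented certificate paths are my own illustrative assumptions, not the API of any particular SDP product.

```python
import ssl

def make_sdp_client_context():
    # Start from secure defaults: server certificate verification on,
    # hostname checking on, and modern TLS versions only.
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    ctx.check_hostname = True
    ctx.verify_mode = ssl.CERT_REQUIRED
    # In mutual TLS, the client would also present its own certificate
    # so the server can verify the client before connecting, e.g.:
    #   ctx.load_cert_chain(certfile="client.pem", keyfile="client.key")
    return ctx

ctx = make_sdp_client_context()
# With CERT_REQUIRED, wrap_socket() on this context will raise an
# error during the handshake (before any application data is sent)
# if the server's identity cannot be verified.
```

The key design point is that identity failure aborts the handshake itself: the application never gets a socket to misuse, which is the authenticate-before-connect guarantee SDP builds on.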

Read more in my essay for Network Computing, “Forget VPNs: the future is SDP.”

On February 7, 2018, the carrier Swisscom admitted that a security lapse exposed sensitive information about 800,000 customers. The security failure was at one of Swisscom’s sales partners.

This is what can happen when a business gives its partners access to critical data. The security chain is only as good as the weakest link – and it can be difficult to ensure that partners are taking sufficient care, even if they pass an onboarding audit. Swisscom says,

In autumn of 2017, unknown parties misappropriated the access rights of a sales partner, gaining unauthorised access to customers’ name, address, telephone number and date of birth.

That’s pretty bad, but what came next was even worse, in my opinion. “Under data protection law this data is classed as ‘non-sensitive’,” said Swisscom. That’s distressing, because that’s exactly the sort of data needed for identity theft. But I digress.

Partners and Trust

Partners can be the way into an organization. Swisscom claims that new restrictions, such as preventing high-volume queries and using two-factor authentication, mean such an event can never occur again, which seems optimistic: “Swisscom also made a number of changes to better protect access to such non-sensitive personal data by third-party companies… These measures mean that there is no chance of such a breach happening again in the future.”

Let’s hope they are correct. But in the meantime, what can organizations do? First, ensure that all third parties that have access to sensitive data, such as intellectual property, financial information, and customer information, go through a rigorous security audit.

Tricia C. Bailey’s article, “Managing Third-Party Vendor Risk,” makes good recommendations for how to vet vendors – and also how to prepare at your end. For example, do you know what (and where) your sensitive data is? Do vendor contracts spell out your rights and responsibilities for security and data protection – and your vendor’s rights and responsibilities? Do you have a strong internal security policy? If your own house isn’t in order, you can’t expect a vendor to improve your security. After all, you might be the weakest link.

Unaccustomed to performing security audits on partners? Organizations like CA Veracode offer audit-as-a-service, such as with their Vendor Application Security Testing service. There are also vertical industry services: the HITRUST Alliance, for example, offers a standardized security audit process for vendors serving the U.S. healthcare industry with its Third Party Assurance Program.

Check the Back Door

Many vendors and partners require back doors into enterprise data systems. Those back doors, or remote access APIs, can be essential for the vendors’ performing their line-of-business function. Take the Swisscom sales partner: It needs to be able to query Swisscom customers and add/update customer information, in order to effectively serve as a sales organization.

Yet if the partner is breached, that back door can fall under the control of hackers, using the partner’s systems or credentials. In its 2017 Data Breach Investigations Report, Verizon reported that in regard to Point-of-Sale (POS) systems, “Almost 65% of breaches involved the use of stolen credentials as the hacking variety, while a little over a third employed brute force to compromise POS systems. Following the same trend as last year, 95% of breaches featuring the use of stolen credentials leveraged vendor remote access to hack into their customer’s POS environments.”

A Handshake Isn’t Good Enough

How secure is your business partner, your vendor, your contractor? If you don’t know, then you don’t know. If something goes wrong at your partners’ end, never forget that it may be your IP, your financials, and your customers’ data that is exposed. After all, whether or not you can recover damages from the partner in a lawsuit, your organization is the one that will pay the long-term price in the marketplace.

Savvy businesses have policies that prevent on-site viewing of pornography, in part to avoid creating a hostile work environment — and to avoid sexual harassment lawsuits. For security professionals, porn sites are also a dangerous source of malware.

That’s why human-resources policies should be backed up with technological measures. Those include blocking porn sites at the firewall, and using on-device controls to stop browsers from accessing such sites.

Even that may not be enough, says Kaspersky Lab in its report, “Naked online: cyberthreats facing users of adult websites and applications.” Why? Because naughty content and videos have gone mainstream, says the report:

Today, porn can be found not only on specialist websites, but also in social media networks and on social platforms like Twitter. Meanwhile, the ‘classic’ porn websites are turning into content-sharing platforms, creating loyal communities willing to share their videos with others in order to get ‘likes’ and ‘shares’.

This problem is not new, but it’s increasingly dangerous, thanks to the criminal elements on the Dark Web, which are advertising tools for weaponizing porn content. Says Kaspersky, “While observing underground and semi-underground market places on the dark web, looking for information on the types of legal and illegal goods sold there, we found that among the drugs, weapons, malware and more, credentials to porn websites were often offered for sale.”

So, what’s the danger? There are concerns about attacks on both desktop/notebook and mobile users. In the latter case, says Kaspersky,

  • In 2017, at least 1.2 million users encountered malware with adult content at least once. That is 25.4% of all users who encountered any type of Android malware.
  • Mobile malware is making extensive use of porn to attract users: Kaspersky Lab researchers identified 23 families of mobile malware that use porn content to hide their real functionality.
  • Malicious clickers, rooting malware, and banking Trojans are the types of malware that are most often found inside porn apps for Android.

That’s the type of malware that’s dangerous on a home network. It’s potentially ruinous if it provides a foothold onto an enterprise network that isn’t protected by intrusion detection/prevention systems or other anti-malware technology. The Kaspersky report goes into a lot of detail, and you should read it.

For another take on the magnitude of the problem: The Nielsen Company reported that more than 21 million Americans accessed adult websites on work computers – that is, 29% of working adults. Bosses are in on it too. In 2013, Time Magazine reported that a survey of 200 U.S.-based data security analysts revealed that 40 percent had removed malware from a senior executive’s computer, phone, or tablet after the executive visited a porn website.

What Can You Do?

Getting rid of pornography isn’t easy, but it’s not rocket science either. Start with a strong policy. Work with your legal team to make sure the policy is both legal and comprehensive. Get employee feedback on the policy, to help generate buy-in from executives and the rank-and-file.

Once the policy is finalized, communicate it clearly. Train employees on what to do, what not to do… and the employment ramifications for violating the policy. Explain that this policy is not just about harassment, but also about information security.

Block, block, block. Block at the firewall, block at proxy servers, block on company-owned devices. Block on social media. Make sure that antivirus is up to date. Review log files.

Finally, take this seriously. This isn’t a case of giggling (or eye-rolling) about boys-being-boys, or harmless diversions comparable to work-time shopping on eBay. Porn isn’t only offensive in the workplace, but it’s also a gateway to the Dark Web, criminals, and hackers. Going after porn isn’t only about being Victorian about naughty content. It’s about protecting your business from hackers.

Nobody wants bad guys to be able to hack connected cars. Equally importantly, they shouldn’t be able to hack any part of the multi-step communications path that leads from the connected car to the Internet to cloud services – and back again. Fortunately, companies are working across the automotive and security industries to make sure that doesn’t happen.

The consequences of cyberattacks against cars range from the bad to the horrific: Hackers might be able to determine that a driver is not home, and sell that information to robbers. Hackers could access accounts and passwords, and be able to leverage that information for identity theft, or steal information from bank accounts. Hackers might be able to immobilize vehicles, or modify/degrade the functionality of key safety features like brakes or steering. Hackers might even be able to seize control of the vehicle, and cause accidents or terrorist incidents.

Horrific. Thankfully, companies like semiconductor leader Micron Technology, along with communication security experts NetFoundry, have a plan – and are partnering with vehicle manufacturers to embed secure, trustworthy hardware into connected cars. The result: Safety. Security. Trust. Vroom.

It starts with the Internet of Things

The IoT consists of autonomous computing units, connected to back-end services via the Internet. Those back-end services are often in the cloud, and in the case of connected cars, might offer anything from navigation to infotainment to preventive maintenance to firmware upgrades for built-in automotive features. Often, the back-end services would be offered through the automobile’s manufacturer, though they may be provisioned through third-party providers.

The communications chain for connected cars is lengthy. On the car side, it begins with an embedded component (think stereo head unit, predictive front-facing radar used for adaptive cruise control, or anti-lock brake monitoring system). The component will likely contain or be connected to an ECU – an embedded control unit, a circuit board with a microprocessor, firmware, RAM, and a network connection. The ECU, in turn, is connected via an in-vehicle network, which connects to a communications gateway.

That communications gateway talks to a telecommunications provider, which could change as the vehicle crosses service provider or national boundaries. The telco links to the Internet, the Internet links to a cloud provider (such as Amazon Web Services), and from there, there are services that talk to the automotive systems.

Trust is required at all stages of the communications. The vehicle must be certain that its embedded devices, ECUs, and firmware are not corrupted or hacked. The gateway needs to know that it’s talking to the real car and its embedded systems – not fakes or duplicates offered by hackers. It also needs to know that the cloud services are the genuine article, and not fakes. And of course, the cloud services must be assured that they are talking to the real, authenticated automotive gateway and in-vehicle components.

Read more about this in my feature for Business Continuity, “Building Cybertrust into the Connected Car.”

Farewell, Prius

A sad footnote to this blog post. Our faithful Prius, pictured above, was totaled in a collision. Nobody was injured, which is the most important thing, but the car is gone. May it rust in peace.

From January 1, 2005 through December 27, 2017, the Identity Theft Resource Center (ITRC) reported 8,190 breaches, with 1,057,771,011 records exposed. That’s more than a billion records. Billion with a B. That’s not a problem. That’s an epidemic.

That horrendous number compiles data breaches in the United States confirmed by media sources or government agencies. Breaches may have exposed information that could potentially lead to identity theft, including Social Security numbers, financial account information, medical information, and even email addresses and passwords.

Of course, some people may be included in multiple breaches – in today’s highly interconnected world, that’s very likely. There’s no good way to know how many individuals were affected.

What defines a breach? The organization says,

Security breaches can be broken down into a number of additional sub-categories by what happened and what information (data) was exposed. What they all have in common is they usually contain personal identifying information (PII) in a format easily read by thieves, in other words, not encrypted.

The ITRC tracks seven categories of breaches:

  • Insider Theft
  • Hacking / Computer Intrusion (includes Phishing, Ransomware/Malware and Skimming)
  • Data on the Move
  • Physical Theft
  • Employee Error / Negligence / Improper Disposal / Lost
  • Accidental Web/Internet Exposure
  • Unauthorized Access

As we’ve seen, data loss has occurred when employees store data files on a cloud service without encryption, without passwords, without access controls. It’s like leaving a luxury car unlocked, windows down, keys on the seat: If someone sees this and steals the car, it’s theft – but it was easily preventable theft abetted by negligence.

The rate of breaches is increasing, says the ITRC. The number of U.S. data breach incidents tracked in 2017 hit a record high of 1,579 breaches exposing 178,955,069 records. This is a 44.7% increase over the record high figures reported for 2016, says the ITRC.

It’s mostly but not entirely about hacking. The ITRC says in its “2017 Annual Data Breach Year-End Review,”

Hacking continues to rank highest in the type of attack, at 59.4% of the breaches, an increase of 3.2 percent over 2016 figures: Of the 940 breaches attributed to hacking, 21.4% involved phishing and 12.4% involved ransomware/malware.

In addition,

Nearly 20% of breaches included credit and debit card information, a nearly 6% increase from last year. The actual number of records included in these breaches grew by a dramatic 88% over the figures we reported in 2016. Despite efforts from all stakeholders to lessen the value of compromised credit/debit credentials, this information continues to be attractive and lucrative to thieves and hackers.

Data theft truly is becoming epidemic. And it’s getting worse.

At least, I think it’s Swedish! Just stumbled across this. I hope they bought the foreign rights to one of my articles…


Wireless Ethernet connections aren’t necessarily secure. The authentication methods used to permit access between a device and a wireless router aren’t very strong. The encryption methods used to handle that authentication, and then the data traffic after authorization, aren’t very strong either. And the rules that enforce the use of authorization and encryption aren’t always enabled, especially with public hotspots like those in hotels, airports, and coffee shops, where authentication is handled by a web browser application, not by the Wi-Fi protocols embedded in a local router.

Helping to solve those problems will be WPA3, an update to decades-old wireless security protocols. Announced by the Wi-Fi Alliance at CES in January 2018, the new standard promises:

Four new capabilities for personal and enterprise Wi-Fi networks will emerge in 2018 as part of Wi-Fi CERTIFIED WPA3™. Two of the features will deliver robust protections even when users choose passwords that fall short of typical complexity recommendations, and will simplify the process of configuring security for devices that have limited or no display interface. Another feature will strengthen user privacy in open networks through individualized data encryption. Finally, a 192-bit security suite, aligned with the Commercial National Security Algorithm (CNSA) Suite from the Committee on National Security Systems, will further protect Wi-Fi networks with higher security requirements such as government, defense, and industrial.

This is all good news. According to Zack Whittaker writing for ZDNet,

One of the key improvements in WPA3 will aim to solve a common security problem: open Wi-Fi networks. Seen in coffee shops and airports, open Wi-Fi networks are convenient but unencrypted, allowing anyone on the same network to intercept data sent from other devices.

WPA3 employs individualized data encryption, which scramble the connection between each device on the network and the router, ensuring secrets are kept safe and sites that you visit haven’t been manipulated.

Another key improvement in WPA3 will protect against brute-force dictionary attacks, making it tougher for attackers near your Wi-Fi network to guess a list of possible passwords.

The new wireless security protocol will also block an attacker after too many failed password guesses.

What About KRACK?

A challenge for the use of WPA2 is that a defect, called KRACK, was discovered and published in October 2017. To quote my dear friend John Romkey, founder of FTP Software:

The KRACK vulnerability allows malicious actors to access a Wi-Fi network without the password or key, observe what connected devices are doing, modify the traffic amongst them, and tamper with the responses the network’s users receive. Everyone and anything using Wi-Fi is at risk. Computers, phones, tablets, gadgets, things. All of it. This isn’t just a flaw in the way vendors have implemented Wi-Fi. No. It’s a bug in the specification itself.

The timing of the WPA3 release couldn’t be better. But what about older devices? I have no idea how many of my devices — including desktops, phones, tablets, and routers — will be able to run WPA3. I don’t know whether firmware updates will be applied automatically, or whether I will need to seek them out.

What’s more, what about the millions of devices out there? Presumably new hotspots will downgrade to WPA2 if a device can’t support WPA3. (And the other way around: A new mobile device will downgrade to talk to an older or unpatched hotel room’s Wi-Fi router.) It could take ages before we reach a critical mass of new devices that can handle WPA3 end-to-end.

The Wi-Fi Alliance says that it “will continue enhancing WPA2 to ensure it delivers strong security protections to Wi-Fi users as the security landscape evolves.” Let’s hope that is indeed the case, and that those enhancements can be pushed down to existing devices. If not, well, the huge installed base of existing Wi-Fi devices will continue to lack real security for years to come.