Solve the puzzle: A company’s critical customer data is in a multiterabyte on-premises database, and the digital marketing application that uses that data to manage and execute campaigns runs in the cloud. How can the cloud-based marketing software quickly access and leverage that on-premises data?

It’s a puzzle that one small consumer-engagement consulting company, Embel Assist, found its clients facing. The traditional solution, perhaps, would be to periodically replicate the on-premises database in the cloud using extract-transform-load (ETL) software, but that may take too much time and bandwidth, especially when processing terabytes of data. What’s more, the replicated data could quickly become out of date.

Using cloud-based development and computing resources, Embel Assist found another way to crack this problem. It created an app called EALink that acts as a smart interface between an organization’s customer data sources and Oracle Eloqua, a cloud-based marketing automation platform. EALink also shows how development using Oracle Cloud Infrastructure creates new opportunities for a small and creative company to take on big enterprise data challenges.

Say the on-premises CRM system for a drugstore chain has 1 million customer records. The chain wants an e-mail campaign to reach customers who made their last purchase more than a month ago, who live within 20 miles of one set of stores, and who purchased products related to a specific condition. Instead of exporting the entire database into Eloqua, EALink runs the record-extraction query on the CRM system and sends Eloqua only the minimum information needed to execute the campaign. And, the query is run when the campaign is being executed, so the campaign information won’t be out of date.
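EALink’s internals aren’t public, but the pattern it illustrates is easy to sketch: run the segmentation query against the source data at campaign time, and pass along only the minimal fields the campaign needs. Here’s a hypothetical Python/SQLite sketch of that pattern; the schema, column names, and the precomputed distance column are invented for illustration, not taken from EALink.

```python
# Hypothetical sketch of the "query at campaign time, send only what's needed" pattern.
# The schema, column names, and distance shortcut are invented for illustration;
# this is not EALink's actual implementation.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE customers (
        email TEXT, first_name TEXT, last_purchase_date TEXT,
        store_distance_miles REAL, purchased_category TEXT
    )""")
conn.executemany(
    "INSERT INTO customers VALUES (?, ?, ?, ?, ?)",
    [
        ("ana@example.com", "Ana", "2018-01-05", 12.0, "allergy"),
        ("bob@example.com", "Bob", "2018-03-28", 8.5, "allergy"),
        ("cam@example.com", "Cam", "2018-01-20", 45.0, "allergy"),
    ],
)

# Run the segmentation query against the source data at campaign time...
rows = conn.execute(
    """
    SELECT email, first_name                -- minimal fields only
    FROM customers
    WHERE last_purchase_date < date('now', '-1 month')
      AND store_distance_miles <= 20
      AND purchased_category = ?
    """,
    ("allergy",),
).fetchall()

# ...and hand off only those minimal records to the marketing platform.
for email, first_name in rows:
    print(f"send to campaign: {first_name} <{email}>")
```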

Learn more about Embel Assist in my story for Forbes, “Embel Assist Links Marketing Apps With Enterprise Data.”

When a microprocessor vulnerability rocked the tech industry last year, companies scrambled to patch nearly every server they had. In Oracle’s case, that meant patching the operating system on about 1.5 million Linux-based servers.

Oracle finished the job in just 4 hours, without taking down the applications the servers ran, by using Oracle’s own automation technology. The technology involved is at the heart of Oracle Autonomous Linux, which the company announced at Oracle OpenWorld 2019 in San Francisco last month. Oracle has been using Autonomous Linux to run its own Generation 2 Cloud infrastructure, and now it is available at no cost to Oracle Cloud customers.

The last thing most CIOs, CTOs, chief information security officers, and even developers want to worry about is patching their server operating systems. Whether they have a hundred servers or hundreds of thousands, that type of maintenance can slow down a business, especially if the maintenance requires shutting down the software running on that server.

A delay is doubly worrying when the reason for the patch is to handle a software or hardware vulnerability. In those instances, delays create an opportunity for malicious operators to strike. If an organization traditionally applies updates to its servers every three months, for example, and a zero-day vulnerability comes out just after that update, the company is vulnerable for months. When updates require a lengthy process, companies are reluctant to do it more frequently.

Not so with Autonomous Linux, which patches itself quickly once a vulnerability is found and Oracle makes the fix available. Combined with Oracle Cloud Infrastructure’s other cost advantages, customers can expect significant total-cost-of-ownership savings compared with other Linux versions running either on-premises or in the cloud.

Underneath the Autonomous Linux service is Oracle Linux, which remains binary compatible with Red Hat Enterprise Linux. Therefore, software that runs on RHEL will run on Oracle Autonomous Linux in Oracle Cloud Infrastructure without change.

Learn more in my story for Forbes, “With Autonomous Linux, Oracle Keeps Server Apps Running During Patching.”

Users care passionately about their software being fast and responsive. You need to give your applications both 0-60 speed and the strongest long-term endurance. Here are 14 guidelines for choosing a deployment platform to optimize performance, whether your application runs in the data center or the cloud.

Faster! Faster! Faster! That killer app won’t earn your company a fortune if the software is slow as molasses. Sure, your development team did the best it could to write server software that offers the maximum performance, but that doesn’t mean diddly if those bits end up on a pokey old computer that’s gathering cobwebs in the server closet.

Users don’t care where it runs as long as it runs fast. Your job, in IT, is to make the best choices possible to enhance application speed, including deciding if it’s best to deploy the software in-house or host it in the cloud.

When choosing an application’s deployment platform, there are 14 things you can do to maximize the opportunity for the best overall performance. But first, let’s make two assumptions:

  • These guidelines apply only to choosing the best data center or cloud-based platform, not to choosing the application’s software architecture. The job today is simply to find the best place to run the software.
  • I presume that if you are talking about a cloud deployment, you are choosing infrastructure as a service (IaaS) instead of platform as a service (PaaS). What’s the difference? In PaaS, the host provides the platform, such as Windows or Linux, .NET, or Java; all you do is provide the application. In IaaS, you provide, install, and configure the operating system yourself, giving you more control over the installation.

Here’s the checklist

  1. Run the latest software. Whether in your data center or in the IaaS cloud, install the latest version of your preferred operating system, the latest core libraries, and the latest application stack. (That’s one reason to go with IaaS, since you can control updates.) If you can’t control this yourself, because you’re assigned a server in the data center, pick the server that has the latest software foundation.
  2. Run the latest hardware. Assuming we’re talking about the x86 architecture, look for the latest Intel Xeon processors, whether in the data center or in the cloud. As of mid-2018, I’d want servers running the Xeon E5 v3 or later, or E7 v4 or later. If you use anything older than that, you’re not getting the most out of the applications or taking advantage of the hardware chipset. For example, some E7 v4 chips have significantly improved instructions-per-CPU-cycle processing, which is a huge benefit. Similarly, if you choose AMD or another processor, look for the latest chip architectures.
  3. If you are using virtualization, make sure the server has the best and latest hypervisor. The hypervisor is key to a virtual machine’s (VM) performance—but not all hypervisors are created equal. Many of the top hypervisors have multiple product lines as well as configuration settings that affect performance (and security). There’s no way to know which hypervisor is best for any particular application. So, assuming your organization lets you make the choice, test, test, test. However, in the not-unlikely event you are required to go with the company’s standard hypervisor, make sure it’s the latest version.
  4. Take Spectre and Meltdown into account. The patches for Spectre and Meltdown slow down servers, but the extent of the performance hit depends on the server, the server’s firmware, the hypervisor, the operating system, and your application. It would be nice to give an overall number, such as expect a 15 percent hit (a number that’s been bandied about, though some dispute its accuracy). However, there’s no way to know except by testing. Thus, it’s important to know if your server has been patched. If it hasn’t been yet, expect application performance to drop when the patch is installed. (If it’s not going to be patched, find a different host server!)
  5. Base the number of CPUs and cores and the clock speed on the application requirements. If your application and its core dependencies (such as the LAMP stack or the .NET infrastructure) are heavily threaded, the software will likely perform best on servers with multiple CPUs, each equipped with the greatest number of cores—think 24 cores. However, if the application is not particularly threaded or runs in a not-so-well-threaded environment, you’ll get the biggest bang from the absolute top clock speeds on an 8-core server. (A quick way to check how a given workload scales across cores is sketched below.)
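On that last point, the cheapest way to decide between more cores and higher clock speed is to measure how your workload actually scales. Here’s a rough Python sketch of that kind of check; the toy workload is only a stand-in, and you should benchmark your real application stack instead.

```python
# Rough illustration of checking whether a CPU-bound task benefits from more cores.
# Benchmark your real application stack; this toy workload is only a stand-in.
import os
import time
from multiprocessing import Pool

def busy_work(n: int) -> int:
    # A small CPU-bound stand-in for real application work.
    total = 0
    for i in range(n):
        total += i * i
    return total

def timed(label, fn):
    start = time.perf_counter()
    fn()
    print(f"{label}: {time.perf_counter() - start:.2f}s")

if __name__ == "__main__":
    cores = os.cpu_count()
    tasks = [2_000_000] * 16
    print(f"logical cores: {cores}")

    timed("serial  ", lambda: [busy_work(n) for n in tasks])

    with Pool(processes=cores) as pool:
        timed("parallel", lambda: pool.map(busy_work, tasks))
    # Near-linear speedup suggests more cores will help; little speedup suggests
    # spending the budget on higher clock speed instead.
```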

But wait, there’s more!

Read the full list of 14 recommendations in my story for HPE Enterprise.nxt, “Checklist: Optimizing application performance at deployment.”

The public cloud is part of your network. But it’s also not part of your network. That can make security tricky, and sometimes a nightmare.

The cloud represents resources that your business rents: computational resources, like CPU and memory; infrastructure resources, like Internet bandwidth and internal networks; storage resources; and management platforms, like the tools needed to provision and configure services.

Whether it’s Amazon Web Services, Microsoft Azure, or Google Cloud Platform, it’s like an empty apartment that you rent for a year or maybe a few months. You start out with empty space, put in whatever you want, and use it however you want. Is such a short-term rental apartment your home? That’s a big question, especially when it comes to security. For now, let’s focus on platform-as-a-service (PaaS) and infrastructure-as-a-service (IaaS), where your business has a great deal of control over how the resource is used — like an empty rental apartment.

We are not talking about software-as-a-service (SaaS), like Office 365 or Salesforce.com. That’s where you show up, pay your bill and use the resources as configured. That’s more like a hotel room: you sleep there, but you can’t change the furniture. Security is almost entirely the responsibility of the hotel; your security responsibility is to ensure that you don’t lose your key, and to refuse to open the door for strangers. The SaaS equivalent: Protect your user accounts and passwords, and ensure users only have the least necessary access privileges.

Why PaaS/IaaS are part of your network

As Peter Parker knows, Spider-Man’s great powers require great responsibility. That’s true in the enterprise data center — and it’s true in PaaS/IaaS networks. The customer is responsible for provisioning servers, storage and virtual machines. Not only that, but the customer also is responsible for creating connections between the cloud service and other resources, such as an enterprise data center — in a hybrid cloud architecture — and other cloud providers — in a multi-cloud architecture.

The cloud provider sets terms for use of the PaaS/IaaS, and allows inbound and outbound connections. There are service level guarantees for availability of the cloud, and of servers that the cloud provider owns. Otherwise, everything is on the enterprise. Think of the PaaS/IaaS cloud as being a remote data center that the enterprise rents, but where you can’t physically visit and see your rented servers and infrastructure.

Why PaaS/IaaS are not part of your network

In short, except for the few areas that the cloud provider handles — availability, cabling, power supplies, connections to carrier networks, physical security — you own it. That means installing patches and fixes. That means instrumenting servers and virtual machines. That means protecting them with software-based firewalls. That means doing backups, whether using the cloud provider’s value-added services or someone else’s. That means anti-malware.

That’s not to minimize the benefits the cloud provider offers you. Power and cooling are a big deal. So are racks and cabling. So is physical security, and having 24×7 on-site staffing in the event of hardware failures. Also, there’s click-of-a-button ability to provision and spool up new servers to handle demand, and then shut them down again when not needed. Cloud providers can also provide firewall services, communications encryption, and of course, consulting on security.

The word elastic is often used for cloud services. That’s what makes the cloud much more agile than an on-premises data center, or renting an equipment cage in a colocation center. It’s like renting an apartment where, if you need a couple of extra bedrooms for a few months, you can upsize.

For many businesses, that’s huge. Read more about how great cloud power requires great responsibility in my essay for SecurityNow, “Public Cloud, Part of the Network or Not, Remains a Security Concern.”

No more pizza boxes: Traditional hardware firewalls can’t adequately protect a modern corporate network and its users. Why? Because while there still may be physical servers inside an on-premises data center or in a wiring closet somewhere, an increasing number of essential resources are virtualized or off-site. And off-site includes servers in infrastructure-as-a-service (IaaS) and platform-as-a-service (PaaS) clouds.

It’s the enterprise’s responsibility to protect each of those assets, as well as the communications paths to and from those assets, as well as public Internet connections. So, no, a pizza-box appliance next to the router can’t protect virtual servers, IaaS or PaaS. What’s needed are the poorly named “next-generation firewalls” (NGFW) — very badly named because that term is not at all descriptive, and will seem really stupid in a few years, when the software-based NGFW will magically become an OPGFW (obsolete previous-generation firewall).

Still, the industry loves the “next generation” phrase, so let’s stick with NGFW here. If you have a range of assets that must be protected, including some combination of on-premises servers, virtual servers and cloud servers, you need an NGFW to unify protection and ensure consistent coverage and policy compliance across all those assets.

Cobbling together a variety of different technologies may not suffice, and could end up with coverage gaps. Also, only an NGFW can detect attacks or threats against multiple assets; discrete protection for, say, on-premises servers and cloud servers won’t be able to correlate incidents and raise the alarm when an attack is detected.

Here’s how Gartner defines NGFW:

Next-generation firewalls (NGFWs) are deep-packet inspection firewalls that move beyond port/protocol inspection and blocking to add application-level inspection, intrusion prevention, and bringing intelligence from outside the firewall.

What this means is that an NGFW does an excellent job of detecting when traffic is benign or malicious, and can be configured to analyze traffic and detect anomalies in a variety of situations. A true NGFW looks at northbound/southbound traffic, that is, data entering and leaving the network. It also doesn’t trust anything: The firewall software also examines eastbound/westbound traffic, that is, packets flowing from one asset inside the network to another.

After all, an intrusion might compromise one asset, and use that as an entry point to compromise other assets, install malware, exfiltrate data, or cause other mischief. Where does the cloud come in? Read my essay for SecurityNow, “Next-Generation Firewalls: Poorly Named but Essential to the Enterprise Network.”

Get ready for insomnia. Attackers are finding new techniques, and here are five that will give you nightmares worse than after you watched the slasher film everyone warned you about when you were a kid.

At a panel at the 2018 RSA Conference in San Francisco last week, we learned that these new attack techniques aren’t merely theoretically possible. They’re here, they’re real, and they’re hurting companies today. The speakers on the panel laid out the biggest attack vectors we’re seeing — and some of them are either different than in the past, or are becoming more common.

Here’s the list:

1. Repositories and cloud storage data leakage

People have been grabbing data from unsecured cloud storage for as long as cloud storage has existed. Now that the cloud is nearly ubiquitous, so are the instances of non-encrypted, non-password-protected repositories on Amazon S3, Microsoft Azure, or Google Cloud Storage.

Ed Skoudis, the Penetration Testing Curriculum Director at the SANS Institute, a security training organization, points to three major flaws here. First, private repositories are accidentally opened to the public. Second, these public repositories are allowed to hold sensitive information, such as encryption keys, user names, and passwords. Third, source code and behind-the-scenes application data can be stored in the wrong cloud repository.

The result? Leakage, if someone happens to find it. And “Hackers are constantly searching for repositories that don’t have the appropriate security,” Skoudis said.
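The first of those flaws is straightforward to audit. As a hedged illustration, here’s a minimal Python sketch using boto3 that flags S3 buckets whose ACLs grant access to everyone; it assumes AWS credentials are already configured, and it checks only bucket ACLs, which is just one of several ways a repository can be left open.

```python
# Minimal sketch: flag S3 buckets whose ACLs grant access to "everyone" groups.
# Assumes boto3 is installed and AWS credentials are configured; this checks
# only bucket ACLs, one of several ways a repository can be left open.
import boto3

PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    acl = s3.get_bucket_acl(Bucket=name)
    public_grants = [
        grant["Permission"]
        for grant in acl["Grants"]
        if grant["Grantee"].get("URI") in PUBLIC_GROUPS
    ]
    if public_grants:
        print(f"WARNING: {name} grants {public_grants} to everyone")
    else:
        print(f"ok: {name}")
```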

2. Data de-anonymization and correlation

Lots of medical and financial data is shared between businesses. Often that data is anonymized, that is, scrubbed of all personally identifiable information (PII) so that it’s impossible to figure out which human a particular data record belongs to.

Well, that’s the theory, said Skoudis. In reality, if you beg, borrow or steal enough data from many sources (including breaches), you can often correlate the data and figure out which person is described by financial or health data. It’s not easy, because a lot of data and computation resources are required, but de-anonymization can be done, and used for identity theft or worse.
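To see why this works, here’s a deliberately tiny, invented illustration in Python with pandas: joining a “scrubbed” dataset to an identified one on shared quasi-identifiers such as ZIP code, birth date, and sex can re-attach names to supposedly anonymous records. Real correlation attacks need far more data and computation, as noted above.

```python
# Toy illustration of re-identification by joining on quasi-identifiers.
# All data here is invented; real correlation attacks need far more data and care.
import pandas as pd

# "Anonymized" health records: names removed, quasi-identifiers kept.
health = pd.DataFrame({
    "zip": ["02138", "02139", "02138"],
    "birth_date": ["1961-07-28", "1975-03-02", "1961-07-28"],
    "sex": ["F", "M", "F"],
    "diagnosis": ["hypertension", "asthma", "diabetes"],
})

# A separate, identified dataset (say, from a breach or a public roster).
roster = pd.DataFrame({
    "name": ["J. Smith", "K. Jones"],
    "zip": ["02138", "02139"],
    "birth_date": ["1961-07-28", "1975-03-02"],
    "sex": ["F", "M"],
})

# Joining on the shared quasi-identifiers re-attaches names to "anonymous" rows.
reidentified = health.merge(roster, on=["zip", "birth_date", "sex"])
print(reidentified[["name", "diagnosis"]])
```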

3. Monetizing compromised systems using cryptominers

Johannes Ullrich, who runs the SANS Internet Storm Center, said that hackers, like all other criminals, care about selling your stuff. Some want to steal your data, including bank accounts, and sell that to other people, say on the Dark Web. A few years ago, hackers learned how to steal your data and sell it back to you, in the form of ransomware. And now, they’re stealing your computer’s processing power.

What’s the processing power used for? “They’re using your system for crypto-coin mining,” Ullrich said. This became obvious earlier this year, he said, with a PeopleSoft breach where hackers installed a coin miner on thousands of servers – and never touched the PeopleSoft data. Meanwhile, since no data is touched or stolen, the hack could stay undetected for months, maybe years.
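One crude way to look for this kind of abuse is to watch for unexpected processes with sustained high CPU usage. Here’s a minimal, hypothetical Python sketch using the psutil library; the allow-list and threshold are invented placeholders, and this is not how the PeopleSoft incident was detected.

```python
# Minimal sketch: flag unexpected processes with sustained high CPU usage,
# one crude signal of unauthorized coin mining. Requires psutil; the allow-list
# and threshold below are invented placeholders.
import time
import psutil

EXPECTED = {"java", "httpd", "mysqld", "python"}   # hypothetical allow-list
CPU_THRESHOLD = 80.0                               # percent of one core

# Prime the per-process CPU counters, then sample again after an interval.
for proc in psutil.process_iter():
    try:
        proc.cpu_percent(None)
    except (psutil.NoSuchProcess, psutil.AccessDenied):
        pass

time.sleep(5)

for proc in psutil.process_iter(["name"]):
    try:
        usage = proc.cpu_percent(None)
        name = proc.info["name"] or ""
    except (psutil.NoSuchProcess, psutil.AccessDenied):
        continue
    if usage >= CPU_THRESHOLD and name.lower() not in EXPECTED:
        print(f"suspicious: {name} (pid {proc.pid}) at {usage:.0f}% CPU")
```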

Two more

Read the full story, including the two biggest sleep-inhibiting worries, in my story for SecurityNow: “5 New Network Attack Techniques That Will Keep You Awake at Night.”

Blame people for the SOC scalability challenge. On the other hand, don’t blame your people. It’s not their fault.

The security operations center (SOC) team is frequently overwhelmed, particularly the Tier 1 security analysts tasked with triage. As companies grow and add more technology — including the Internet of Things (IoT) — that means more alerts.

As the enterprise adds more sophisticated security tools, such as Endpoint Detection and Response (EDR), that means more alerts. And more complex alerts. You’re not going to see a blinking red light that says: “You’re being hacked.” Or if you do see such an alert, it’s not very helpful.

The problem is people, say experts at the 2018 RSA Conference, which wrapped up last week. Your SOC team — or teams — simply can’t scale fast enough to keep up with the ever-increasing demand. Let’s talk about the five biggest problems challenging SOC scalability.

Reason #1: You can’t afford to hire enough analysts

You certainly can’t afford to hire enough Tier 2 analysts who respond to real — or almost certainly real — incidents. According to sites like Glassdoor and Indeed, be prepared to pay over $100,000 per year, per person.

Reason #2: You can’t even find enough analysts

“We’ve created a growing demand for labor, and thus, we’ve created this labor shortage,” said Malcolm Harkins, chief security and trust officer of Cylance. There are huge numbers of open positions at all levels of information security, and that includes in-enterprise SOC team members. Sure, you could pay more, or do competitive recruiting, but go back to the previous point: You can’t afford that. Perhaps a managed security service provider can afford to keep raising salaries, because an MSSP can monetize that expense. An ordinary enterprise can’t, because security is an expense.

Reason #3: You can’t train the analysts

Even with the best security tools, analysts require constant training on threats and techniques — which is expensive to offer, especially for a smaller organization. And wouldn’t you know it, as soon as you get a group of triage specialists or incident responders trained up nicely, off they go for a better job.

Read more, including two more reasons, in my essay for SecurityNow, “It’s the People: 5 Reasons Why SOC Can’t Scale.”

Ransomware rules the cybercrime world – perhaps because ransomware attacks are often successful and financially remunerative for criminals. Ransomware features prominently in Verizon’s fresh-off-the-press 2018 Data Breach Investigations Report (DBIR). As the report says, although ransomware is still a relatively new type of attack, it’s growing fast:

Ransomware was first mentioned in the 2013 DBIR and we referenced that these schemes could “blossom as an effective tool of choice for online criminals”. And blossom they did! Now we have seen this style of malware overtake all others to be the most prevalent variety of malicious code for this year’s dataset. Ransomware is an interesting phenomenon that, when viewed through the mind of an attacker, makes perfect sense.

The DBIR explains that ransomware can be attempted with little risk or cost to the attacker. It can be successful because the attacker doesn’t need to monetize stolen data, only ransom the return of that data; and it can be deployed across numerous devices in an organization to inflict more damage and potentially justify bigger ransoms.

Botnets Are Also Hot

Ransomware wasn’t the only prominent attack; the 2018 DBIR also talks extensively about botnet-based infections. Verizon cites more than 43,000 breaches using customer credentials stolen from botnet-infected clients. It’s a global problem, says the DBIR, and can affect organizations in two primary ways:

The first way, you never even see the bot. Instead, your users download the bot, it steals their credentials, and then uses them to log in to your systems. This attack primarily targeted banking organizations (91%) though Information (5%) and Professional Services organizations (2%) were victims as well.

The second way organizations are affected involves compromised hosts within your network acting as foot soldiers in a botnet. The data shows that most organizations clear most bots in the first month (give or take a couple of days).

However, the report says, some bots may be missed during the disinfection process. This could result in a re-infection later.

Insiders Are Still Significant Threats

Overall, says Verizon, outsiders perpetrated most breaches, 73%. But don’t get too complacent about employees or contractors: Many breaches involved internal actors, 28%. Yes, that adds to more than 100%, because some outside attacks had inside help. Here’s who Verizon says is behind breaches:

  • 73% perpetrated by outsiders
  • 28% involved internal actors
  • 2% involved partners
  • 2% featured multiple parties
  • 50% of breaches were carried out by organized criminal groups
  • 12% of breaches involved actors identified as nation-state or state-affiliated

Email is still the delivery vector of choice for malware and other attacks. Many of those attacks were financially motivated, says the DBIR. Most worrying, a significant number of breaches took a long time to discover.

  • 49% of non-point-of-sale malware was installed via malicious email
  • 76% of breaches were financially motivated
  • 13% of breaches were motivated by the gain of strategic advantage (espionage)
  • 68% of breaches took months or longer to discover

Taking Months to Discover the Breach

To that previous point: Attackers can move fast, but defenders can take a while. To use a terrible analogy: If someone breaks into your car and steals your designer sunglasses, the time from their initial penetration (picking the lock or smashing the window) to compromising the asset (grabbing the glasses) might be a minute or less. The time to discovery (when you see the broken window or realize your glasses are gone) could be minutes if you parked at the mall – or days, if the car was left at the airport parking garage. The DBIR makes the same point about enterprise data breaches:

When breaches are successful, the time to compromise continues to be very short. While we cannot determine how much time is spent in intelligence gathering or other adversary preparations, the time from first action in an event chain to initial compromise of an asset is most often measured in seconds or minutes. The discovery time is likelier to be weeks or months. The discovery time is also very dependent on the type of attack, with payment card compromises often discovered based on the fraudulent use of the stolen data (typically weeks or months) as opposed to a stolen laptop which is discovered when the victim realizes they have been burglarized.

Good News, Bad News on Phishing

Let’s end on a positive note, or a sort of positive note. The 2018 DBIR notes that most people never click phishing emails: “When analyzing results from phishing simulations the data showed that in the normal (median) organization, 78% of people don’t click a single phish all year.”

The less good news: “On average 4% of people in any given phishing campaign will click it.” The DBIR notes that the more phishing emails someone has clicked, the more they are likely to click on phishing emails in the future. The report’s advice: “Part of your overall strategy to combat phishing could be that you can try and find those 4% of people ahead of time and plan for them to click.”

Good luck with that.

Endpoints everywhere! That’s the future, driven by the Internet of Things. When IoT devices are deployed in their billions, network traffic patterns won’t look at all like today’s patterns. Sure, enterprises have a few employees working at home, or use technologies like MPLS (Multi-Protocol Label Switching) or even SD-WAN (Software Defined Wide-Area Networks) to connect branch offices. However, for the most part, most internal traffic remains within the enterprise LAN, and external traffic is driven by end-users accessing websites from browsers.

The IoT will change all of that, predicts IHS Markit, one of the industry’s most astute analyst firms. In particular, the IoT will accelerate the growth of colo facilities, because it will be to everyone’s benefit to place servers closer to the network edge, avoiding the last mile.

To set the stage, IHS Markit forecasts Internet connectable devices to grow from 27.5 billion in 2017 to 45.4 billion in 2021. That’s a 65% increase in four short years. How will that affect colos? “Data center growth is correlated with general data growth. The more data transmitted via connected devices; the more data centers are needed to store, transfer, and analyze this data.” The analysts say:

In the specific case of the Internet of Things, there’s a need for geographically distributed data centers that can provide low-latency connections to certain connected devices. There are applications, like autonomous vehicles or virtual reality, which are going to require local data centers to manage much of the data processing required to operate.

Therefore, most enterprises will not have the means or the business case to build new data centers everywhere. “They will need to turn to colocations to provide quickly scalable, low capital-intensive options for geographically distributed data centers.”

Another trend being pointed to by IHS Markit: More local processing, rather than relying on servers in a colo-facility, at a cloud provider, or in the enterprise’s own data center. “And thanks to local analytics on devices, and the use of machine learning, a lot of data will never need to leave the device. Which is good news for the network infrastructure of the world that is not yet capable of handling a 65% increase in data traffic, given the inevitable proliferation of devices.”

Four Key Drivers of IoT This Year

The folks at IHS Markit have pointed out four key drivers of IoT growth. They paint a compelling picture, which we can summarize here:

  • Innovation and competitiveness. There are many new wireless models and solutions being released, which means lots of possibility for the future, but confusion in the short term. Companies are also seeing that the location of data is increasingly relevant to competition, and this will drive both on-premises data center and cloud computing growth.
  • Business models. As 5G rolls out, it will improve the economies of scale for machine-to-machine communications. This will create new business opportunities for the industry, as well as new security products and services.
  • Standardization and security. Speaking of which, IoT must be secure from the beginning, not only for business reasons, but also for compliance reasons. Soon there will be more IoT devices out there than traditional computing devices, which changes the security equation.
  • Wireless technology innovation. IHS Markit says there are currently more than 400 IoT platform providers, and vendors are working hard to integrate the platforms so that the data can be accessed by app developers. “A key inflection point for the IoT will be the gradual shift from the current ‘Intranets of Things’ deployment model to one where data can be exposed, discovered, entitled and shared with third-party IoT application developers,” says IHS Markit.

The IoT is not new. However, “what is new is it’s now working hand in hand with other transformative technologies like artificial intelligence and the cloud,” said Jenalea Howell, research director for IoT connectivity and smart cities at IHS Markit. “This is fueling the convergence of verticals such as industrial IoT, smart cities and buildings, and the connected home, and it’s increasing competitiveness.”

 

The VPN model of extending security through enterprise firewalls is dead, and the future now belongs to the Software Defined Perimeter (SDP). Firewalls imply that there’s an inside to the enterprise, a place where devices can communicate in a trusted manner. This being so, there must also be an outside where communications aren’t trusted. Residing between the two is the firewall, which decides which traffic can leave and which can enter, following deep inspection based on scans and policies.

What about trusted applications requiring direct access to corporate resources from outside the firewall? That’s where Virtual Private Networks came in, by offering a way to punch a hole in the firewall. VPNs are a complex mechanism for using encryption and secure tunnels to bridge multiple networks, such as a head-office network and a regional office network. They can also temporarily allow remote users to become part of the network.

VPNs are well established but perceived as difficult to configure on the endpoints, hard for IT to manage and challenging to scale for large deployments. There are also issues of software compatibility: not everything works through a VPN. Putting it bluntly, almost nobody likes VPNs and there is now a better way to securely connect mobile applications and Industrial Internet of Things (IIoT) devices into the world of datacenter servers and cloud-based applications.

Authenticate Then Connect

The Software Defined Perimeter depends on a rigorous process of identity verification of both client and server using a secure control channel, thereby replacing the VPN. The negotiation for trustworthy identification is based on cryptographic protocols like Transport Layer Security (TLS) which succeeds the old Secure Sockets Layer (SSL).

With identification and trust established by both parties, a secure data channel can be provisioned with specified bandwidth and quality. For example, the data channel might require very low latency and minimal jitter for voice messaging or it might need high bandwidth for streaming video, or alternatively be low-bandwidth and low-cost for data backups.

On the client side, the trust negotiation and data channel can be tied to a specific mobile application, perhaps an employee’s phone or tablet. The corporate customer account management app needs trusted access to the corporate database server, but no other phone service should be granted access.

SDP is based on the notion of authenticate-before-connect, which reminds me of the reverse-charge phone calls of the distant past. Sally’s nephew, Bob, would ask the operator to place a reverse-charge call to Sally at a specified number. The operator placing the call would chat with Sally over the equivalent of the control channel. Only if the operator believed she was talking to Sally, and provided Sally accepted the charges, would the operator establish the Bob-to-Sally connection, which is the equivalent of the SDP data channel.
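Commercial SDP products layer policy, device posture, and channel provisioning on top, but the authenticate-before-connect handshake itself can be sketched with Python’s standard ssl module: the client and controller mutually authenticate with certificates over a control connection before any data channel exists. The host name, port, certificate paths, and message format below are placeholders, not any vendor’s protocol.

```python
# Sketch of "authenticate before connect" using mutual TLS for the control channel.
# Host, port, certificate paths, and the message format are placeholders; a real
# SDP controller layers posture checks, policy, and data-channel provisioning on top.
import socket
import ssl

CONTROLLER = ("sdp-controller.example.com", 8443)   # hypothetical controller

context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH,
                                     cafile="controller-ca.pem")
context.load_cert_chain(certfile="client.pem", keyfile="client.key")

with socket.create_connection(CONTROLLER) as raw_sock:
    # TLS handshake: the client verifies the controller's certificate, and the
    # controller can require and verify this client certificate in return.
    with context.wrap_socket(raw_sock, server_hostname=CONTROLLER[0]) as control:
        print("control channel up, cipher:", control.cipher())
        # Only after mutual authentication would the controller hand back details
        # (address, keys, QoS) for a separate, per-application data channel.
        control.sendall(b"REQUEST data-channel app=account-mgmt\n")
        reply = control.recv(4096)
        print("controller reply:", reply)
```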

Read more in my essay for Network Computing, “Forget VPNs: the future is SDP.”

Companies can’t afford downtime. Employees need access to their applications and data 24/7, and so do other business applications, manufacturing and logistics management systems, and security monitoring centers. Anyone who thinks that the brute force effort of their hard-working IT administrators is enough to prevent system downtime just isn’t facing reality.

Traditional systems administrators and their admin tools can’t keep up with the complexity inherent in any modern enterprise. A recent survey of the Oracle Applications Users Group has found that despite significant progress in systems management automation, many customers still report that more than 80% of IT issues are first discovered and reported by users. The number of applications is spiraling up, while data increases at an even more rapid rate.

The boundaries between systems are growing more complex, especially with cloud-based and hybrid-cloud architectures. That reality is why Oracle, after analyzing a survey of its industry-leading customers, recently predicted that by 2020, more than 80% of application infrastructure operations will be managed autonomously.

Autonomously is an important word here. It means not only doing mundane day-to-day tasks including monitoring, tuning, troubleshooting, and applying fixes automatically, but also detecting and rapidly resolving issues. Even when it comes to the most complex problems, machines can simplify the analysis—sifting through the millions of possibilities to present simpler scenarios, to which people then can apply their expertise and judgment of what action to take.
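Oracle hasn’t published its algorithms, but the general idea of software sifting metrics to surface the handful of cases worth a human’s attention can be illustrated with something as simple as a rolling z-score. This toy Python sketch uses invented data and thresholds and is only a stand-in for real autonomous-operations tooling.

```python
# Toy illustration of automated pattern analysis: flag metric samples that sit
# far outside the recent baseline. Real autonomous operations tooling is far more
# sophisticated; the data and thresholds here are invented.
from statistics import mean, stdev

def anomalies(samples, window=20, threshold=3.0):
    """Yield (index, value) for samples more than `threshold` standard
    deviations from the mean of the preceding `window` samples."""
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(samples[i] - mu) > threshold * sigma:
            yield i, samples[i]

# Simulated response-time metric with one obvious spike.
metric = [10.0 + (i % 5) * 0.3 for i in range(60)]
metric[45] = 48.0

for index, value in anomalies(metric):
    print(f"sample {index}: {value:.1f} looks anomalous")
```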

Oracle asked about the kinds of activities that IT system administrators perform on a daily, weekly, and monthly basis—things such as password resets, system reboots, software patches, and the like.

Expect that IT teams will soon reduce by several orders of magnitude the number of situations like those that need manual intervention. If an organization typically has 20,000 human-managed interventions per year, humans will need to touch only 20. The rest will be handled through systems that can apply automation combined with machine learning, which can analyze patterns and react faster than human admins to enable preventive maintenance, performance optimization, and problem resolution.

Read more in my article for Forbes, “Prediction: 80% of Routine IT Operations Will Soon Be Solved Autonomously.”

We all have heard the usual bold predictions for technology in 2018: Lots of cloud computing, self-driving cars, digital cryptocurrencies, 200-inch flat-screen televisions, and versions of Amazon’s Alexa smart speaker everywhere on the planet. Those types of predictions, however, are low-hanging fruit. They’re not bold. One might as well predict that there will be some sunshine, some rainy days, a big cyber-bank heist, and at least one smartphone catching fire.

Let’s dig for insights beyond the blindingly obvious. I talked to several tech leaders, deep-thinking individuals in California’s Silicon Valley, asking them for their predictions, their idea of new trends, and disruptions in the tech industry. Let’s see what caught their eye.

Gary Singh, VP of marketing, OnDot Systems, believes that 2018 will be the year when mobile banking will transform into digital banking — which is more disruptive than one would expect. “The difference between digital and mobile banking is that mobile banking is informational. You get information about your accounts,” he said. Singh continues, “But in terms of digital banking, it’s really about actionable insights, about how do you basically use your funds in the most appropriate way to get the best value for your dollar or your pound in terms of how you want to use your monies. So that’s one big shift that we would see start to happen from mobile to digital.”

Tom Burns, Vice President and General Manager of Dell EMC Networking, has been following Software-Defined Wide Area Networks. SD-WAN is a technology that allows enterprise WANs to thrive over the public Internet, replacing expensive fixed-point connections provisioned by carriers using technologies like MPLS. “The traditional way of connecting branches in office buildings and providing services to those particular branches is going to change,” Burns observed. “If you look at the traditional router, a proprietary architecture, dedicated lines. SD-WAN is offering a much lower cost but same level of service opportunity for customers to have that data center interconnectivity or branch connectivity providing some of the services, maybe a full even office in the box, but security services, segmentation services, at a much lower cost basis.”

NetFoundry’s co-founder, Mike Hallett, sees a bright future for Application Specific Networks, which link applications directly to cloud or data center applications. The focus is on the application, not on the device. “For 2018, when you think of the enterprise and the way they have to be more agile, flexible and faster to move to markets, particularly going from what I would call channel marketing to, say, direct marketing, they are going to need application-specific networking technologies.” Hallett explains that Application Specific Networks offer the ability to connect from an application, a cloud, a device, or a thing to any other application, device, or thing quickly and with agility. Indeed, those connections, which are created using software, not hardware, could be created “within minutes, not within the weeks or months it might take, to bring up a very specific private network, being able to do that. So the year of 2018 will see enterprises move towards software-only networking.”

Mansour Karam, CEO and founder of Apstra, also sees software taking over the network. “I really see massive software-driven automation as a major trend. We saw technologies like intent-based networking emerge in 2017, and in 2018, they’re going to go mainstream,” he said.

There’s more

There are predictions around open networking, augmented reality, artificial intelligence – and more. See my full story in Upgrade Magazine, “From SD-WAN to automation to white-box switching: Five tech predictions for 2018.”

Tom Burns, VP and General Manager of Dell EMC Networking, doesn’t want 2018 to be like 2017. Frankly, none of us in tech want to hit the “repeat” button either. And we won’t, not with increased adoption of blockchain, machine learning/deep learning, security-as-a-service, software-defined everything, and critical enterprise traffic over the public Internet.

Of course, not all possible trends are positive ones. Everyone should prepare for more ransomware, more dangerous data breaches, newly discovered flaws in microprocessors and operating systems, lawsuits over GDPR, and political attacks on Net Neutrality. Yet, as the tech industry embraces 5G wireless and practical applications of the Internet of Things, let’s be optimistic, and hope that innovation outweighs the downsides of fast-moving technology.

Dell has become a major force in networking across the globe. The company’s platform, known as Dell EMC Open Networking, includes a portfolio of data center switches and software, as well as solutions for campus and branch networks. Plus, Dell offers end-to-end services for digital transformation, training, and multivendor environment support.

Tom Burns heads up Dell’s networking business. That business became even larger in September 2016, when Dell closed its US$67 billion acquisition of EMC Corp. Before joining Dell in 2012, Burns was a senior executive at Alcatel-Lucent for many years. He and I chatted in early January at one of Dell’s offices in Santa Clara, Calif.

Q: What’s the biggest tech trend from 2017 that you see continuing into 2018?

Tom Burns (TB): The trend that I think will continue into 2018 and even beyond is around digital transformation. And I recognize that everyone may have a different definition of what that means, but what we at Dell Technologies believe it means is that the number of connected devices is exploding, whether it be cell phones or RFIDs or intelligent types of devices that are looking at our factories and so forth.

And all of this information needs to be collected and analyzed, with what some call artificial intelligence. Some of it needs to be aggregated at the edge. Some of it’s going to be brought back to the core data centers. This is what we refer to as IT transformation, to enable workforce transformation and other capabilities to deliver the applications, the information, the video, the voice communications, in real time to the users and give them the intelligence from the information that’s being gathered to make real-time decisions or whatever they need the information for.

Q: What do you see as being the tech trend from 2017 that you hope won’t continue into 2018?

TB: The trend that won’t continue into 2018 is the old buying habits around previous-generation technology. CIOs and CEOs, whether in enterprises or in service providers, are going to have to think of a new way to deliver their services and applications on a real-time basis, and the traditional architectures that have driven our data centers over the years just are not going to work anymore. It’s not scalable. It’s not flexible. It doesn’t drive out the costs that are necessary in order to enable those new applications.

So one of the things that I think is going to stop in 2018 is the old way of thinking – proprietary applications, proprietary full stacks. I think disaggregation, open, is going to be adopted much, much faster.

Q: If you could name one thing that will predict how the tech industry will do business next year, what do you think it will be?

TB: Well, I think one of the major changes, and we’ve started to see it already, and in fact, Dell Technologies announced it about a year ago, is how is our technology being consumed? We’ve been, let’s face it, box sellers or even solution providers that look at it from a CapEx standpoint. We go in, talk to our customers, we help them enable a new application as a service, and we kind of walk away. We sell them the product, and then obviously we support the product.

More and more, I think the customers and the consumers are looking for different ways to consume that technology, so we’ve started things like consumption models like pay as you grow, pay as you turn on, consumption models that allow us to basically ignite new services on demand. We have several customers that are doing this, particularly around the service provider area. So I think one way tech companies are going to change how they deliver is this whole thing around pay as a service, consumption models and a new way to really provide the technology capabilities to our customers and then how do they enable them.

Q: If you could predict one thing that will change how enterprise customers do business next year…?

TB: One that we see as a huge, tremendous impact on how customers are going to operate is SD-WAN. The traditional way of connecting branches and office buildings and providing services to those particular branches is going to change. If you look at the traditional router, a proprietary architecture, dedicated lines, SD-WAN is offering a much lower cost but same level of service opportunity for customers to have that data center interconnectivity or branch connectivity, providing some of the services, maybe a full even office in the box, but security services, segmentation services, at a much lower cost basis. So I think that one of the major changes for enterprises next year and service providers is going to be this whole concept and idea with real technology behind it around Software-Defined WAN.

Read the full interview

There’s a lot more to my conversation with Tom Burns. Read the entire interview at Upgrade Magazine.

With lots of inexpensive, abundant computation resources available, nearly anything becomes possible. For example, you can process a lot of network data to identify patterns, extract intelligence, and produce insight that can be used to automate networks. The road to Intent-Based Networking Systems (IBNS) and Application-Specific Networks (ASN) is a journey. That’s the belief of Rajesh Ghai, Research Director of Telecom and Carrier IP Networks at IDC.

Ghai defines IBNS as a closed-loop continuous implementation of several steps:

  • Declaration of intent, where the network administrator defines what the network is supposed to do
  • Translation of intent into network design and configuration
  • Validation of the design, using a model that decides whether that configuration can actually be implemented
  • Propagation of that configuration into the network devices via APIs
  • Gathering and study of real-time telemetry from all the devices
  • Use of machine learning to determine whether the desired state and policy have been achieved, and then the loop repeats (a toy version is sketched in code after this list)
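Here is that closed loop as a self-contained Python toy. Every structure and function in it is a placeholder for vendor-specific machinery, not a real IBNS product API.

```python
# The IBNS closed loop as an illustrative, self-contained toy. Every structure
# here stands in for vendor-specific machinery, not a real product API.

intent = {"vlan": 100, "mtu": 9000}            # desired state (declared intent)
devices = {"leaf1": {}, "leaf2": {}}           # toy device "API": just dicts

def translate(intent):
    # Translate intent into per-device configuration.
    return {name: dict(intent) for name in devices}

def validate(design):
    # Model-based check that the design is realizable.
    return all(cfg["mtu"] <= 9216 for cfg in design.values())

def propagate(design):
    # Push the configuration to the devices via their "APIs".
    for name, cfg in design.items():
        devices[name].update(cfg)

def collect_telemetry():
    # Gather real-time state back from the devices.
    return {name: dict(state) for name, state in devices.items()}

def matches_intent(telemetry):
    # In a real system, analytics or machine learning compares state to intent.
    return all(state.get("vlan") == intent["vlan"] and
               state.get("mtu") == intent["mtu"]
               for state in telemetry.values())

# One pass around the loop; a real controller repeats this continuously.
design = translate(intent)
assert validate(design), "intent cannot be realized with this design"
propagate(design)
print("in compliance with intent:", matches_intent(collect_telemetry()))
```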

Related to that concept, Ghai explains, is ASN. “It’s also a concept which is software control and optimization and automation. The only difference is that ASN is more applicable to distributed applications over the internet than IBNS.”

IBNS Operates Networks as One System

“Think of intent-based networking as software that sits on top of your infrastructure and focusing on the networking infrastructure, and enables you to operate your network infrastructure as one system, as opposed to box per box,” explained Mansour Karam, Founder, CEO of Apstra, which offers IBNS solutions for enterprise data centers.

“To achieve this, we have to start with intent,” he continued. “Intent is both the high-level business outcomes that are required by the business, but then also we think of intent as applying to every one of those stages. You may have some requirements in how you want to build.”

Karam added, “Validation includes tests that you would run — we call them expectations — to validate that your network indeed is behaving as you expected, as per intent. So we have to think of a sliding scale of intent and then we also have to collect all the telemetry in order to close the loop and continuously validate that the network does what you want it to do. There is the notion of state at the core of an IBNS that really boils down to managing state at scale and representing it in a way that you can reason about the state of your system, compare it with the desired state and making the right adjustments if you need to.”

The upshot of IBNS, Karam said: “If you have powerful automation you’re taking the human out of the equation, and so you get a much more agile network. You can recoup the revenues that otherwise you would have lost, because you’re unable to deliver your business services on time. You will reduce your outages massively, because 80% of outages are caused by human error. You reduce your operational expenses massively, because organizations spend $4 operating every dollar of CapEx, and 80% of it is manual operations. So if you take that out you should be able to recoup easily your entire CapEx spend on IBNS.”

ASN Gives Each Application Its Own Network

“Application-Specific Networks, like Intent-Based Networking Systems, enable digital transformation, agility, speed, and automation,” explained Galeal Zino, Founder of NetFoundry, which offers an ASN platform.

He continued, “ASN is a new term, so I’ll start with a simple analogy. ASNs are like private clubs — very, very exclusive private clubs — with exactly two members, the application and the network. ASN literally gives each application its own network, one that’s purpose-built and driven by the specific needs of that application. ASN merges the application world and the network world into software which can enable digital transformation with velocity, with scale, and with automation.”

Read more in my new article for Upgrade Magazine, “Manage smarter, more autonomous networks with Intent-Based Networking Systems and Application Specific Networking.”

Wireless Ethernet connections aren’t necessarily secure. The authentication methods used to permit access between a device and a wireless router aren’t very strong. The encryption methods used to handle that authentication, and then the data traffic after authorization, aren’t very strong. The rules that enforce the use of authorization and encryption aren’t always enabled, especially with public hotspots like those in hotels, airports, and coffee shops; there, the authentication is handled by a web browser application, not by the Wi-Fi protocols embedded in a local router.

Helping to solve those problems will be WPA3, an update to the decades-old wireless security protocols. Announced by the Wi-Fi Alliance at CES in January 2018, the new standard is described this way:

Four new capabilities for personal and enterprise Wi-Fi networks will emerge in 2018 as part of Wi-Fi CERTIFIED WPA3™. Two of the features will deliver robust protections even when users choose passwords that fall short of typical complexity recommendations, and will simplify the process of configuring security for devices that have limited or no display interface. Another feature will strengthen user privacy in open networks through individualized data encryption. Finally, a 192-bit security suite, aligned with the Commercial National Security Algorithm (CNSA) Suite from the Committee on National Security Systems, will further protect Wi-Fi networks with higher security requirements such as government, defense, and industrial.

This is all good news. According to Zack Whittaker writing for ZDNet,

One of the key improvements in WPA3 will aim to solve a common security problem: open Wi-Fi networks. Seen in coffee shops and airports, open Wi-Fi networks are convenient but unencrypted, allowing anyone on the same network to intercept data sent from other devices.

WPA3 employs individualized data encryption, which scramble the connection between each device on the network and the router, ensuring secrets are kept safe and sites that you visit haven’t been manipulated.

Another key improvement in WPA3 will protect against brute-force dictionary attacks, making it tougher for attackers near your Wi-Fi network to guess a list of possible passwords.

The new wireless security protocol will also block an attacker after too many failed password guesses.

What About KRACK?

A challenge for the use of WPA2 is that a defect, called KRACK, was discovered and published in October 2017. To quote my dear friend John Romkey, founder of FTP Software:

The KRACK vulnerability allows malicious actors to access a Wi-Fi network without the password or key, observe what connected devices are doing, modify the traffic amongst them, and tamper with the responses the network’s users receive. Everyone and anything using Wi-Fi is at risk. Computers, phones, tablets, gadgets, things. All of it. This isn’t just a flaw in the way vendors have implemented Wi-Fi. No. It’s a bug in the specification itself.

The timing of the WPA3 release couldn’t be better. But what about older devices? I have no idea how many of my devices — including desktops, phones, tablets, and routers — will be able to run WPA3. I don’t know whether firmware updates will be applied automatically, or whether I will need to search them out.

What’s more, what about the millions of devices out there? Presumably new hotspots will downgrade to WPA2 if a device can’t support WPA3. (And the other way around: A new mobile device will downgrade to talk to an older or unpatched hotel room’s Wi-Fi router.) It could take ages before we reach a critical mass of new devices that can handle WPA3 end-to-end.

The Wi-Fi Alliance says that it “will continue enhancing WPA2 to ensure it delivers strong security protections to Wi-Fi users as the security landscape evolves.” Let’s hope that is indeed the case, and that those enhancements can be pushed down to existing devices. If not, well, the huge installed base of existing Wi-Fi devices will continue to lack real security for years to come.

A friend insists that “the Internet is down” whenever he can’t get a strong wireless connection on his smartphone. With that type of logic, enjoy this photo found on the aforementioned Internet:

“Wi-Fi” is apparently now synonymous with “Internet” or “network.” It’s clear that we have come a long way from the origins of the Wi-Fi Alliance, which originally defined the term as meaning “Wireless Fidelity.” The vendor-driven alliance was formed in 1999 to jointly promote the broad family of IEEE 802.11 wireless local-area networking standards, as well as to ensure interoperability through certifications.

But that was so last millennium! It’s all Wi-Fi, all the time. In that vein, let me propose a few new acronyms:

  • Wi-Fi-Wi – Wireless networking, i.e., 802.11
  • Wi-Fi-Cu – Any conventionally cabled network
  • Wi-Fi-Fi – Networking over fiber optics (but not Fibre Channel)
  • Wi-Fi-FC – Wireless Fibre Channel, I suppose

You get the idea….

It’s all about the tradeoffs! You can have the chicken or the fish, but not both. You can have the big engine in your new car, but that means a stick shift—you can’t have the V8 and an automatic. Same for that cake you want to have and eat. Your business applications can be easy to use or secure—not both.

But some of those are false dichotomies, especially when it comes to security for data center and cloud applications. You can have it both ways. The systems can be easy to use and maintain, and they can be secure.

On the consumer side, consider two-factor authentication (2FA), whereby users receive a code number, often by text message to their phones, which they must type into a webpage to confirm their identity. There’s no doubt that 2FA makes systems more secure. The problem is that 2FA is a nuisance for the individual end user, because it slows down access to a desired resource or application. Unless you’re protecting your personal bank account, there’s little incentive for you to use 2FA. Thus, services that require 2FA frequently aren’t used, get phased out, are subverted, or are simply loathed.
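For context on the mechanics, many of those codes are time-based one-time passwords (TOTP). Here’s a minimal Python sketch using the pyotp library; it isn’t any particular vendor’s implementation, and SMS-delivered codes work differently.

```python
# Minimal TOTP sketch using pyotp: the server and the user's authenticator app
# share a secret and derive the same short-lived code from the current time.
# Not any specific vendor's implementation; SMS-delivered codes work differently.
import pyotp

# Enrollment: generate a shared secret and show it to the user (usually as a QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print("provisioning URI:", totp.provisioning_uri(name="user@example.com",
                                                 issuer_name="ExampleCorp"))

# Login: the user types the code currently shown by their authenticator app.
code_from_user = totp.now()          # stand-in for what the user would type
print("accepted" if totp.verify(code_from_user) else "rejected")
```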

Likewise, security measures specified by corporate policies can be seen as a nuisance or an impediment. Consider dividing an enterprise network into small “trusted” networks, such as by using virtual LANs or other forms of authenticating users, applications, or API calls. This setup can require considerable effort for internal developers to create, and even more effort to modify or update.

When IT decides to migrate an application from a data center to the cloud, the steps required to create API-level authentication across such a hybrid deployment can be substantial. The effort required to debug that security scheme can be horrific. As for audits to ensure adherence to the policy? Forget it. How about we just bypass it, or change the policy instead?

Multiply that simple scenario by 1,000 for all the interlinked applications and users at a typical midsize company. Or 10,000 or 100,000 at big ones. That’s why post-mortem examinations of so many security breaches show what appears to be an obvious lack of “basic” security. However, my guess is that in many of those incidents, the chief information security officer or IT staffers were under pressure to make systems, including applications and data sources, extremely easy for employees to access, and there was no appetite for creating, maintaining, and enforcing strong security measures.

Read more about these tradeoffs in my article on Forbes for Oracle Voice: “You Can Have Your Security Cake And Eat It, Too.”

Man-in-the-Middle (MITM or MitM) attacks are about to become famous. Famous, in the way that ransomware, Petya, Distributed Denial of Service (DDoS), and credit-card skimmers have become well-known.

MITM attacks go back thousands of years. A merchant writes a parchment offering to buy spices, and hands it to a courier to deliver to his supplier in a far-away land. The local courier hands the parchment to another courier, who in turn hands it to another courier, and so on, until the final courier gives the parchment to the supplier. Unbeknownst to anyone, however, one of the couriers was a swindler who might change the parchment to set up a fraud, or who might sell details of the merchant’s purchase offer to a competitor, who could then negotiate a better deal.

In modern times, MITM takes advantage of a weakness in the use of cryptography. Are you completely sure who you’ve set up that encrypted end-to-end messaging session with? Perhaps it’s your bank… or perhaps it’s a scammer who, to you, looks like your bank – but to your bank, looks like you. Everyone thinks it’s a secure communications link, but the man in the middle sees everything, and might be able to change things too.

According to Wikipedia,

In cryptography and computer security, a man-in-the-middle attack (MITM; also Janus attack) is an attack where the attacker secretly relays and possibly alters the communication between two parties who believe they are directly communicating with each other.

We haven’t heard much about MITM attacks, because, quite frankly, they’ve not been in the news associated with breaches. That changed recently, when Fox-IT, a cybersecurity firm in Holland, was nailed with one. Writing on their blog on Dec. 14, 2017, the company said:

In the early morning of September 19 2017, an attacker accessed the DNS records for the Fox-IT.com domain at our third party domain registrar. The attacker initially modified a DNS record for one particular server to point to a server in their possession and to intercept and forward the traffic to the original server that belongs to Fox-IT. This type of attack is called a Man-in-the-Middle (MitM) attack. The attack was specifically aimed at ClientPortal, Fox-IT’s document exchange web application, which we use for secure exchange of files with customers, suppliers and other organizations. We believe that the attacker’s goal was to carry out a sustained MitM attack.

The company pointed to several weaknesses in their security setup that allowed the attack to succeed. The DNS provider’s password hadn’t been changed since 2013; two-factor authentication (2FA) wasn’t used or even supported by the DNS provider; and heavier-than-usual scans from the Internet, while detected by Fox-IT, weren’t flagged for investigation or even extra vigilance.

How To Prevent MITM Attacks

After a timeline discussion and more technical analysis, Fox-IT offered suggestions on how to handle such incidents, and I quote:

  • Choose a DNS provider that doesn’t allow changes through a control panel but requires a more manual process, considering that name servers are very stable and hardly ever change. If you do require more frequent changes, use 2FA.
  • Ensure that all system access passwords are reviewed regularly and changed, even those which are used rarely.
  • Deploy certificate transparency monitoring in order to detect, track and respond to fraudulent certificates.
  • Deploy full packet capture capabilities with good retention in crucial points of your infrastructure, such as the DMZ and border gateways.
  • Always inform Law Enforcement at an early stage, so that they can help you with your investigations, as we did in our case.
  • Make it a management decision to first understand an attack before taking specific actions to mitigate it. This may include letting an attack continue for a short period of time. We consciously made that decision.
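
One of those suggestions – certificate transparency monitoring – is easier to act on than it may sound. Here’s a rough sketch that polls crt.sh, a public certificate-transparency search service, for certificates issued against a domain. It assumes the requests library, and the “expected issuers” whitelist is purely illustrative; a production monitor would track state and raise alerts rather than print.

```python
import requests

def recent_certificates(domain: str) -> list:
    """Query the public crt.sh certificate-transparency search for a domain."""
    resp = requests.get(
        "https://crt.sh/",
        params={"q": domain, "output": "json"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    expected_issuers = ("Let's Encrypt", "DigiCert")   # illustrative whitelist
    for cert in recent_certificates("example.com"):
        issuer = cert.get("issuer_name", "")
        if not any(name in issuer for name in expected_issuers):
            print("Unexpected issuer:", issuer, "for", cert.get("name_value"))
```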

It’s a shame about Fox-IT’s breach, but the company responded correctly and promptly once the breach was detected. This is the first serious instance of a successful MITM attack I’ve heard about in some time – but probably won’t be the last.

Ransomware is real, and it threatens individuals, businesses, schools, medical facilities, and governments – and there’s no sign of it stopping. In fact, it’s probably increasing. Why? Let’s be honest: Ransomware is probably the single most efficient attack that hackers have ever created. Anybody can build ransomware using easily available tools; any cash received is likely in hard-to-trace Bitcoin; and if something goes wrong with decrypting someone’s disk drive, it’s the victim, not the hacker, who suffers.

A business is hit with ransomware every 40 seconds, according to some sources, and by some estimates 60% of all malware was ransomware. It strikes all sectors. No industry is safe. And with the rise of Ransomware-as-a-Service (RaaS), it’s going to get worse.

Fortunately, we can fight back. Here’s a four-step fight plan.

Four steps to good fundamental hygiene

  1. Train employees to handle malicious e-mail. There are spoofed messages from business partners, phishing, and targeted spearphishing. Some will get past email spam/malware filters; workers need to be taught not to click links in those messages and, of course, not to give permission for plugins or apps to be installed. Even so, some malware, like ransomware, will get through, typically by exploiting obsolete applications or unpatched systems, as in the Equifax breach.
  2. Patch everything. Ensure that endpoints are fully patched and updated with the latest, most secure OS, applications, utilities, device drivers, and code libraries. That way, if there is an attack, the endpoint is healthy and best able to fight off the infection.
  3. Treat ransomware as a business problem, not just a technology or security problem. And the cost is far more than the ransom demanded – that’s peanuts compared to the productivity lost to downtime, bad public relations, angry customers if service is disrupted, and the expense of rebuilding lost data. (And that assumes valuable intellectual property or protected financial or consumer health data isn’t actually stolen.)
  4. Back up, back up, back up – and safeguard those backups. If you do not have safe, protected backups, you cannot restore data and core infrastructure in a timely fashion. That includes taking daily snapshots of virtual machines, databases, applications, source code, and configuration files.

Beyond those steps, businesses need tools to detect, identify, and prevent malware like ransomware from spreading. That requires continuous visibility into, and reporting on, what’s happening in the environment – including “zero day” attacks that have never been seen before. Part of that is monitoring endpoints, from smartphones to PCs to servers to the cloud, to make sure they are up to date and secure, and that no unexpected changes have been made to their underlying configuration. That way, if a machine is infected by ransomware or other malware, the breach can be discovered quickly, and the device isolated and shut down pending forensics and recovery. If an endpoint is breached, quick containment is critical.
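
Watching for unexpected configuration changes doesn’t have to begin with an expensive product. The underlying idea is simple: hash the files you care about, store a baseline, and compare on a schedule. Here’s a minimal sketch; the watched file list and baseline location are just examples.

```python
import hashlib
import json
from pathlib import Path

WATCHED = ["/etc/ssh/sshd_config", "/etc/hosts"]   # illustrative file list
BASELINE = Path("baseline.json")

def fingerprint(paths):
    """Return a dict of path -> SHA-256 hash for each watched file."""
    result = {}
    for p in paths:
        try:
            result[p] = hashlib.sha256(Path(p).read_bytes()).hexdigest()
        except OSError:
            result[p] = "UNREADABLE"
    return result

def check():
    """Compare current fingerprints to the saved baseline and report drift."""
    current = fingerprint(WATCHED)
    if not BASELINE.exists():
        BASELINE.write_text(json.dumps(current, indent=2))
        print("Baseline recorded.")
        return
    baseline = json.loads(BASELINE.read_text())
    for path, digest in current.items():
        if baseline.get(path) != digest:
            print("Unexpected change detected:", path)

if __name__ == "__main__":
    check()
```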

Read more in my guest story for Chuck Leaver’s blog, “Prevent And Manage Ransomware With These 4 Steps.”

The secret sauce is AI-based zero packet inspection. That’s how to secure mobile users, their personal data, and their employers’ data.

Let’s back up a step. Mobile devices are increasingly under attack, from malicious apps, from rogue emails, from adware, and from network traffic. Worse, that network traffic can come from any number of sources, including cellular data, Wi-Fi, even Bluetooth. Users want their devices to be safe and secure. But how, if the network traffic can’t be trusted?

The best approach around is AI-based zero packet inspection (ZPI). It all starts with data. Tons of training data, used to train a machine learning algorithm to recognize patterns that indicate whether a device is performing normally – or if it’s under attack. Machine learning refers to a number of advanced AI algorithms that can study streams of data, rapidly and accurately detect patterns in that data, and from those patterns, sort the data into different categories.

The Zimperium z9 engine, as an example, uses machine learning to train against a number of test cases (on both iOS and Android devices) that represent known patterns of safe and not-safe traffic. We call this zero-packet inspection because the objective is not to look at the contents of the network packets, but to scan the lower-level traffic patterns at the network level, such as IP, TCP, UDP and ARP scans.

If you’re not familiar with those terms, suffice it to say that at the network level, the traffic is focused on delivering data to a specific device, and then within that device, making sure it gets to the right application. Think of it as being like an envelope going to a big business – it has the business name, street address, and department/mail stop. The machine learning algorithms look at patterns at that level, rather than examining the contents of the envelope. This makes the scans very fast and accurate.
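
To make the idea concrete, here’s a toy sketch of the general technique – training a classifier on network-level features rather than packet contents. It uses scikit-learn and invented feature names; it is emphatically not Zimperium’s z9 engine, just an illustration of pattern classification at the “envelope” level.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Illustrative features per time window: [arp_requests, tcp_syn_rate,
# distinct_ports_probed, udp_packets, retransmission_ratio]
X_train = np.array([
    [ 2,  10,   3, 40, 0.01],   # normal traffic
    [ 3,  12,   4, 35, 0.02],   # normal traffic
    [90,  15,   5, 30, 0.02],   # ARP flood (attack)
    [ 4, 400, 950, 20, 0.05],   # port scan (attack)
])
y_train = np.array([0, 0, 1, 1])  # 0 = safe, 1 = attack

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)

# Classify a new traffic window observed on the device.
window = np.array([[85, 14, 6, 28, 0.02]])
print("attack" if model.predict(window)[0] == 1 else "safe")
```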

Read more in my new essay for Security Brief Europe, “Opinion: Mobile security starts with a powerful AI-based scanning engine.”

Smart televisions, talking home assistants, consumer wearables – that’s not the real story of the Internet of Things. While those are fun and get great stories on blogs and morning news reports, the real IoT is the Industrial IoT. That’s where businesses will truly be transformed, with intelligent, connected devices working together to improve services, reduce friction, and disrupt everything. Everything.

According to Grand View Research, the Industrial IoT (IIoT) market will be $933.62 billion by 2025. “The ability of IoT to reduce costs has been the prime factor for its adoption in the industrial sector. However, several significant investment incentives, such as increased productivity, process automation, and time-to-market, have also been boosting this adoption. The falling prices of sensors have reduced the overall cost associated with data collection and analytics,” says the report.

The report continues,

An emerging trend among enterprises worldwide is the transformation of technical focus to improving connectivity in order to undertake data collection with the right security measures in place and with improved connections to the cloud. The emergence of low-power hardware devices, cloud integration, big data analytics, robotics & automation, and smart sensors are also driving IIoT market growth.

Markets and Markets

Markets & Markets predicts that IIoT will be worth $195.47 billion by 2022. The company says,

A key influencing factor for the growth of the IIoT market is the need to implement predictive maintenance techniques in industrial equipment to monitor their health and avoid unscheduled downtimes in the production cycle. Factors which are driving the IIoT market include technological advancements in the semiconductor and electronics industry and the evolution of cloud computing technologies.

The manufacturing vertical is witnessing a transformation through the implementation of the smart factory concept and factory automation technologies. Government initiatives such as Industrie 4.0 in Germany and Plan Industriel in France are expected to promote the implementation of the IIoT solutions in Europe. Moreover, leading countries in the manufacturing vertical such as the U.S., China, and India are expected to further expand their manufacturing industries and deploy smart manufacturing technologies to increase the contribution of this vertical to their national GDPs.

The IIoT market for camera systems is expected to grow at the highest rate between 2016 and 2022. Camera systems are mainly used in the retail and transportation verticals. The need of security and surveillance in these sectors is the key reason for the high growth rate of the market for camera systems. In the retail sector, the camera systems are used for capturing customer behavior, moment tracking, people counting, and heat mapping. The benefits of installation of surveillance systems include the safety at the workplace, and the prevention of theft and other losses, sweet hearting, and other retail crimes. Video analytics plays a vital role for security purpose in various areas in transportation sector including airports, railway stations, and large public places. Also, intelligent camera systems are used for traffic monitoring, and incident detection and reporting.

Accenture

The huge consulting firm Accenture says that the IIoT will add $14.2 trillion to the global economy by 2030. That’s not talking about the size of the market, but the overall lift that IIoT will provide. By any measure, that’s staggering. Accenture reports,

Today, the IIoT is helping to improve productivity, reduce operating costs and enhance worker safety. For example, in the petroleum industry, wearable devices sense dangerous chemicals and unmanned aerial vehicles can inspect remote pipelines.

However, the longer-term economic and employment potential will require companies to establish entirely new product and service hybrids that disrupt their own markets and generate fresh revenue streams. Many of these will underpin the emergence of the “outcome economy,” where organizations shift from selling products to delivering measurable outcomes. These may range from guaranteed energy savings in commercial buildings to guaranteed crop yields in a specific parcel of farmland.

IIoT Is a Work in Progress

The IIoT is going to have a huge impact. But it hasn’t yet, not on any large scale. As Accenture says,

When Accenture surveyed more than 1,400 C-suite decision makers—including 736 CEOs—from some of the world’s largest companies, the vast majority (84 percent) believe their organizations have the capability to create new, service-based income streams from the IIoT.

But scratch beneath the surface and the gloss comes off. Seventy-three percent confess that their companies have yet to make any concrete progress. Just 7 percent have developed a comprehensive strategy with investments to match.

Challenge and opportunity: That’s the Industrial Internet of Things. Watch this space.

The bad news: There are servers used in serverless computing. Real servers, with whirring fans and lots of blinking lights, installed in racks inside data centers inside the enterprise or up in the cloud.

The good news: You don’t need to think about those servers in order to use their functionality to write and deploy enterprise software. Your IT administrators don’t need to provision or maintain those servers, or think about their processing power, memory, storage, or underlying software infrastructure. It’s all invisible, abstracted away.

The whole point of serverless computing is that there are small blocks of code that do one thing very efficiently. Those blocks of code are designed to run in containers so that they are scalable, easy to deploy, and can run in basically any computing environment. The open Docker platform has become the de facto industry standard for containers, and as a general rule, developers are seeing the benefits of writing code that targets Docker containers, instead of, say, Windows servers or Red Hat Linux servers or SuSE Linux servers, or any specific run-time environment. Docker can be hosted in a data center or in the cloud, and containers can be easily moved from one Docker host to another, adding to its appeal.

Currently, applications written for Docker containers still need to be managed by enterprise IT developers or administrators. That means deciding where to create the containers, ensuring that each container has sufficient resources (like memory and processing power) for the application, actually installing the application into the container, monitoring the application while it’s running, and then adding more resources if required. Helping do that is Kubernetes, an open source container management and orchestration system for Docker. So while containers greatly assist developers and admins in creating portable code, the containers still need to be managed.

That’s where serverless comes in. Developers write their bits of code (such as to read or write from a database, or encrypt/decrypt data, or search the Internet, or authenticate users, or to format output) to run in a Docker container. However, instead of deploying directly to Docker, or using Kubernetes to handle deployment, they write their code as a function, and then deploy that function onto a serverless platform, like the new Fn project. Other applications can call that function (perhaps using a RESTful API) to do the required operation, and the serverless platform then takes care of everything else automatically behind the scenes, running the code when needed, idling it when not needed.
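
What does such a function look like? Here’s a deliberately framework-neutral sketch in Python: a single stateless handler that accepts a JSON payload and returns a JSON result. A serverless platform such as Fn would wrap something like this with its own function development kit (the exact FDK interface isn’t shown here) and handle invocation, scaling, and idling behind the scenes.

```python
import base64
import json

def handler(payload: dict) -> dict:
    """A small, stateless function: Base64-encode or -decode a string."""
    action = payload.get("action", "encode")
    text = payload.get("text", "")
    if action == "encode":
        result = base64.b64encode(text.encode()).decode()
    else:
        result = base64.b64decode(text.encode()).decode()
    return {"action": action, "result": result}

if __name__ == "__main__":
    # Simulate an invocation the way a platform might: JSON in, JSON out.
    event = json.dumps({"action": "encode", "text": "hello serverless"})
    print(json.dumps(handler(json.loads(event))))
```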

Read my essay, “Serverless Computing: What It Is, Why You Should Care,” to find out more.

Critical information about 46 million Malaysians was leaked onto the Dark Web. The stolen data included mobile phone numbers from telcos and mobile virtual network operators (MVNOs), prepaid phone numbers, customer details including physical addresses – and even the unique IMEI numbers of devices and the IMSI numbers associated with SIM cards.

Isolated instance from one rogue carrier? No. The carriers included Altel, Celcom, DiGi, Enabling Asia, Friendimobile, Maxis, MerchantTradeAsia, PLDT, RedTone, TuneTalk, Umobile and XOX; news about the breach was first published on 19 October 2017 by a Malaysian online community.

When did the breach occur? According to lowyat.net, “Time stamps on the files we downloaded indicate the leaked data was last updated between May and July 2014 between the various telcos.”

That’s more than three years between theft of the information and its discovery. We have no idea if the carriers had already discovered the losses, and chose not to disclose the breaches.

A huge delay between a breach and its disclosure is not unusual. Perhaps things will change once the General Data Protection Regulation (GDPR) kicks in next year, when organizations must reveal a breach within three days of discovery. That still leaves the question of discovery. It simply takes too long!

According to Mandiant, the global average dwell time (time between compromise and detection) is 146 days. In some areas, it’s far worse: the EMEA region has a dwell time of 469 days. Research from the Ponemon Institute says that it takes an average of 98 days for financial services companies to detect intrusion on their networks, and 197 days in retail. It’s not surprising that the financial services folks do a better job – but three months seems like a very long time.

An article headline from InfoSecurity Magazine says it all: “Hackers Spend 200+ Days Inside Systems Before Discovery.” Verizon’s Data Breach Investigations Report for 2017 has some depressing news: “Breach timelines continue to paint a rather dismal picture — with time-to-compromise being only seconds, time-to-exfiltration taking days, and times to discovery and containment staying firmly in the months camp. Not surprisingly, fraud detection was the most prominent discovery method, accounting for 85% of all breaches, followed by law enforcement which was seen in 4% of cases.”

What Can You Do?

There are two relevant statistics. The first is time-to-discovery, and the other is time-to-disclosure, whether to regulators or customers.

  • Time-to-disclosure is a matter of policy, not technology. There are legal aspects, public-relations aspects, financial ones (what if the breach happens during a “quiet period” prior to announcing results?), regulatory ones, and even law-enforcement considerations (what if investigators are laying a trap, and don’t want to tip off the attackers that the breach has been discovered?).
  • Time-to-discovery, on the other hand, is a matter of technology (and the willingness to use it). What doesn’t work? Scanning log files using manual or semi-automated methods. Excel spreadsheets won’t save you here!

What’s needed are comprehensive endpoint monitoring capabilities, coupled with excellent threat intelligence and real-time analytics driven by machine learning. Nothing else can correlate huge quantities of data from such widely disparate sources, and hope to discover outliers based on patterns.
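
The “discover outliers based on patterns” part is, at its core, anomaly detection. Here’s a small sketch of that idea using scikit-learn’s IsolationForest on invented endpoint telemetry (failed logins, new processes, megabytes sent outbound per hour). Real products correlate far richer data, but the principle is the same.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-endpoint, per-hour telemetry:
# [failed_logins, new_processes, outbound_megabytes]
normal_history = np.array([
    [0, 12, 5], [1, 10, 6], [0, 14, 4], [2, 11, 7],
    [1, 13, 5], [0,  9, 6], [1, 12, 5], [0, 10, 4],
])

detector = IsolationForest(contamination=0.1, random_state=0)
detector.fit(normal_history)

# New observations: one looks routine, one looks like data exfiltration.
new_windows = np.array([
    [ 1, 11,   6],
    [40, 95, 800],
])
for window, label in zip(new_windows, detector.predict(new_windows)):
    status = "ANOMALY" if label == -1 else "normal"
    print(window.tolist(), "->", status)
```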

Discovery and containment takes months, says Verizon. You can’t have containment without discovery. With current methods, we’ve seen that discovery takes months or years, if the breach is ever detected at all. Endpoint monitoring technology, when coupled with machine learning — and with 24×7 managed security service providers — can reduce that to seconds or minutes.

There is no excuse for breaches staying hidden for three years or longer. None. That’s no way to run a business.

It’s always nice when a friend is quoted in an article. In this case, it’s one of my dearest and closest, John Romkey, founder of FTP Software. The story is, “The Internet Of Things Just Got Even More Unsafe To Use,” by Harold Stark, and published on Forbes.com.

The story talks about a serious vulnerability in the Internet of Things:

Mathy Vanhoef, Security Researcher at KU Leuven, made headlines last week with a blog where he described this strange new vulnerability that had the potential to affect every device that has ever been on a wi-fi network all at once. The vulnerability, dubbed KRACK or Key Reinstallation Attack, has a simple way of functioning. WPA2-PSK, the most widely used security protocol used to secure devices and routers connected to a wi-fi network, had a glaring flaw. This flaw, which allows a third-party hacker to trick their way into a device as it connects to a wi-fi network using a password, allows said hacker to access and modify all information available to this device without even being on the network. By interfering with the authorization process that allows a device to connect to a closed wi-fi network, the hacker can do things such as intercept traffic, access stored data and even modify information accessed by the device at the time. So this hacker could tell which websites you like to visit, play that video from your friend’s wedding last month or even infect your device with an unknown malware to cause further damage. Just to be clear, this vulnerability affects any and all devices that can connect to wi-fi networks, regardless of which software it is running.

You should read the whole story, which includes a quote from my friend John, here.

Open source software (OSS) offers many benefits for organizations large and small—not the least of which is the price tag, which is often zero. Zip. Nada. Free-as-in-beer. Beyond that compelling price tag, what you often get with OSS is a lack of a hidden agenda. You can see the project, you can see the source code, you can see the communications, you can see what’s going on in the support forums.

When OSS goes great, everyone is happy, from techies to accounting teams. Yes, the legal department may want to scrutinize the open source license to make sure your business is compliant, but in most well-performing scenarios, the lawyers are the only ones frowning. (But then again, the lawyers frown when scrutinizing commercial closed-source software license agreements too, so you can’t win.)

The challenge with OSS is that it can be hard to manage, especially when something goes wrong. Depending on the open source package, there can be a lot of mysteries, which can make ongoing support, including troubleshooting and performance tuning, a real challenge. That’s because OSS is complex.

It’s not like you can say, well, here’s my Linux distribution on my server. Oh, and here’s my open source application server, and my open source NoSQL database, and my open source log suite. In reality, those bits of OSS may be from separate OSS projects, which may (or may not) have been tested for how well they work together.

A separate challenge is that because OSS is often free-as-in-beer, the software may not be in the corporate inventory. That’s especially common if the OSS is in the form of a library or an API that might be built into other applications you’ve written yourself. The OSS might be invisible but with the potential to break or cause problems down the road.

You can’t manage what you don’t know about

When it comes to OSS, there may be a lot you don’t know about, such as those license terms or interoperability gotchas. Worse, there can be maintenance issues — and security issues. Ask yourself: Does your organization know all the OSS it has installed on servers on-prem or in the cloud? Coded into custom applications? Are you sure that all patches and fixes have been installed (and installed correctly), even on virtual machine templates, and that there are no security vulnerabilities?
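
Answering even the first of those questions takes tooling. As a tiny illustration of the inventory problem, here’s a sketch that lists every Python package installed in an environment, with its version and declared license, using only the standard library’s importlib.metadata. A real software bill of materials would have to cover every language runtime, OS package, and container image you run.

```python
from importlib.metadata import distributions

def inventory():
    """List installed Python distributions with version and declared license."""
    rows = []
    for dist in distributions():
        name = dist.metadata.get("Name", "unknown")
        license_ = dist.metadata.get("License", "not declared")
        rows.append((name, dist.version, license_))
    return sorted(rows)

if __name__ == "__main__":
    for name, version, license_ in inventory():
        print(f"{name:30} {version:12} {license_}")
```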

In my essay “The six big gotchas: The impact of open source on data centers,” we’ll dig into the key topics: License management, security, patch management, maximizing uptime, maximizing performance, and supporting the OSS.

Automotive ECU (engine control unit)

In my everyday life, I trust that if I make a panic stop, my car’s antilock brake system will work. The hardware, software, and servos will work together to ensure that my wheels don’t lock up—helping me avoid an accident. If that’s not sufficient, I trust that the impact sensors embedded behind the front bumper will fire the airbag actuators with the correct force to protect me from harm, even though they’ve never been tested. I trust that the bolts holding the seat in its proper place won’t shear. I trust the seat belts will hold me tight, and that cargo in the trunk won’t smash through the rear seats into the passenger cabin.

Engineers working on nearly every automobile sold worldwide ensure that their work practices conform to ISO 26262. That standard describes how to manage the functional safety of the electrical and electronic systems in passenger cars. A significant portion of ISO 26262 involves ensuring that software embedded into cars—whether in the emissions system, the antilock braking systems, the security systems, or the entertainment system—is architected, coded, and tested to be as reliable as possible.

I’ve worked with ISO 26262 and related standards on a variety of automotive software security projects. Don’t worry, we’re not going to get into the hairy bits of those standards because unless you are personally designing embedded real-time software for use in automobile components, they don’t really apply. Also, ISO 26262 is focused on the real-world safety of two-ton machines hurtling at 60-plus miles per hour—that is, things that will kill or hurt people if they don’t work as expected.

Instead, here are five IT systems management ideas that are inspired by ISO 26262. We’ll help you ensure your systems are designed to be Reliable, with a capital R, and Safe, with a capital S.

Read the list, and more, in my article for HP Enterprise Insights, “5 lessons for data center pros, inspired by automotive engineering standards.”

An organization’s Chief Information Security Officer’s job isn’t ones and zeros. It’s not about unmasking cybercriminals. It’s about reducing risk for the organization, and enabling executives and line-of-business managers to innovate and compete safely and securely. While the CISO is often seen as the person who loves to say “No,” in reality, the CISO wants to say “Yes” — the job, after all, is to make the company thrive.

Meanwhile, the CISO has a small staff, tight budget, and the need to demonstrate performance metrics and ROI. What’s it like in the real world? What are the biggest challenges? We asked two former CISOs (it’s hard to get current CISOs to speak on the record), both of whom worked in the trenches and now advise CISOs on a daily basis.

To Jack Miller, a huge challenge is the speed of decision-making in today’s hypercompetitive world. Miller, currently Executive in Residence at Norwest Venture Partners, conducts due diligence and provides expertise on companies in the cyber security space. Most recently he served as chief security strategy officer at ZitoVault Software, a startup focused on safeguarding the Internet of Things.

Before his time at ZitoVault, Miller was the head of information protection for Auto Club Enterprises. That’s the largest AAA conglomerate with 15 million members in 22 states. Previously, he served as the CISO of the 5th and 11th largest counties in the United States, and as a security executive for Pacific Life Insurance.

“Big decisions are made in the blink of an eye,” says Miller. “Executives know security is important, but don’t understand how any business change can introduce security risks to the environment. As a CISO, you try to get in front of those changes – but more often, you have to clean up the mess afterwards.”

Another CISO, Ed Amoroso, is frustrated by the business challenge of justifying a security ROI. Amoroso is the CEO of TAG Cyber LLC, which provides advanced cybersecurity training and consulting for global enterprise and U.S. Federal government CISO teams. Previously, he was Senior Vice President and Chief Security Officer for AT&T, and managed computer and network security for AT&T Bell Laboratories. Amoroso is also an Adjunct Professor of Computer Science at the Stevens Institute of Technology.

Amoroso explains, “Security is an invisible thing. I say that I’m going to spend money to prevent something bad from happening. After spending the money, I say, ta-da, look, I prevented that bad thing from happening. There’s no demonstration. There’s no way to prove that the investment actually prevented anything. It’s like putting a “This House is Guarded by a Security Company” sign in front of your house. Maybe a serial killer came up the street, saw the sign, and moved on. Maybe not. You can’t put in security and say, here’s what didn’t happen. If you ask, 10 out of 10 CISOs will say demonstrating ROI is a huge problem.”

Read more in my article for Global Banking & Finance Magazine, “Be Prepared to Get Fired! And Other Business Advice for CISOs.”

“Someone is waiting just for you / Spinnin’ wheel, spinnin’ true.”

Those lyrics to a 1969 song by Blood, Sweat & Tears could also describe 2017 enterprise apps that time-out or fail because of dropped or poor connectivity. Wheels spin. Data is lost. Applications crash. Users are frustrated. Devices are thrown. Screens are smashed.

It doesn’t have to be that way. Always-on applications can continue to function even when the user loses an Internet or Wi-Fi connection. With proper design and testing, you won’t have to handle as many smartphone accidental-damage insurance claims.

Let’s start with the fundamentals. Many business applications are friendly front ends to remote services. The software may run on phones, tablets, or laptops, and the services may be in the cloud or in the on-premises data center.

When connectivity is strong, with sufficient bandwidth and low latency, the front-end software works fine. The user experience is excellent. Data sent to the back end is received and confirmed, and data served to the user front end is transmitted without delay. Joy!

When connectivity is non-existent or fails intermittently, when bandwidth is limited, and when there’s too much latency — which you can read as “Did the Internet connection go down again?!” — users immediately feel frustration. That’s bad news for the user experience, and also extremely bad in terms of saving and processing transactions. A user who taps a drop-down menu or presses “Enter” and sees nothing happen might progress to multiple mouse clicks, a force-reset of the application, or a reboot of the device, any of which could result in data loss. Submitted forms and uploads could be lost in a time-out. Sessions could halt. In some cases, the app could freeze (with or without a spinning indicator) or crash outright. Disaster!
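
One well-worn mitigation is to treat every user action as a locally queued transaction: persist it immediately, then retry the upload with exponential backoff once connectivity returns. Here’s a bare-bones sketch of the idea; send_to_server is a stand-in for whatever API call your app actually makes, and the JSON file is a placeholder for a proper local store.

```python
import json
import random
import time
from pathlib import Path

QUEUE_FILE = Path("pending_actions.json")   # survives app restarts

def enqueue(action: dict) -> None:
    """Persist the user's action locally before attempting any network I/O."""
    pending = json.loads(QUEUE_FILE.read_text()) if QUEUE_FILE.exists() else []
    pending.append(action)
    QUEUE_FILE.write_text(json.dumps(pending))

def flush(send_to_server, max_attempts: int = 5) -> None:
    """Try to upload queued actions, backing off exponentially on failure."""
    pending = json.loads(QUEUE_FILE.read_text()) if QUEUE_FILE.exists() else []
    remaining = []
    for action in pending:
        for attempt in range(max_attempts):
            try:
                send_to_server(action)            # stand-in for the real API call
                break
            except OSError:                       # timeout, connection refused, etc.
                time.sleep((2 ** attempt) + random.random())  # backoff with jitter
        else:
            remaining.append(action)              # still pending; keep for next flush
    QUEUE_FILE.write_text(json.dumps(remaining))
```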

What can you do about it? Easy: Read my article for HP Enterprise Insights, “How to design software that doesn’t crash when the Internet connection fails.”


I have a new research paper in Elsevier’s technical journal, Network Security. Here’s the abstract:

Lock it down! Button it up tight! That’s the default reaction of many computer security professionals to anything and everything that’s perceived as introducing risk. Given the rapid growth of cybercrime such as ransomware and the non-stop media coverage of data theft of everything from customer payment card information through pre-release movies to sensitive political email databases, this is hardly surprising.


In attempting to lower risk, however, they also exclude technologies and approaches that could contribute significantly to the profitability and agility of the organisation. Alan Zeichick of Camden Associates explains how to make the most of technology by opening up networks and embracing innovation – but safely.

You can read the whole article, “Enabling innovation by opening up the network,” here.

A large percentage of IT and security tasks and alerts require simple responses. On a small network, there aren’t many alerts, so administrators can easily handle them: fixing a connection here, approving external VPN access there, updating router firmware on one box, pushing the latest Microsoft Office patches to users on another, evaluating one security warning and dismissing the next, making sure that a newly spun-up virtual machine has the proper agents and firewall settings, reviewing log activity. That sort of thing.

On a large network, those tasks become tedious… and on a very large network, they can escalate unmanageably. As networks scale to hundreds, thousands, and hundreds of thousands of devices, thanks to mobility and the Internet of Things, the load expands exponentially – and so do routine IT tasks and alerts, especially when the network, its devices, users and applications are in constant flux.

Most tasks can be automated, yes, but it’s not easy to spell out in a standard policy-based system exactly what to do. Similarly, the proper way of handling alerts can be automated, but given the tremendous variety of situations, variables, combinations and permutations, that too can be challenging. Merely programming a large number of possible situations, and their responses, would be a tremendous task — and not even worth the effort, since the scripts would be brittle and would themselves require constant review and maintenance.

That’s why in many organizations, only responses to the very simplest of tasks and alert responses are programmed in rule-based systems. The rest are shunted over to IT and security professionals, whose highly trained brains can rapidly decide what to do and execute the proper response.
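
To see why rule-based automation runs out of steam, consider what such a system boils down to: a lookup table of known situations and canned responses. Everything that doesn’t match falls through to a human, and every new situation means another hand-written rule. The alert types and actions in this sketch are purely illustrative.

```python
# Illustrative rule table: alert type -> canned response.
RULES = {
    "disk_nearly_full":     "expand volume and notify owner",
    "cert_expiring_soon":   "renew certificate via internal CA",
    "office_patch_missing": "push latest Office update to endpoint",
    "vpn_access_request":   "approve if requester is in the contractors group",
}

def handle_alert(alert: dict) -> str:
    """Apply the matching rule, or escalate to a human for anything unknown."""
    action = RULES.get(alert.get("type"))
    if action is None:
        return f"escalate to on-call engineer: {alert}"
    return action

if __name__ == "__main__":
    print(handle_alert({"type": "disk_nearly_full", "host": "db-01"}))
    print(handle_alert({"type": "router_firmware_outdated", "host": "edge-07"}))
```

Every alert that falls through that table lands on a person – which is exactly the mind-numbing queue described next.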

At the same time, those highly trained brains turn into mush because handling routine, easy-to-solve problems is mind-numbing and not intellectually challenging. Solving a problem once is exciting. Solving nearly the same problem a hundred times every day, five days a week, 52 weeks a year (not counting holidays) is inspiration for updating the C.V… and finding a more interesting job.

Enter Artificial Intelligence

AI has already proven itself in computer management and security. Consider the high-profile role that AI pattern recognition plays in Cylance’s endpoint security software. The Cylance solution trains itself to recognize good files (like executables, images and documents) and malicious ones – and can spot the bad ones without using signatures. It can even spot those which have never been seen before, because it’s not trained on specific viruses or trojans, but rather on “good” vs. “bad.”

Torsten George is a believer, as he writes in “The Role of Artificial Intelligence in Cyber Security,”

Last year, the IT security community started to buzz about AI and machine learning as the Holy Grail for improving an organization’s detection and response capabilities. Leveraging algorithms that iteratively learn from data, promises to uncover threats without requiring headcounts or the need to know “what to look for”.

He continues,

Enlisting machine learning to do the heavy lifting in first line security data assessment enables analysts to focus on more advanced investigations of threats rather than performing tactical data crunching. This meeting of the minds, whereby AI is applied using a human-interactive approach holds a lot of promise for fighting, detecting, and responding to cyber risks.

Menlo Security is one of many network-protection companies that uses artificial intelligence. The Menlo Security Isolation Platform uses AI to prevent Internet-based malware from ever reaching an endpoint, such as a desktop or mobile device, because email and websites are accessed inside the cloud – not on the client’s computer. Only safe, malware-free rendering information is sent to the user’s endpoint, eliminating the possibility of malware reaching the user’s device. An artificial intelligence engine constantly scans the Internet session to provide protection against spear-phishing and other email attacks.

What if a machine does become compromised? It’s unlikely, but it can happen – and the price of a single breach can be incredible, especially if a hacker can take full control of the compromised device and use it to attack other assets within the enterprise, such as servers, routers or executives’ computers.

If a breach does occur, that’s when AI technology like that of Javelin Networks leaps into action. The AI detects that the attack is in progress, alerts security teams, and isolates the device from the network. Simultaneously, the AI tricks the attackers into believing they’ve succeeded in their attack, thereby keeping them “on the line” while real-time forensics tools gather the information needed to identify the attacker and help shut them down for good.

Manage the Network, Hal

Of course, AI can serve a vital purpose in managing a key element of modern networks beyond security. As Ajay Malik recently wrote in “Artificial intelligence will revolutionize Wi-Fi,”

The problem is that the data source in a wireless network is huge. The data varies at every transmission level. There is a “data rate” of each message transmitted. There are “retries” for each message transmitted.

The reason for not being able to “construct” the received message is specific for each message. The manual classification and analysis of this data is infeasible and uneconomic. Hence, all data available by different vendors is plagued by averages. This is where I believe artificial intelligence has a role to play.

Deep neural nets can automate the analysis and make it possible to analyze every trend of wireless. Machine learning and algorithms can ensure the end user experience. Only the use of AI can change the center of focus from the evolution of wireless or adding value to wireless networks to automatically ensuring the experience.

We will see AI at every level of the network operations center. There are too many devices, too many users, and too many rapid changes for humans and normal rule-based automation systems to keep up. Self-learning systems that adapt and solve real problems quickly and correctly will be essential in every IT organization.