Blockchain is a distributed digital ledger technology in which blocks of transaction records can be added and viewed—but can’t be deleted or changed without detection. Here’s where the name comes from: a blockchain is an ever-growing sequential chain of transaction records, clumped together into blocks. There’s no central repository of the chain, which is replicated in each participant’s blockchain node, and that’s what makes the technology so powerful. Yes, blockchain was originally developed to underpin Bitcoin and is essential to the trust required for users to trade digital currencies, but that is only the beginning of its potential.

Blockchain neatly solves the problem of ensuring the validity of all kinds of digital records. What’s more, blockchain can be used for public transactions as well as for private business, inside a company or within an industry group. “Blockchain lets you conduct transactions securely without requiring an intermediary, and records are secure and immutable,” says Mark Rakhmilevich, product management director at Oracle. “It also can eliminate offline reconciliations that can take hours, days, or even weeks.”

That’s the power of blockchain: an immutable digital ledger for recording transactions. It can be used to power anonymous digital currencies—or farm-to-table vegetable tracking, business contracts, contractor licensing, real estate transfers, digital identity management, and financial transactions between companies or even within a single company.

“Blockchain doesn’t have to just be used for accounting ledgers,” says Rakhmilevich. “It can store any data, and you can use programmable smart contracts to evaluate and operate on this data. It provides nonrepudiation through digitally signed transactions, and the stored results are tamper proof. Because the ledger is replicated, there is no single source of failure, and no insider threat within a single organization can impact its integrity.”

It’s All About Distributed Ledgers

Several simple concepts underpin any blockchain system. The first is the block, which is a batch of one or more transactions, grouped together and hashed. The hashing process produces an error-checking and tamper-resistant code that will let anyone viewing the block see if it has been altered. The block also contains the hash of the previous block, which ties them together in a chain. The backward hashing makes it extremely difficult for anyone to modify a single block without detection.
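
To make that backward hashing concrete, here is a toy sketch in Python; the block fields and hashing scheme are simplified for illustration and don't follow any real blockchain's format.

    import hashlib
    import json

    # Toy illustration of backward hashing; the block format is invented.
    def block_hash(block):
        payload = json.dumps(block, sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

    genesis = {"prev_hash": "0" * 64, "transactions": ["Alice pays Bob 5"]}
    block2 = {"prev_hash": block_hash(genesis),
              "transactions": ["Bob pays Carol 2"]}

    # Tampering with the first block changes its hash, so block2's stored
    # prev_hash no longer matches and the alteration is detectable.
    genesis["transactions"][0] = "Alice pays Bob 500"
    print(block_hash(genesis) == block2["prev_hash"])   # False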

A chain contains collections of blocks, which are stored on decentralized, distributed servers. The more servers, the better, with every server containing the same set of blocks and the latest values of information, such as account balances. Multiple transactions are handled within a single block using an algorithm called a Merkle tree, or hash tree, which provides fault and fraud tolerance: if a server goes down, or if a block or chain is corrupted, the missing data can be reconstructed by polling other servers’ chains.
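
Here is an equally simplified Merkle-root calculation, just to show how one hash can vouch for a whole batch of transactions; real implementations handle leaf encoding and odd leaf counts differently.

    import hashlib

    # Simplified Merkle-root calculation for one block's transactions.
    def sha256(data):
        return hashlib.sha256(data).digest()

    def merkle_root(transactions):
        level = [sha256(tx.encode()) for tx in transactions]
        while len(level) > 1:
            if len(level) % 2:              # duplicate the last leaf if odd
                level.append(level[-1])
            level = [sha256(level[i] + level[i + 1])
                     for i in range(0, len(level), 2)]
        return level[0].hex()

    txs = ["Alice pays Bob 5", "Bob pays Carol 2", "Carol pays Dave 1"]
    print(merkle_root(txs))
    # Changing any single transaction changes the root, making tampering evident.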

And while the chain itself should be open for validation by any participant, some chains can be implemented with some form of access control to limit viewing of specific data fields. That way, participants can view relevant data, but not everything in the chain. A customer might be able to verify that a contractor has a valid business license and see the firm’s registered address and list of complaints—but not see the names of other customers. The state licensing board, on the other hand, may be allowed to access the customer list or see which jobs are currently in progress.

When originally conceived, blockchain had a narrow set of protocols. They were designed to govern the creation of blocks, the grouping of hashes into the Merkle tree, the viewing of data encapsulated into the chain, and the validation that data has not been corrupted or tampered with. Over time, creators of blockchain applications (such as the many competing digital currencies) innovated and created their own protocols—which, due to their independent evolutionary processes, weren’t necessarily interoperable. By contrast, the success of general-purpose blockchain services, which might encompass computing services from many technology, government, and business players, created the need for industry standards—such as Hyperledger, a Linux Foundation project.

Read more in my feature article in Oracle Magazine, March/April 2018, “It’s All About Trust.”

Far too many companies fail to learn anything from security breaches. According to CyberArk, cyber-security inertia is putting organizations at risk. Nearly half — 46% — of enterprises say their security strategy rarely changes substantially, even after a cyberattack.

That data comes from the organization’s new Global Advanced Threat Landscape Report 2018. The researchers surveyed 1,300 IT security decision-makers, DevOps and app developer professionals, and line-of-business owners in seven countries.

The Cloud is Unsecured

Cloud computing is a major focus of this report, and the study results are scary. CyberArk says, “Automated processes inherent in cloud environments are responsible for prolific creation of privileged credentials and secrets. These credentials, if compromised, can give attackers a crucial jumping-off point to achieve lateral access across networks, data and applications — whether in the cloud or on-premises.”

The study shows that:

  • 50% of IT professionals say their organization stores business-critical information in the cloud, including revenue-generating customer-facing applications
  • 43% say they commit regulated customer data to the cloud
  • 49% of respondents have no privileged account security strategy for the cloud

While we haven’t yet seen major breaches caused by tech failures of cloud vendors, we have seen many, many examples of customer errors with the cloud. Those errors, such as posting customer information to public cloud storage services without encryption or proper password control, have allowed open access to private information.

CyberArk’s view is dead right: “There are still gaps in the understanding of who is responsible for security in the cloud, even though the public cloud vendors are very clear that the enterprise is responsible for securing cloud workloads. Additionally, few understand the full impact of the unsecured secrets that proliferate in dynamic cloud environments and automated processes.”

In other words, nobody is stepping up to the plate. (Perhaps cloud vendors should scan their customers’ files and warn them if they are uploading unsecured files. Nah. That’ll never happen – because if there’s a failure of that monitoring system, the cloud vendor could be held liable for the breach.)

Endpoint Security Is Neglected

I was astonished that the CyberArk study shows only 52% of respondents keep their operating system and patches current. Yikes. It’s conventional wisdom that maintaining patches is about the lowest-hanging of the low-hanging fruit. Unpatched servers have been easy pickings for hackers over the past few years.

CyberArk’s analysis appears accurate here: “End users deploy a lot of technologies to protect endpoints, and they face many competing factors. These include compliance drivers, end-user usability, endpoint configuration management and an increasingly highly mobile and remote user base, all of which make visibility and control harder. With advanced malware attacks over the past year including WannaCry and NotPetya, there is certainly room for greater prioritization around blocking credential theft as a critical step to preventing attackers from gaining access to the network and initiating lateral movement.”

Many Threats, Poor Planning

According to the study, the greatest cyber security threats expected by IT professionals are:

  • Targeted phishing attacks (56%)
  • Insider threats (51%)
  • Ransomware or malware (48%)
  • Unsecured privileged accounts (42%)
  • Unsecured data stored in the cloud (41%)

Meanwhile, 37% of respondents say they store user passwords in Excel spreadsheets or in Word docs (hopefully not on the cloud).

Back to the cloud for a moment. The study says that “Almost all (94%) security respondents say their organizations store and serve data using public cloud services. And they are increasingly likely to entrust cloud providers with much more sensitive data than in the past. For instance, half (50%) of IT professionals say their organization stores business-critical information in the cloud, including revenue-generating customer-facing applications, and 43% say they commit regulated customer data to the cloud.”

And all that, with far too many companies reporting poor security practices when it comes to the cloud. Expect more breaches. Lots more.

Don’t be misled by the name: Serverless cloud computing contains servers. Lots of servers. What makes serverless “serverless” is that developers, IT administrators and business leaders don’t have to think about those servers. Ever.

In the serverless model, online computing power gets tapped automatically only at the moment it’s needed. This can save organizations money and, just as importantly, make the IT organization more agile when it comes to building and launching new applications. That’s why serverless has the potential to be a game-changer for enterprise.

“Serverless is the next logical step for computing,” says Bob Quillin, Oracle vice president of developer relations. “We went from a data center where you own everything, to the cloud with shared servers and centralized infrastructure, to serverless, where you don’t even care about the servers themselves.”

In the serverless model, developers write and deploy what are called “functions.” Those are slimmed-down applications that take one action, such as processing an e-commerce order or recording that a shipment arrived. They run those functions directly on the cloud, using technology that eliminates the need to manage the servers, since it delivers computing power the moment that a function gets called into action.
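
Conceptually, a function is nothing more than a small handler that the platform invokes on demand. Here is a rough, framework-agnostic sketch; the event fields are invented for illustration.

    import json

    # A sketch of a serverless "function": one slimmed-down action,
    # run only when an event arrives.
    def record_shipment_arrival(event):
        shipment_id = event["shipment_id"]
        # ...here the function would update a database or call another service...
        return {"statusCode": 200,
                "body": json.dumps({"shipment_id": shipment_id,
                                    "status": "arrived"})}

    # The platform invokes the function on demand, for example:
    print(record_shipment_arrival({"shipment_id": "SHIP-1234"}))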

Both the economics and the speed-of-development benefits of serverless cloud computing are compelling. Here are four CEO-level insights from Quillin for thinking about serverless computing.

First: Serverless can save real money. In the old data center model, says Quillin, organizations had to buy and maintain expensive servers, infrastructure and real estate.

In a traditional cloud model, organizations turn that capital expense into an operating one by provisioning virtualized servers and infrastructure. That saves money compared with the old data center model, Quillin says, but “you are typically paying for compute resources that are running all the time—in increments of CPU hours at least.” If you create a cluster of cloud servers, you don’t typically build it up and break it down every day, and certainly not every hour, as needed. That’s just too much management and orchestration for most organizations.

Serverless, on the other hand, essentially lets you pay only for exactly the time that a workload runs. For closing the books, it may be a once-a-month charge for a few hours of computing time. For handling transactions, it might be a few tenths of a second whenever a customer makes a sale or an Internet of Things (IoT) device sends data.
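
A back-of-the-envelope comparison shows why that matters. Every number below is a made-up assumption for illustration, not a quote from any cloud provider’s price list.

    # Back-of-the-envelope comparison; all figures are hypothetical.
    always_on_vm_per_hour = 0.10        # assumed VM price, USD per hour
    hours_per_month = 730
    vm_monthly = always_on_vm_per_hour * hours_per_month

    invocations_per_month = 2_000_000   # assumed transaction volume
    avg_runtime_seconds = 0.3
    price_per_second = 0.000017         # assumed per-second function price
    fn_monthly = invocations_per_month * avg_runtime_seconds * price_per_second

    print("Always-on VM: ${0:.2f}/month".format(vm_monthly))   # $73.00
    print("Serverless:   ${0:.2f}/month".format(fn_monthly))   # $10.20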

For the rest of the list, and the full story, see my essay for the Wall Street Journal, “4 Things CEOs Should Know About Serverless Computing.”

DevOps is a technology discipline well-suited to cloud-native application development. When it only takes a few mouse clicks to create or manage cloud resources, why wouldn’t developers and IT operations teams work in sync to get new apps out the door and in front of users faster? The DevOps culture and tactics have done much to streamline everything from coding to software testing to application deployment.

Yet far from every organization has embraced DevOps, and not every organization that has tried DevOps has found the experience transformative. Perhaps that’s because the idea is relatively young (the term was coined around 2009), suggests Javed Mohammed, systems community manager at Oracle, or perhaps because different organizations are at such different spots in DevOps’ technology adoption cycle. That idea—about where we are in the adoption of DevOps—became a central theme of a recent podcast discussion among tech experts. Following are some highlights.

Confusion about DevOps can arise because DevOps affects dev and IT teams in many ways. “It can apply to the culture piece, to the technology piece, to the process piece—and even how different teams interact, and how all of the different processes tie together,” says Nicole Forsgren, founder and CEO of DevOps Research and Assessment LLC and co-author of Accelerate: The Science of Lean Software and DevOps.

The adoption and effectiveness of DevOps within a team depends on where each team is, and where organizations are. One team might be narrowly focused on the tech used to automate software deployment to the public, while another is looking at the culture and communication needed to release new features on a weekly or even daily basis. “Everyone is at a very, very different place,” Forsgren says.

Indeed, says Forsgren, some future-thinking organizations are starting to talk about what ‘DevOps Next’ is, extending the concept of developer-led operations beyond common best practices. At the same time, in other companies, there’s no DevOps. “DevOps isn’t even on their radar,” she sighs. Many experts, including Forsgren, see that DevOps is here, is working, and is delivering real value to software teams today—and is helping businesses create and deploy better software faster and less expensively. That’s especially true when it comes to cloud-native development, or when transitioning existing workloads from the data center into the cloud.

Read more in my essay, “DevOps: Sometimes Incredibly Transformative, Sometimes Not So Much.”

The VPN model of extending security through enterprise firewalls is dead, and the future now belongs to the Software Defined Perimeter (SDP). Firewalls imply that there’s an inside to the enterprise, a place where devices can communicate in a trusted manner. This being so, there must also be an outside where communications aren’t trusted. Residing between the two is the firewall, which decides, based on scans, policies, and deep inspection, which traffic can leave and which can enter.

What about trusted applications requiring direct access to corporate resources from outside the firewall? That’s where Virtual Private Networks came in, by offering a way to punch a hole in the firewall. VPNs are a complex mechanism for using encryption and secure tunnels to bridge multiple networks, such as a head-office and regional office network. They can also temporarily allow remote users to become part of the network.

VPNs are well established but perceived as difficult to configure on the endpoints, hard for IT to manage and challenging to scale for large deployments. There are also issues of software compatibility: not everything works through a VPN. Putting it bluntly, almost nobody likes VPNs and there is now a better way to securely connect mobile applications and Industrial Internet of Things (IIoT) devices into the world of datacenter servers and cloud-based applications.

Authenticate Then Connect

The Software Defined Perimeter depends on a rigorous process of identity verification of both client and server using a secure control channel, thereby replacing the VPN. The negotiation for trustworthy identification is based on cryptographic protocols like Transport Layer Security (TLS) which succeeds the old Secure Sockets Layer (SSL).

With identification and trust established by both parties, a secure data channel can be provisioned with specified bandwidth and quality. For example, the data channel might require very low latency and minimal jitter for voice messaging or it might need high bandwidth for streaming video, or alternatively be low-bandwidth and low-cost for data backups.
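
In practical terms, the control channel boils down to mutual TLS: both sides present certificates and verify each other before anything else happens. Here is a rough client-side sketch using Python’s standard ssl module; the hostname, port, certificate file names, and the request message are placeholders, not any SDP vendor’s protocol.

    import socket
    import ssl

    # Control-channel sketch: mutual TLS authentication before any data flows.
    context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH,
                                         cafile="sdp-controller-ca.pem")
    context.load_cert_chain(certfile="client-app.pem", keyfile="client-app.key")
    context.verify_mode = ssl.CERT_REQUIRED    # client must verify the server
    context.check_hostname = True

    with socket.create_connection(("controller.example.net", 8443)) as raw_sock:
        with context.wrap_socket(raw_sock,
                                 server_hostname="controller.example.net") as tls:
            # Only after both sides prove their identities would the SDP
            # controller provision the separate, purpose-built data channel.
            tls.sendall(b"REQUEST-DATA-CHANNEL app=account-management\n")
            print(tls.recv(1024))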

On the client side, the trust negotiation and data channel can be tied to a specific mobile application running on, say, an employee’s phone or tablet. The corporate customer account management app needs trusted access to the corporate database server, but no other app on that phone should be granted access.

SDP is based on the notion of authenticate-before-connect, which reminds me of the reverse-charge phone calls of the distant past. A caller, Bob, would ask the operator to place a reverse-charge call to his aunt Sally at a specified number. The operator placing the call would chat with Sally over the equivalent of the control channel. Only if the operator believed she was talking to Sally, and only if Sally accepted the charges, would the operator establish the Bob-to-Sally connection, which is the equivalent of the SDP data channel.

Read more in my essay for Network Computing, “Forget VPNs: the future is SDP.”

Companies can’t afford downtime. Employees need access to their applications and data 24/7, and so do other business applications, manufacturing and logistics management systems, and security monitoring centers. Anyone who thinks that the brute force effort of their hard-working IT administrators is enough to prevent system downtime just isn’t facing reality.

Traditional systems administrators and their admin tools can’t keep up with the complexity inherent in any modern enterprise. A recent survey of the Oracle Applications Users Group has found that despite significant progress in systems management automation, many customers still report that more than 80% of IT issues are first discovered and reported by users. The number of applications is spiraling up, while data increases at an even more rapid rate.

The boundaries between systems are growing more complex, especially with cloud-based and hybrid-cloud architectures. That reality is why Oracle, after analyzing a survey of its industry-leading customers, recently predicted that by 2020, more than 80% of application infrastructure operations will be managed autonomously.

Autonomously is an important word here. It means not only doing mundane day-to-day tasks including monitoring, tuning, troubleshooting, and applying fixes automatically, but also detecting and rapidly resolving issues. Even when it comes to the most complex problems, machines can simplify the analysis—sifting through the millions of possibilities to present simpler scenarios, to which people then can apply their expertise and judgment of what action to take.

Oracle asked about the kinds of activities that IT system administrators perform on a daily, weekly, and monthly basis—things such as password resets, system reboots, software patches, and the like.

Expect that IT teams will soon reduce by several orders of magnitude the number of situations like those that need manual intervention. If an organization typically has 20,000 human-managed interventions per year, humans will need to touch only 20. The rest will be handled through systems that can apply automation combined with machine learning, which can analyze patterns and react faster than human admins to enable preventive maintenance, performance optimization, and problem resolution.

Read more in my article for Forbes, “Prediction: 80% of Routine IT Operations Will Soon Be Solved Autonomously.”

Not a connected car.

Nobody wants bad guys to be able to hack connected cars. Equally important, they shouldn’t be able to hack any part of the multi-step communications path that leads from the connected car to the Internet to cloud services – and back again. Fortunately, companies are working across the automotive and security industries to make sure that doesn’t happen.

The consequences of cyberattacks against cars range from the bad to the horrific: Hackers might be able to determine that a driver is not home, and sell that information to robbers. Hackers could access accounts and passwords, and be able to leverage that information for identity theft, or steal information from bank accounts. Hackers might be able to immobilize vehicles, or modify/degrade the functionality of key safety features like brakes or steering. Hackers might even be able to seize control of the vehicle, and cause accidents or terrorist incidents.

Horrific. Thankfully, companies like semiconductor leader Micron Technology, along with communication security experts NetFoundry, have a plan – and are partnering with vehicle manufacturers to embed secure, trustworthy hardware into connected cars. The result: Safety. Security. Trust. Vroom.

It starts with the Internet of Things

The IoT consists of autonomous computing units, connected to back-end services via the Internet. Those back-end services are often in the cloud, and in the case of connected cars, might offer anything from navigation to infotainment to preventive maintenance to firmware upgrades for built-in automotive features. Often, the back-end services would be offered through the automobile’s manufacturer, though they may be provisioned through third-party providers.

The communications chain for connected cars is lengthy. On the car side, it begins with an embedded component (think stereo head unit, predictive front-facing radar used for adaptive cruise control, or anti-lock brake monitoring system). The component will likely contain or be connected to an ECU – an embedded control unit, a circuit board with a microprocessor, firmware, RAM, and a network connection. The ECU, in turn, is connected via an in-vehicle network, which connects to a communications gateway.

That communications gateway talks to a telecommunications provider, which could change as the vehicle crosses service provider or national boundaries. The telco links to the Internet, the Internet links to a cloud provider (such as Amazon Web Services), and from there, there are services that talk to the automotive systems.

Trust is required at all stages of the communications. The vehicle must be certain that its embedded devices, ECUs, and firmware are not corrupted or hacked. The gateway needs to know that it’s talking to the real car and its embedded systems – not fakes or duplicates offered by hackers. It also needs to know that the cloud services are the genuine article, and not fakes. And of course, the cloud services must be assured that they are talking to the real, authenticated automotive gateway and in-vehicle components.

Read more about this in my feature for Business Continuity, “Building Cybertrust into the Connected Car.”

Farewell, Prius

A sad footnote to this blog post. Our faithful Prius, pictured above, was totaled in a collision. Nobody was injured, which is the most important, but the car is gone. May it rust in piece.

We all have heard the usual bold predictions for technology in 2018: Lots of cloud computing, self-driving cars, digital cryptocurrencies, 200-inch flat-screen televisions, and versions of Amazon’s Alexa smart speaker everywhere on the planet. Those types of predictions, however, are low-hanging fruit. They’re not bold. One might as well predict that there will be some sunshine, some rainy days, a big cyber-bank heist, and at least one smartphone catching fire.

Let’s dig for insights beyond the blindingly obvious. I talked to several tech leaders, deep-thinking individuals in California’s Silicon Valley, asking them for their predictions, their idea of new trends, and disruptions in the tech industry. Let’s see what caught their eye.

Gary Singh, VP of marketing, OnDot Systems, believes that 2018 will be the year when mobile banking will transform into digital banking — which is more disruptive than one would expect. “The difference between digital and mobile banking is that mobile banking is informational. You get information about your accounts,” he said. Singh continues, “But in terms of digital banking, it’s really about actionable insights, about how do you basically use your funds in the most appropriate way to get the best value for your dollar or your pound in terms of how you want to use your monies. So that’s one big shift that we would see start to happen from mobile to digital.”

Tom Burns, Vice President and General Manager of Dell EMC Networking, has been following Software-Defined Wide Area Networks. SD-WAN is a technology that allows enterprise WANs to thrive over the public Internet, replacing expensive fixed-point connections provisioned by carriers using technologies like MPLS. “The traditional way of connecting branches in office buildings and providing services to those particular branches is going to change,” Burns observed. “If you look at the traditional router, a proprietary architecture, dedicated lines. SD-WAN is offering a much lower cost but same level of service opportunity for customers to have that data center interconnectivity or branch connectivity providing some of the services, maybe a full even office in the box, but security services, segmentation services, at a much lower cost basis.”

NetFoundry’s co-founder, Mike Hallett, sees a bright future for Application Specific Networks, which link applications directly to cloud or data center applications. The focus is on the application, not on the device. “For 2018, when you think of the enterprise and the way they have to be more agile, flexible and faster to move to markets, particularly going from what I would call channel marketing to, say, direct marketing, they are going to need application-specific networking technologies.” Hallett explains that Application Specific Networks offer the ability to be able to connect from an application, from a cloud, from a device, from a thing, to any application or other device or thing quickly and with agility. Indeed, those connections, which are created using software, not hardware, could be created “within minutes, not within the weeks or months it might take, to bring up a very specific private network, being able to do that. So the year of 2018 will see enterprises move towards software-only networking.”

Mansour Karam, CEO and founder of Apstra, also sees software taking over the network. “I really see massive software-driven automation as a major trend. We saw technologies like intent-based networking emerge in 2017, and in 2018, they’re going to go mainstream,” he said.

There’s more

There are predictions around open networking, augmented reality, artificial intelligence – and more. See my full story in Upgrade Magazine, “From SD-WAN to automation to white-box switching: Five tech predictions for 2018.”

Tom Burns, VP and General Manager of Dell EMC Networking, doesn’t want 2018 to be like 2017. Frankly, none of us in tech want to hit the “repeat” button either. And we won’t, not with increased adoption of blockchain, machine learning/deep learning, security-as-a-service, software-defined everything, and critical enterprise traffic over the public Internet.

Of course, not all possible trends are positive ones. Everyone should prepare for more ransomware, more dangerous data breaches, newly discovered flaws in microprocessors and operating systems, lawsuits over GDPR, and political attacks on Net Neutrality. Yet, as the tech industry embraces 5G wireless and practical applications of the Internet of Things, let’s be optimistic, and hope that innovation outweighs the downsides of fast-moving technology.

Here, Dell has become a major force in networking across the globe. The company’s platform, known as Dell EMC Open Networking, includes a portfolio of data center switches and software, as well as solutions for campus and branch networks. Plus, Dell offers end-to-end services for digital transformation, training, and multivendor environment support.

Tom Burns heads up Dell’s networking business. That business became even larger in September 2016, when Dell closed its US$67 billion acquisition of EMC Corp. Before joining Dell in 2012, Burns was a senior executive at Alcatel-Lucent for many years. He and I chatted in early January at one of Dell’s offices in Santa Clara, Calif.

Q: What’s the biggest tech trend from 2017 that you see continuing into 2018?

Tom Burns (TB): The trend that I think will continue into 2018 and even beyond is around digital transformation. And I recognize that everyone may have a different definition of what that means, but what we at Dell Technologies believe it means is that the number of connected devices is exploding, whether it be cell phones or RFIDs or intelligent types of devices that are looking at our factories and so forth.

And all of this information needs to be collected and analyzed, with what some call artificial intelligence. Some of it needs to be aggregated at the edge. Some of it’s going to be brought back to the core data centers. This is what we refer to as IT transformation, to enable workforce transformation and other capabilities to deliver the applications, the information, the video, the voice communications, in real time to the users and give them the intelligence from the information that’s being gathered to make real-time decisions or whatever they need the information for.

Q: What do you see as being the tech trend from 2017 that you hope won’t continue into 2018?

TB: The trend that won’t continue into 2018 is the old buying habits around previous-generation technology. CIOs and CEOs, whether in enterprises or in service providers, are going to have to think of a new way to deliver their services and applications on a real-time basis, and the traditional architecture that has driven our data centers over the years just is not going to work anymore. It’s not scalable. It’s not flexible. It doesn’t drive out the costs that are necessary in order to enable those new applications.

So one of the things that I think is going to stop in 2018 is the old way of thinking – proprietary applications, proprietary full stacks. I think disaggregation, open, is going to be adopted much, much faster.

Q: If you could name one thing that will predict how the tech industry will do business next year, what do you think it will be?

TB: Well, I think one of the major changes, and we’ve started to see it already, and in fact, Dell Technologies announced it about a year ago, is how is our technology being consumed? We’ve been, let’s face it, box sellers or even solution providers that look at it from a CapEx standpoint. We go in, talk to our customers, we help them enable a new application as a service, and we kind of walk away. We sell them the product, and then obviously we support the product.

More and more, I think the customers and the consumers are looking for different ways to consume that technology, so we’ve started things like consumption models like pay as you grow, pay as you turn on, consumption models that allow us to basically ignite new services on demand. We have several customers that are doing this, particularly around the service provider area. So I think one way tech companies are going to change on how they deliver is this whole thing around pay as a service, consumption models and a new way to really provide the technology capabilities to our customers and then how do they enable them.

Q: If you could predict one thing that will change how enterprise customers do business next year…?

TB: One that we see as a huge, tremendous impact on how customers are going to operate is SD-WAN. The traditional way of connecting branches and office buildings and providing services to those particular branches is going to change. If you look at the traditional router, a proprietary architecture, dedicated lines, SD-WAN is offering a much lower cost but same level of service opportunity for customers to have that data center interconnectivity or branch connectivity, providing some of the services, maybe a full even office in the box, but security services, segmentation services, at a much lower cost basis. So I think that one of the major changes for enterprises next year and service providers is going to be this whole concept and idea with real technology behind it around Software-Defined WAN.

Read the full interview

There’s a lot more to my conversation with Tom Burns. Read the entire interview at Upgrade Magazine.

The pattern of cloud adoption moves something like the ketchup bottle effect: You tip the bottle and nothing comes out, so you shake the bottle and suddenly you have ketchup all over your plate.

That’s a great visual from Frank Munz, software architect and cloud evangelist at Munz & More, in Germany. Munz and a few other leaders in the Oracle community were interviewed on a podcast by Bob Rhubart, Architect Community Manager at Oracle, about the most important trends they saw in 2017. The responses covered a wide range of topics, from cloud to blockchain, from serverless to machine learning and deep learning.

During the 44-minute session, “What’s Hot? Tech Trends That Made a Real Difference in 2017,” the panel took some fascinating detours into the future of self-programming computers and the best uses of container technologies like Kubernetes. For those, you’ll need to listen to the podcast.

The panel included: Frank Munz; Lonneke Dikmans, chief product officer of eProseed, Netherlands; Lucas Jellema, CTO, AMIS Services, Netherlands; Pratik Patel, CTO, Triplingo, US; and Chris Richardson, founder and CEO, Eventuate, US. The program was recorded in San Francisco at Oracle OpenWorld and JavaOne.

The cloud’s tipping point

The ketchup quip reflects the cloud passing a tipping point of adoption in 2017. “For the first time in 2017, I worked on projects where large, multinational companies give up their own data center and move 100% to the cloud,” Munz said. These workload shifts are far from a rarity. As Dikmans said, the cloud drove the biggest change and challenge: “[The cloud] changes how we interact with customers, and with software. It’s convenient at times, and difficult at others.”

Security offered another way of looking at this tipping point. “Until recently, organizations had the impression that in the cloud, things were less secure and less well managed, in general, than they could do themselves,” said Jellema. Now, “people have come to realize that they’re not particularly good at specific IT tasks, because it’s not their core business.” They see that cloud vendors, whose core business is managing that type of IT, can often do those tasks better.

In 2017, the idea of shifting workloads en masse to the cloud and decommissioning data centers became mainstream and far less controversial.

But wait, there’s more! See about Blockchain, serverless computing, and pay-as-you-go machine learning, in my essay published in Forbes, “Tech Trends That Made A Real Difference In 2017.”

With lots of inexpensive, abundant computation resources available, nearly anything becomes possible. For example, you can process a lot of network data to identify patterns, extract intelligence, and produce insight that can be used to automate networks. The road to Intent-Based Networking Systems (IBNS) and Application-Specific Networks (ASN) is a journey. That’s the belief of Rajesh Ghai, Research Director of Telecom and Carrier IP Networks at IDC.

Ghai defines IBNS as a closed-loop, continuous implementation of several steps (sketched in code after this list):

  • Declaration of intent, where the network administrator defines what the network is supposed to do
  • Translation of intent into network design and configuration
  • Validation of the design, using a model that decides whether that configuration can actually be implemented
  • Propagation of that configuration into the network devices via APIs
  • Gathering and studying real-time telemetry from all the devices
  • Using machine learning to determine whether the desired state of policy has been achieved, and then repeating the loop
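
Here is a minimal sketch of that closed loop; the device model, validation rule, and machine-learning check are stand-ins invented for illustration, not any vendor’s API.

    # Toy sketch of the IBNS closed loop; all data and checks are illustrative.
    def translate(intent):
        # Step 2: translate intent into a design/configuration (toy VLAN plan).
        return {"vlans": {tier: 100 + i for i, tier in enumerate(intent["tiers"])}}

    def validate(config, max_vlan=4094):
        # Step 3: model-based check that the design can actually be implemented.
        return all(v <= max_vlan for v in config["vlans"].values())

    def propagate(config, devices):
        # Step 4: push the configuration to each device via its API (simulated).
        for device in devices:
            device["running_config"] = config

    def gather_telemetry(devices):
        # Step 5: collect real-time state from every device.
        return [device.get("running_config") for device in devices]

    def intent_met(config, observed):
        # Step 6: stand-in for the ML check of desired state vs. actual state.
        return all(state == config for state in observed)

    # Step 1: the administrator declares intent.
    intent = {"tiers": ["web", "app", "db"]}
    devices = [{"name": "leaf{0}".format(i)} for i in range(3)]

    config = translate(intent)
    if validate(config):
        propagate(config, devices)
        print("intent met:", intent_met(config, gather_telemetry(devices)))
        # ...and then the loop repeats: gather, check, adjust.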

Related to that concept, Ghai explains, is ASN. “It’s also a concept which is software control and optimization and automation. The only difference is that ASN is more applicable to distributed applications over the internet than IBNS.”

IBNS Operates Networks as One System

“Think of intent-based networking as software that sits on top of your infrastructure, focuses on the networking infrastructure, and enables you to operate your network infrastructure as one system, as opposed to box per box,” explained Mansour Karam, Founder, CEO of Apstra, which offers IBNS solutions for enterprise data centers.

“To achieve this, we have to start with intent,” he continued. “Intent is both the high-level business outcomes that are required by the business, but then also we think of intent as applying to every one of those stages. You may have some requirements in how you want to build.”

Karam added, “Validation includes tests that you would run — we call them expectations — to validate that your network indeed is behaving as you expected, as per intent. So we have to think of a sliding scale of intent and then we also have to collect all the telemetry in order to close the loop and continuously validate that the network does what you want it to do. There is the notion of state at the core of an IBNS that really boils down to managing state at scale and representing it in a way that you can reason about the state of your system, compare it with the desired state and making the right adjustments if you need to.”

The upshot of IBNS, Karam said: “If you have powerful automation you’re taking the human out of the equation, and so you get a much more agile network. You can recoup the revenues that otherwise you would have lost, because you’re unable to deliver your business services on time. You will reduce your outages massively, because 80% of outages are caused by human error. You reduce your operational expenses massively, because organizations spend $4 operating every dollar of CapEx, and 80% of it is manual operations. So if you take that out you should be able to recoup easily your entire CapEx spend on IBNS.”

ASN Gives Each Application Its Own Network

“Application-Specific Networks, like Intent-Based Networking Systems, enable digital transformation, agility, speed, and automation,” explained Galeal Zino, Founder of NetFoundry, which offers an ASN platform.

He continued, “ASN is a new term, so I’ll start with a simple analogy. ASNs are like private clubs — very, very exclusive private clubs — with exactly two members, the application and the network. ASN literally gives each application its own network, one that’s purpose-built and driven by the specific needs of that application. ASN merges the application world and the network world into software which can enable digital transformation with velocity, with scale, and with automation.”

Read more in my new article for Upgrade Magazine, “Manage smarter, more autonomous networks with Intent-Based Networking Systems and Application Specific Networking.”

When the little wireless speaker in your kitchen acts on your request to add chocolate milk to your shopping list, there’s artificial intelligence (AI) working in the cloud, to understand your speech, determine what you want to do, and carry out the instruction.

When you send a text message to your HR department explaining that you woke up with a vision-blurring migraine, an AI-powered chatbot knows how to update your status to “out of the office” and notify your manager about the sick day.

When hackers attempt to systematically break into the corporate computer network over a period of weeks, AI sees the subtle patterns in historical log data, recognizes outliers in the packet traffic, raises the alarm, and recommends appropriate countermeasures.

AI is nearly everywhere in today’s society. Sometimes it’s fairly obvious (as with a chatbot), and sometimes AI is hidden under the covers (as with network security monitors). It’s a virtuous cycle: Modern cloud computing and algorithms make AI a fast, efficient, and inexpensive approach to problem-solving. Developers discover those cloud services and algorithms and imagine new ways to incorporate the latest AI functionality into their software. Businesses see the value of those advances (even if they don’t know that AI is involved), and everyone benefits. And quickly, the next wave of emerging technology accelerates the cycle again.

AI can improve the user experience, such as when deciphering spoken or written communications, or inferring actions based on patterns of past behavior. AI techniques are excellent at pattern-matching, making it easier for machines to accurately decipher human languages using context. One characteristic of several AI algorithms is flexibility in handling imprecise data, such as human text. Chatbots are a prime example: humans can type messages on their phones, and AI-driven software can understand what they say, carry on a conversation, and provide the desired information or take the appropriate actions.
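
As a toy illustration of constrained pattern-matching, here is the kind of simple intent classifier a chatbot front end might use before handing off to a full natural-language service; the intents and phrases are invented for this example.

    import re

    # Toy intent classifier; intents and trigger phrases are invented.
    INTENTS = {
        "sick_day": re.compile(r"\b(sick|migraine|ill|under the weather)\b", re.I),
        "pto_request": re.compile(r"\b(vacation|pto|day off)\b", re.I),
    }

    def classify(message):
        for intent, pattern in INTENTS.items():
            if pattern.search(message):
                return intent
        return "unknown"

    print(classify("I woke up with a vision-blurring migraine"))  # -> sick_day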

If you think AI is everywhere today, expect more tomorrow. AI-enhanced software-as-a-service and platform-as-a-service products will continue to incorporate additional AI to help make cloud-delivered and on-prem services more reliable, more performant, and more secure. AI-driven chatbots will find their ways into new, innovative applications, and speech-based systems will continue to get smarter. AI will handle larger and larger datasets and find its way into increasingly diverse industries.

Sometimes you’ll see the AI and know that you’re talking to a bot. Sometimes the AI will be totally hidden, as you marvel at the, well, uncanny intelligence of the software, websites, and even the Internet of Things. If you don’t believe me, ask a chatbot.

Read more in my feature article in the January/February 2018 edition of Oracle Magazine, “It’s Pervasive: AI Is Everywhere.”

Amazon says that a cloud-connected speaker/microphone was at the top of the charts: “This holiday season was better than ever for the family of Echo products. The Echo Dot was the #1 selling Amazon Device this holiday season, and the best-selling product from any manufacturer in any category across all of Amazon, with millions sold.”

The Echo products are an ever-expanding family of inexpensive consumer electronics from Amazon, which connect to a cloud-based service called Alexa. The devices are always listening for spoken commands, and will respond through conversation, playing music, turning on/off lights and other connected gadgets, making phone calls, and even by showing videos.

While Amazon doesn’t release sales figures for its Echo products, it’s clear that consumers love them. In fact, Echo is about to hit the road, as BMW will integrate the Echo technology (and Alexa cloud service) into some cars beginning this year. Expect other automakers to follow.

Why the Echo – and Apple’s Siri and Google’s Home? Speech.

The traditional way of “talking” to computers has been through the keyboard, augmented with a mouse used to select commands or input areas. Computers initially responded only to typed instructions using a command-line interface (CLI); this was replaced in the era of the Apple Macintosh and the first iterations of Microsoft Windows with windows, icons, menus, and pointing devices (WIMP). Some refer to the modern interface used on standard computers as a graphic user interface (GUI); embedded devices, such as network routers, might be controlled by either a GUI or a CLI.

Smartphones, tablets, and some computers (notably running Windows) also include touchscreens. While touchscreens have been around for decades, it’s only in the past few years they’ve gone mainstream. Even so, the primary way to input data was through a keyboard – even if it’s a “soft” keyboard implemented on a touchscreen, as on a smartphone.

Talk to me!

Enter speech. Sometimes it’s easier to talk, simply talk, to a device than to use a physical interface. Speech can be used for commands (“Alexa, turn up the thermostat” or “Hey Google, turn off the kitchen lights”) or for dictation.

Speech recognition is not easy for computers; in fact, it’s pretty difficult. However, improved microphones and powerful artificial-intelligence algorithms make speech recognition a lot easier. Helping the process: Cloud computing, which can throw nearly unlimited resources at speech recognition, including predictive analytics. Another helper: Constrained inputs, which means that when it comes to understanding commands, there are only so many words for the speech recognition system to decode. (Free-form dictation, like writing an essay using speech recognition, is a far harder problem.)

It’s a big market

Speech recognition is only going to get better – and bigger. According to one report, “The speech and voice recognition market is expected to be valued at USD 6.19 billion in 2017 and is likely to reach USD 18.30 billion by 2023, at a CAGR of 19.80% between 2017 and 2023. The growing impact of artificial intelligence (AI) on the accuracy of speech and voice recognition and the increased demand for multifactor authentication are driving the market growth.” The report continues:

“The speech recognition technology is expected to hold the largest share of the market during the forecast period due to its growing use in multiple applications owing to the continuously decreasing word error rate (WER) of speech recognition algorithm with the developments in natural language processing and neural network technology. The speech recognition technology finds applications mainly across healthcare and consumer electronics sectors to produce health data records and develop intelligent virtual assistant devices, respectively.

“The market for the consumer vertical is expected to grow at the highest CAGR during the forecast period. The key factor contributing to this growth is the ability to integrate speech and voice recognition technologies into other consumer devices, such as refrigerators, ovens, mixers, and thermostats, with the growth of Internet of Things.”

Right now, many of us are talking to Alexa, talking to Siri, and talking to Google Home. Back in 2009, I owned a Ford car that had a primitive (and laughably inaccurate) infotainment system – today, a new car might do a lot better, perhaps due to embedded Alexa. Will we soon be talking to our ovens, to our laser printers and photocopiers, to our medical implants, to our assembly-line equipment, and to our network infrastructure? It wouldn’t surprise Alexa in the least.

In The Terminator, the Skynet artificial intelligence was turned on to track down whoever was hacking a military computer network. Turns out the hacker was Skynet itself. Is there a lesson there? Could AI turn against us, especially as it relates to the security domain?

That was one of the points I made while moderating a discussion of cybersecurity and AI back in October 2017. Here’s the start of a blog post written by my friend Tami Casey about the panel:

Mention artificial intelligence (AI) and security and a lot of people think of Skynet from The Terminator movies. Sure enough, at a recent Bay Area Cyber Security Meetup group panel on AI and machine learning, it was moderator Alan Zeichick – technology analyst, journalist and speaker – who first brought it up. But that wasn’t the only lively discussion during the panel, which focused on AI and cybersecurity.

I found two areas of discussion particularly interesting, which drew varying opinions from the panelists. One, around the topic of AI eliminating jobs and thoughts on how AI may change a security practitioner’s job, and two, about the possibility that AI could be misused or perhaps used by malicious actors with unintended negative consequences.

It was a great panel. I enjoyed working with the Meetup folks, and the participants: Allison Miller (Google), Ali Mesdaq (Proofpoint), Terry Ray (Imperva), Randy Dean (Launchpad.ai & Fellowship.ai).

You can read the rest of Tami’s blog here, and also watch a video of the panel.

The bad news: There are servers used in serverless computing. Real servers, with whirring fans and lots of blinking lights, installed in racks inside data centers inside the enterprise or up in the cloud.

The good news: You don’t need to think about those servers in order to use their functionality to write and deploy enterprise software. Your IT administrators don’t need to provision or maintain those servers, or think about their processing power, memory, storage, or underlying software infrastructure. It’s all invisible, abstracted away.

The whole point of serverless computing is that there are small blocks of code that do one thing very efficiently. Those blocks of code are designed to run in containers so that they are scalable, easy to deploy, and can run in basically any computing environment. The open Docker platform has become the de facto industry standard for containers, and as a general rule, developers are seeing the benefits of writing code that targets Docker containers, instead of, say, Windows servers or Red Hat Linux servers or SuSE Linux servers, or any specific run-time environment. Docker can be hosted in a data center or in the cloud, and containers can be easily moved from one Docker host to another, adding to its appeal.

Currently, applications written for Docker containers still need to be managed by enterprise IT developers or administrators. That means deciding where to create the containers, ensuring that the container has sufficient resources (like memory and processing power) for the application, actually installing the application into the container, running/monitoring the application while it’s running, and then adding more resources if required. Helping do that is Kubernetes, an open container management and orchestration system for Docker. So while containers greatly assist developers and admins in creating portable code, the containers still need to be managed.

That’s where serverless comes in. Developers write their bits of code (such as to read or write from a database, or encrypt/decrypt data, or search the Internet, or authenticate users, or to format output) to run in a Docker container. However, instead of deploying directly to Docker, or using Kubernetes to handle deployment, they write their code as a function, and then deploy that function onto a serverless platform, like the new Fn project. Other applications can call that function (perhaps using a RESTful API) to do the required operation, and the serverless platform then takes care of everything else automatically behind the scenes, running the code when needed, idling it when not needed.
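
For a sense of what such a function looks like, here is a sketch modeled on the Fn project’s published Python hello-world pattern; treat the exact FDK signature as an assumption if you’re following along.

    import io
    import json

    from fdk import response   # Fn project's Python FDK

    def handler(ctx, data: io.BytesIO = None):
        # One small action: echo a greeting for the supplied name.
        name = "World"
        try:
            body = json.loads(data.getvalue())
            name = body.get("name", name)
        except (Exception, ValueError):
            pass  # fall back to the default name on bad input
        return response.Response(
            ctx,
            response_data=json.dumps({"message": "Hello {0}".format(name)}),
            headers={"Content-Type": "application/json"},
        )

The serverless platform, not your code, decides when to spin the function up and when to idle it; other applications simply call it, typically through a RESTful endpoint.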

Read my essay, “Serverless Computing: What It Is, Why You Should Care,” to find out more.

There are two popular ways of migrating enterprise assets to the cloud:

  1. Write new cloud-native applications.
  2. Lift-and-shift existing data center applications to the cloud.

Gartner’s definition: “Lift-and-shift means that workloads are migrated to cloud IaaS in as unchanged a manner as possible, and change is done only when absolutely necessary. IT operations management tools from the existing data center are deployed into the cloud environment largely unmodified.”

There’s no wrong answer, no wrong way of proceeding. Some data center applications (including servers and storage) may be easier to move than others. Some cloud-native apps may be easier to write than others. Much depends on how much interconnectivity there is between the applications and other software; that’s why, for example, public-facing websites are often easiest to move to the cloud, while tightly coupled internal software, such as inventory control or factory-floor automation, can be trickier.

That’s why in some cases, a hybrid strategy is best. Some parts of the applications are moved up to the cloud, while others remain in the data centers, with SD-WANs or other connectivity linking everything together in a secure manner.

In other words, no one size fits all. And no one timeframe fits all, especially when it comes to lifting-and-shifting.

SaaS? PaaS? It Depends.

A recent survey from the Oracle Applications User Group (OAUG) showed that 70% of respondents who have plans to adopt Oracle Cloud solutions will do so in the next three years. About 35% plan to implement Software-as-a-Service (SaaS) solutions to run with their existing Oracle on-premises installations, and 29% plan to use Platform-as-a-Service (PaaS) services to accelerate software development efforts in the next 12 months.

Joe Paiva, CIO of the U.S. Commerce Department’s International Trade Administration (ITA), is a fan of lift-and-shift. He said at a cloud conference that “Sometimes it makes sense because it gets you there. That was the key. We had to get there because we would be no worse off or no better off, and we were still spending a lot of money, but it got us to the cloud. Then we started doing rationalization of hardware and applications, and dropped our bill to Amazon by 40 percent compared to what we were spending in our government data center. We were able to rationalize the way we use the service.” Paiva estimates government agencies could save 5%-15% using lift-and-shift.

The benefits of moving existing workloads to the cloud are almost entirely financial. If you can shut down a data center and pay less to run the application in the cloud, it can be a good short-term solution with immediate ROI. Gartner cautions, however, that lift and shift “generally results in little created value. Plus, it can be a more expensive option and does not deliver immediate cost savings.” Much depends on how much it costs to run that application today.

A Multi-Track Process for Cloud Migration

The real benefits of new cloud development and deployment architectures take time to realize. For many organizations, there may be a multi-track process:

First track: Lift-and-shift existing workloads that are relatively easy to migrate, while simultaneously writing cloud-native applications for new projects. Those provide the biggest and fastest return on investment, while leaving data center workloads in place and untouched.

Second track: Write cloud-native applications for the remaining data-center workloads, the ones impractical to migrate in their existing form. These will be slower, but the payoff would be the ability to turn off some or all existing data centers – and eliminate their associated expenses, such as power and cooling, bandwidth, and physical space.

Third track: At some point, revisit the lifted-and-shifted workloads to see which would significantly benefit from being rewritten as cloud-native apps. Unless there is an order of magnitude increase in efficiency, or significant added functionality, the financial returns won’t be high – or may be nonexistent. For some applications, it may never make sense to redesign and rewrite them in a cloud-native way. So, those old enterprise applications may live on for years to come.

To get the most benefit from the new world of cloud-native server applications, forget about the old way of writing software. In the old model, architects designed software. Programmers wrote the code, and testers tested it on a test server. Once the testing was complete, the code was “thrown over the wall” to administrators, who installed the software on production servers, and who essentially owned the applications moving forward, only going back to the developers if problems occurred.

The new model, which appeared about 10 years ago, is called “DevOps,” or developer operations. In the DevOps model, architects, developers, testers, and administrators collaborate much more closely to create and manage applications. Specifically, developers play a much broader role in the day-to-day administration of deployed applications, and use information about how the applications are running to tune and enhance those applications.

The involvement of developers in administration made DevOps perfect for cloud computing. Because administrators had fewer responsibilities (i.e., no hardware to worry about), it was less threatening for those developers and administrators to collaborate as equals.

Change matters

In that old model of software development and deployment, developers were always change agents. They created new stuff, or added new capabilities to existing stuff. They embraced change, including new technologies – and the faster they created change (i.e., wrote code), the more competitive their business.

By contrast, administrators are tasked with maintaining uptime while ensuring security. Change is not a virtue to those departments. While admins must accept change as they install new applications, it’s secondary to maintaining stability. Indeed, admins could push back against deploying software if they believed those apps weren’t reliable, or if they might affect the overall stability of the data center.

With DevOps, everyone can embrace change. One of the ways that works, with cloud computing, is to reduce the risk that an unstable application can damage system reliability. In the cloud, applications can be built and deployed on bare-metal servers (as in a data center), or in virtual machines or containers.

DevOps works best when software is deployed in VMs or containers, since those are isolated from other systems – thereby reducing risk. Turns out that administrators do like change, if there’s minimal risk that changes will negatively affect overall system reliability, performance, and uptime.

Benefits of DevOps

Goodbye, CapEx; hello, OpEx. Cloud computing moves enterprises from capital-expense data centers (which must be built, electrified, cooled, networked, secured, stocked with servers, and refreshed periodically) to operational-expense services (where the business pays monthly for the processors, memory, bandwidth, and storage reserved and/or consumed). When you couple those benefits with virtual machines, containers, and DevOps, you get:

  • Easier Maintenance: It can be faster to apply patches and fixes to software running in virtual machines – and use snapshots to roll back if needed.
  • Better Security: Cloud platform vendors offer some security monitoring tools, and it’s relatively easy to install top-shelf protections like next-generation firewalls – themselves offered as cloud services.
  • Improved Agility: Studies show that the process of designing, coding, testing, and deploying new applications can be 10x faster than traditional data center methods, because the cloud removes provisioning delays and provides robust resources on demand.
  • Lower Cost: Vendors such as Amazon, Google, Microsoft, and Oracle are aggressively lowering prices to gain market share — and in many cases, those prices are an order of magnitude below what it would cost to provision an enterprise data center.
  • Massive Scale: Need more power? Need more bandwidth? Need more storage? Push a button, and the resources are live. If those needs are short-term, you can turn the dials back down, to lower the monthly bill. You can’t do that in a data center.
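To make the CapEx-to-OpEx shift concrete, here is a rough back-of-the-envelope sketch in Python. Every number in it is made up for illustration; it is not a pricing calculator for any particular cloud vendor or data center.

# Rough CapEx vs. OpEx comparison over a five-year window.
# All figures are hypothetical placeholders, not real vendor pricing.
DATA_CENTER_BUILD_COST = 2_000_000   # one-time capital expense for the facility
DATA_CENTER_ANNUAL_OPS = 400_000     # power, cooling, staff, hardware refresh reserve
CLOUD_MONTHLY_BILL = 60_000          # reserved plus consumed cloud resources

years = 5
capex_total = DATA_CENTER_BUILD_COST + DATA_CENTER_ANNUAL_OPS * years
opex_total = CLOUD_MONTHLY_BILL * 12 * years

print(f"5-year data center cost: ${capex_total:,}")
print(f"5-year cloud cost:       ${opex_total:,}")
print("Cloud is cheaper" if opex_total < capex_total else "Data center is cheaper")

Swap in your own build, operations, and monthly cloud figures; the comparison only holds if the numbers do.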

Rapidly evolving

The technologies used in creating cloud-native applications are evolving rapidly. Containers, for example, are relatively new, yet are becoming incredibly popular because they require 4x-10x fewer resources than VMs – thereby slashing OpEx costs even further. Software development and management tools, like Kubernetes (for orchestration of multiple containers), Chef (which makes it easy to manage cloud infrastructure), Puppet (which automates pushing out cloud service configurations), and OpenWhisk (which strips down cloud services to “serverless” basics) push the revolution farther.

DevOps is more important than the meaningless “developer operations” moniker implies. It’s a whole new, faster way of doing computing with cloud-native applications. Because rapid change means everything in achieving business agility, everyone wins.

“One of these things is not like the others,” the television show Sesame Street taught generations of children. Easy. Let’s move to the next level: “One or more of these things may or may not be like the others, and those variances may or may not represent systems vulnerabilities, failed patches, configuration errors, compliance nightmares, or imminent hardware crashes.” That’s a lot harder than distinguishing cookies from brownies.

Looking through gigabytes of log files and transaction records to spot patterns or anomalies is hard for humans: it’s slow, tedious, error-prone, and doesn’t scale. Fortunately, it’s easy for artificial intelligence (AI) software, such as the machine learning algorithms built into Oracle Management Cloud. What’s more, the machine learning algorithms can be used to direct manual or automated remediation efforts to improve security, compliance, and performance.

Consider how large-scale systems gradually drift away from their required (or desired) configuration, a key area of concern in the large enterprise. In his Monday, October 2 Oracle OpenWorld session on managing and securing systems at scale using AI, Prakash Ramamurthy, senior vice president of systems management at Oracle, talked about how drift happens. Imagine that you’ve applied a patch, but then later you spool up a virtual server that is running an old version of a critical service or contains an obsolete library with a known vulnerability. That server is out of compliance, Ramamurthy said. Drift.

Drift is bad, said Ramamurthy, and detecting and stopping drift is a core competency of Oracle Management Cloud. It starts with monitoring cloud and on-premises servers, services, applications, and logs, using machine learning to automatically understand normal behavior and identify anomalies. No training necessary here: A variety of machine learning algorithms teach themselves how to play the “one of these things is not like the others” game with your data, your systems, and your configuration, and also to classify the systems in ways that are operationally relevant. Even if those logs contain gigabytes of information on hundreds of thousands of transactions each second.
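As a minimal sketch of the idea (not Oracle Management Cloud’s actual algorithms), here is how an off-the-shelf unsupervised model such as scikit-learn’s IsolationForest can flag a server whose metrics have drifted away from the rest of the fleet. The feature values below are hypothetical.

# Unsupervised "one of these things is not like the others" on server metrics.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical feature vectors: [cpu_percent, memory_percent, patch_age_days]
fleet_metrics = np.array([
    [35, 60, 3], [40, 62, 3], [38, 58, 4],
    [36, 61, 3], [37, 59, 3], [90, 95, 210],   # the last server has drifted badly
])

model = IsolationForest(contamination=0.2, random_state=42)
model.fit(fleet_metrics)

# predict() returns -1 for anomalies (possible drift) and 1 for normal behavior.
labels = model.predict(fleet_metrics)
for i, label in enumerate(labels):
    if label == -1:
        print(f"Server {i} looks out of compliance: {fleet_metrics[i]}")

In practice the features would come from live monitoring data and the flagged servers would feed a remediation workflow, but the pattern-spotting step looks roughly like this.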

Learn more in my article for Forbes, “Catch The Drift With Machine Learning — Before The Drift Catches You.”

IT managers shouldn’t have to choose between cloud-driven innovation and data-center-style computing. Developers shouldn’t have to choose between the latest DevOps programming using containers and microservices, and traditional architectures and methodologies. CIOs shouldn’t have to choose between a fully automated and fully managed cloud and a self-managed model using their own on-staff administrators.

At an Oracle OpenWorld general session on infrastructure-as-a-service (IaaS) October 3, Don Johnson, senior vice president of product development at Oracle, lamented that CIOs are often forced to make such difficult choices. Sure, the cloud is excellent for purpose-built applications, he said, “and so what’s working for them is cloud-native, but what’s not working in the cloud are enterprise workloads. It’s an unnecessary set of bad choices.”

When it comes to moving existing business-critical applications to the cloud, Johnson explained the three difficult choices faced by many organizations:

  • First, CIOs can rewrite those applications from the ground up to run in the cloud in a platform-as-a-service (PaaS) model. That’s best in terms of achieving the greatest computational efficiency, as well as integration with other cloud services, but it can be time-consuming and costly.
  • Second, organizations can retrofit their existing applications to run in the cloud, but this can be challenging at best, or nearly impossible in some cases.
  • Or third, CIOs can “lift and shift” existing on-premises applications, including their full software stack, directly into the cloud, using the IaaS model.

Historically, those three models have required three different clouds. No longer. Only the Oracle Cloud Infrastructure, Johnson stated, “lets you run your full existing stack alongside cloud-native applications.” And this is important, he added, because migration to the cloud must be slow and deliberate. “Running in the cloud is very disruptive. It can’t happen overnight. You need to move when and how you want to move,” he said. And a deliberate move to the cloud means a combination of new cloud-native PaaS applications and legacy applications migrated to IaaS.

Read more in my story for Forbes, “Lift And Shift Workloads — And Write Cloud-Native Apps — For The Same Cloud.”

When was the last time most organizations discussed the security of their Oracle E-Business Suite? How about SAP S/4HANA? Microsoft Dynamics? IBM’s DB2? Discussions about on-prem server software security too often begin and end with ensuring that operating systems are at the latest level, and are current with patches.

That’s not good enough. Just as clicking on a phishing email or opening a malicious document in Microsoft Word can corrupt a desktop, so too can server applications be compromised. When those server applications are involved with customer records, billing systems, inventory, transactions, financials, or human resources, a hack into ERP or CRM systems can threaten an entire organization. Worse, if that hack leveraged stolen credentials, the business may never realize that competitors or criminals are stealing its data, and potentially even corrupting its records.

A new study from the Ponemon Institute points to the potential severity of the problem. Sixty percent of respondents to the “Cybersecurity Risks to Oracle E-Business Suite” study say that information theft, modification of data, and disruption of business processes on their company’s Oracle E-Business Suite applications would be catastrophic. While 70% of respondents said a material security or data breach due to insecure Oracle E-Business Suite applications is likely, 67% of respondents believe their top executives are not aware of this risk. (The research was sponsored by Onapsis, which sells security solutions for ERP suites, so apply a little sodium chloride to your interpretation of the study’s results.)

The audience for this study was businesses that rely on Oracle E-Business Suite. About 24% of respondents said it was the most critical application they ran, and altogether, 93% said it was one of their top 10 critical applications. Bearing in mind that large businesses run thousands of server applications, that’s saying something.

Yet more than half of respondents – 53% — said that it was Oracle’s responsibility to ensure that its applications and platforms are safe and secure. Unless they’ve contracted with Oracle to manage their on-prem applications, and to proactively apply patches and fixes, well, they are delusional.

Another area of delusion: That software must be connected to the Internet to pose a risk. In this study, 52% of respondents agree or strongly agree that “Oracle E-Business applications that are not connected to the Internet are not a security threat.” They’ve never heard of insider threats? Credentials theft? Penetrations of enterprise networks?

What About Non-Oracle Packages?

This Ponemon/Onapsis study represents only one data point. It does not adequately discuss the role of vendors in this space, including ERP/CRM value-added resellers, consultants and MSSPs (managed security service providers). It also doesn’t differentiate between Oracle instances running on-prem compared to the Oracle ERP Cloud – where Oracle does manage all the security.

Surprisingly, packaged software isn’t talked about very often. Despite the amount of chatter at most security conferences, on bulletin boards, and the like, packaged applications like these on-prem ERP or CRM suites are rarely a factor in conversations about security. Instead, everyone is seemingly focused on the endpoint, firewalls, and operating systems. Sometimes we’ll see discussions of the various tiers in an n-tier architecture, such as databases, application servers, and presentation systems (like web servers or mobile app back ends).

Another company that offers ERP security, ERPScan, conducted a study with Crowd Research Partners focused on SAP. The “ERP Cybersecurity Study 2017” said that (and I quote from the report on these bullet points):

  • 89% of respondents expect that the number of cyber-attacks against ERP systems will grow in next 12 months.
  • An average cost of a security breach in SAP is estimated at $5m with fraud considered as the costliest risk. A third of organizations assesses the damage of fraudulent actions at more than 10m USD.
  • There is a lack of awareness towards ERP Security, worryingly, even among people who are engaged in ERP Security. One-third of them haven’t even heard about any SAP Security incident. Only 4% know about the episode with the direst consequences – USIS data breach started with an SAP vulnerability, which resulted in the company’s bankruptcy.
  • One of three respondents hasn’t taken any ERP Security initiative yet and is going to do so this year.
  • Cybersecurity professionals are most concerned about protecting customer data (72%), employee data (66%), and emails (54%). Due to this information being stored in different SAP systems (e.g. ERP, HR, or others), they are one of the most important assets to protect.
  • It is still unclear who is in charge of ERP Security: 43% of responders suppose that CIO takes responsibilities, while 28% consider it CISO’s duty.

Of course, we still must secure our operating systems, network perimeters, endpoints, mobile applications, WiFi networks, and so on. Let’s not forget, however, the crucial applications our organizations depend upon. Breaches into those systems could be invisible – and ruinous to any business.

The water is rising up over your desktops, your servers, and your data center. Glug, glug, gurgle.

You’d better hope that the disaster recovery plans included the word “offsite.” Hope the backup IT site wasn’t another local business that’s also destroyed by the hurricane, the flood, the tornado, the fire, or the earthquake.

Disasters are real, as August’s Hurricane Harvey and immense floods in Southeast Asia have taught us all. With tens of thousands of people displaced, it’s hard to rebuild a business. Even with a smaller disaster, like a power outage that lasts a couple of days, the business impact can be tremendous.

I once worked for a company in New York that was hit by a blizzard that snapped the power and telephone lines to the office building. Down went the PBX, down went the phone system and the email servers. Remote workers (I was in California) were massively impaired. Worse, incoming phone calls simply rang and rang; incoming email messages bounced back to the sender.

With that storm, electricity was gone for more than a week, and broadband took additional time to be restored. You’d better believe our first order of business, once we began the recovery phase, was to move our internal Microsoft Exchange Server to a colocation facility with redundant T1 lines, and move our internal PBX to a hosted solution from the phone company. We didn’t like the cost, but we simply couldn’t afford to be shut down again the next time a storm struck.

These days, the answer lies within the cloud, either for primary data center operations, or for the source of a backup. (Forget trying to salvage anything from a submerged server rack or storage system.)

Be very prepared

Are you ready for a disaster? In a February 2017 study conducted by the Disaster Recovery Journal and Forrester Research, “The State Of Disaster Recovery Preparedness 2017,” only 18% of disaster recovery decision makers said they were “very prepared” to recover their data center in the event of a site failure or disaster event. Another 37% were prepared, 34% were somewhat prepared, and 11% not prepared at all.

That’s not good enough if you’re in Houston or Bangladesh or even New York during a blizzard. And that’s clear even among the survey respondents, 43% of whom said there was a business requirement to stay online and competitive 24×7. The cloud is considered to be one option for disaster recovery (DR) planning, but it’s not the only one. Says the study:

DR in the cloud has been a hot topic that has garnered a significant amount of attention during the past few years. Adoption is increasing but at a slow rate. According to the latest survey, 18 percent of companies are now using the cloud in some way as a recovery site – an increase of 3 percent. This includes 10 percent who use a fully packaged DR-as-a-Service (DRaaS) offering and 8 percent who use Infrastructure-as-a-Service (IaaS) to configure their own DR in the cloud configuration. Use of colocation for recovery sites remains consistent at 37 percent (roughly the same as the prior study). However, the most common method of sourcing recovery sites is still in-house at 43 percent.

The study shows that 43% of respondents own their recovery site and IT infrastructure. Also, 37% use a colocation site with their own infrastructure, 20% use a shared, fixed-site IT IaaS provider, 10% use a DRaaS offering in the cloud, and only 8% use public cloud IaaS as a recovery site.

For the very largest companies, the public cloud, or even a DRaaS provider, may not be the way to go. If the organization is still maintaining a significant data center (or multiple data centers), the cost and risks of moving to the cloud are significant. Unless a data center is heavily virtualized, it will be difficult to replicate the environment – including servers, storage, networking, and security – at a cloud provider.

For smaller businesses, however, moving to a cloud system is becoming increasingly cost-effective. It’s attractive for scalability and OpEx reasons, and agile for deploying new applications. This month’s hurricanes offer an urgent reason to move away from on-prem or hybrid to a full cloud environment — or at least explore DRaaS. With the right service provider, offering redundancy and portability, the cloud could be the only real hope in a significant disaster.

A major global cyberattack could cause US$53 billion in economic losses. That’s on the scale of a catastrophic disaster like 2012’s Hurricane Sandy.

Lloyds of London, the famous insurance company, partnered with Cyence, a risk analysis firm specializing in cybersecurity. The result is a fascinating report, “Counting the Cost: Cyber Exposure Decoded.” This partnership makes sense: Lloyds must understand the risk before deciding whether to underwrite a venture — and when it comes to cybersecurity, this is an emerging science. Traditional actuarial methods used to calculate the risk of a cargo ship falling prey to pirates, or an office block to a devastating flood, simply don’t apply.

Lloyds says that in 2016, cyberattacks cost businesses as much as $450 billion. While insurers can help organizations manage that risk, the risk is increasing. The report points to those risks covering “everything from individual breaches caused by malicious insiders and hackers, to wider losses such as breaches of retail point-of-sale devices, ransomware attacks such as BitLocker, WannaCry and distributed denial-of-service attacks such as Mirai.”

The worry? Despite writing $1.35 billion in cyberinsurance in 2016, “insurers’ understanding of cyber liability and risk aggregation is an evolving process as experience and knowledge of cyber-attacks grows. Insureds’ use of the internet is also changing, causing cyber-risk accumulation to change rapidly over time in a way that other perils do not.”

And that is why the lack of time-tested actuarial tables can cause disaster, says Lloyds. “Traditional insurance risk modelling relies on authoritative information sources such as national or industry data, but there are no equivalent sources for cyber-risk and the data for modelling accumulations must be collected at scale from the internet. This makes data collection, and the regular update of it, key components of building a better understanding of the evolving risk.”

Where the Risk Is Growing

The report points to six significant trends that are causing increased risk of an expensive attack – and therefore, increased liability:

  • Volume of contributors: The number of people developing software has grown significantly over the past three decades; each contributor could potentially add vulnerability to the system unintentionally through human error.
  • Volume of software: In addition to the growing number of people amending code, the amount of it in existence is increasing. More code means the potential for more errors and therefore greater vulnerability.
  • Open source software: The open-source movement has led to many innovative initiatives. However, many open-source libraries are uploaded online and while it is often assumed they have been reviewed in terms of their functionality and security, this is not always the case. Any errors in the primary code could then be copied unwittingly into subsequent iterations.
  • Old software: The longer software is out in the market, the more time malicious actors have to find and exploit vulnerabilities. Many individuals and companies run obsolete software that has more secure alternatives.
  • Multi-layered software: New software is typically built on top of prior software code. This makes software testing and correction very difficult and resource intensive.
  • “Generated” software: Code can be produced through automated processes that can be modified for malicious intent.

Based on those points, and other factors, Lloyds and Cyence have come up with two primary scenarios that could lead to widespread, and costly, damages. The first: a successful hack of a major cloud service provider, which hosts websites, applications, and data for many companies. The second: a mass vulnerability attack that affects many client systems. One could argue that some of the recent ransomware attacks fit into that scenario.

Huge Liability Costs

The “Counting the Cost” report makes for some depressing reading. Here are three of the key findings, quoted verbatim.

  • The direct economic impacts of cyber events lead to a wide range of potential economic losses. For the cloud service disruption scenario in the report, these losses range from US$4.6 billion for a large event to US$53.1 billion for an extreme event; in the mass software vulnerability scenario, the losses range from US$9.7 billion for a large event to US$28.7 billion for an extreme event.
  • Economic losses could be much lower or higher than the average in the scenarios because of the uncertainty around cyber aggregation. For example, while average losses in the cloud service disruption scenario are US$53 billion for an extreme event, they could be as high as US$121.4 billion or as low as US$15.6 billion, depending on factors such as the different organisations involved and how long the cloud-service disruption lasts for.
  • Cyber-attacks have the potential to trigger billions of dollars of insured losses. For example, in the cloud-services scenario insured losses range from US$620 million for a large loss to US$8.1 billion for an extreme loss. For the mass software vulnerability scenario, the insured losses range from US$762 million (large loss) to US$2.1 billion (extreme loss).

Read the 56-page report to dig deeply into the scenarios, and the damages. You may not sleep well afterwards.

I am unapologetically mocking this company’s name. Agylytyx emailed me this press release today, and only the name captured my attention. Plus, their obvious love of the ™ symbol — even people they quote use the ™. Amazing!

Beyond that, I’ve never talked to the company or used its products, and have no opinion about them. (My guess is that it’s supposed to be pronounced as “Agil-lytics.”)

Agylytyx Announces Availability of New IOT Data Analysis Application

SUNNYVALE, Calif., June 30, 2017 /PRNewswire/ — Agylytyx, a leading cloud-based analytic software vendor, today announced a new platform for analyzing IoT data. The Agylytyx Generator™ IoT platform represents an application of the vendor’s novel Construct Library™ approach to the IoT marketplace. For the first time, companies can both explore their IoT data and make it actionable much more quickly than previously thought possible.

From PLC data streams archived as tags in traditional historians to time series data streaming from sensors attached to devices, the Agylytyx Generator™ aggregates and presents IoT data in a decision-ready format. The company’s unique Construct Library™ (“building block”) approach allows decision makers to create and explore aggregated data such as pressure, temperature, output productivity, worker status, waste removal, fuel consumption, heat transfer, conductivity, condensation or just about any “care abouts.” This data can be instantly explored visually at any level such as region, plant, line, work cell or even device. Best of all, the company’s approach eliminates the need to build charts or write queries.

One of the company’s long-time advisors, John West of Clean Tech Open, noticed the Agylytyx Generator™ potential from the outset. West’s wide angle on data analysis led him to stress the product’s broad applicability. West said “Even as the company was building the initial product, I advised the team that I thought there was strong applicability of the platform to operational data. The idea of applying Constructs to a received data set has broad usage. Their evolution of the Agylytyx Generator™ platform to IoT data is a very natural one.”

The company’s focus on industrial process data was the brainchild of one of the company’s investors, Jim Smith. Jim is a chemical engineer with extensive experience working with plant floor data. Smith stated, “I recognized the potential in the company’s approach for analyzing process data. Throughout the brainstorming process, we all gradually realized we were on to something groundbreaking.”

This unique approach to analytics attracted the attention of PrecyseTech, a pioneer of Industrial IoT (IIoT) Systems providing end-to-end management of high-value physical assets and personnel. Paul B. Silverman, the CEO of PrecyseTech, has had a longstanding relationship with the company. Silverman noted: “The ability of the Agylytyx Generator™ to address cloud-based IoT data analytic solutions is a good fit with PrecyseTech’s strategy. Agylytyx is working with the PrecyseTech team to develop our inPALMSM Solutions IoT applications, and we are working collaboratively to identify and develop IoT data opportunities targeting PrecyseTech’s clients. Our plans are to integrate the Agylytyx Generator™ within our inPALMSM Solutions product portfolio and also to offer users access to the Agylytyx Generator™ via subscription.”

Creating this IoT focus made the ideal use of the Agylytyx Generator™. Mark Chang, a data scientist for Agylytyx, noted: “All of our previous implementations – financial, entertainment, legal, customer service – had data models with common ‘units of measure’ – projects, media, timekeepers, support cases, etc. IoT data is dissimilar in that there is no common ‘unit of measure’ across devices. This dissimilarity is exactly what makes our Construct Library™ approach so useful to IoT data. The logical next step for us will be to apply machine learning and cluster inference to enable optimization of resource deployment and predictive analytics like predictive maintenance.”

About Agylytyx

Agylytyx provides cloud-based enterprise business analytic software. The company’s flagship product, the Agylytyx Generator™, frees up analyst time and results in better decision making across corporations. Agylytyx is based in Sunnyvale, California, and has locations in Philadelphia and Chicago, IL. For more information about Agylytyx visit www.agylytyx.com.

An organization’s Chief Information Security Officer’s job isn’t about ones and zeros. It’s not about unmasking cybercriminals. It’s about reducing risk for the organization, and about enabling executives and line-of-business managers to innovate and compete safely and securely. While the CISO is often seen as the person who loves to say “No,” in reality, the CISO wants to say “Yes” — the job, after all, is to make the company thrive.

Meanwhile, the CISO has a small staff, tight budget, and the need to demonstrate performance metrics and ROI. What’s it like in the real world? What are the biggest challenges? We asked two former CISOs (it’s hard to get current CISOs to speak on the record), both of whom worked in the trenches and now advise CISOs on a daily basis.

To Jack Miller, a huge challenge is the speed of decision-making in today’s hypercompetitive world. Miller, currently Executive in Residence at Norwest Venture Partners, conducts due diligence and provides expertise on companies in the cyber security space. Most recently he served as chief security strategy officer at ZitoVault Software, a startup focused on safeguarding the Internet of Things.

Before his time at ZitoVault, Miller was the head of information protection for Auto Club Enterprises. That’s the largest AAA conglomerate with 15 million members in 22 states. Previously, he served as the CISO of the 5th and 11th largest counties in the United States, and as a security executive for Pacific Life Insurance.

“Big decisions are made in the blink of an eye,” says Miller. “Executives know security is important, but don’t understand how any business change can introduce security risks to the environment. As a CISO, you try to get in front of those changes – but more often, you have to clean up the mess afterwards.”

Another CISO, Ed Amoroso, is frustrated by the business challenge of justifying a security ROI. Amoroso is the CEO of TAG Cyber LLC, which provides advanced cybersecurity training and consulting for global enterprise and U.S. Federal government CISO teams. Previously, he was Senior Vice President and Chief Security Officer for AT&T, and managed computer and network security for AT&T Bell Laboratories. Amoroso is also an Adjunct Professor of Computer Science at the Stevens Institute of Technology.

Amoroso explains, “Security is an invisible thing. I say that I’m going to spend money to prevent something bad from happening. After spending the money, I say, ta-da, look, I prevented that bad thing from happening. There’s no demonstration. There’s no way to prove that the investment actually prevented anything. It’s like putting a “This House is Guarded by a Security Company” sign in front of your house. Maybe a serial killer came up the street, saw the sign, and moved on. Maybe not. You can’t put in security and say, here’s what didn’t happen. If you ask, 10 out of 10 CISOs will say demonstrating ROI is a huge problem.”

Read more in my article for Global Banking & Finance Magazine, “Be Prepared to Get Fired! And Other Business Advice for CISOs.”

The endpoint is vulnerable. That’s where many enterprise cyber breaches begin: An employee clicks on a phishing link and installs malware, such as ransomware, or is tricked into providing login credentials. A browser can open a webpage that installs malware. An infected USB flash drive is another source of attacks. Servers can be subverted with SQL injection or other attacks; even cloud-based servers are not immune from being probed and subverted by hackers. As the number of endpoints proliferates (think Internet of Things), the odds of an endpoint being compromised and then used to gain access to the enterprise network and its assets only increase.
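As a concrete illustration of just one of those attack vectors, here is a hypothetical Python sketch (using the built-in sqlite3 module and made-up table and column names) of how SQL injection arises from string concatenation, and how a parameterized query treats the same input as plain data. It illustrates the technique, not any specific breach.

# Illustration only: how a SQL injection flaw arises, and the standard fix.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"   # attacker-controlled value

# Vulnerable: string concatenation lets the input rewrite the query logic.
vulnerable_query = f"SELECT role FROM users WHERE name = '{user_input}'"
print(conn.execute(vulnerable_query).fetchall())   # returns every row

# Safer: a parameterized query treats the input purely as data.
safe_query = "SELECT role FROM users WHERE name = ?"
print(conn.execute(safe_query, (user_input,)).fetchall())   # returns []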

Which are the most vulnerable endpoints? Which need extra protection? All of them, especially devices running some flavor of Windows, according to Mike Spanbauer, Vice President of Security at testing firm NSS Labs. “All of them. So the reality is that Windows is where most targets attack, where the majority of malware and exploits ultimately target. So protecting your Windows environment, your Windows users, both inside your businesses as well as when they’re remote is the core feature, the core component.”

Roy Abutbul, Co-Founder and CEO of security firm Javelin Networks, agreed. “The main endpoints that need the extra protection are those endpoints that are connected to the [Windows] domain environment, as literally they are the gateway for attackers to get the most sensitive information about the entire organization.” He continued, “From one compromised machine, attackers can get 100 per cent visibility of the entire corporate, just from one single endpoint. Therefore, a machine that’s connected to the domain must get extra protection.”

Scott Scheferman, Director of Consulting at endpoint security company Cylance, is concerned about non-PC devices, as well as traditional computers. That might include the Internet of Things, or unprotected routers, switches, or even air-conditioning controllers. “In any organization, every endpoint is really important, now more than ever with the internet of Things. There are a lot of devices on the network that are open holes for an attacker to gain a foothold. The problem is, once a foothold is gained, it’s very easy to move laterally and also elevate your privileges to carry out further attacks into the network.”

At the other end of the spectrum is cloud computing. Think about enterprise-controlled virtual servers, containers, and other resources configured as Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS). Anything connected to the corporate network is an attack vector, explained Roark Pollock, Vice President at security firm Ziften.

Microsoft, too, takes a broad view of endpoint security. “I think every endpoint can be a target of an attack. So usually companies start first with high privilege boxes, like administrator consoles onboard to service, but everybody can be a victim,” said Heike Ritter, a Product Manager for Security and Networking at Microsoft.

I’ve written a long, detailed article on this subject for NetEvents, “From Raw Data to Actionable Intelligence: The Art and Science of Endpoint Security.”

You can also watch my 10-minute video interview with these people here.

Cybercriminals want your credentials and your employees’ credentials. When those hackers succeed in stealing that information, it can be bad for individuals – and even worse for corporations and other organizations. This is a scourge that’s bad, and it will remain bad.

Credentials come in two types. There are personal credentials, such as the login and password for an email account, bank and retirement accounts, credit-card numbers, airline membership program, online shopping and social media. When hackers manage to obtain those credentials, such as through phishing, they can steal money, order goods and services, and engage in identity theft. This can be extremely costly and inconvenient for victims, but the damage is generally contained to that one unfortunate individual.

Corporate digital credentials, on the other hand, are the keys to an organization’s network. Consider a manager, executive or information-technology worker within a typical medium-size or larger-size business. Somewhere in the organization is a database that describes that employee – and describes which digital assets that employee is authorized to use. If cybercriminals manage to steal the employee’s corporate digital credentials, the criminals can then access those same assets, without setting off any alarm bells. Why? Because they have valid credentials.

What might those assets be? Depending on the employee:

  • It might include file servers that contain intellectual property, such as pricing sheets, product blueprints, or patent applications.
  • It might include email archives that describe business plans. Or accounting servers that contain important financial information that could help competitors or allow for “insider trading.”
  • It might be human resources data that can help the hackers attack other individuals. Or engage in identity theft or even blackmail.

What if the stolen credentials are for individuals in the IT or information security department? The hackers can learn a great deal about the company’s technology infrastructure, perhaps including passwords to make changes to configurations, open up backdoors, or even disable security systems.

Read my whole story about this —including what to do about it — in Telecom Times, “The CyberSecurity Scourge of Credentials Theft.”

I can’t trust the Internet of Things. Neither can you. There are too many players and too many suppliers of the technology that can introduce vulnerabilities in our homes, our networks – or elsewhere. It’s dangerous, my friends. Quite dangerous. In fact, it can be thought of as a sort of Fifth Column, but not in the way many of us expected.

Merriam-Webster defines a Fifth Column as “a group of secret sympathizers or supporters of an enemy that engage in espionage or sabotage within defense lines or national borders.” In today’s politics, there’s a lot of talk about secret sympathizers sneaking across national borders, such as terrorists posing as students or refugees. Such “bad actors” are generally part of an organization, recruited by state actors, and embedded into enemy countries for long-term penetration of society.

There have been many real-life Fifth Column activists in recent global history. Think about Kim Philby and Anthony Blunt, part of the “Cambridge Five” who worked for spy agencies in the United Kingdom in the post-World War II era but who turned out to be double agents working for the Soviet Union. Fiction, too, is replete with Fifth Column spies. They’re everywhere in James Bond movies and John le Carré novels.

Am I too paranoid?

Let’s bring our paranoia (or at least, my paranoia) to the Internet of Things, and start by way of the late 1990s and early 2000s. I remember quite clearly the introduction of telco and network routers by Huawei, and concerns that the Chinese government may have embedded software into those routers in order to surreptitiously listen to telecom networks and network traffic, to steal intellectual property, or to do other mischief like disable networks in the event of a conflict. (This was before the term “cyberwarfare” was widely used.)

Recall that Huawei was founded by a former engineer in the Chinese People’s Liberation Army. The company was heavily supported by Beijing. Also there were lawsuits alleging that Huawei infringed on Cisco’s intellectual property – i.e., stole its source code. Thus, there was lots of concern surrounding the company and its products.

Read my full story about this, published in Pipeline Magazine, “The Surprising and Dangerous Fifth Column Hiding Within the Internet of Things.”

You keep reading the same three names over and over again. Amazon Web Services. Google Cloud Platform. Microsoft Azure. For the past several years, that’s been the top tier, with a wide gap between them and everyone else. Well, there’s a fourth player, the IBM cloud, with its SoftLayer acquisition. But still, it’s AWS in the lead when it comes to Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS), with many estimates showing about a 37-40% market share in early 2017. In second place, Azure, at around 28-31%. Third place, Google, at around 16-18%. Fourth place, IBM SoftLayer, at 3-5%.

Add that all up, and you get the big four (let’s count IBM) at between 84% and 94%. That doesn’t leave much room for everyone else, including companies like Rackspace, and all the cloud initiatives launched by major computer companies like Alibaba, Dell, HP Enterprise, Oracle, and all the telcos around the world.

Of course, IaaS and PaaS can’t account for all the cloud activity. In the Software-as-a-Service realm, companies like Salesforce.com and Oracle operate their own clouds, which are huge. And then there are the private clouds, operated by the likes of Apple and Facebook, which are immense, with data centers all around the world.

Still, it’s clear that when it comes to the public cloud, there are very few choices. That covers the clouds telcos want to monetize, and the clouds enterprises need for hybrid deployments or full migrations. You can go with the big winner, which is Amazon. You can look to Azure (which is appealing, of course, to Microsoft shops) or Google. And then you can look at everyone else, including IBM SoftLayer, Rackspace, and, well, everyone else.

Amazon Web Services Inside?

Remember when computer makers were touting “Intel Inside”? In today’s world, many SaaS providers are basing their platforms on Amazon, Azure, or Google. And many IaaS and PaaS players are doing the same — except in many cases, they’re not advertising it. Unlike many of the smaller PC companies, which wanted to hitch their wagons to Intel’s huge advertising budget, cloud software companies want to build out their own brands. In the international space, they also don’t want to be seen as fronting U.S.-based technology providers; rather, they want to appeal as a local option.

Speaking of international, the dominance of the IaaS/PaaS market by three U.S. companies can create a bit of a conundrum for global tech providers. Many governments and global businesses are leery of letting their data touch U.S. servers, and in some cases, even if the Amazon/Azure/Google data center is based in Europe or Asia, there are legal minefields regarding U.S. courts and surveillance. Not only that, but across the globe, privacy laws are increasingly strict about where consumer information may be stored.

What does this add up to? Probably not much in the long run. There’s no reason to expect that the lineup of Amazon, Azure and Google will change much over the next year or two, or that they will lose market share to smaller players. In fact, to the contrary: The big players are getting bigger at the expense of the niche offerings. According to a recent report from Synergy Research Group:

New Q4 data from Synergy Research Group shows that Amazon Web Services (AWS) is maintaining its dominant share of the burgeoning public cloud services market at over 40%, while the three main chasing cloud providers – Microsoft, Google and IBM – are gaining ground but at the expense of smaller players in the market. In aggregate the three have increased their worldwide market share by almost five percentage points over the last year and together now account for 23% of the total public IaaS and PaaS market, helped by particularly strong growth at Microsoft and Google.

The bigger are getting bigger. The smaller are getting smaller. That’s the cloud market story, in a nutshell.

Cloud-based firewalls come in two delicious flavors: vanilla and strawberry. Both flavors are software that checks incoming and outgoing packets to filter against access policies and block malicious traffic. Yet they are also quite different. Think of them as two essential network security tools: Both are designed to protect you, your network, and your real and virtual assets, but in different contexts.

Disclosure: I made up the terms “vanilla firewall” and “strawberry firewall” for this discussion. Hopefully they help us differentiate between the two models as we dig deeper.

Let’s start with a quick overview:

  • Vanilla firewalls are usually stand-alone products or services designed to protect an enterprise network and its users — like an on-premises firewall appliance, except that it’s in the cloud. Service providers call this a software-as-a-service (SaaS) firewall, security as a service (SECaaS), or even firewall as a service (FaaS).
  • Strawberry firewalls are cloud-based services that are designed to run in a virtual data center using your own servers in a platform-as-a-service (PaaS) or infrastructure-as-a-service (IaaS) model. In these cases, the firewall application runs on the virtual servers and protects traffic going to, from, and between applications in the cloud. The industry sometimes calls these next-generation firewalls, though the term is inconsistently applied and sometimes refers to any advanced firewall system running on-prem or in the cloud.
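Whatever the flavor, the core job is the same: evaluate each packet against an ordered access policy. Here is a deliberately simplified sketch in Python of that rule-matching idea; real cloud firewalls add stateful inspection, deep packet inspection, and threat intelligence, and the rule fields shown here are hypothetical.

# Toy model of firewall policy matching: first matching rule wins.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rule:
    action: str               # "allow" or "deny"
    protocol: str             # "tcp", "udp", or "any"
    dest_port: Optional[int]  # None matches any port

RULES = [
    Rule("allow", "tcp", 443),   # HTTPS to protected applications
    Rule("allow", "tcp", 22),    # SSH, simplified (no source restriction shown)
    Rule("deny",  "any", None),  # default deny for everything else
]

def evaluate(protocol: str, dest_port: int) -> str:
    for rule in RULES:
        if rule.protocol in ("any", protocol) and rule.dest_port in (None, dest_port):
            return rule.action
    return "deny"

print(evaluate("tcp", 443))   # allow
print(evaluate("udp", 53))    # deny (falls through to the default rule)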

So why do we need these new firewalls? Why not stick a 1U firewall appliance into a rack, connect it up to the router, and call it good? Easy: Because the definition of the network perimeter has changed. Firewalls used to be like guards at the entrance to a secured facility. Only authorized people could enter that facility, and packages were searched as they entered and left the building. Moreover, your users worked inside the facility, and the data center and its servers were also inside. Thus, securing the perimeter was fairly easy. Everything inside was secure, everything outside was not secure, and the only way in and out was through the guard station.

Intrigued? Hungry? Both? Please read the rest of my story, called “Understanding cloud-based firewalls,” published on Enterprise.nxt.

Want to open up your eyes, expand your horizons, and learn from really smart people? Attend a conference or trade show. Get out there. Meet people. Have conversations. Network. Be inspired by keynotes. Take notes in classes that are delivering great material, and walk out of boring sessions and find something better.

I wrote an article about the upcoming 2017 conferences and trade shows about cloud computing and enterprise infrastructure. Think big and think outside the cubicle: Don’t go to only the events that are about the exact thing you do, and don’t attend only the sessions about the exact thing you do.

The list is organized alphabetically in “must attend,” “worth attending,” and “worthy mentions” sections. Those are my subjective labels (though based on experience, having attended many of these conferences in the past decades), so read the descriptions carefully and make your own decisions. If you don’t use Amazon Web Services, then AWS re:Invent simply isn’t right for you. However, if you use or might use the company’s cloud services, then, yes, it’s a must-attend.

And oh, a word about the differences between conferences and trade shows (also known as expos). These can be subtle, and reasonable people might disagree in some edge cases. However, a conference’s main purpose is education: The focus is on speakers, panels, classes, and other sessions. While there might be an exhibit floor for vendors, it’s probably small and not very useful. In contrast, a trade show is designed to expose you to the greatest number of exhibitors, including vendors and trade associations. The biggest value is in walking the floor; while the trade show may offer classes, they are secondary and often (but not always) vendor fluff sessions “awarded” to big advertisers in return for their gold sponsorships.

So if you want to learn from classes, panels, and workshops, you probably want a conference. If you want to talk to vendors, kick the tires on products, and decide which solutions to buy or recommend, you want a trade show or an expo.

And now, on with the list: the most important events in cloud computing and enterprise infrastructure, compiled at the very beginning of 2017. Note that events can change their dates or cities without notice, or even be cancelled, so keep an eye on the websites. You can read the list here.