
Threat report from Oracle, KPMG points to strong trust in the cloud

Is the cloud ready for sensitive data? You bet it is. Some 90% of businesses in a new survey say that at least half of their cloud-based data is indeed sensitive, the kind that cybercriminals would love to get their hands on.

The migration to the cloud can’t come soon enough. About two-thirds of companies in the study say at least one cybersecurity incident has disrupted their operations within the past two years, and 80% say they’re concerned about the threat that cybercriminals pose to their data.

The good news is that 62% of organizations consider the security of cloud-based enterprise applications to be better than the security of their on-premises applications. Another 21% consider it as good. The caveat: Companies must be proactive about their cloud-based data and can’t naively assume that “someone else” is taking care of that security.

Those insights come from a brand-new threat report, the first ever jointly conducted by Oracle and KPMG. The “Oracle and KPMG Cloud Threat Report 2018,” to be released this month at the RSA Conference, fills a unique niche among the vast number of existing threat and security reports, including the well-respected Verizon Data Breach Investigations Report produced annually since 2008.

The difference is the Cloud Threat Report’s emphasis on hybrid cloud, and on organizations lifting and shifting workloads and data into the cloud. “In the threat landscape, you have a wide variety of reports around infrastructure, threat analytics, malware, penetrations, data breaches, and patch management,” says one of the designers of the study, Greg Jensen, senior principal director of Oracle’s Cloud Security Business. “What’s missing is pulling this all together for the journey to the cloud.”

Indeed, 87% of the 450 businesses surveyed say they have a cloud-first orientation. “That’s the kind of trust these organizations have in cloud-based technology,” Jensen says.

Here are data points that break that idea down into more detail:

  • 20% of respondents to the survey say the cloud is much more secure than their on-premises environments; 42% say the cloud is somewhat more secure; and 21% say the cloud is equally secure. Only 21% think the cloud is less secure.
  • 14% say that more than half of their data is in the cloud already, and 46% say that between a quarter and half of their data is in the cloud.

That cloud-based data is increasingly “sensitive,” the survey respondents say. That data includes information collected from customer relationship management systems, personally identifiable information (PII), payment card data, legal documents, product designs, source code, and other types of intellectual property.

Read more, including what cyberattacks say about the “pace gap,” in my essay in Forbes, “Threat Report: Companies Trust Cloud Security.”


What IHS Markit says about the IoT and colocation hosts

Endpoints everywhere! That’s the future, driven by the Internet of Things. When IoT devices are deployed in their billions, network traffic patterns won’t look at all like today’s patterns. Sure, enterprises have a few employees working at home, or use technologies like MPLS (Multi-Protocol Label Switching) or even SD-WAN (Software Defined Wide-Area Networks) to connect branch offices. For the most part, though, internal traffic remains within the enterprise LAN, and external traffic is driven by end users accessing websites from browsers.

The IoT will change all of that, predicts IHS Markit, one of the industry’s most astute analyst firms. In particular, the IoT will accelerate the growth of colo facilities, because it will be to everyone’s benefit to place servers closer to the network edge, avoiding the last mile.

To set the stage, IHS Markit forecasts that Internet-connectable devices will grow from 27.5 billion in 2017 to 45.4 billion in 2021. That’s a 65% increase in four short years. How will that affect colos? “Data center growth is correlated with general data growth. The more data transmitted via connected devices, the more data centers are needed to store, transfer, and analyze this data.”

The analysts say: “In the specific case of the Internet of Things, there’s a need for geographically distributed data centers that can provide low-latency connections to certain connected devices. There are applications, like autonomous vehicles or virtual reality, which are going to require local data centers to manage much of the data processing required to operate.” Therefore, because most enterprises will not have the means or the business case to build new data centers everywhere, “they will need to turn to colocations to provide quickly scalable, low capital-intensive options for geographically distributed data centers.”

Another trend being pointed to by IHS Markit: More local processing, rather than relying on servers in a colo facility, at a cloud provider, or in the enterprise’s own data center. “And thanks to local analytics on devices, and the use of machine learning, a lot of data will never need to leave the device. Which is good news for the network infrastructure of the world that is not yet capable of handling a 65% increase in data traffic, given the inevitable proliferation of devices.”

For more about what IHS Markit says about colos, see my essay, “The Internet of Things Will Increase the Value of Colocation Hosts.”


Make software as simple as possible – but not simpler

Albert Einstein famously said, “Everything should be made as simple as possible, but not simpler.” Agile development guru Venkat Subramaniam has a knack for taking that insight and illustrating just how desperately the software development process needs the lessons of Professor Einstein.

As the keynote speaker at the Oracle Code event in Los Angeles—the first in a 14-city tour of events for developers—Subramaniam describes the art of simplicity, and why and how complexity becomes the enemy. While few would argue that complex is better, that’s what we often end up creating, because complex applications or source code may make us feel smart. But if someone says our software design or core algorithm looks simple, well, we feel bad—perhaps the problem was easy and obvious.

Subramaniam, who’s president of Agile Developer and an instructional professor at the University of Houston, urges us instead to take pride in coming up with a simple solution. “It takes a lot of courage to say, ‘we don’t need to make this complex,’” he argues. (See his full keynote, or register for an upcoming Oracle Code event.)

Simplicity Is Not Simple

Simplicity is hard to define, so let’s start by considering what simple is not, says Subramaniam. In most cases, our first attempts at solving a problem won’t be simple at all. The most intuitive solution might be overly verbose, or inefficient, or perhaps difficult to understand, even by its programmers after the fact.

Simple is not clever. Clever software, or clever solutions, may feel worthwhile, and might cause people to pat developers on the back. But ultimately, it’s hard to understand, and can be hard to change later. “Clever code is self-obfuscating,” says Subramaniam, meaning that it can be incomprehensible. “Even programmers can’t understand their clever code a week later.”

Simple is not necessarily familiar. Subramaniam insists that we are drawn to the old, comfortable ways of writing software, even when those methods are terribly inefficient. He mentions someone who wrote code with 70 “if/then” questions in a series—because it was familiar. But it certainly wasn’t simple, and would be nearly impossible to debug or modify later. Something that we’re not familiar with may actually be simpler than what we’re comfortable with. To fight complexity, Subramaniam recommends learning new approaches and keeping up with the latest thinking and the latest paradigms.

Simple is not over-engineered. Sometimes you can overthink the problem. Perhaps that means trying to develop a generalized algorithm that can be reused to solve many problems, when the situation calls for a fast, basic solution to a single problem. Subramaniam cited Occam’s Razor: When choosing between two solutions, the simplest may be the best.

Simple is not terse. Program source code should be concise, which means that it’s small but also clearly communicates the programmer’s intent. By contrast, something that’s terse may still execute correctly when compiled into software, but the human understanding may be lost. “Don’t confuse terse with concise,” warns Subramaniam. “Both are really small, but terse code is waiting to hurt you when you least expect it.”
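To make the distinction concrete, here is a small, hypothetical Python illustration (mine, not Subramaniam’s): both functions compute the same order total, but only one of them communicates intent.

```python
# A hypothetical illustration of "terse" versus "concise" code.
# Both compute the total price of taxable items in an order.

# Terse: short, but the intent is buried in the one-liner.
def total(o, t):
    return sum(i["p"] * (1 + t) if i["x"] else i["p"] for i in o)

# Concise: still small, but names and structure communicate intent.
def total_price(order_items, tax_rate):
    """Sum item prices, applying tax only to taxable items."""
    total = 0.0
    for item in order_items:
        price = item["price"]
        if item["taxable"]:
            price *= 1 + tax_rate
        total += price
    return total
```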

Read more in my essay, “Practical Advice To Whip Complexity And Develop Simpler Software.”


Let’s make data warehouses more autonomous — here’s why

As the saying goes, you can’t manage what you don’t measure. In a data-driven organization, the best tools for measuring performance are business intelligence (BI) and analytics engines, and those tools require data. That explains why data warehouses continue to play such a crucial role in business: they often provide the source of that data, rolling up and summarizing key information from a variety of sources.

Data warehouses, which are themselves relational databases, can be complex to set up and manage on a daily basis. They typically require significant human involvement from database administrators (DBAs). In a large enterprise, a team of DBAs ensures that the data warehouse is extracting data from disparate data sources and accommodating new and changed data sources, and that the extracted data is summarized properly and stored in a structured manner that other applications, including those BI and analytics tools, can handle.

On top of that, the DBAs manage the data warehouse’s infrastructure: everything from server processor utilization and storage efficiency to data security and backups.

However, the labor-intensive nature of data warehouses is about to change, with the advent of Oracle Autonomous Data Warehouse Cloud, announced in October 2017. The self-driving, self-repairing, self-tuning functionality of Oracle’s Data Warehouse Cloud is good for the organization—and good for the DBAs.

Data-driven organizations need timely, up-to-date business intelligence. This can feed instant decision-making, short-term predictions and business adjustments, and long-term strategy. If the data warehouse goes down, slows down, or lacks some information feeds, the impact can be significant. No data warehouse may mean no daily operational dashboards and reports, or inaccurate dashboards or reports.

For C-level executives, Autonomous Data Warehouse can improve the value of the data warehouse. This boosts the responsiveness of business intelligence and other important applications, by improving availability and performance.

Stop worrying about uptime. Forget about disk-drive failures. Move beyond performance tuning. DBAs, you have a business to optimize.

Read more in my article, “Autonomous Capabilities Will Make Data Warehouses — And DBAs — More Valuable.”


DevOps gets rid of the developer / operations wall

The “throw it over the wall” problem is familiar to anyone who’s seen designers and builders create something that can’t actually be deployed or maintained out in the real world. In the tech world, avoiding this problem is a big part of what gave rise to DevOps.

DevOps combines “development” and “IT operations.” It refers to a set of practices that help software developers and IT operations staff work better, together. DevOps emerged about a decade ago with the goal of tearing down the silos between the two groups, so that companies can get new apps and features out the door faster, with fewer mistakes and less downtime in production.

DevOps is now widely accepted as a good idea, but that doesn’t mean it’s easy. It requires cultural shifts by two departments that not only have different working styles and toolsets, but where the teams may not even know or respect each other.

When DevOps is properly embraced and implemented, it can help get better software written more quickly. DevOps can make applications easier and less expensive to manage. It can simplify the process of updating software to respond to new requirements. Overall, a DevOps mindset can make your organization more competitive because you can respond quickly to problems, opportunities and industry pressures.

Is DevOps the right strategic fit for your organization? Here are six CEO-level insights about DevOps to help you consider that question:

  1. DevOps can and should drive business agility. DevOps often means supporting a more rapid rate of change in terms of delivering new software or updating existing applications. And it doesn’t just mean programmers knock out code faster. It means getting those new apps or features fully deployed and into customers’ hands. “A DevOps mindset represents development’s best ability to respond to business pressures by quickly bringing new features to market, and we drive that rapid change by leveraging technology that lets us rewire our apps on an ongoing basis,” says Dan Koloski, vice president of product management at Oracle.

For the full story, see my essay for the Wall Street Journal, “Tech Strategy: 6 Things CEOs Should Know About DevOps.”


Blockchain is a secure system for trustworthy transactions

Blockchain is a distributed digital ledger technology in which blocks of transaction records can be added and viewed—but can’t be deleted or changed without detection. Here’s where the name comes from: a blockchain is an ever-growing sequential chain of transaction records, clumped together into blocks. There’s no central repository of the chain, which is replicated in each participant’s blockchain node, and that’s what makes the technology so powerful. Yes, blockchain was originally developed to underpin Bitcoin and is essential to the trust required for users to trade digital currencies, but that is only the beginning of its potential.

Blockchain neatly solves the problem of ensuring the validity of all kinds of digital records. What’s more, blockchain can be used for public transactions as well as for private business, inside a company or within an industry group. “Blockchain lets you conduct transactions securely without requiring an intermediary, and records are secure and immutable,” says Mark Rakhmilevich, product management director at Oracle. “It also can eliminate offline reconciliations that can take hours, days, or even weeks.”

That’s the power of blockchain: an immutable digital ledger for recording transactions. It can be used to power anonymous digital currencies—or farm-to-table vegetable tracking, business contracts, contractor licensing, real estate transfers, digital identity management, and financial transactions between companies or even within a single company.

“Blockchain doesn’t have to just be used for accounting ledgers,” says Rakhmilevich. “It can store any data, and you can use programmable smart contracts to evaluate and operate on this data. It provides nonrepudiation through digitally signed transactions, and the stored results are tamper proof. Because the ledger is replicated, there is no single source of failure, and no insider threat within a single organization can impact its integrity.”

It’s All About Distributed Ledgers

Several simple concepts underpin any blockchain system. The first is the block, which is a batch of one or more transactions, grouped together and hashed. The hashing process produces an error-checking and tamper-resistant code that will let anyone viewing the block see if it has been altered. The block also contains the hash of the previous block, which ties them together in a chain. The backward hashing makes it extremely difficult for anyone to modify a single block without detection.
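For readers who like to see concepts in code, here is a minimal, illustrative Python sketch (not any production blockchain implementation) of how storing the previous block’s hash chains blocks together and makes tampering detectable:

```python
# Minimal sketch: each block stores the hash of the previous block,
# so altering any earlier block breaks every later link in the chain.
import hashlib
import json

def block_hash(block):
    # Hash a canonical JSON rendering of the block's contents.
    payload = json.dumps(block, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, transactions):
    previous_hash = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"transactions": transactions, "previous_hash": previous_hash})

def verify_chain(chain):
    # Recompute each stored "previous_hash" and flag any mismatch.
    for i in range(1, len(chain)):
        if chain[i]["previous_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
append_block(chain, ["Alice pays Bob 5"])
append_block(chain, ["Bob pays Carol 2"])
print(verify_chain(chain))                      # True
chain[0]["transactions"][0] = "Alice pays Bob 500"
print(verify_chain(chain))                      # False: tampering detected
```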

A chain contains collections of blocks, which are stored on decentralized, distributed servers. The more the better, with every server containing the same set of blocks and the latest values of information, such as account balances. Multiple transactions are handled within a single block using an algorithm called a Merkle tree, or hash tree, which provides fault and fraud tolerance: if a server goes down, or if a block or chain is corrupted, the missing data can be reconstructed by polling other servers’ chains.
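Here is an equally simplified sketch of the Merkle (hash) tree idea: each transaction is hashed, then pairs of hashes are combined repeatedly until a single root remains, which the block can store. This is illustrative only; real implementations differ in details such as how odd leaves are handled.

```python
# Minimal sketch of a Merkle (hash) tree root over a block's transactions.
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(transactions):
    # Hash each transaction, then pairwise-hash upward until one root remains.
    level = [sha256(tx.encode("utf-8")) for tx in transactions]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])  # duplicate the odd leaf out
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0].hex()

print(merkle_root(["tx1", "tx2", "tx3"]))
```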

And while the chain itself should be open for validation by any participant, some chains can be implemented with some form of access control to limit viewing of specific data fields. That way, participants can view relevant data, but not everything in the chain. A customer might be able to verify that a contractor has a valid business license and see the firm’s registered address and list of complaints—but not see the names of other customers. The state licensing board, on the other hand, may be allowed to access the customer list or see which jobs are currently in progress.

When originally conceived, blockchain had a narrow set of protocols. They were designed to govern the creation of blocks, the grouping of hashes into the Merkle tree, the viewing of data encapsulated into the chain, and the validation that data has not been corrupted or tampered with. Over time, creators of blockchain applications (such as the many competing digital currencies) innovated and created their own protocols—which, due to their independent evolutionary processes, weren’t necessarily interoperable. By contrast, the success of general-purpose blockchain services, which might encompass computing services from many technology, government, and business players, created the need for industry standards—such as Hyperledger, a Linux Foundation project.

Read more in my feature article in Oracle Magazine, March/April 2018, “It’s All About Trust.”


No lessons learned from cloud security breaches

According to CyberArk, cyber-security inertia is putting organizations at risk. Nearly half — 46% — of enterprises say their security strategy rarely changes substantially, even after a cyberattack. That data comes from the organization’s new Global Advanced Threat Landscape Report 2018. The researchers surveyed 1,300 IT security decision-makers, DevOps and app developer professionals, and line-of-business owners in seven countries.

Cloud computing is a major focus of this report, and the study results are scary. CyberArk says, “Automated processes inherent in cloud environments are responsible for prolific creation of privileged credentials and secrets. These credentials, if compromised, can give attackers a crucial jumping-off point to achieve lateral access across networks, data and applications — whether in the cloud or on-premises.”

The study shows that:

  • 50% of IT professionals say their organization stores business-critical information in the cloud, including revenue-generating customer-facing applications
  • 43% say they commit regulated customer data to the cloud
  • 49% of respondents have no privileged account security strategy for the cloud

While we haven’t yet seen major breaches caused by tech failures of cloud vendors, we have seen many, many examples of customer errors with the cloud. Those errors, such as posting customer information to public cloud storage services without encryption or proper password control, have allowed open access to private information.

CyberArk’s view is dead right: “There are still gaps in the understanding of who is responsible for security in the cloud, even though the public cloud vendors are very clear that the enterprise is responsible for securing cloud workloads. Additionally, few understand the full impact of the unsecured secrets that proliferate in dynamic cloud environments and automated processes.”

In other words, nobody is stepping up to the plate. (Perhaps cloud vendors should scan their customers’ files and warn them if they are uploading unsecured files. Nah. That’ll never happen – because if there’s a failure of that monitoring system, the cloud vendor could be held liable for the breach.)
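In the meantime, customers can at least audit themselves. As a hedged illustration (assuming AWS S3 storage, the boto3 library, and credentials already configured on the machine running it), this sketch flags buckets whose access control lists grant access to everyone:

```python
# Hedged sketch of a self-audit a cloud customer can run today:
# list S3 buckets and flag any whose ACL grants access to "everyone" groups.
# Assumes boto3 is installed and AWS credentials are already configured.
import boto3

PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    acl = s3.get_bucket_acl(Bucket=bucket["Name"])
    for grant in acl["Grants"]:
        if grant["Grantee"].get("URI") in PUBLIC_GROUPS:
            print(f"WARNING: bucket {bucket['Name']} is publicly accessible "
                  f"({grant['Permission']})")
```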

Read more in my essay, “Far Too Many Companies Fail To Learn From Cloud Cyber-Breaches.”


Surprise! Serverless computing has servers

Don’t be misled by the name: Serverless cloud computing contains servers. Lots of servers. What makes serverless “serverless” is that developers, IT administrators and business leaders don’t have to think about those servers. Ever.

In the serverless model, online computing power gets tapped automatically only at the moment it’s needed. This can save organizations money and, just as importantly, make the IT organization more agile when it comes to building and launching new applications. That’s why serverless has the potential to be a game-changer for enterprise.

“Serverless is the next logical step for computing,” says Bob Quillin, Oracle vice president of developer relations. “We went from a data center where you own everything, to the cloud with shared servers and centralized infrastructure, to serverless, where you don’t even care about the servers themselves.”

In the serverless model, developers write and deploy what are called “functions.” Those are slimmed-down applications that take one action, such as processing an e-commerce order or recording that a shipment arrived. They run those functions directly on the cloud, using technology that eliminates the need to manage the servers, since it delivers computing power the moment that a function gets called into action.
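As a sketch of what such a function might look like (a generic, hypothetical handler, not any particular vendor’s SDK), consider this Python snippet that does exactly one job: recording a shipment arrival.

```python
# Hypothetical sketch of a serverless "function": one slim handler that does a
# single job (recording that a shipment arrived). The platform (not shown here)
# decides when to run it and supplies the event payload.
import json
from datetime import datetime, timezone

def handle_shipment_arrived(event: dict) -> dict:
    """Record a shipment arrival and return a confirmation payload."""
    record = {
        "shipment_id": event["shipment_id"],
        "arrived_at": datetime.now(timezone.utc).isoformat(),
        "status": "ARRIVED",
    }
    # In a real function this record would be written to a database or queue.
    print(json.dumps(record))
    return {"statusCode": 200, "body": json.dumps(record)}

# Local test invocation; on a serverless platform this call is made for you.
if __name__ == "__main__":
    handle_shipment_arrived({"shipment_id": "SHP-1234"})
```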

Both the economics and the speed-of-development benefits of serverless cloud computing are compelling. Here are four CEO-level insights from Quillin for thinking about serverless computing.

First: Serverless can save real money. In the old data center model, says Quillin, organizations had to buy and maintain expensive servers, infrastructure and real estate.

In a traditional cloud model, organizations turn that capital expense into an operating one by provisioning virtualized servers and infrastructure. That saves money compared with the old data center model, Quillin says, but “you are typically paying for compute resources that are running all the time—in increments of CPU hours at least.” If you create a cluster of cloud servers, you don’t typically build it up and break it down every day, and certainly not every hour, as needed. That’s just too much management and orchestration for most organizations.

Serverless, on the other hand, essentially lets you pay only for exactly the time that a workload runs. For closing the books, it may be a once-a-month charge for a few hours of computing time. For handling transactions, it might be a few tenths of a second whenever a customer makes a sale or an Internet of Things (IoT) device sends data.
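A rough back-of-the-envelope comparison shows why the economics are attractive. Every price and volume below is a made-up assumption for illustration, not a quote from any provider.

```python
# Back-of-the-envelope sketch comparing an always-on VM with per-invocation
# serverless billing. All prices and volumes here are assumed, not quoted.
VM_HOURLY_RATE = 0.10            # assumed $/hour for a small always-on instance
FN_RATE_PER_GB_SECOND = 0.00002  # assumed $/GB-second of function execution

hours_per_month = 730
vm_monthly_cost = VM_HOURLY_RATE * hours_per_month

invocations_per_month = 1_000_000
seconds_per_invocation = 0.2
memory_gb = 0.25
fn_monthly_cost = (invocations_per_month * seconds_per_invocation
                   * memory_gb * FN_RATE_PER_GB_SECOND)

print(f"Always-on VM: ${vm_monthly_cost:.2f}/month")
print(f"Serverless:   ${fn_monthly_cost:.2f}/month for the same workload")
```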

For the rest of the list, and the full story, see my essay for the Wall Street Journal, “4 Things CEOs Should Know About Serverless Computing.”


How transformative is DevOps? Well, it depends

DevOps is a technology discipline well suited to cloud-native application development. When it takes only a few mouse clicks to create or manage cloud resources, why wouldn’t developers and IT operations teams work in sync to get new apps out the door and in front of users faster? The DevOps culture and tactics have done much to streamline everything from coding to software testing to application deployment.

Yet far from every organization has embraced DevOps, and not every organization that has tried DevOps has found the experience transformative. Perhaps that’s because the idea is relatively young (the term was coined around 2009), suggests Javed Mohammed, systems community manager at Oracle, or perhaps because different organizations are at such different spots in DevOps’ technology adoption cycle. That idea—about where we are in the adoption of DevOps—became a central theme of a recent podcast discussion among tech experts. Following are some highlights.

Confusion about DevOps can arise because DevOps affects dev and IT teams in many ways. “It can apply to the culture piece, to the technology piece, to the process piece—and even how different teams interact, and how all of the different processes tie together,” says Nicole Forsgren, founder and CEO of DevOps Research and Assessment LLC and co-author of Accelerate: The Science of Lean Software and DevOps.

The adoption and effectiveness of DevOps within a team depends on where each team is, and where organizations are. One team might be narrowly focused on the tech used to automate software deployment to the public, while another is looking at the culture and communication needed to release new features on a weekly or even daily basis. “Everyone is at a very, very different place,” Forsgren says.

Indeed, says Forsgren, some future-thinking organizations are starting to talk about what ‘DevOps Next’ is, extending the concept of developer-led operations beyond common best practices. At the same time, in other companies, there’s no DevOps. “DevOps isn’t even on their radar,” she sighs. Many experts, including Forsgren, see that DevOps is here, is working, and is delivering real value to software teams today—and is helping businesses create and deploy better software faster and less expensively. That’s especially true when it comes to cloud-native development, or when transitioning existing workloads from the data center into the cloud.

Read more in my essay, “DevOps: Sometimes Incredibly Transformative, Sometimes Not So Much.”


Software Defined Perimeter (SDP), not Virtual Private Networks (VPN)

The VPN model of extending security through enterprise firewalls is dead, and the future now belongs to the Software Defined Perimeter (SDP). Firewalls imply that there’s an inside to the enterprise, a place where devices can communicate in a trusted manner. This being so, there must also be an outside where communications aren’t trusted. Residing between the two is the firewall, which decides, after deep inspection based on scans and policies, which traffic may leave and which may enter.

What about trusted applications requiring direct access to corporate resources from outside the firewall? That’s where Virtual Private Networks came in, offering a way to punch a hole through the firewall. VPNs are a complex mechanism for using encryption and secure tunnels to bridge multiple networks, such as a head-office network and a regional office network. They can also temporarily allow remote users to become part of the network.

VPNs are well established but perceived as difficult to configure on the endpoints, hard for IT to manage and challenging to scale for large deployments. There are also issues of software compatibility: not everything works through a VPN. Putting it bluntly, almost nobody likes VPNs and there is now a better way to securely connect mobile applications and Industrial Internet of Things (IIoT) devices into the world of datacenter servers and cloud-based applications.

Authenticate Then Connect

The Software Defined Perimeter depends on a rigorous process of identity verification of both client and server over a secure control channel, thereby replacing the VPN. The negotiation for trustworthy identification is based on cryptographic protocols such as Transport Layer Security (TLS), the successor to the old Secure Sockets Layer (SSL).

With identification and trust established by both parties, a secure data channel can be provisioned with specified bandwidth and quality. For example, the data channel might require very low latency and minimal jitter for voice messaging or it might need high bandwidth for streaming video, or alternatively be low-bandwidth and low-cost for data backups.
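Here is a minimal sketch of authenticate-then-connect using mutual TLS with Python’s standard ssl module; the host name and certificate file names are placeholders. The point is that identities are verified during the handshake, before any application data crosses the channel.

```python
# Minimal sketch of "authenticate then connect" with mutual TLS.
# The client proves its identity with a certificate and verifies the server's
# certificate before any application data flows. File names and the host
# "gateway.example.com" are placeholders, not a real deployment.
import socket
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.load_verify_locations("controller-ca.pem")             # trust anchor for the server
context.load_cert_chain("client-cert.pem", "client-key.pem")   # our own identity

with socket.create_connection(("gateway.example.com", 8443)) as raw_sock:
    # wrap_socket performs the TLS handshake: both identities are verified here,
    # before the "data channel" below is ever used.
    with context.wrap_socket(raw_sock, server_hostname="gateway.example.com") as tls:
        tls.sendall(b"request: open data channel\n")
        print(tls.recv(1024))
```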

On the client side, the trust negotiation and data channel can be tied to a specific mobile application running on, say, an employee’s phone or tablet. The corporate customer account management app needs trusted access to the corporate database server, but no other app on that phone should be granted access.

SDP is based on the notion of authenticate-before-connect, which reminds me of the reverse-charge phone calls of the distant past. A caller, Bob, would ask the operator to place a reverse-charge call to his aunt Sally at a specified number. The operator placing the call would chat with Sally over the equivalent of the control channel. Only if the operator believed she was talking to Sally, and provided Sally accepted the charges, would the operator establish the Bob-to-Sally connection, which is the equivalent of the SDP data channel.

Read more in my essay for Network Computing, “Forget VPNs: the future is SDP.”


Most routine IT operations will soon be handled autonomously

Companies can’t afford downtime. Employees need access to their applications and data 24/7, and so do other business applications, manufacturing and logistics management systems, and security monitoring centers. Anyone who thinks that the brute force effort of their hard-working IT administrators is enough to prevent system downtime just isn’t facing reality.

Traditional systems administrators and their admin tools can’t keep up with the complexity inherent in any modern enterprise. A recent survey of the Oracle Applications Users Group has found that despite significant progress in systems management automation, many customers still report that more than 80% of IT issues are first discovered and reported by users. The number of applications is spiraling up, while data increases at an even more rapid rate.

The boundaries between systems are growing more complex, especially with cloud-based and hybrid-cloud architectures. That reality is why Oracle, after analyzing a survey of its industry-leading customers, recently predicted that by 2020, more than 80% of application infrastructure operations will be managed autonomously.

Autonomously is an important word here. It means not only doing mundane day-to-day tasks including monitoring, tuning, troubleshooting, and applying fixes automatically, but also detecting and rapidly resolving issues. Even when it comes to the most complex problems, machines can simplify the analysis—sifting through the millions of possibilities to present simpler scenarios, to which people then can apply their expertise and judgment of what action to take.

Oracle asked about the kinds of activities that IT system administrators perform on a daily, weekly, and monthly basis—things such as password resets, system reboots, software patches, and the like.

Expect that IT teams will soon reduce by several orders of magnitude the number of situations like those that need manual intervention. If an organization typically has 20,000 human-managed interventions per year, humans will need to touch only 20. The rest will be handled through systems that can apply automation combined with machine learning, which can analyze patterns and react faster than human admins to enable preventive maintenance, performance optimization, and problem resolution.

Read more in my article for Forbes, “Prediction: 80% of Routine IT Operations Will Soon Be Solved Autonomously.”


How to build trust into a connected car

Nobody wants bad guys to be able to hack connected cars. Equally importantly, they shouldn’t be able to hack any part of the multi-step communications path that leads from the connected car to the Internet to cloud services – and back again. Fortunately, companies are working across the automotive and security industries to make sure that doesn’t happen.

The consequences of cyberattacks against cars range from the bad to the horrific: Hackers might be able to determine that a driver is not home, and sell that information to robbers. Hackers could access accounts and passwords, and be able to leverage that information for identity theft, or steal information from bank accounts. Hackers might be able to immobilize vehicles, or modify/degrade the functionality of key safety features like brakes or steering. Hackers might even be able to seize control of the vehicle, and cause accidents or terrorist incidents.

Horrific. Thankfully, companies like semiconductor leader Micron Technology, along with communication security experts NetFoundry, have a plan – and are partnering with vehicle manufacturers to embed secure, trustworthy hardware into connected cars. The result: Safety. Security. Trust. Vroom.

It Starts with the Internet of Things

The IoT consists of autonomous computing units, connected to back-end services via the Internet. Those back-end services are often in the cloud and, in the case of connected cars, might offer anything from navigation to infotainment to preventive maintenance to firmware upgrades for built-in automotive features. Often, the back-end services are offered through the automobile’s manufacturer, though they may be provisioned through third-party providers.

The communications chain for connected cars is lengthy. On the car side, it begins with an embedded component (think stereo head unit, predictive front-facing radar used for adaptive cruise control, or anti-lock brake monitoring system). The component will likely contain or be connected to an ECU – an embedded control unit, a circuit board with a microprocessor, firmware, RAM, and a network connection. The ECU, in turn, is connected via an in-vehicle network, which connects to a communications gateway.

That communications gateway talks to a telecommunications provider, which could change as the vehicle crosses service provider or national boundaries. The telco links to the Internet, the Internet links to a cloud provider (such as Amazon Web Services), and from there, there are services that talk to the automotive systems.

Trust is required at all stages of the communications. The vehicle must be certain that its embedded devices, ECUs, and firmware are not corrupted or hacked. The gateway needs to know that it’s talking to the real car and its embedded systems – not fakes or duplicates offered by hackers. It also needs to know that the cloud services are the genuine article, and not fakes. And of course, the cloud services must be assured that they are talking to the real, authenticated automotive gateway and in-vehicle components.
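One small link in that chain of trust can be sketched in a few lines: before installing a firmware image, verify that it matches the digest published in a trusted manifest. This is deliberately simplified (a real ECU would also verify the manifest’s signature and would do this in secure hardware); the file name and digest shown are hypothetical.

```python
# Simplified sketch of one link in the chain of trust: checking that a firmware
# image matches the digest listed in a trusted manifest before installing it.
# A real ECU would also verify the manifest's signature; omitted here.
import hashlib
import hmac

def firmware_digest(path: str) -> str:
    sha = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            sha.update(chunk)
    return sha.hexdigest()

def verify_firmware(path: str, expected_digest: str) -> bool:
    # Constant-time comparison avoids leaking how many leading bytes matched.
    return hmac.compare_digest(firmware_digest(path), expected_digest)

# Usage (hypothetical file and digest):
# verify_firmware("headunit-fw-2.1.bin", "9f86d081884c7d659a2feaa0c55ad015...")
```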

Read more about this in my feature for Business Continuity, “Building Cybertrust into the Connected Car.”


The ketchup bottle meets cloud computing

The pattern of cloud adoption is something like the ketchup bottle effect: You tip the bottle and nothing comes out, so you shake the bottle, and suddenly you have ketchup all over your plate.

That’s a great visual from Frank Munz, software architect and cloud evangelist at Munz & More, in Germany. Munz and a few other leaders in the Oracle community were interviewed on a podcast by Bob Rhubart, Architect Community Manager at Oracle, about the most important trends they saw in 2017. The responses covered a wide range of topics, from cloud to blockchain, from serverless to machine learning and deep learning.

During the 44-minute session, “What’s Hot? Tech Trends That Made a Real Difference in 2017,” the panel took some fascinating detours into the future of self-programming computers and the best uses of container technologies like Kubernetes. For those, you’ll need to listen to the podcast.

The panel included: Frank Munz; Lonneke Dikmans, chief product officer of eProseed, Netherlands; Lucas Jellema, CTO, AMIS Services, Netherlands; Pratik Patel, CTO, Triplingo, US; and Chris Richardson, founder and CEO, Eventuate, US. The program was recorded in San Francisco at Oracle OpenWorld and JavaOne.

The cloud’s tipping point

The ketchup quip reflects the cloud passing a tipping point of adoption in 2017. “For the first time in 2017, I worked on projects where large, multinational companies give up their own data center and move 100% to the cloud,” Munz said. These workload shifts are far from a rarity. As Dikmans said, the cloud drove the biggest change and challenge: “[The cloud] changes how we interact with customers, and with software. It’s convenient at times, and difficult at others.”

Security offered another way of looking at this tipping point. “Until recently, organizations had the impression that in the cloud, things were less secure and less well managed, in general, than they could do themselves,” said Jellema. Now, “people have come to realize that they’re not particularly good at specific IT tasks, because it’s not their core business.” They see that cloud vendors, whose core business is managing that type of IT, can often do those tasks better.

In 2017, the idea of shifting workloads en masse to the cloud and decommissioning data centers became mainstream and far less controversial.

But wait, there’s more! See about Blockchain, serverless computing, and pay-as-you-go machine learning, in my essay published in Forbes, “Tech Trends That Made A Real Difference In 2017.”


IBNS and ASN: Intent-Based Networking Systems and Application-Specific Networking

With lots of inexpensive, abundant computation resources available, nearly anything becomes possible. For example, you can process a lot of network data to identify patterns, extract intelligence, and produce insight that can be used to automate networks. The road to Intent-Based Networking Systems (IBNS) and Application-Specific Networks (ASN) is a journey. That’s the belief of Rajesh Ghai, Research Director of Telecom and Carrier IP Networks at IDC.

Ghai defines IBNS as a closed-loop continuous implementation of several steps:

  • Declaration of intent, where the network administrator defines what the network is supposed to do
  • Translation of that intent into a network design and configuration
  • Validation of the design against a model that decides whether the configuration can actually be implemented
  • Propagation of the configuration into the network devices via APIs
  • Gathering and study of real-time telemetry from all the devices
  • Use of machine learning to determine whether the desired state of policy has been achieved, and then repeating the cycle (see the schematic sketch after this list)
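Here is a schematic Python sketch of that loop. It is my illustration, not any vendor’s API; the devices, the validation check, and the “machine learning” step are stand-ins.

```python
# Schematic sketch of the IBNS closed loop: declare intent, translate it into a
# design, validate, propagate, gather telemetry, check against intent, repeat.
# Every device, check, and the "learning" step below is a simplified stand-in.

def translate_intent(intent):
    # Turn a declared intent into a per-device configuration.
    return {device: {"vlan": intent["vlan"], "mtu": intent["mtu"]}
            for device in intent["devices"]}

def validate_design(design):
    # Model-based check that the design can actually be implemented.
    return all(cfg["mtu"] <= 9000 for cfg in design.values())

def propagate(design, device_state):
    # Push configuration to devices via their APIs (here: just record it).
    device_state.update(design)

def collect_telemetry(device_state):
    # Real systems stream telemetry from every device; we echo stored state.
    return device_state

def intent_achieved(telemetry, design):
    # Stand-in for the machine-learning step: does observed state match desired state?
    return telemetry == design

def ibns_cycle(intent, device_state):
    design = translate_intent(intent)
    if validate_design(design):
        propagate(design, device_state)
    telemetry = collect_telemetry(device_state)
    return intent_achieved(telemetry, design)   # the caller repeats the cycle

device_state = {}
intent = {"devices": ["leaf1", "leaf2"], "vlan": 42, "mtu": 9000}
print(ibns_cycle(intent, device_state))   # True once the loop converges
```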

Related to that concept, Ghai explains, is ASN. “It’s also a concept which is software control and optimization and automation. The only difference is that ASN is more applicable to distributed applications over the internet than IBNS.”

IBNS Operates Networks as One System

“Think of intent-based networking as software that sits on top of your infrastructure and focusing on the networking infrastructure, and enables you to operate your network infrastructure as one system, as opposed to box per box,” explained Mansour Karam, Founder, CEO of Apstra, which offers IBNS solutions for enterprise data centers.

“To achieve this, we have to start with intent,” he continued. “Intent is both the high-level business outcomes that are required by the business, but then also we think of intent as applying to every one of those stages. You may have some requirements in how you want to build.”

Karam added, “Validation includes tests that you would run — we call them expectations — to validate that your network indeed is behaving as you expected, as per intent. So we have to think of a sliding scale of intent and then we also have to collect all the telemetry in order to close the loop and continuously validate that the network does what you want it to do. There is the notion of state at the core of an IBNS that really boils down to managing state at scale and representing it in a way that you can reason about the state of your system, compare it with the desired state and making the right adjustments if you need to.”

The upshot of IBNS, Karam said: “If you have powerful automation, you’re taking the human out of the equation, and so you get a much more agile network. You can recoup the revenues that otherwise you would have lost, because you’re unable to deliver your business services on time. You will reduce your outages massively, because 80% of outages are caused by human error. You reduce your operational expenses massively, because organizations spend $4 operating every dollar of CapEx, and 80% of it is manual operations. So if you take that out, you should be able to recoup easily your entire CapEx spend on IBNS.”

ASN Gives Each Application Its Own Network

“Application-Specific Networks, like Intent-Based Networking Systems, enable digital transformation, agility, speed, and automation,” explained Galeal Zino, Founder of NetFoundry, which offers an ASN platform.

He continued, “ASN is a new term, so I’ll start with a simple analogy. ASNs are like private clubs — very, very exclusive private clubs — with exactly two members, the application and the network. ASN literally gives each application its own network, one that’s purpose-built and driven by the specific needs of that application. ASN merges the application world and the network world into software which can enable digital transformation with velocity, with scale, and with automation.”

Read more in my new article for Upgrade Magazine, “Manage smarter, more autonomous networks with Intent-Based Networking Systems and Application Specific Networking.”


Artificial Intelligence is everywhere!

When the little wireless speaker in your kitchen acts on your request to add chocolate milk to your shopping list, there’s artificial intelligence (AI) working in the cloud, to understand your speech, determine what you want to do, and carry out the instruction.

When you send a text message to your HR department explaining that you woke up with a vision-blurring migraine, an AI-powered chatbot knows how to update your status to “out of the office” and notify your manager about the sick day.

When hackers attempt to systematically break into the corporate computer network over a period of weeks, AI sees the subtle patterns in historical log data, recognizes outliers in the packet traffic, raises the alarm, and recommends appropriate countermeasures.

AI is nearly everywhere in today’s society. Sometimes it’s fairly obvious (as with a chatbot), and sometimes AI is hidden under the covers (as with network security monitors). It’s a virtuous cycle: Modern cloud computing and algorithms make AI a fast, efficient, and inexpensive approach to problem-solving. Developers discover those cloud services and algorithms and imagine new ways to incorporate the latest AI functionality into their software. Businesses see the value of those advances (even if they don’t know that AI is involved), and everyone benefits. And quickly, the next wave of emerging technology accelerates the cycle again.

AI can improve the user experience, such as when deciphering spoken or written communications, or inferring actions based on patterns of past behavior. AI techniques are excellent at pattern-matching, making it easier for machines to accurately decipher human languages using context. One characteristic of several AI algorithms is flexibility in handling imprecise data, such as human text. Chatbots are a prime example: humans can type messages on their phones, and AI-driven software can understand what they say and carry on a conversation, providing desired information or taking the appropriate actions.
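As a toy illustration of that kind of pattern matching (nothing like a production natural-language system), even crude keyword scoring can map imprecise human text onto a known intent:

```python
# Toy illustration of mapping imprecise human text onto a known intent
# using simple keyword scoring. Intents and keywords here are made up.
INTENTS = {
    "report_sick_day": {"sick", "ill", "migraine", "unwell", "out"},
    "add_to_shopping_list": {"add", "buy", "shopping", "list"},
}

def classify(message: str) -> str:
    words = set(message.lower().split())
    scores = {intent: len(words & keywords) for intent, keywords in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

print(classify("I woke up with a vision-blurring migraine"))       # report_sick_day
print(classify("please add chocolate milk to my shopping list"))   # add_to_shopping_list
```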

If you think AI is everywhere today, expect more tomorrow. AI-enhanced software-as-a-service and platform-as-a-service products will continue to incorporate additional AI to help make cloud-delivered and on-prem services more reliable, more performant, and more secure. AI-driven chatbots will find their ways into new, innovative applications, and speech-based systems will continue to get smarter. AI will handle larger and larger datasets and find its way into increasingly diverse industries.

Sometimes you’ll see the AI and know that you’re talking to a bot. Sometimes the AI will be totally hidden, as you marvel at the, well, uncanny intelligence of the software, websites, and even the Internet of Things. If you don’t believe me, ask a chatbot.

Read more in my feature article in the January/February 2018 edition of Oracle Magazine, “It’s Pervasive: AI Is Everywhere.”


The future tech user-interface: It’s all about speech

Amazon says that a cloud-connected speaker/microphone was at the top of the charts: “This holiday season was better than ever for the family of Echo products. The Echo Dot was the #1 selling Amazon Device this holiday season, and the best-selling product from any manufacturer in any category across all of Amazon, with millions sold.”

The Echo products are an ever-expanding family of inexpensive consumer electronics from Amazon, which connect to a cloud-based service called Alexa. The devices are always listening for spoken commands, and will respond through conversation, playing music, turning on/off lights and other connected gadgets, making phone calls, and even by showing videos.

While Amazon doesn’t release sales figures for its Echo products, it’s clear that consumers love them. In fact, Echo is about to hit the road, as BMW will integrate the Echo technology (and Alexa cloud service) into some cars beginning this year. Expect other automakers to follow.

Why the Echo – and similar tech like Apple’s Siri and Google’s Home? Speech.

The traditional way of “talking” to computers has been through the keyboard, augmented with a mouse used to select commands or input areas. Computers initially responded only to typed instructions using a command-line interface (CLI); this was replaced in the era of the Apple Macintosh and the first iterations of Microsoft Windows by windows, icons, menus, and pointing devices (WIMP). Some refer to the modern interface used on standard computers as a graphical user interface (GUI); embedded devices, such as network routers, might be controlled by either a GUI or a CLI.

Smartphones, tablets, and some computers (notably running Windows) also include touchscreens. While touchscreens have been around for decades, it’s only in the past few years they’ve gone mainstream. Even so, the primary way to input data was through a keyboard – even if it’s a “soft” keyboard implemented on a touchscreen, as on a smartphone.

Enter speech. Sometimes it’s easier to talk, simply talk, to a device than to use a physical interface. Speech can be used for commands (“Alexa, turn up the thermostat” or “Hey Google, turn off the kitchen lights”) or for dictation.

Speech recognition is not easy for computers; in fact, it’s pretty difficult. However, improved microphones and powerful artificial-intelligence algorithms make speech recognition a lot easier. Helping the process: Cloud computing, which can throw nearly unlimited resources at speech recognition, including predictive analytics. Another helper: Constrained inputs, which means that when it comes to understanding commands, there are only so many words for the speech recognition system to decode. (Free-form dictation, like writing an essay using speech recognition, is a far harder problem.)

Speech recognition is only going to get better – and bigger. According to one report, “The speech and voice recognition market is expected to be valued at USD 6.19 billion in 2017 and is likely to reach USD 18.30 billion by 2023, at a CAGR of 19.80% between 2017 and 2023. The growing impact of artificial intelligence (AI) on the accuracy of speech and voice recognition and the increased demand for multifactor authentication are driving the market growth.”

Read more in my essay, “The Next Tech Revolution Will Be Speech-Driven.”


How AI is changing the role of cybersecurity – and of cybersecurity experts

In The Terminator, the Skynet artificial intelligence was turned on to track down the hacking of a military computer network. Turns out the hacker was Skynet itself. Is there a lesson there? Could AI turn against us, especially as it relates to the security domain?

That was one of the points I made while moderating a discussion of cybersecurity and AI back in October 2017. Here’s the start of a blog post written by my friend Tami Casey about the panel:

Mention artificial intelligence (AI) and security and a lot of people think of Skynet from The Terminator movies. Sure enough, at a recent Bay Area Cyber Security Meetup group panel on AI and machine learning, it was moderator Alan Zeichick – technology analyst, journalist and speaker – who first brought it up. But that wasn’t the only lively discussion during the panel, which focused on AI and cybersecurity.

I found two areas of discussion particularly interesting, which drew varying opinions from the panelists. One, around the topic of AI eliminating jobs and thoughts on how AI may change a security practitioner’s job, and two, about the possibility that AI could be misused or perhaps used by malicious actors with unintended negative consequences.

It was a great panel. I enjoyed working with the Meetup folks, and the participants: Allison Miller (Google), Ali Mesdaq (Proofpoint), Terry Ray (Imperva), Randy Dean (Launchpad.ai & Fellowship.ai).

You can read the rest of Tami’s blog here, and also watch a video of the panel.


Why you should care about serverless computing

The bad news: There are servers used in serverless computing. Real servers, with whirring fans and lots of blinking lights, installed in racks inside data centers inside the enterprise or up in the cloud.

The good news: You don’t need to think about those servers in order to use their functionality to write and deploy enterprise software. Your IT administrators don’t need to provision or maintain those servers, or think about their processing power, memory, storage, or underlying software infrastructure. It’s all invisible, abstracted away.

The whole point of serverless computing is that there are small blocks of code that do one thing very efficiently. Those blocks of code are designed to run in containers so that they are scalable, easy to deploy, and can run in basically any computing environment. The open Docker platform has become the de facto industry standard for containers, and as a general rule, developers are seeing the benefits of writing code that targets Docker containers, instead of, say, Windows servers or Red Hat Linux servers or SuSE Linux servers, or any other specific run-time environment. Docker can be hosted in a data center or in the cloud, and containers can be easily moved from one Docker host to another, adding to its appeal.

Currently, applications written for Docker containers still need to be managed by enterprise IT developers or administrators. That means deciding where to create the containers, ensuring that the container has sufficient resources (like memory and processing power) for the application, actually installing the application into the container, running/monitoring the application while it’s running, and then adding more resources if required. Helping do that is Kubernetes, an open container management and orchestration system for Docker. So while containers greatly assist developers and admins in creating portable code, the containers still need to be managed.

That’s where serverless comes in. Developers write their bits of code (such as to read or write from a database, or encrypt/decrypt data, or search the Internet, or authenticate users, or to format output) to run in a Docker container. However, instead of deploying directly to Docker, or using Kubernetes to handle deployment, they write their code as a function, and then deploy that function onto a serverless platform, like the new Fn project. Other applications can call that function (perhaps using a RESTful API) to do the required operation, and the serverless platform then takes care of everything else automatically behind the scenes, running the code when needed, idling it when not needed.
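From the caller’s point of view, a deployed function is just an endpoint. Here is a hedged sketch of invoking one with Python’s standard library; the URL is a placeholder, and each platform’s routing conventions will vary.

```python
# Hedged sketch of calling a deployed function over HTTP. The URL is a
# placeholder; the serverless platform spins the function up on demand and
# idles it afterwards. The caller just sees an HTTP endpoint.
import json
import urllib.request

FUNCTION_URL = "https://functions.example.com/t/orders/encrypt"  # hypothetical

payload = json.dumps({"document_id": "doc-42"}).encode("utf-8")
request = urllib.request.Request(
    FUNCTION_URL,
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(request) as response:
    print(json.loads(response.read()))
```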

Read my essay, “Serverless Computing: What It Is, Why You Should Care,” to find out more.


Lift-and-shift vs building native cloud apps

Writing new cloud-native applications. “Lifting and shifting” existing data center applications. Those are two popular ways of migrating enterprise assets to the cloud.

Gartner’s definition: “Lift-and-shift means that workloads are migrated to cloud IaaS in as unchanged a manner as possible, and change is done only when absolutely necessary. IT operations management tools from the existing data center are deployed into the cloud environment largely unmodified.”

There’s no wrong answer, no wrong way of proceeding. Some data center applications (including servers and storage) may be easier to move than others. Some cloud-native apps may be easier to write than others. Much depends on how much interconnectivity there is between the applications and other software; that’s why, for example, public-facing websites are often easiest to move to the cloud, while tightly coupled internal software, such as inventory control or factory-floor automation, can be trickier.

That’s why in some cases, a hybrid strategy is best. Some parts of the applications are moved up to the cloud, while others remain in the data centers, with SD-WANs or other connectivity linking everything together in a secure manner.

In other words, no one size fits all. And no one timeframe fits all, especially when it comes to lifting-and-shifting.

Joe Paiva, CIO of the U.S. Commerce Department’s International Trade Administration (ITA), is a fan of lift-and-shift. He said at a cloud conference that, “Sometimes it makes sense because it gets you there. That was the key. We had to get there because we would be no worse off or no better off, and we were still spending a lot of money, but it got us to the cloud. Then we started doing rationalization of hardware and applications, and dropped our bill to Amazon by 40 percent compared to what we were spending in our government data center. We were able to rationalize the way we use the service.” Paiva estimates government agencies could save 5%-15% using lift-and-shift.

The benefits of moving existing workloads to the cloud are almost entirely financial. If you can shut down a data center and pay less to run the application in the cloud, it can be a good short-term solution with immediate ROI. Gartner cautions, however, that lift and shift “generally results in little created value. Plus, it can be a more expensive option and does not deliver immediate cost savings.” Much depends on how much it costs to run that application today.

Read more in my essay, “Lifting and shifting from the data center up to the cloud.”


DevOps is the future of enterprise software development, because cloud computing

To get the most benefit from the new world of cloud-native server applications, forget about the old way of writing software. In the old model, architects designed the software. Programmers wrote the code, and testers tested it on a test server. Once the testing was complete, the code was “thrown over the wall” to administrators, who installed the software on production servers and who essentially owned the applications moving forward, going back to the developers only if problems occurred.

The new model, which appeared about 10 years ago, is called “DevOps,” or developer operations. In the DevOps model, architects, developers, testers, and administrators collaborate much more closely to create and manage applications. Specifically, developers play a much broader role in the day-to-day administration of deployed applications, and use information about how the applications are running to tune and enhance those applications.

The involvement of developers in administration made DevOps perfect for cloud computing. Because administrators had fewer responsibilities (i.e., no hardware to worry about), it was less threatening for those developers and administrators to collaborate as equals.

Change Matters

In that old model of software development and deployment, developers were always change agents. They created new stuff, or added new capabilities to existing stuff. They embraced change, including new technologies – and the faster they created change (i.e., wrote code), the more competitive their business.

By contrast, administrators are tasked with maintaining uptime, while ensuring security. Change is not a virtue to those departments. While admins must accept change as they install new applications, it’s secondary to maintaining stability. Indeed, admins could push back against deploying software if they believed those apps weren’t reliable, or if they might affect the overall stability of the data center as a whole.

With DevOps, everyone can embrace change. One of the ways that works, with cloud computing, is to reduce the risk that an unstable application can damage system reliability. In the cloud, applications can be built and deployed using bare-metal servers (as in a data center), or in virtual machines or containers.

DevOps works best when software is deployed in VMs or containers, since those are isolated from other systems – thereby reducing risk. It turns out that administrators do like change, as long as there’s minimal risk that changes will negatively affect overall system reliability, performance, and uptime.
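As a rough illustration of why that isolation lowers the stakes of change, here is a minimal sketch that launches a hypothetical application image as a container with explicit CPU and memory caps, so a misbehaving release cannot starve its neighbors. The image name, port, and limits are assumptions made purely for illustration, and Docker is assumed to be installed on the host.

```python
import subprocess

# Minimal sketch: deploy a (hypothetical) application image as an isolated
# container with hard resource limits, so an unstable release cannot drag
# down its neighbors. Assumes Docker is installed on the host.
subprocess.run(
    [
        "docker", "run",
        "--detach",               # run in the background
        "--rm",                   # remove the container when it stops
        "--memory", "512m",       # cap memory so a leak cannot exhaust the host
        "--cpus", "1.0",          # cap CPU so a runaway process cannot starve others
        "--publish", "8080:8080", # expose only the one port the app needs
        "example-app:candidate",  # hypothetical image tag for the new release
    ],
    check=True,
)
```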

Benefits of DevOps

Goodbye, CapEx; hello, OpEx. Cloud computing moves enterprises from capital-expense data centers (which must be built, electrified, cooled, networked, secured, stocked with servers, and refreshed periodically) to operational-expense services (where the business pays monthly for the processors, memory, bandwidth, and storage reserved and/or consumed).
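A simple way to see that CapEx-to-OpEx shift is to amortize a hypothetical data-center build against a monthly cloud bill; every number below is invented purely to show the shape of the comparison.

```python
# CapEx vs. OpEx, with invented numbers purely for illustration.
capex_build_cost = 3_000_000    # build, electrify, cool, network, secure, stock with servers
refresh_cycle_years = 5         # periodic hardware refresh
annual_onprem_opex = 400_000    # power, bandwidth, maintenance, operations staff

annual_onprem_total = capex_build_cost / refresh_cycle_years + annual_onprem_opex

monthly_cloud_bill = 70_000     # pay monthly for processors, memory, bandwidth, storage
annual_cloud_total = monthly_cloud_bill * 12

print(f"On-prem (CapEx amortized + OpEx): ${annual_onprem_total:,.0f}/year")
print(f"Cloud (OpEx only):                ${annual_cloud_total:,.0f}/year")
```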

Read more, including about the five biggest benefits of cloud computing, in my essay, “DevOps: The Key To Building And Deploying Cloud-Native Software.”

, ,

Many on-prem ERP and CRM packages are not sufficiently secured

When was the last time most organizations discussed the security of their Oracle E-Business Suite? How about SAP S/4HANA? Microsoft Dynamics? IBM’s DB2? Discussions about on-prem server software security too often begin and end with ensuring that operating systems are at the latest level, and are current with patches.

That’s not good enough. Just as clicking on a phishing email or opening a malicious document in Microsoft Word can corrupt a desktop, so too can server applications be compromised. When those server applications are involved with customer records, billing systems, inventory, transactions, financials, or human resources, a hack into ERP or CRM systems can threaten an entire organization. Worse, if that hack leveraged stolen credentials, the business may never realize that competitors or criminals are stealing its data, and potentially even corrupting its records.

A new study from the Ponemon Institute points to the potential severity of the problem. Sixty percent of the respondents to the “Cybersecurity Risks to Oracle E-Business Suite” study say that information theft, modification of data, and disruption of business processes on their company’s Oracle E-Business Suite applications would be catastrophic. While 70% of respondents said a material security or data breach due to insecure Oracle E-Business Suite applications is likely, 67% of respondents believe their top executives are not aware of this risk. (The research was sponsored by Onapsis, which sells security solutions for ERP suites, so apply a little sodium chloride to your interpretation of the study’s results.)

The audience for this study was businesses that rely upon Oracle E-Business Suite. About 24% of respondents said that it was the most critical application they ran, and altogether, 93% said it was one of their top 10 critical applications. Bearing in mind that large businesses run thousands of server applications, that’s saying something.

Yet more than half of respondents – 53% – said that it was Oracle’s responsibility to ensure that its applications and platforms are safe and secure. Unless they’ve contracted with Oracle to manage their on-prem applications, and to proactively apply patches and fixes, well, they are delusional.

Another area of delusion: That software must be connected to the Internet to pose a risk. In this study, 52% of respondents agree or strongly agree that “Oracle E-Business applications that are not connected to the Internet are not a security threat.” They’ve never heard of insider threats? Credentials theft? Penetrations of enterprise networks?

What about securing other ERP/CRM packages, like those from IBM, Microsoft, and SAP? Read all about that, and more, in my story, “Organizations Must Secure Their Business-Critical ERP And CRM Server Applications.”

, ,

When natural disasters strike, the cloud can aid recovery

The water is rising up over your desktops, your servers, and your data center. You’d better hope that the disaster recovery plans included the word “offsite” – and that the backup IT site wasn’t another local business that’s also destroyed by the hurricane, the flood, the tornado, the fire, or the earthquake.

Disasters are real, as August’s Hurricane Harvey and immense floods in Southeast Asia have taught us all. With tens of thousands of people displaced, it’s hard to rebuild a business. Even with a smaller disaster, like a power outage that lasts a couple of days, the business impact can be tremendous.

I once worked for a company in New York that was hit by a blizzard that snapped the power and telephone lines to the office building. Down went the PBX, down went the phone system and the email servers. Remote workers (I was in California) were massively impaired. Worse, incoming phone calls simply rang and rang; incoming email messages bounced back to the sender.

With that storm, electricity was gone for more than a week, and broadband took even longer to be restored. You’d better believe our first order of business, once we began the recovery phase, was to move our internal Microsoft Exchange Server to a colocation facility with redundant T1 lines, and to move our internal PBX to a hosted solution from the phone company. We didn’t like the cost, but we simply couldn’t afford to be shut down again the next time a storm struck.

These days, the answer lies within the cloud, either for primary data center operations, or for the source of a backup. (Forget trying to salvage anything from a submerged server rack or storage system.)
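One practical piece of that cloud answer is simply keeping offsite copies in cloud object storage. Below is a minimal sketch using the AWS boto3 SDK; the bucket name, region, and file path are placeholders, and a real plan also needs encryption, versioning, and periodic restore tests.

```python
import datetime
import boto3

# Minimal offsite-backup sketch: copy a nightly database dump into S3 object
# storage in another region. Bucket name, region, and paths are placeholders.
s3 = boto3.client("s3", region_name="us-west-2")

local_dump = "/backups/crm-nightly.dump"
remote_key = f"crm/{datetime.date.today().isoformat()}/crm-nightly.dump"

s3.upload_file(local_dump, "example-dr-backups", remote_key)
print(f"Uploaded {local_dump} to s3://example-dr-backups/{remote_key}")
```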

We aren’t prepared. In a February 2017 study conducted by the Disaster Recovery Journal and Forrester Research, “The State Of Disaster Recovery Preparedness 2017,” only 18% of disaster recovery decision makers said they were “very prepared” to recover their data center in the event of a site failure or disaster event. Another 37% were prepared, 34% were somewhat prepared, and 11% were not prepared at all.

That’s not good enough if you’re in Houston or Bangladesh or even New York during a blizzard. And that’s clear even among the survey respondents, 43% of whom said there was a business requirement to stay online and competitive 24×7.

Read more in my article, “Before the Next Natural Disaster Strikes, Look to the Cloud.”

, ,

The billion-dollar cost of extreme cyberattacks

A major global cyberattack could cause US$53 billion in economic losses. That’s on the scale of a catastrophic disaster like 2012’s Hurricane Sandy.

Lloyds of London, the famous insurance company, partnered with Cyence, a risk analysis firm specializing in cybersecurity. The result is a fascinating report, “Counting the Cost: Cyber Exposure Decoded.” This partnership makes sense: Lloyds needs to understand the risk before deciding whether to underwrite a venture — and when it comes to cybersecurity, this is an emerging science. Traditional actuarial methods used to calculate the risk of a cargo ship falling prey to pirates, or an office block to a devastating flood, simply don’t apply.

Lloyds says that in 2016, cyberattacks cost businesses as much as $450 billion. While insurers can help organizations manage that risk, the risk is increasing. The report points to those risks covering “everything from individual breaches caused by malicious insiders and hackers, to wider losses such as breaches of retail point-of-sale devices, ransomware attacks such as BitLocker, WannaCry and distributed denial-of-service attacks such as Mirai.”

The worry? Despite writing $1.35 billion in cyberinsurance in 2016, “insurers’ understanding of cyber liability and risk aggregation is an evolving process as experience and knowledge of cyber-attacks grows. Insureds’ use of the internet is also changing, causing cyber-risk accumulation to change rapidly over time in a way that other perils do not.”

And that is why the lack of time-tested actuarial tables can cause disaster, says Lloyds. “Traditional insurance risk modelling relies on authoritative information sources such as national or industry data, but there are no equivalent sources for cyber-risk and the data for modelling accumulations must be collected at scale from the internet. This makes data collection, and the regular update of it, key components of building a better understanding of the evolving risk.”

Huge Liability Costs

The “Counting the Cost” report makes for some depressing reading. Here are three of the key findings, quoted verbatim. Read the 56-page report to dig deeply into the scenarios, and the damages.

  • The direct economic impacts of cyber events lead to a wide range of potential economic losses. For the cloud service disruption scenario in the report, these losses range from US$4.6 billion for a large event to US$53.1 billion for an extreme event; in the mass software vulnerability scenario, the losses range from US$9.7 billion for a large event to US$28.7 billion for an extreme event.
  • Economic losses could be much lower or higher than the average in the scenarios because of the uncertainty around cyber aggregation. For example, while average losses in the cloud service disruption scenario are US$53 billion for an extreme event, they could be as high as US$121.4 billion or as low as US$15.6 billion, depending on factors such as the different organisations involved and how long the cloud-service disruption lasts for.
  • Cyber-attacks have the potential to trigger billions of dollars of insured losses. For example, in the cloud-services scenario insured losses range from US$620 million for a large loss to US$8.1 billion for an extreme loss. For the mass software vulnerability scenario, the insured losses range from US$762 million (large loss) to US$2.1 billion (extreme loss).

Read more in my article for Zonic News, “Lloyds Of London Estimates The Billion-Dollar Cost Of Extreme Cyberattacks.”

, ,

Agylytyx is a silly name, let’s make fun of it

I am unapologetically mocking this company’s name. Agylytyx emailed me this press release today, and only the name captured my attention. Plus, their obvious love of the ™ symbol — even people they quote use the ™. Amazing!

Beyond that, I’ve never talked to the company or used its products, and have no opinion about them. (My guess is that it’s supposed to be pronounced as “Agil-lytics.”)

Agylytyx Announces Availability of New IOT Data Analysis Application

SUNNYVALE, Calif., June 30, 2017 /PRNewswire/ — Agylytyx, a leading cloud-based analytic software vendor, today announced a new platform for analyzing IoT data. The Agylytyx Generator™ IoT platform represents an application of the vendor’s novel Construct Library™ approach to the IoT marketplace. For the first time, companies can both explore their IoT data and make it actionable much more quickly than previously thought possible.

From PLC data streams archived as tags in traditional historians to time series data streaming from sensors attached to devices, the Agylytyx Generator™ aggregates and presents IoT data in a decision-ready format. The company’s unique Construct Library™ (“building block”) approach allows decision makers to create and explore aggregated data such as pressure, temperature, output productivity, worker status, waste removal, fuel consumption, heat transfer, conductivity, condensation or just about any “care abouts.” This data can be instantly explored visually at any level such as region, plant, line, work cell or even device. Best of all, the company’s approach eliminates the need to build charts or write queries.

One of the company’s long-time advisors, John West of Clean Tech Open, noticed the Agylytyx Generator™ potential from the outset. West’s wide angle on data analysis led him to stress the product’s broad applicability. West said “Even as the company was building the initial product, I advised the team that I thought there was strong applicability of the platform to operational data. The idea of applying Constructs to a received data set has broad usage. Their evolution of the Agylytyx Generator™ platform to IoT data is a very natural one.”

The company’s focus on industrial process data was the brainchild of one of the company’s investors, Jim Smith. Jim is a chemical engineer with extensive experience working with plant floor data. Smith stated “I recognized the potential in the company’s approach for analyzing process data. Throughout the brainstorming process, we all gradually realized we were on to something groundbreaking.”

This unique approach to analytics attracted the attention of PrecyseTech, a pioneer of Industrial IoT (IIoT) Systems providing end-to-end management of high-value physical assets and personnel. Paul B. Silverman, the CEO of PrecyseTech, has had a longstanding relationship with the company. Silverman noted: “The ability of the Agylytyx Generator™ to address cloud-based IoT data analytic solutions is a good fit with PrecyseTech’s strategy. Agylytyx is working with the PrecyseTech team to develop our inPALMSM Solutions IoT applications, and we are working collaboratively to identify and develop IoT data opportunities targeting PrecyseTech’s clients. Our plans are to integrate the Agylytyx Generator™ within our inPALMSM Solutions product portfolio and also to offer users access to the Agylytyx Generator™ via subscription.”

Creating this IoT focus made the ideal use of the Agylytyx Generator™. Mark Chang, a data scientist for Agylytyx, noted: “All of our previous implementations – financial, entertainment, legal, customer service – had data models with common ‘units of measure’ – projects, media, timekeepers, support cases, etc. IoT data is dissimilar in that there is no common ‘unit of measure’ across devices. This dissimilarity is exactly what makes our Construct Library™ approach so useful to IoT data. The logical next step for us will be to apply machine learning and cluster inference to enable optimization of resource deployment and predictive analytics like predictive maintenance.”

About Agylytyx

Agylytyx provides cloud-based enterprise business analytic software. The company’s flagship product, the Agylytyx Generator™, frees up analyst time and results in better decision making across corporations. Agylytyx is based in Sunnyvale, California, and has locations in Philadelphia and Chicago, IL. For more information about Agylytyx visit www.agylytyx.com.

, , ,

Business advice for chief information security officers (CISOs)

An organization’s Chief Information Security Officer’s job isn’t ones and zeros. It’s not about unmasking cybercriminals. It’s about reducing risk for the organization, and enabling executives and line-of-business managers to innovate and compete safely and securely. While the CISO is often seen as the person who loves to say “No,” in reality, the CISO wants to say “Yes” — the job, after all, is to make the company thrive.

Meanwhile, the CISO has a small staff, tight budget, and the need to demonstrate performance metrics and ROI. What’s it like in the real world? What are the biggest challenges? We asked two former CISOs (it’s hard to get current CISOs to speak on the record), both of whom worked in the trenches and now advise CISOs on a daily basis.

To Jack Miller, a huge challenge is the speed of decision-making in today’s hypercompetitive world. Miller, currently Executive in Residence at Norwest Venture Partners, conducts due diligence and provides expertise on companies in the cyber security space. Most recently he served as chief security strategy officer at ZitoVault Software, a startup focused on safeguarding the Internet of Things.

Before his time at ZitoVault, Miller was the head of information protection for Auto Club Enterprises. That’s the largest AAA conglomerate with 15 million members in 22 states. Previously, he served as the CISO of the 5th and 11th largest counties in the United States, and as a security executive for Pacific Life Insurance.

“Big decisions are made in the blink of an eye,” says Miller. “Executives know security is important, but don’t understand how any business change can introduce security risks to the environment. As a CISO, you try to get in front of those changes – but more often, you have to clean up the mess afterwards.”

Another CISO, Ed Amoroso, is frustrated by the business challenge of justifying a security ROI. Amoroso is the CEO of TAG Cyber LLC, which provides advanced cybersecurity training and consulting for global enterprise and U.S. Federal government CISO teams. Previously, he was Senior Vice President and Chief Security Officer for AT&T, and managed computer and network security for AT&T Bell Laboratories. Amoroso is also an Adjunct Professor of Computer Science at the Stevens Institute of Technology.

Amoroso explains, “Security is an invisible thing. I say that I’m going to spend money to prevent something bad from happening. After spending the money, I say, ta-da, look, I prevented that bad thing from happening. There’s no demonstration. There’s no way to prove that the investment actually prevented anything. It’s like putting a ‘This House is Guarded by a Security Company’ sign in front of your house. Maybe a serial killer came up the street, saw the sign, and moved on. Maybe not. You can’t put in security and say, here’s what didn’t happen. If you ask, 10 out of 10 CISOs will say demonstrating ROI is a huge problem.”

Read more in my article for Global Banking & Finance Magazine, “Be Prepared to Get Fired! And Other Business Advice for CISOs.”

, ,

The art and science of endpoint security

The endpoint is vulnerable. That’s where many enterprise cyber breaches begin: An employee clicks on a phishing link and installs malware, such as ransomware, or is tricked into providing login credentials. A browser can open a webpage that installs malware. An infected USB flash drive is another source of attacks. Servers can be subverted with SQL injection or other attacks; even cloud-based servers are not immune from being probed and subverted by hackers. As the number of endpoints proliferates — think Internet of Things — the odds of an endpoint being compromised, and then used to gain access to the enterprise network and its assets, only increase.
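To make one of those attack vectors concrete: SQL injection usually comes down to gluing untrusted input into a query string, and parameterized queries close that hole. Here is a minimal sketch using Python’s built-in sqlite3 module and a made-up users table.

```python
import sqlite3

# Demo database with a made-up users table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"   # attacker-controlled value

# Vulnerable: untrusted input concatenated straight into the SQL string,
# so the attacker rewrites the WHERE clause.
vulnerable = f"SELECT role FROM users WHERE name = '{user_input}'"
print(conn.execute(vulnerable).fetchall())            # returns rows it should not

# Safer: a parameterized query treats the input as data, not as SQL.
safe = "SELECT role FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())   # returns nothing
```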

Which are the most vulnerable endpoints? Which need extra protection? All of them, especially devices running some flavor of Windows, according to Mike Spanbauer, Vice President of Security at testing firm NSS Labs. “All of them. So the reality is that Windows is where most targets attack, where the majority of malware and exploits ultimately target. So protecting your Windows environment, your Windows users, both inside your businesses as well as when they’re remote is the core feature, the core component.”

Roy Abutbul, Co-Founder and CEO of security firm Javelin Networks, agreed. “The main endpoints that need the extra protection are those endpoints that are connected to the [Windows] domain environment, as literally they are the gateway for attackers to get the most sensitive information about the entire organization.” He continued, “From one compromised machine, attackers can get 100 per cent visibility of the entire corporate, just from one single endpoint. Therefore, a machine that’s connected to the domain must get extra protection.”

Scott Scheferman, Director of Consulting at endpoint security company Cylance, is concerned about non-PC devices, as well as traditional computers. That might include the Internet of Things, or unprotected routers, switches, or even air-conditioning controllers. “In any organization, every endpoint is really important, now more than ever with the internet of Things. There are a lot of devices on the network that are open holes for an attacker to gain a foothold. The problem is, once a foothold is gained, it’s very easy to move laterally and also elevate your privileges to carry out further attacks into the network.”

At the other end of the spectrum is cloud computing. Think about enterprise-controlled virtual servers, containers, and other resources configured as Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS). Anything connected to the corporate network is an attack vector, explained Roark Pollock, Vice President at security firm Ziften.

Microsoft, too, takes a broad view of endpoint security. “I think every endpoint can be a target of an attack. So usually companies start first with high privilege boxes, like administrator consoles onboard to service, but everybody can be a victim,” said Heike Ritter, a Product Manager for Security and Networking at Microsoft.

I’ve written a long, detailed article on this subject for NetEvents, “From Raw Data to Actionable Intelligence: The Art and Science of Endpoint Security.”

You can also watch my 10-minute video interview with these people here.

, , ,

What to do about credentials theft – the scourge of cybersecurity

Cybercriminals want your credentials and your employees’ credentials. When those hackers succeed in stealing that information, it can be bad for individuals – and even worse for corporations and other organizations. It’s a scourge, and it isn’t going away.

Credentials come in two types. There are personal credentials, such as the logins and passwords for email accounts, bank and retirement accounts, credit cards, airline loyalty programs, online shopping sites, and social media. When hackers manage to obtain those credentials, such as through phishing, they can steal money, order goods and services, and engage in identity theft. This can be extremely costly and inconvenient for victims, but the damage is generally contained to that one unfortunate individual.

Corporate digital credentials, on the other hand, are the keys to an organization’s network. Consider a manager, executive or information-technology worker within a typical medium-size or larger-size business. Somewhere in the organization is a database that describes that employee – and describes which digital assets that employee is authorized to use. If cybercriminals manage to steal the employee’s corporate digital credentials, the criminals can then access those same assets, without setting off any alarm bells. Why? Because they have valid credentials.

What might those assets be? Depending on the employee:

  • It might include file servers that contain intellectual property, such as pricing sheets, product blueprints, or patent applications.
  • It might include email archives that describe business plans. Or accounting servers that contain important financial information that could help competitors or allow for “insider trading.”
  • It might be human resources data that can help the hackers attack other individuals. Or engage in identity theft or even blackmail.

What if the stolen credentials are for individuals in the IT or information security department? The hackers can learn a great deal about the company’s technology infrastructure, perhaps including passwords to make changes to configurations, open up backdoors, or even disable security systems.
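Because stolen-but-valid credentials don’t trip conventional alarms, one mitigation is to watch how credentials are used rather than just whether they are valid. Below is a deliberately simplified sketch that flags a login when an account shows up from a device or country it has never used before; real products weigh far richer signals (impossible travel, time of day, privilege changes), and the data here is invented.

```python
# Deliberately simplified check on login events (all data invented).
# A valid password arriving from an unfamiliar device or country deserves review.
known_profiles = {
    "jsmith": {"devices": {"laptop-4411"}, "countries": {"US"}},
}

def flag_suspicious(user, device, country):
    profile = known_profiles.get(user)
    if profile is None:
        return True   # account we have never seen: always review
    return device not in profile["devices"] or country not in profile["countries"]

# The credentials are valid in both cases; only the context differs.
print(flag_suspicious("jsmith", "laptop-4411", "US"))    # False: normal pattern
print(flag_suspicious("jsmith", "desktop-9001", "RO"))   # True: new device and country
```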

Read my whole story about this —including what to do about it — in Telecom Times, “The CyberSecurity Scourge of Credentials Theft.”

, ,

The Fifth Column hiding in the Internet of Things (IoT)

I can’t trust the Internet of Things. Neither can you. There are too many players and too many suppliers of the technology that can introduce vulnerabilities in our homes, our networks – or elsewhere. It’s dangerous, my friends. Quite dangerous. In fact, it can be thought of as a sort of Fifth Column, but not in the way many of us expected.

Merriam-Webster defines a Fifth Column as “a group of secret sympathizers or supporters of an enemy that engage in espionage or sabotage within defense lines or national borders.” In today’s politics, there’s a lot of talk about secret sympathizers sneaking across national borders, such as terrorists posing as students or refugees. Such “bad actors” are generally part of an organization, recruited by state actors, and embedded into enemy countries for long-term penetration of society.

There have been many real-life Fifth Column activists in recent global history. Think about Kim Philby and Anthony Blunt, part of the “Cambridge Five” who worked for spy agencies in the United Kingdom in the post-World War II era, but who themselves turned out to be double agents working for the Soviet Union. Fiction, too, is replete with Fifth Column spies. They’re everywhere in James Bond movies and John le Carré novels.

Am I too paranoid?

Let’s bring our paranoia (or at least, my paranoia) to the Internet of Things, and start by way of the late 1990s and early 2000s. I remember quite clearly the introduction of telco and network routers by Huawei, and concerns that the Chinese government may have embedded software into those routers in order to surreptitiously listen to telecom networks and network traffic, to steal intellectual property, or to do other mischief like disable networks in the event of a conflict. (This was before the term “cyberwarfare” was widely used.)

Recall that Huawei was founded by a former engineer in the Chinese People’s Liberation Army. The company was heavily supported by Beijing. Also, there were lawsuits alleging that Huawei infringed on Cisco’s intellectual property – i.e., stole its source code. Thus, there was plenty of concern surrounding the company and its products.

Read my full story about this, published in Pipeline Magazine, “The Surprising and Dangerous Fifth Column Hiding Within the Internet of Things.”

, ,

The three big cloud providers keep getting bigger!

You keep reading the same three names over and over again. Amazon Web Services. Google Cloud Platform. Microsoft Azure. For the past several years, that’s been the top tier, with a wide gap between them and everyone else. Well, there’s a fourth player, the IBM cloud, based on its SoftLayer acquisition. But still, it’s AWS in the lead when it comes to Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS), with many estimates showing about a 37-40% market share in early 2017. In second place, Azure, at around 28-31%. In third place, Google, at around 16-18%. In fourth place, IBM SoftLayer, at 3-5%.

Add that all up, and you get the big four at between 84% and 94%. That doesn’t leave much room for everyone else, including companies like Rackspace, and all the cloud initiatives launched by major computer companies like Alibaba, Dell and HP, and all the telcos around the world.
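The arithmetic behind that 84%-to-94% range is just the sum of the low and high ends of the estimates quoted above:

```python
# Summing the market-share estimates cited above (IaaS/PaaS, early 2017).
shares = {
    "AWS": (37, 40),
    "Azure": (28, 31),
    "Google": (16, 18),
    "IBM SoftLayer": (3, 5),
}
low = sum(lo for lo, hi in shares.values())
high = sum(hi for lo, hi in shares.values())
print(f"Big four combined: {low}%-{high}%")   # 84%-94%
```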

It’s clear that when it comes to the public cloud, the sort of cloud that telcos want to monetize, and enterprises want to use for hybrid clouds or full migrations, there are very few choices. You can go with the big winner, which is Amazon. You can look to Azure (which is appealing, of course, to Microsoft shops) or Google. And then you can look at everyone else, including IBM SoftLayer, Rackspace, and, well, everyone else.

Read more in my blog post for Zonic News, “For Cloud IaaS and PaaS Providers, There Are the Big Three – and That’s How It’s Going to Stay (for Now).” That post also covers the international angle when all the big cloud providers are in the U.S. — and that’s a real issue.

, ,

There are two types of cloud firewalls: Vanilla and Strawberry

Cloud-based firewalls come in two delicious flavors: vanilla and strawberry. Both flavors are software that checks incoming and outgoing packets to filter against access policies and block malicious traffic. Yet they are also quite different. Think of them as two essential network security tools: Both are designed to protect you, your network, and your real and virtual assets, but in different contexts.

Disclosure: I made up the terms “vanilla firewall” and “strawberry firewall” for this discussion. Hopefully they help us differentiate between the two models as we dig deeper.

Let’s start with a quick overview:

  • Vanilla firewalls are usually stand-alone products or services designed to protect an enterprise network and its users — like an on-premises firewall appliance, except that it’s in the cloud. Service providers call this a software-as-a-service (SaaS) firewall, security as a service (SECaaS), or even firewall as a service (FaaS).
  • Strawberry firewalls are cloud-based services that are designed to run in a virtual data center using your own servers in a platform-as-a-service (PaaS) or infrastructure-as-a-service (IaaS) model. In these cases, the firewall application runs on the virtual servers and protects traffic going to, from, and between applications in the cloud. The industry sometimes calls these next-generation firewalls, though the term is inconsistently applied and sometimes refers to any advanced firewall system running on-prem or in the cloud.
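Whichever flavor you choose, the core job is the same: evaluate each packet or connection against an ordered set of access rules and fall back to a default deny. Here is a deliberately tiny sketch of that rule-matching logic, with made-up rules and addresses; real cloud firewalls layer stateful inspection, application awareness, and threat intelligence on top of it.

```python
import ipaddress

# Tiny sketch of ordered rule matching with a default deny, the core logic
# both firewall flavors share. Rules and addresses are made up.
RULES = [
    {"action": "allow", "proto": "tcp", "dst_port": 443, "src": "any"},         # public HTTPS
    {"action": "allow", "proto": "tcp", "dst_port": 22,  "src": "10.0.0.0/8"},  # internal SSH only
    {"action": "deny",  "proto": "any", "dst_port": None, "src": "any"},        # default deny
]

def matches(rule, proto, src_ip, dst_port):
    if rule["proto"] not in ("any", proto):
        return False
    if rule["dst_port"] not in (None, dst_port):
        return False
    if rule["src"] != "any" and ipaddress.ip_address(src_ip) not in ipaddress.ip_network(rule["src"]):
        return False
    return True

def evaluate(proto, src_ip, dst_port):
    for rule in RULES:
        if matches(rule, proto, src_ip, dst_port):
            return rule["action"]
    return "deny"

print(evaluate("tcp", "203.0.113.7", 443))   # allow: public HTTPS
print(evaluate("tcp", "203.0.113.7", 22))    # deny: SSH blocked from outside
print(evaluate("tcp", "10.1.2.3", 22))       # allow: SSH from the internal range
```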

So why do we need these new firewalls? Why not stick a 1U firewall appliance into a rack, connect it up to the router, and call it good? Easy: Because the definition of the network perimeter has changed. Firewalls used to be like guards at the entrance to a secured facility. Only authorized people could enter that facility, and packages were searched as they entered and left the building. Moreover, your users worked inside the facility, and the data center and its servers were also inside. Thus, securing the perimeter was fairly easy. Everything inside was secure, everything outside was not secure, and the only way in and out was through the guard station.

Intrigued? Hungry? Both? Please read the rest of my story, called “Understanding cloud-based firewalls,” published on Enterprise.nxt.