Oracle Database is the world’s most popular enterprise database. This year’s addition of autonomous operating capabilities to the cloud version of Oracle Database is one of the most important advances in the database’s history. What does it mean for a database to be “autonomous”? Let’s look under the covers of Oracle Autonomous Database at a few of the ways it earns that name.

Oracle Autonomous Database is a fully managed cloud service. Like all cloud services, the database runs on servers in cloud data centers—in this case, on hardware called Oracle Exadata Database Machine that’s specifically designed and tuned for high-performance, high-availability workloads. The tightly controlled and optimized hardware enables some of the autonomous functionality we’ll discuss shortly.

While the autonomous capability of Oracle Autonomous Database is new, it builds on scores of automation features that Oracle has been building into its Oracle database software and the Exadata database hardware for years. The goals of the autonomous functions are twofold: First, to lower operating costs by reducing costly and tedious manual administration, and second, to improve service levels through automation and fewer human errors.

My essay in Forbes, “What Makes Oracle Autonomous Database Truly ‘Autonomous,’” shows how the capabilities in Oracle Autonomous Database change the game for database administrators (DBAs). The benefit: DBAs are freed from mundane tasks, letting them focus on higher-value work.

In Australia, at 8 a.m. on ‘Results Day,’ thousands and thousands of South Australian year 12 students receive their ATAR (Australian Tertiary Admission Rank)—the all-important standardized score used to gain admission to universities across Australia. The frustrating challenge: many are eligible to add as many as nine school and subject-specific bonus points to their ATAR, which can improve their chances of gaining admission to tertiary institutions like the University of Adelaide. To find out about those bonuses, or adjusted ATAR, they must talk to university staff.

Thousands of students. All receiving their ATAR at the same time. All desperate to know about their bonus points. That very moment. They’re all phoning the university wanting a 5- or 10-minute call to answer a few questions and learn about their adjusted score. This past year, 2,100 of those students skipped what in the past could be an hours-long phone queue to talk to university staff. Instead, they used Facebook Messenger to converse with a chatbot, answering questions about their bonus eligibility and learning their adjusted ATAR score, in about three minutes.

“It’s always been really difficult for us to support the adjusted ATAR calls,” says Catherine Cherry, director of prospect management at University of Adelaide. “There are only so many people we can bring in on that busy day, and only so many phone calls that the staff can take at any given time.” Without the chatbot option, even when the prospective student is able to reach university staff, the staff can’t afford to stay on the phone to answer all that student’s questions, which can create a potentially bad first experience with the university. “The staff who are working that day really feel compelled to hurry the student off the phone because we can see the queue of 15, 20 people waiting, and we can see that they’ve been waiting for a long time,” Cherry says.

Enter the chatbot: Three minutes on Facebook Messenger and students had their adjusted ATAR. Read about the technology behind this chatbot application in my story in Forbes, “University of Adelaide Builds A Chatbot To Solve One Very Hard Problem.”

“All aboooooaaaaard!” Whether you love to watch the big freight engines rumble by, or you just ride a commuter train to work, the safety rules around trains are pretty simple for most of us. Look both ways before crossing the track, and never try to beat a train, for example. If you’re a rail operator, however, safety is a much more complicated challenge—such as making sure you always have the right people in the right positions, and ensuring that the crew is properly trained, rested, and has up-to-date safety certifications.

Helping rail operators tackle that huge challenge is CrewPro, the railroad crew management software from PS Technology, a wholly owned subsidiary of the Union Pacific Railroad. The original versions of this package run on mainframes and are still used by railroads ranging from the largest Class I freight operators to local rail-based passenger transit systems in major US cities.

Those railroad operators use CrewPro to handle complex staffing issues on the engines and on the ground. The demands include scheduling based on availability and seniority; tracking mandatory rest status; and managing certifications and qualifications, including pending certification expirations.

Smaller railroads, though, don’t have the sophisticated IT departments needed to stand up this fully automated crew management system. That’s why PS Technology launched a cloud version that saw its first railroad customer online in April. “There are more than 600 short line railroads, and that is our growth area,” says Seenu Chundru, president of PS Technology. “They don’t want to host this type of software on premises.”

Learn more about this in my story for Forbes, “Railroads Roll Ahead With Cloud-Based Crew Management.”

Knowledge is power—and knowledge with the right context at the right moment is the most powerful of all. Emerging technologies will leverage the power of context to help people become more efficient, and one of the first to do so is a new generation of business-oriented digital assistants.

Let’s start by distinguishing a business digital assistant from consumer products such as Apple’s Siri, Amazon’s Echo, and Google’s Home. Those cloud-based technologies have proved themselves at tasks like information retrieval (“How long is my commute today?”) and personal organization (“Add diapers to my shopping list”). Those services have some limited context about you, like your address book, calendar, music library, and shopping cart. What they don’t have is deep knowledge about your job, your employer, and your customers.

In contrast, a business digital assistant needs much richer context to handle the kind of complex tasks we do at work, says Amit Zavery, executive vice president of product development at Oracle. Which sorts of business tasks? How about asking a digital assistant to summarize the recent orders from a company’s three biggest customers in Dallas; set up a conference call with everyone involved with a particular client account; create a report of all employees who haven’t completed information security training; figure out the impact of a canceled meeting on a travel plan; or pull reports on accounts receivable deviations from expected norms?

Those are usually tasks for human associates—often a tech-savvy person in supply chain, sales, finance, or human resources. That’s because so many business tasks require context about the employee making the request and about the organization itself, Zavery says. A digital assistant’s goal should be to reduce the amount of mental energy and physical steps needed to perform such tasks.

Learn more in my article for Forbes, “The One Thing Digital Assistants Need To Become Useful At Work: Context.”

At too many government agencies and companies, the unspoken security mindset is that “we’re not a prime target; our data isn’t super-sensitive.” Wrong. The reality is that every piece of personal data adds to the picture that potential criminals or state-sponsored actors are painting of individuals.

And that makes your data a target. “Just because you think your data isn’t useful, don’t assume it’s not valuable to someone, because they’re looking for columns, not rows,” says Hayri Tarhan, Oracle regional vice president for public sector security.

Here’s what Tarhan means by columns not rows: Imagine that the bad actors are storing information in a database (which they probably are). What hackers want in many data breaches is more information about people already in that database. They correlate new data with the old, using big data techniques to fill in the columns, matching up data stolen from different sources to form a more-complete picture.

That picture is potentially much more important and more lucrative than finding out about new people and creating new, sparsely populated data rows. So, every bit of data, no matter how trivial it might seem, is important when it comes to filling the empty squares.
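To make the idea concrete, here is a minimal sketch of that correlation step. All data, field names, and the join key are invented for illustration; a real attacker would use far larger datasets and fuzzier matching.

```python
# Hypothetical sketch: "filling in the columns" by merging two stolen
# datasets on a shared key. Every record and field here is invented.

def merge_breaches(records_a, records_b, key="email"):
    """Join two lists of dicts on a shared key, combining their fields
    into one profile per key value."""
    profiles = {}
    for rec in records_a + records_b:
        k = rec.get(key)
        if k is None:
            continue
        profiles.setdefault(k, {}).update(rec)
    return profiles

# One breach yields names; a second, unrelated breach yields birthdates.
retail = [{"email": "pat@example.com", "name": "Pat Jones"}]
fitness = [{"email": "pat@example.com", "birthdate": "1990-04-12"}]

combined = merge_breaches(retail, fitness)
# combined["pat@example.com"] now holds name AND birthdate: a fuller
# row than either stolen source provided on its own.
```

Neither dataset is especially sensitive alone; joined on a common column, they produce a richer profile than either victim organization ever held.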

Read more about this – and how machine learning can help – in my article in Forbes, “Data Thieves Want Your Columns—Not Your Rows.”

Blockchain and the cloud go together like organic macaroni and cheese. What’s the connection? Choosy shoppers would like to know that their organic food is tracked from farm to shelf, to make sure they’re getting what’s promised on the label. Blockchain provides an immutable ledger perfect for tracking cheese, for example, as it goes from dairy to cheesemaker to distributor to grocer.

Oracle’s new Blockchain Cloud Service provides a platform for each participant in a supply chain to register transactions. Within that blockchain, each participant—and regulators, if appropriate—can review those transactions to ensure that promises are being kept, and that data has not been tampered with. Use cases range from supply chains and financial transactions to data sharing inside a company.

Launched this month, Oracle Blockchain Cloud Service has the features that an enterprise needs to move from experimenting with blockchain to creating production applications. It addresses some of the biggest challenges facing developers and administrators, such as mastering the peer-to-peer protocols used to link blockchain servers, ensuring resiliency and high availability, and ensuring that security is solid. For example, developers previously had to code one-off integrations using complex APIs; Oracle’s Blockchain Cloud Service provides integration accelerators with sample templates and design patterns for many Oracle and third-party applications in the cloud and running on-premises in the data center.

Oracle Blockchain Cloud Service provides the kind of resilience, recoverability, security, and global reach that enterprises require before they’d trust their supply chain and customer experience to blockchain. With blockchain implemented as a managed cloud service, organizations also get a system that’s ready to be integrated with other enterprise applications, and where Oracle handles the back end to ensure availability and security.

Read more about this in my story for Forbes, “Oracle Helps You Put Blockchain Into Real-World Use With New Cloud Service.”

The trash truck rumbles down the street, and its cameras pour video into the city’s data lake. An AI-powered application mines that image data looking for graffiti—and advises whether to dispatch a fully equipped paint crew or a squad with just soap and brushes.

Meanwhile, cameras on other city vehicles could feed the same data lake so another application detects piles of trash that should be collected. That information is used by an application to send the right clean-up squad. Citizens, too, can get into the act, by sending cell phone pictures of graffiti or litter to the city for AI-driven processing.

Applications like these provide the vision for the Intelligent Internet of Things Integration Consortium (I3). This is a new initiative launched by the University of Southern California (USC), the City of Los Angeles, and a number of stakeholders including researchers and industry. At USC, I3 is jointly managed by three institutes: Institute for Communication Technology Management (CTM), Center for Cyber-Physical Systems and the Internet of Things (CCI), and Integrated Media Systems Center (IMSC).

“We’re trying to make the I3 Consortium a big tent,” says Jerry Power, assistant professor at the USC Marshall School of Business’s Institute for Communication Technology Management (CTM). Power serves as executive director of the consortium. “Los Angeles is a founding member, but we’re talking to other cities and vendors. We want lots of people to participate in the process, whether a startup or a super-large corporation.”

As of now, there are 24 members of the consortium, including USC’s Viterbi School of Engineering and Marshall School of Business. And companies are contributing resources. Oracle’s Startup for Higher Education program, for example, is providing $75,000 a year in cloud infrastructure services to support the I3 Consortium’s first three years of development work.

The I3 Consortium needs a lot of computing power. The consortium lets cities move beyond data silos, where information is confined to individual departments such as transportation and sanitation, to an environment where data flows among departments and can be more easily managed, and where cities can use data contributions from residents or even from other governmental or commercial data providers. That information is consolidated into a city’s data lake that can be accessed by AI-powered applications across departments.

The I3 Consortium will provide a vehicle to manage the data flow into the data lake. Cyrus Shahabi, a professor at USC’s Viterbi School of Engineering and director of its Integrated Media Systems Center (IMSC), is using Oracle Cloud credits to build applications that supply the vast processing power needed to train deep learning neural networks, which draw on real-time I3 data lakes to recognize issues, such as graffiti or garbage, that call for action.


Read more about the I3 Consortium in my story for Forbes, “How AI Could Tackle City Problems Like Graffiti, Trash, And Fires.”

The public cloud is part of your network. But it’s also not part of your network. That can make security tricky, and sometimes become a nightmare.

The cloud represents resources that your business rents: computational resources, like CPU and memory; infrastructure resources, like internet bandwidth and internal networks; storage resources; and management platforms, like the tools needed to provision and configure services.

Whether it’s Amazon Web Services, Microsoft Azure or Google Cloud Platform, it’s like an empty apartment that you rent for a year or maybe a few months. You start out with empty space, put in there whatever you want and use it however you want. Is such a short-term rental apartment your home? That’s a big question, especially when it comes to security. By the way, let’s focus on platform-as-a-service (PaaS) and infrastructure-as-a-service (IaaS), where your business has a great deal of control over how the resource is used — like an empty rental apartment.

We are not talking about software-as-a-service (SaaS), like Office 365 or Salesforce.com. That’s where you show up, pay your bill and use the resources as configured. That’s more like a hotel room: you sleep there, but you can’t change the furniture. Security is almost entirely the responsibility of the hotel; your security responsibility is to ensure that you don’t lose your key, and to refuse to open the door for strangers. The SaaS equivalent: Protect your user accounts and passwords, and ensure users only have the least necessary access privileges.

Why PaaS/IaaS are part of your network

As Peter Parker knows, Spider-Man’s great powers require great responsibility. That’s true in the enterprise data center — and it’s true in PaaS/IaaS networks. The customer is responsible for provisioning servers, storage and virtual machines. Not only that, but the customer also is responsible for creating connections between the cloud service and other resources, such as an enterprise data center — in a hybrid cloud architecture — and other cloud providers — in a multi-cloud architecture.

The cloud provider sets terms for use of the PaaS/IaaS, and allows inbound and outbound connections. There are service level guarantees for availability of the cloud, and of servers that the cloud provider owns. Otherwise, everything is on the enterprise. Think of the PaaS/IaaS cloud as being a remote data center that the enterprise rents, but where you can’t physically visit and see your rented servers and infrastructure.

Why PaaS/IaaS are not part of your network

In short, except for the few areas that the cloud provider handles — availability, cabling, power supplies, connections to carrier networks, physical security — you own it. That means installing patches and fixes. That means instrumenting servers and virtual machines. That means protecting them with software-based firewalls. That means doing backups, whether using the cloud provider’s value-added services or someone else’s. That means anti-malware.

That’s not to minimize the benefits the cloud provider offers you. Power and cooling are a big deal. So are racks and cabling. So is that physical security, and having 24×7 on-site staffing in the event of hardware failures. Also, there’s click-of-a-button ability to provision and spool up new servers to handle demand, and then shut them down again when not needed. Cloud providers can also provide firewall services, communications encryption, and of course, consulting on security.

The word elastic is often used for cloud services. That’s what makes the cloud much more agile than an on-premises data center, or renting an equipment cage in a colocation center. It’s like renting an apartment where, if you need a couple of extra bedrooms for a few months, you can upsize.

For many businesses, that’s huge. Read more about how great cloud power requires great responsibility in my essay for SecurityNow, “Public Cloud, Part of the Network or Not, Remains a Security Concern.”

No more pizza boxes: Traditional hardware firewalls can’t adequately protect a modern corporate network and its users. Why? Because while there still may be physical servers inside an on-premises data center or in a wiring closet somewhere, an increasing number of essential resources are virtualized or off-site. And off-site includes servers in infrastructure-as-a-service (IaaS) and platform-as-a-service (PaaS) clouds.

It’s the enterprise’s responsibility to protect each of those assets, as well as the communications paths to and from those assets, as well as public Internet connections. So, no, a pizza-box appliance next to the router can’t protect virtual servers, IaaS or PaaS. What’s needed are the poorly named “next-generation firewalls” (NGFW) — very badly named because that term is not at all descriptive, and will seem really stupid in a few years, when the software-based NGFW will magically become an OPGFW (obsolete previous-generation firewall).

Still, the industry loves the “next generation” phrase, so let’s stick with NGFW here. If you have a range of assets that must be protected, including some combination of on-premises servers, virtual servers and cloud servers, you need an NGFW to unify protection and ensure consistent coverage and policy compliance across all those assets.

Cobbling together a variety of different technologies may not suffice, and could end up with coverage gaps. Also, only an NGFW can detect attacks or threats against multiple assets; discrete protection for, say, on-premises servers and cloud servers won’t be able to correlate incidents and raise the alarm when an attack is detected.

Here’s how Gartner defines NGFW:

Next-generation firewalls (NGFWs) are deep-packet inspection firewalls that move beyond port/protocol inspection and blocking to add application-level inspection, intrusion prevention, and bringing intelligence from outside the firewall.

What this means is that an NGFW does an excellent job of detecting when traffic is benign or malicious, and can be configured to analyze traffic and detect anomalies in a variety of situations. A true NGFW looks at northbound/southbound traffic, that is, data entering and leaving the network. It also doesn’t trust anything: The firewall software also examines eastbound/westbound traffic, that is, packets flowing from one asset inside the network to another.
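The north-south/east-west distinction can be sketched in a few lines. This is an illustrative toy, not any vendor's firewall API; the internal address range is an assumption for the example.

```python
# Illustrative sketch of how a firewall classifies a flow's direction
# before applying direction-specific inspection policies.
import ipaddress

# Assumed internal address space for this example.
INTERNAL = ipaddress.ip_network("10.0.0.0/8")

def flow_direction(src: str, dst: str) -> str:
    """Classify a flow as east-west (asset to asset inside the network)
    or north-south (entering or leaving the network)."""
    src_in = ipaddress.ip_address(src) in INTERNAL
    dst_in = ipaddress.ip_address(dst) in INTERNAL
    if src_in and dst_in:
        return "east-west"
    return "north-south"

# An NGFW trusts nothing: east-west traffic is inspected too, not just
# the flows that cross the perimeter.
```

The point of the sketch is the second branch: a perimeter-only appliance never even sees the `east-west` case, which is exactly the traffic a compromised internal asset would generate.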

After all, an intrusion might compromise one asset, and use that as an entry point to compromise other assets, install malware, exfiltrate data, or cause other mischief. Where does the cloud come in? Read my essay for SecurityNow, “Next-Generation Firewalls: Poorly Named but Essential to the Enterprise Network.”

Nine takeaways from the RSA Conference 2018 can give business leaders some perspective on how to think about the latest threats and information security trends. I attended the conference in April, along with more than 42,000 corporate security executives and practitioners, tech vendors, consultants, researchers and law enforcement experts.

In my many conversations, over way too much coffee, these nine topics below kept coming up. Consider these as real-world takeaways from the field:

1. Ransomware presents a real threat to operations

The RSA Conference took place shortly after a big ransomware event shut down some of Atlanta’s online services. The general consensus from practitioners at RSA was that such an attack could happen to any municipality, large or small, and the more that government services are interconnected, the greater the likelihood that a breach in one part of an organization could spill over and affect other systems. Thus, IT must be eternally vigilant to ensure that systems are patched and anti-malware measures are up to date to prevent a breach from spreading horizontally through the organization.

2. Spearphishing is getting more sophisticated

One would think that a CFO would know better than to respond to a midnight email from the CEO saying, “Please wire a million dollars to this overseas account immediately.” One would think that employees would know not to respond to requests from their IT department for a “password audit” and supply their login credentials. Yet those types of scenarios are happening with alarming frequency—enough that when asked what they lose sleep over, many practitioners responded by saying “spearphishing” right after they said “ransomware.”

Spearphishing works because it arrives via carefully written emails. It is sometimes customized to a company or even a person’s role, and capable at times of evading spam filters and other email security software. Spearphishing tricks consumers into logging into fake banking websites, and it tricks employees into giving away money or revealing credentials.

Continuous employee training is the most common solution offered. Another option: strong monitoring that can use machine learning to learn what “normal” is and flag out-of-the-norm behaviors or data access by a person or system.
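That "learn what normal is, flag the rest" idea can be sketched with a simple statistical baseline. A production system would use real machine learning over many signals; the metric and numbers below are invented for illustration.

```python
# Minimal sketch of baseline-and-flag monitoring. A real system would
# model many behavioral signals; this uses one invented metric.
from statistics import mean, stdev

def is_anomalous(history, new_value, threshold=3.0):
    """Flag new_value if it lies more than `threshold` standard
    deviations from the historical mean."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_value != mu
    return abs(new_value - mu) / sigma > threshold

# Hypothetical baseline: files accessed per day by one employee.
daily_file_accesses = [40, 35, 50, 45, 42, 38, 44]
```

A day with 45 accesses stays quiet; a sudden 500-file day trips the flag and gets a human's attention, whether the cause is a phished credential or something benign.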

3. Cryptomining is a growing concern

Cryptomining occurs when hackers manage to install software onto enterprise computers that surreptitiously use processor and memory resources to mine cryptocurrencies. Unlike many other types of malware, cryptomining doesn’t try to disrupt operations or steal data. Instead, the malware wants to stay hidden, invisibly making money (literally) for the hacker for days, weeks, months or years. Again, effective system monitoring could help raise a flag when a company’s computing resources are being abused this way.

Interestingly, while many at RSA were talking about cryptomining, none of the people I talked to had experienced it first-hand. And while everyone agreed that such malware should be blocked, detected and eradicated, some treated cryptomining as a nuisance that is lower in security priority than other threats, like ransomware, spearphishing or other attacks that would steal corporate data.

What about 4-9?

Read the entire list, including thoughts about insider threats and the split between prevention and detection, in my essay for the Wall Street Journal, “9 Practical Takeaways From a Huge Data Security Conference.”

Oracle CEO Mark Hurd is known as an avid tennis fan and supporter of the sport’s development, having played in college at Baylor University. At the Collision Conference last week in New Orleans, Hurd discussed the similar challenges facing tennis players and top corporate executives.

“I like this sport because tennis teaches that you’re out there by yourself,” said Hurd, who was interviewed on stage by CNBC reporter Aditi Roy. “Tennis is like being CEO: You can’t call time out, you can’t bring in a substitute,” Hurd said. “Tennis is a space where you have to go out every day, rain or shine, and you’ve got to perform. It’s just like the business world.”

Performance returned to the center of the conversation when Roy asked about Oracle’s acquisition strategy. Hurd noted that Oracle’s leadership team gives intense scrutiny to acquisitions of any size. “We don’t go out of our way to spend money — it’s our shareholders’ money,” he said. “We also think about dividends and buying stock back.”

When it comes to mergers and acquisitions, Oracle is driven by three top criteria, Hurd said. “First, the company has to fit strategically with where we are going,” he said. “Second, it has to make fiscal sense. And third, we have to be able to effectively run the acquisition.”

Hurd emphasized that he’s focused on the future, not a company’s past performance. “We are looking for companies that will be part of things 5 or 10 years from now, not 5 or 10 years ago,” he said. “We want to move forward, in platforms and applications.”

To a large extent, that future includes artificial intelligence. Hurd was quick to say, “I’m not looking for someone to say, ‘I have an AI solution in the cloud, come to me.’” Rather, Oracle wants to be able to integrate AI directly into its applications, in a way that gives customers clear business returns.

He used the example of employee recruitment. “We recruit 2,000 college students today. It used to be done manually, but now we use machine learning and algorithms to figure out where to source people.” Not only does the AI help find potential employees, but it can help evaluate whether the person would be successful at Oracle. “We could never have done that before,” Hurd added.

Read more about what Hurd said at Collision, including his advice for aspiring CEOs, in my story for Forbes, “Mark Hurd On The Perfect Sport For CEOs — And Other Leadership Insights.”

You can also watch the entire 20-minute interview here.

Microservices are a software architecture that has become quite popular in conjunction with cloud-native applications. Microservices let companies add new tech-powered features or update existing ones more easily—and quite frequently even reduce a product’s operating expenses. A microservices approach does this by making it easier to update a large, complex program without revising the entire application, thereby accelerating the process of software updates.

Think about major enterprise software such as a customer management application. Such programs are often written as a single, monolithic application. Yet some parts of that application are really neatly encapsulated pieces of functionality, such as the function that talks to an order-processing database to create a new order.

In the microservices architecture, developers could write that order-processing logic, including its state, as its own program—a loosely coupled service. The main customer management application would then consist of many such services, which interact through their application programming interfaces (APIs). Here’s what you need to know about the business advantages of a microservices architectural approach, according to Boris Scholl, vice president of development for microservices at Oracle.
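The order-processing example can be sketched as a narrow API contract. Everything here is hypothetical (class name, fields, JSON shape); in production the calls would be HTTP requests to a separately deployed REST service rather than in-process method calls.

```python
# Hypothetical sketch: order processing pulled out behind a narrow,
# JSON-speaking API. The main application depends only on this contract.
import json

class OrderService:
    """Stands in for a separately deployed microservice; in production
    these calls would be HTTP requests to its REST endpoint."""

    def __init__(self):
        self._orders = {}   # the service owns its own state
        self._next_id = 1

    def create_order(self, payload: str) -> str:
        order = json.loads(payload)          # accepts JSON, like REST
        order_id = self._next_id
        self._next_id += 1
        self._orders[order_id] = order
        return json.dumps({"order_id": order_id, "status": "created"})

# The customer-management app knows only the API contract, so the
# service's internals can be rewritten or redeployed independently.
service = OrderService()
resp = json.loads(service.create_order(json.dumps({"sku": "A-100", "qty": 2})))
```

Because the caller touches only the JSON contract, the service behind it can change its database, language, or deployment without the main application noticing.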

1. Microservices can let you add new features faster to your company’s vital applications

Microservices can reduce complexity for your main enterprise applications. By encapsulating each business scenario, such as ordering a product or shopping cart functionality, into its own service, the code base becomes smaller, easier to maintain and easier to test. When you want to add or update a feature, “you can go faster by updating the service, as opposed to having to change the functionality of a very large project,” says Scholl.

Managing and connecting these many microservices might sound complex. However, “we are lucky that the microservices technology is evolving so fast there are infrastructures and platforms that make the heavy lifting easier,” Scholl says.

2. Microservices can let you embrace new, modern technology like artificial intelligence (AI) more easily

Developers can use the programming languages, tools and frameworks that are best suited for each service. Those may not be the same languages, tools and platforms used for other services. For example, consider how you apply a new idea, like today’s machine learning capability, to a customer management program. If the customer management program is built with a monolithic approach using a specific language, it will be very hard to easily integrate the new functionality—not to mention you are bound to the language, framework and even version used in the monolithic application.

However, with a microservices approach, developers might write a machine learning-focused service in the Scala language. They might run that service in a specialized AI-based cloud service that has the hardware speed needed to process huge datasets. That Scala-based service gets easily integrated with the rest of the customer application that might be written in Java or some other language. “You get cost savings because you can use a different technology stack for each service,” says Scholl. “You can use the best technology for the service.”

3. Each part of an application written with microservices can have its own release cadence

This feature relates to speed and also control and governance, which can be important to highly regulated industries. “Perhaps some parts of your application can be updated yearly and that fits your needs,” says Scholl. “Other parts might need to be updated more often, if you are looking to be agile and react faster to the market or to take advantage of new technology.”

Let’s go back to the data analytics and machine learning example. Perhaps new machine-learning technology has become available, allowing data analytics to run in seconds instead of minutes. That opportunity can be exploited by updating the data-analytics microservices. Or say an order-processing system was moved from an on-premises database to a cloud database. In a microservices architecture, all the developers would need to do is update the service that accesses that order processing; the rest of the customer-management application would not need to be changed.

“Think about where you need to update more frequently,” says Scholl. “If you can identify those components that should be on a different release cycle, break off that functionality into a new service. Then, use an API to let that new service talk to the rest of your application.”

Read more, including about scalability and organizational agility, in my essay for the Wall Street Journal, “Tech Strategy: 5 Things CEOs Should Know About Microservices.”

No doubt you’ve heard about blockchain. It’s a distributed digital ledger technology that lets participants add and view blocks of transaction records, but not delete or change them without being detected.

Most of us know blockchain as the foundation of Bitcoin and other digital currencies. But blockchain is starting to enter the business mainstream as the trusted ledger for farm-to-table vegetable tracking, real estate transfers, digital identity management, financial transactions and all manner of contracts. Blockchain can be used for public transactions as well as for private business, inside a company or within an industry group.

What makes the technology so powerful is that there’s no central repository for this ever-growing sequential chain of transaction records, clumped together into blocks. Instead, the ledger is replicated in each participant’s blockchain node, so there is no single point of failure, and no insider threat within a single organization can compromise its integrity.

“Blockchain lets you conduct transactions securely without requiring an intermediary, and records are secure and immutable,” says Mark Rakhmilevich, product management director at Oracle. “It also can eliminate offline reconciliations that can take hours, days or even weeks.”

And while the chain itself should be open for validation by any participant, some chains can be implemented with some form of access control to limit viewing of specific data fields. That way, participants can be permitted to view relevant data, but not everything in the chain.

A customer, for instance, might be able to verify that a contractor has a valid business license. The customer might also see the firm’s registered address and list of complaints—but not see the names of other customers. The state licensing board, on the other hand, may be allowed to access the customer list or see which jobs are currently in progress.

Business models and use cases

Blockchain is well-suited for managing transactions between companies or organizations that may not know each other well and where there’s no implicit or explicit trust. “Blockchain works because it’s peer-to-peer…and it provides an easy-to-track history, which can serve as an audit trail,” Rakhmilevich explains.

What’s more, blockchain smart contracts are ideal for automating manual or semi-automated processes prone to errors or fraud. “Blockchain can help when there might be challenges in proving that the data has not been tampered with or when verifying the source of a particular update or transaction is important,” Rakhmilevich says.

Blockchain has uses in many industries, including banking, securities, government, retail, healthcare, manufacturing and transportation. Take healthcare: Blockchain can provide immutable records on clinical trials. Think about all the data being collected and flowing to the pharmaceutical companies and regulators, all available instantly and from verified participants.

Read more about blockchain in my article for the Wall Street Journal, “Blockchain: It’s All About Business—and Trust.”

Get ready for insomnia. Attackers are finding new techniques, and here are five that will give you nightmares worse than after you watched the slasher film everyone warned you about when you were a kid.

At a panel at the 2018 RSA Conference in San Francisco last week, we learned that these new attack techniques aren’t merely theoretically possible. They’re here, they’re real, and they’re hurting companies today. The speakers on the panel laid out the biggest attack vectors we’re seeing; some are different from those of the past, and others are becoming more common.

Here’s the list:

1. Repositories and cloud storage data leakage

People have been grabbing data from unsecured cloud storage for as long as cloud storage has existed. Now that the cloud is nearly ubiquitous, so are the instances of non-encrypted, non-password-protected repositories on Amazon S3, Microsoft Azure, or Google Cloud Storage.

Ed Skoudis, the Penetration Testing Curriculum Director at the SANS Institute, a security training organization, points to three major flaws here. First, private repositories are accidentally opened to the public. Second, these public repositories are allowed to hold sensitive information, such as encryption keys, user names, and passwords. Third, source code and behind-the-scenes application data can be stored in the wrong cloud repository.

The result? Leakage, if someone happens to find it. And “Hackers are constantly searching for repositories that don’t have the appropriate security,” Skoudis said.
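A toy illustration of the second flaw, showing how trivially exposed credentials can be found once a repository is public. The patterns below are deliberately simplified; production scanners use far richer rule sets and entropy checks.

```python
import re

# Simplified secret patterns, for illustration only.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "password_assignment": re.compile(r"(?i)\bpassword\s*[:=]\s*\S+"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_text(text):
    """Return the names of secret patterns found in a blob of text."""
    return sorted(name for name, pattern in SECRET_PATTERNS.items()
                  if pattern.search(text))
```

Run against every object in a public bucket, a scan like this finds exactly the kind of keys and passwords Skoudis warns about.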

2. Data de-anonymization and correlation

Lots of medical and financial data is shared between businesses. Often that data is anonymized: scrubbed of all personally identifiable information (PII) so that no one can figure out which person a particular data record describes.

Well, that’s the theory, said Skoudis. In reality, if you beg, borrow or steal enough data from many sources (including breaches), you can often correlate the data and figure out which person is described by financial or health data. It’s not easy, because a lot of data and computation resources are required, but de-anonymization can be done, and used for identity theft or worse.
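A toy sketch of the correlation idea, using the classic quasi-identifiers (ZIP code, birth date, sex): neither dataset below names the patient on its own, but joining them does. All records and names here are fabricated for illustration.

```python
# Fabricated example data: a "de-identified" health dataset and a
# public voter roll that happen to share quasi-identifiers.
health_records = [
    {"zip": "02138", "dob": "1945-07-31", "sex": "F", "diagnosis": "flu"},
    {"zip": "90210", "dob": "1980-01-15", "sex": "M", "diagnosis": "asthma"},
]
voter_roll = [
    {"zip": "02138", "dob": "1945-07-31", "sex": "F", "name": "Jane Roe"},
]

QUASI_IDENTIFIERS = ("zip", "dob", "sex")

def deanonymize(anonymous, identified, keys=QUASI_IDENTIFIERS):
    """Join two datasets on shared quasi-identifiers, re-attaching names."""
    index = {tuple(rec[k] for k in keys): rec for rec in identified}
    matches = []
    for rec in anonymous:
        key = tuple(rec[k] for k in keys)
        if key in index:
            matches.append({**rec, **index[key]})
    return matches
```

At real-world scale the join keys are noisier and the datasets far larger, which is why the attack needs substantial data and compute, but the mechanics are exactly this simple.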

3. Monetizing compromised systems using cryptominers

Johannes Ullrich, who runs the SANS Internet Storm Center, said that hackers, like all other criminals, care about selling your stuff. Some want to steal your data, including bank accounts, and sell that to other people, say on the Dark Web. A few years ago, hackers learned how to steal your data and sell it back to you, in the form of ransomware. And now, they’re stealing your computer’s processing power.

What’s the processing power used for? “They’re using your system for crypto-coin mining,” Ullrich said. This became obvious earlier this year, he said, with a PeopleSoft breach where hackers installed a coin miner on thousands of servers – and never touched the PeopleSoft data. Meanwhile, since no data is touched or stolen, the hack could stay undetected for months, maybe years.

Two more

Read the full story, including the two biggest sleep-inhibiting worries, in my story for SecurityNow: “5 New Network Attack Techniques That Will Keep You Awake at Night.”

Is the cloud ready for sensitive data? You bet it is. Some 90% of businesses in a new survey say that at least half of their cloud-based data is indeed sensitive, the kind that cybercriminals would love to get their hands on.

The migration to the cloud can’t come soon enough. About two-thirds of companies in the study say at least one cybersecurity incident has disrupted their operations within the past two years, and 80% say they’re concerned about the threat that cybercriminals pose to their data.

The good news is that 62% of organizations consider the security of cloud-based enterprise applications to be better than the security of their on-premises applications. Another 21% consider it as good. The caveat: Companies must be proactive about their cloud-based data and can’t naively assume that “someone else” is taking care of that security.

Those insights come from a brand-new threat report, the first ever jointly conducted by Oracle and KPMG. The “Oracle and KPMG Cloud Threat Report 2018,” to be released this month at the RSA Conference, fills a unique niche among the vast number of existing threat and security reports, including the well-respected Verizon Data Breach Investigations Report produced annually since 2008.

The difference is the Cloud Threat Report’s emphasis on hybrid cloud, and on organizations lifting and shifting workloads and data into the cloud. “In the threat landscape, you have a wide variety of reports around infrastructure, threat analytics, malware, penetrations, data breaches, and patch management,” says one of the designers of the study, Greg Jensen, senior principal director of Oracle’s Cloud Security Business. “What’s missing is pulling this all together for the journey to the cloud.”

Indeed, 87% of the 450 businesses surveyed say they have a cloud-first orientation. “That’s the kind of trust these organizations have in cloud-based technology,” Jensen says.

Here are data points that break that idea down into more detail:

  • 20% of respondents to the survey say the cloud is much more secure than their on-premises environments; 42% say the cloud is somewhat more secure; and 21% say the cloud is equally secure. Only 21% think the cloud is less secure.
  • 14% say that more than half of their data is in the cloud already, and 46% say that between a quarter and half of their data is in the cloud.

That cloud-based data is increasingly “sensitive,” the survey respondents say. That data includes information collected from customer relationship management systems, personally identifiable information (PII), payment card data, legal documents, product designs, source code, and other types of intellectual property.

Read more, including what cyberattacks say about the “pace gap,” in my essay in Forbes, “Threat Report: Companies Trust Cloud Security.”

Endpoints everywhere! That’s the future, driven by the Internet of Things. When IoT devices are deployed in their billions, network traffic patterns won’t look at all like today’s patterns. Sure, enterprises have a few employees working at home, or use technologies like MPLS (Multi-Protocol Label Switching) or even SD-WAN (Software-Defined Wide-Area Networks) to connect branch offices. For the most part, though, internal traffic remains within the enterprise LAN, and external traffic is driven by end users accessing websites from browsers.

The IoT will change all of that, predicts IHS Markit, one of the industry’s most astute analyst firms. In particular, the IoT will accelerate the growth of colocation (“colo”) facilities, because it will be to everyone’s benefit to place servers closer to the network edge, avoiding the last mile.

To set the stage, IHS Markit forecasts that internet-connectable devices will grow from 27.5 billion in 2017 to 45.4 billion in 2021. That’s a 65% increase in four short years. How will that affect colos? “Data center growth is correlated with general data growth. The more data transmitted via connected devices, the more data centers are needed to store, transfer, and analyze this data,” the analysts say, adding:

In the specific case of the Internet of Things, there’s a need for geographically distributed data centers that can provide low-latency connections to certain connected devices. There are applications, like autonomous vehicles or virtual reality, which are going to require local data centers to manage much of the data processing required to operate.

Most enterprises, however, will not have the means or the business case to build new data centers everywhere. “They will need to turn to colocations to provide quickly scalable, low capital-intensive options for geographically distributed data centers.”

Another trend IHS Markit points to: more local processing, rather than relying on servers in a colo facility, at a cloud provider, or in the enterprise’s own data center. “And thanks to local analytics on devices, and the use of machine learning, a lot of data will never need to leave the device. Which is good news for the network infrastructure of the world that is not yet capable of handling a 65% increase in data traffic, given the inevitable proliferation of devices.”

Four Key Drivers of IoT This Year

The folks at IHS Markit have pointed out four key drivers of IoT growth. They paint a compelling picture, which we can summarize here:

  • Innovation and competitiveness. There are many new wireless models and solutions being released, which means lots of possibility for the future, but confusion in the short term. Companies are also seeing that the location of data is increasingly relevant to competition, and this will drive both on-premises data centers and cloud computing.
  • Business models. As 5G rolls out, it will improve the economies of scale on machine-to-machine communications. This will create new business opportunities for the industry, as well as new security products and services.
  • Standardization and security. Speaking of which, IoT must be secure from the beginning, not only for business reasons, but also for compliance reasons. Soon there will be more IoT devices out there than traditional computing devices, which changes the security equation.
  • Wireless technology innovation. IHS Markit says there are currently more than 400 IoT platform providers, and vendors are working hard to integrate the platforms so that the data can be accessed by app developers. “A key inflection point for the IoT will be the gradual shift from the current ‘Intranets of Things’ deployment model to one where data can be exposed, discovered, entitled and shared with third-party IoT application developers,” says IHS Markit.

The IoT is not new. However, “what is new is it’s now working hand in hand with other transformative technologies like artificial intelligence and the cloud,” said Jenalea Howell, research director for IoT connectivity and smart cities at IHS Markit. “This is fueling the convergence of verticals such as industrial IoT, smart cities and buildings, and the connected home, and it’s increasing competitiveness.”


The purchase order looks legitimate, but does it have all the proper approvals? Many lawyers reviewed this draft contract; is this the latest version? Can we prove that this essential document hasn’t been tampered with before I sign it? Can we prove that these two versions of a document are absolutely identical?

Blockchain might be able to help solve these kinds of everyday trust issues related to documents, especially when they are PDFs—data files created using the Portable Document Format. Blockchain technology is best known for securing financial transactions, including powering new financial instruments such as Bitcoin. But blockchain’s ability to increase trust will likely find enterprise use cases in common, non-financial information exchanges like these document uses.

Joris Schellekens, a software engineer and PDF expert at iText Software in Ghent, Belgium, recently presented his ideas for blockchain-supported documents at Oracle Code Los Angeles. Oracle Code is a series of free events around the world created to bring developers together to share fresh thinking and collaborate on ideas like these.

PDF’s Power and Limitations

The PDF file format was created in the early 1990s by Adobe Systems. PDF was a way to share richly formatted documents whose visual layout, text, and graphics would look the same, no matter which software created them or where they were viewed or printed. The PDF specification became an international standard in 2008.

Early on, Adobe and other companies built security features into PDF files, including password protection, encryption, and digital signatures. In theory, the digital signatures should be able to prove who created, or at least who encrypted, a PDF document. However, depending on the hashing algorithm used, it’s not so difficult to subvert those protections to, for example, change a date/time stamp, or even the document content, says Schellekens. His company, iText Software, markets a software development kit and APIs for creating and manipulating PDFs.
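The underlying idea is simple to sketch: a digest of the document’s bytes changes if anything in the file changes, so tamper detection is only as strong as the hash algorithm behind it. This illustration uses Python’s standard hashlib with made-up document bytes, not iText’s API.

```python
import hashlib

def document_digest(data, algorithm="sha256"):
    """Hex digest of a document's raw bytes. Any change to content or
    metadata changes the digest, provided the algorithm is collision-
    resistant; a broken algorithm (e.g., MD5) lets a forger craft a
    different document that produces the same digest."""
    return hashlib.new(algorithm, data).hexdigest()

# Invented example bytes standing in for two versions of a PDF.
original = b"%PDF-1.7 contract: payment due 2018-06-01"
tampered = b"%PDF-1.7 contract: payment due 2019-06-01"
```

Changing even the single digit of the date yields a completely different digest, which is what makes a strong hash useful for proving two document versions are, or are not, identical.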

“The PDF specification contains the concept of an ID tuple,” or an immutable sequence of data, says Schellekens. “This ID tuple contains timestamps for when the file was created and when it was revised. However, the PDF spec is vague about how to implement these when creating the PDF.”

Even in the case of an unaltered PDF, the protections apply to the entire document, not to various parts of it. Consider a document that must be signed by multiple parties. Since not all certificate authorities store their private keys with equal vigilance, you might lack confidence about who really modified the document (e.g. signed it), at which times, and in which order. Or, you might not be confident that there were no modifications before or after someone signed it.

A related challenge: Signatures to a digital document generally must be made serially, one at a time. The PDF specification doesn’t allow for a document to be signed in parallel by several people (as is common with contract reviews and signatures) and then merged together.

Blockchain has the potential to solve such document problems, and several others besides. Read more in my story for Forbes, “Can Blockchain Solve Your Document And Digital Signature Headaches?”

Albert Einstein famously said, “Everything should be made as simple as possible, but not simpler.” Agile development guru Venkat Subramaniam has a knack for taking that insight and illustrating just how desperately the software development process needs the lessons of Professor Einstein.

As the keynote speaker at the Oracle Code event in Los Angeles—the first in a 14-city tour of events for developers—Subramaniam describes the art of simplicity, and why and how complexity becomes the enemy. While few would argue that complex is better, that’s what we often end up creating, because complex applications or source code may make us feel smart. But if someone says our software design or core algorithm looks simple, well, we feel bad—perhaps the problem was easy and obvious.

Subramaniam, who’s president of Agile Developer and an instructional professor at the University of Houston, urges us instead to take pride in coming up with a simple solution. “It takes a lot of courage to say, ‘we don’t need to make this complex,’” he argues. (See his full keynote, or register for an upcoming Oracle Code event.)

Simplicity Is Not Simple

Simplicity is hard to define, so let’s start by considering what simple is not, says Subramaniam. In most cases, our first attempts at solving a problem won’t be simple at all. The most intuitive solution might be overly verbose, or inefficient, or perhaps difficult to understand, even by its programmers after the fact.

Simple is not clever. Clever software, or clever solutions, may feel worthwhile, and might cause people to pat developers on the back. But ultimately, it’s hard to understand, and can be hard to change later. “Clever code is self-obfuscating,” says Subramaniam, meaning that it can be incomprehensible. “Even programmers can’t understand their clever code a week later.”

Simple is not necessarily familiar. Subramaniam insists that we are drawn to the old, comfortable ways of writing software, even when those methods are terribly inefficient. He mentioned someone who wrote code with 70 “if/then” questions in a series—because it was familiar. But it certainly wasn’t simple, and would be nearly impossible to debug or modify later. Something that we’re not familiar with may actually be simpler than what we’re comfortable with. To fight complexity, Subramaniam recommends learning new approaches and staying up with the latest thinking and the latest paradigms.
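A condensed version of that anti-pattern, next to a table-driven alternative; the domain and values are invented for illustration.

```python
# Familiar but hard to maintain: a long if/elif chain
# (imagine 70 branches like these).
def shipping_cost_chain(region):
    if region == "US":
        return 5
    elif region == "EU":
        return 8
    elif region == "APAC":
        return 12
    else:
        raise ValueError("unknown region: " + region)

# Simpler: the same logic as one lookup table, extended by adding
# a row rather than yet another branch.
SHIPPING_COST = {"US": 5, "EU": 8, "APAC": 12}

def shipping_cost(region):
    if region not in SHIPPING_COST:
        raise ValueError("unknown region: " + region)
    return SHIPPING_COST[region]
```

The table version may be less familiar to someone raised on if/then chains, but it is both shorter and far easier to debug and modify later.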

Simple is not over-engineered. Sometimes you can overthink the problem. Perhaps that means trying to develop a generalized algorithm that can be reused to solve many problems, when the situation calls for a fast, basic solution to a single problem. Subramaniam cited Occam’s Razor: When choosing between two solutions, the simplest may be the best.

Simple is not terse. Program source code should be concise, meaning that it’s small but also clearly communicates the programmer’s intent. By contrast, something that’s terse may still execute correctly when compiled into software, but the human understanding may be lost. “Don’t confuse terse with concise,” warns Subramaniam. “Both are really small, but terse code is waiting to hurt you when you least expect it.”
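An invented example of the distinction: both versions below are equally short, but only the second states its intent.

```python
# Terse: small, but the reader must decode it.
def f(xs): return sum(1 for x in xs if x % 2) / len(xs) if xs else 0

# Concise: still small, and the intent is on the surface.
def odd_fraction(numbers):
    """Return the fraction of the numbers that are odd."""
    if not numbers:
        return 0
    odd_count = sum(1 for n in numbers if n % 2 != 0)
    return odd_count / len(numbers)
```

Both compute the same result; only one will still make sense to its author a week later.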

Read more in my essay, “Practical Advice To Whip Complexity And Develop Simpler Software.”

As the saying goes, you can’t manage what you don’t measure. In a data-driven organization, the best tools for measuring performance are business intelligence (BI) and analytics engines, which require data. Data warehouses often provide the source of that data, by rolling up and summarizing key information from a variety of sources, and that explains why they continue to play such a crucial role in business.

Data warehouses, which are themselves relational databases, can be complex to set up and manage on a daily basis. They typically require significant human involvement from database administrators (DBAs). In a large enterprise, a team of DBAs ensures that the data warehouse is extracting data from disparate data sources and accommodating new and changed data sources—and that the extracted data is summarized properly and stored in a structured manner that can be handled by other applications, including those BI and analytics tools.

On top of that, the DBAs manage the data warehouse’s infrastructure: everything from server processor utilization and storage efficiency to data security and backups.

However, the labor-intensive nature of data warehouses is about to change, with the advent of Oracle Autonomous Data Warehouse Cloud, announced in October 2017. The self-driving, self-repairing, self-tuning functionality of Oracle’s Data Warehouse Cloud is good for the organization—and good for the DBAs.

Data-driven organizations need timely, up-to-date business intelligence. This can feed instant decision-making, short-term predictions and business adjustments, and long-term strategy. If the data warehouse goes down, slows down, or lacks some information feeds, the impact can be significant. No data warehouse may mean no daily operational dashboards and reports, or inaccurate dashboards or reports.

For C-level executives, Autonomous Data Warehouse can improve the value of the data warehouse. This boosts the responsiveness of business intelligence and other important applications, by improving availability and performance.

Stop worrying about uptime. Forget about disk-drive failures. Move beyond performance tuning. DBAs, you have a business to optimize.

Read more in my article, “Autonomous Capabilities Will Make Data Warehouses — And DBAs — More Valuable.”

The “throw it over the wall” problem is familiar to anyone who’s seen designers and builders create something that can’t actually be deployed or maintained out in the real world. In the tech world, avoiding this problem is a big part of what gave rise to DevOps.

DevOps combines “development” and “IT operations.” It refers to a set of practices that help software developers and IT operations staff work better, together. DevOps emerged about a decade ago with the goal of tearing down the silos between the two groups, so that companies can get new apps and features out the door faster, with fewer mistakes and less downtime in production.

DevOps is now widely accepted as a good idea, but that doesn’t mean it’s easy. It requires cultural shifts by two departments that not only have different working styles and toolsets, but where the teams may not even know or respect each other.

When DevOps is properly embraced and implemented, it can help get better software written more quickly. DevOps can make applications easier and less expensive to manage. It can simplify the process of updating software to respond to new requirements. Overall, a DevOps mindset can make your organization more competitive because you can respond quickly to problems, opportunities and industry pressures.

Is DevOps the right strategic fit for your organization? Here are six CEO-level insights about DevOps to help you consider that question:

  1. DevOps can and should drive business agility. DevOps often means supporting a more rapid rate of change in terms of delivering new software or updating existing applications. And it doesn’t just mean programmers knock out code faster. It means getting those new apps or features fully deployed and into customers’ hands. “A DevOps mindset represents development’s best ability to respond to business pressures by quickly bringing new features to market and we drive that rapid change by leveraging technology that lets us rewire our apps on an ongoing basis,” says Dan Koloski, vice president of product management at Oracle.

For the full story, see my essay for the Wall Street Journal, “Tech Strategy: 6 Things CEOs Should Know About DevOps.”

Blockchain is a distributed digital ledger technology in which blocks of transaction records can be added and viewed—but can’t be deleted or changed without detection. Here’s where the name comes from: a blockchain is an ever-growing sequential chain of transaction records, clumped together into blocks. There’s no central repository of the chain, which is replicated in each participant’s blockchain node, and that’s what makes the technology so powerful. Yes, blockchain was originally developed to underpin Bitcoin and is essential to the trust required for users to trade digital currencies, but that is only the beginning of its potential.

Blockchain neatly solves the problem of ensuring the validity of all kinds of digital records. What’s more, blockchain can be used for public transactions as well as for private business, inside a company or within an industry group. “Blockchain lets you conduct transactions securely without requiring an intermediary, and records are secure and immutable,” says Mark Rakhmilevich, product management director at Oracle. “It also can eliminate offline reconciliations that can take hours, days, or even weeks.”

That’s the power of blockchain: an immutable digital ledger for recording transactions. It can be used to power anonymous digital currencies—or farm-to-table vegetable tracking, business contracts, contractor licensing, real estate transfers, digital identity management, and financial transactions between companies or even within a single company.

“Blockchain doesn’t have to just be used for accounting ledgers,” says Rakhmilevich. “It can store any data, and you can use programmable smart contracts to evaluate and operate on this data. It provides nonrepudiation through digitally signed transactions, and the stored results are tamper proof. Because the ledger is replicated, there is no single source of failure, and no insider threat within a single organization can impact its integrity.”

It’s All About Distributed Ledgers

Several simple concepts underpin any blockchain system. The first is the block, which is a batch of one or more transactions, grouped together and hashed. The hashing process produces an error-checking and tamper-resistant code that will let anyone viewing the block see if it has been altered. The block also contains the hash of the previous block, which ties them together in a chain. The backward hashing makes it extremely difficult for anyone to modify a single block without detection.
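The backward-hashing idea can be sketched in a few lines. This is a minimal illustration, not any production blockchain’s actual block format.

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's full contents, which include the previous hash.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(transactions, prev_hash):
    return {"transactions": transactions, "prev_hash": prev_hash}

def verify_chain(blocks):
    """True only if every block's hash matches its successor's prev_hash."""
    return all(blocks[i + 1]["prev_hash"] == block_hash(blocks[i])
               for i in range(len(blocks) - 1))

genesis = make_block(["alice pays bob 5"], prev_hash="0" * 64)
second = make_block(["bob pays carol 2"], prev_hash=block_hash(genesis))
chain = [genesis, second]
```

Altering any transaction in an earlier block changes that block’s hash, which no longer matches the `prev_hash` recorded in its successor, so the tampering is immediately detectable.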

A chain contains collections of blocks, which are stored on decentralized, distributed servers; the more servers, the better. Every server contains the same set of blocks and the latest values of information, such as account balances. Multiple transactions are handled within a single block using a structure called a Merkle tree, or hash tree, which provides fault and fraud tolerance: if a server goes down, or if a block or chain is corrupted, the missing data can be reconstructed by polling other servers’ chains.
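A minimal Merkle-root computation shows how many transactions collapse into one tamper-evident hash: leaves are hashed, then pairs of hashes are hashed level by level until a single root remains (duplicating the last hash on odd-sized levels, as Bitcoin does).

```python
import hashlib

def sha256(data):
    return hashlib.sha256(data).digest()

def merkle_root(transactions):
    """Root hash of a Merkle tree over a list of transaction strings.
    Changing any single transaction changes the root."""
    if not transactions:
        return sha256(b"").hex()
    level = [sha256(tx.encode()) for tx in transactions]
    while len(level) > 1:
        if len(level) % 2:            # odd count: duplicate the last hash
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0].hex()
```

Because only the root needs to be stored in the block, a server can verify or reconstruct any transaction by fetching its siblings along the tree from other nodes.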

And while the chain itself should be open for validation by any participant, some chains can be implemented with some form of access control to limit viewing of specific data fields. That way, participants can view relevant data, but not everything in the chain. A customer might be able to verify that a contractor has a valid business license and see the firm’s registered address and list of complaints—but not see the names of other customers. The state licensing board, on the other hand, may be allowed to access the customer list or see which jobs are currently in progress.

When originally conceived, blockchain had a narrow set of protocols. They were designed to govern the creation of blocks, the grouping of hashes into the Merkle tree, the viewing of data encapsulated into the chain, and the validation that data has not been corrupted or tampered with. Over time, creators of blockchain applications (such as the many competing digital currencies) innovated and created their own protocols—which, due to their independent evolutionary processes, weren’t necessarily interoperable. By contrast, the success of general-purpose blockchain services, which might encompass computing services from many technology, government, and business players, created the need for industry standards—such as Hyperledger, a Linux Foundation project.

Read more in my feature article in Oracle Magazine, March/April 2018, “It’s All About Trust.”

Far too many companies fail to learn anything from security breaches. According to CyberArk, cybersecurity inertia is putting organizations at risk. Nearly half — 46% — of enterprises say their security strategy rarely changes substantially, even after a cyberattack.

That data comes from the organization’s new Global Advanced Threat Landscape Report 2018. The researchers surveyed 1,300 IT security decision-makers, DevOps and app developer professionals, and line-of-business owners in seven countries.

The Cloud is Unsecured

Cloud computing is a major focus of this report, and the study results are scary. CyberArk says, “Automated processes inherent in cloud environments are responsible for prolific creation of privileged credentials and secrets. These credentials, if compromised, can give attackers a crucial jumping-off point to achieve lateral access across networks, data and applications — whether in the cloud or on-premises.”

The study shows that:

  • 50% of IT professionals say their organization stores business-critical information in the cloud, including revenue-generating customer-facing applications
  • 43% say they commit regulated customer data to the cloud
  • 49% of respondents have no privileged account security strategy for the cloud

While we haven’t yet seen major breaches caused by tech failures of cloud vendors, we have seen many, many examples of customer errors with the cloud. Those errors, such as posting customer information to public cloud storage services without encryption or proper password control, have allowed open access to private information.

CyberArk’s view is dead right: “There are still gaps in the understanding of who is responsible for security in the cloud, even though the public cloud vendors are very clear that the enterprise is responsible for securing cloud workloads. Additionally, few understand the full impact of the unsecured secrets that proliferate in dynamic cloud environments and automated processes.”

In other words, nobody is stepping up to the plate. (Perhaps cloud vendors should scan their customers’ files and warn them if they are uploading unsecured files. Nah. That’ll never happen – because if there’s a failure of that monitoring system, the cloud vendor could be held liable for the breach.)

Endpoint Security Is Neglected

I was astonished that the CyberArk study shows only 52% of respondents keep their operating system and patches current. Yikes. It’s conventional wisdom that maintaining patches is about the lowest-hanging of the low-hanging fruit. Unpatched servers have been easy pickings for hackers over the past few years.

CyberArk’s analysis appears accurate here: “End users deploy a lot of technologies to protect endpoints, and they face many competing factors. These include compliance drivers, end-user usability, endpoint configuration management and an increasingly highly mobile and remote user base, all of which make visibility and control harder. With advanced malware attacks over the past year including WannaCry and NotPetya, there is certainly room for greater prioritization around blocking credential theft as a critical step to preventing attackers from gaining access to the network and initiating lateral movement.”

Many Threats, Poor Planning

According to the study, the greatest cybersecurity threats expected by IT professionals are:

  • Targeted phishing attacks (56%)
  • Insider threats (51%)
  • Ransomware or malware (48%)
  • Unsecured privileged accounts (42%)
  • Unsecured data stored in the cloud (41%)

Meanwhile, 37% of respondents say they store user passwords in Excel spreadsheets or in Word docs (hopefully not in the cloud).

Back to the cloud for a moment. The study says that “Almost all (94%) security respondents say their organizations store and serve data using public cloud services. And they are increasingly likely to entrust cloud providers with much more sensitive data than in the past. For instance, half (50%) of IT professionals say their organization stores business-critical information in the cloud, including revenue-generating customer-facing applications, and 43% say they commit regulated customer data to the cloud.”

And all that, with far too many companies reporting poor security practices when it comes to the cloud. Expect more breaches. Lots more.

Don’t be misled by the name: Serverless cloud computing contains servers. Lots of servers. What makes serverless “serverless” is that developers, IT administrators and business leaders don’t have to think about those servers. Ever.

In the serverless model, online computing power gets tapped automatically only at the moment it’s needed. This can save organizations money and, just as importantly, make the IT organization more agile when it comes to building and launching new applications. That’s why serverless has the potential to be a game-changer for enterprise.

“Serverless is the next logical step for computing,” says Bob Quillin, Oracle vice president of developer relations. “We went from a data center where you own everything, to the cloud with shared servers and centralized infrastructure, to serverless, where you don’t even care about the servers themselves.”

In the serverless model, developers write and deploy what are called “functions.” Those are slimmed-down applications that take one action, such as processing an e-commerce order or recording that a shipment arrived. They run those functions directly on the cloud, using technology that eliminates the need to manage the servers, since it delivers computing power the moment that a function gets called into action.
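To make the idea concrete, here’s a minimal sketch of what such a function might look like, modeled loosely on the handler signature used by common function-as-a-service platforms (the event shape and function name are illustrative assumptions, not any particular vendor’s API):

```python
import json

def handle_order(event, context):
    """A single-purpose serverless function: process one e-commerce order.

    'event' carries the trigger payload; 'context' carries runtime metadata.
    The platform invokes this only when an order arrives, then reclaims the
    compute resources afterward.
    """
    order = json.loads(event["body"])
    total = sum(item["price"] * item["qty"] for item in order["items"])
    # A real deployment would persist the order to a managed datastore here.
    return {
        "statusCode": 200,
        "body": json.dumps({"order_id": order["id"], "total": total}),
    }
```

The function holds no state between invocations, which is what lets the platform spin compute up and down per call.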

Both the economics and the speed-of-development benefits of serverless cloud computing are compelling. Here are four CEO-level insights from Quillin for thinking about serverless computing.

First: Serverless can save real money. In the old data center model, says Quillin, organizations had to buy and maintain expensive servers, infrastructure and real estate.

In a traditional cloud model, organizations turn that capital expense into an operating one by provisioning virtualized servers and infrastructure. That saves money compared with the old data center model, Quillin says, but “you are typically paying for compute resources that are running all the time—in increments of CPU hours at least.” If you create a cluster of cloud servers, you don’t typically build it up and break it down every day, and certainly not every hour, as needed. That’s just too much management and orchestration for most organizations.

Serverless, on the other hand, essentially lets you pay only for exactly the time that a workload runs. For closing the books, it may be a once-a-month charge for a few hours of computing time. For handling transactions, it might be a few tenths of a second whenever a customer makes a sale or an Internet of Things (IoT) device sends data.
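A back-of-envelope comparison makes the billing difference tangible. The prices below are illustrative assumptions, not quotes from any provider:

```python
# Compare an always-on virtual server with pay-per-use serverless billing.
# All prices here are illustrative assumptions, not vendor rates.

HOURS_PER_MONTH = 730
VM_PRICE_PER_HOUR = 0.10               # always-on virtual server
SERVERLESS_PRICE_PER_SECOND = 0.00005  # billed only while a function runs

def monthly_vm_cost():
    """An always-on server bills for every hour, busy or idle."""
    return HOURS_PER_MONTH * VM_PRICE_PER_HOUR

def monthly_serverless_cost(invocations, seconds_per_invocation):
    """Serverless bills only for the seconds a function actually runs."""
    return invocations * seconds_per_invocation * SERVERLESS_PRICE_PER_SECOND

# Closing the books: one three-hour run per month.
closing_cost = monthly_serverless_cost(1, 3 * 3600)
# Transaction handling: 100,000 calls of 0.2 seconds each.
transaction_cost = monthly_serverless_cost(100_000, 0.2)
```

At these assumed rates, the always-on server costs about $73 a month while either serverless workload costs around a dollar, which is the gap Quillin is pointing at.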

For the rest of the list, and the full story, see my essay for the Wall Street Journal, “4 Things CEOs Should Know About Serverless Computing.”

DevOps is a technology discipline well-suited to cloud-native application development. When it takes only a few mouse clicks to create or manage cloud resources, why wouldn’t developers and IT operations teams work in sync to get new apps out the door and in front of users faster? The DevOps culture and tactics have done much to streamline everything from coding to software testing to application deployment.

Yet far from every organization has embraced DevOps, and not every organization that has tried DevOps has found the experience transformative. Perhaps that’s because the idea is relatively young (the term was coined around 2009), suggests Javed Mohammed, systems community manager at Oracle, or perhaps because different organizations are at such different spots in DevOps’ technology adoption cycle. That idea—about where we are in the adoption of DevOps—became a central theme of a recent podcast discussion among tech experts. Following are some highlights.

Confusion about DevOps can arise because DevOps affects dev and IT teams in many ways. “It can apply to the culture piece, to the technology piece, to the process piece—and even how different teams interact, and how all of the different processes tie together,” says Nicole Forsgren, founder and CEO of DevOps Research and Assessment LLC and co-author of Accelerate: The Science of Lean Software and DevOps.

The adoption and effectiveness of DevOps within a team depends on where each team is, and where organizations are. One team might be narrowly focused on the tech used to automate software deployment to the public, while another is looking at the culture and communication needed to release new features on a weekly or even daily basis. “Everyone is at a very, very different place,” Forsgren says.

Indeed, says Forsgren, some future-thinking organizations are starting to talk about what ‘DevOps Next’ is, extending the concept of developer-led operations beyond common best practices. At the same time, in other companies, there’s no DevOps. “DevOps isn’t even on their radar,” she sighs. Many experts, including Forsgren, see that DevOps is here, is working, and is delivering real value to software teams today—and is helping businesses create and deploy better software faster and less expensively. That’s especially true when it comes to cloud-native development, or when transitioning existing workloads from the data center into the cloud.

Read more in my essay, “DevOps: Sometimes Incredibly Transformative, Sometimes Not So Much.”

The VPN model of extending security through enterprise firewalls is dead, and the future now belongs to the Software Defined Perimeter (SDP). Firewalls imply that there’s an inside to the enterprise, a place where devices can communicate in a trusted manner. This being so, there must also be an outside where communications aren’t trusted. Residing between the two is that firewall which decides which traffic can egress and which can enter following deep inspection, based on scans and policies.

What about trusted applications requiring direct access to corporate resources from outside the firewall? That’s where Virtual Private Networks came in, by offering a way to punch a hole in the firewall. VPNs are a complex mechanism for using encryption and secure tunnels to bridge multiple networks, such as a head-office and regional office network. They can also temporarily allow remote users to become part of the network.

VPNs are well established but perceived as difficult to configure on the endpoints, hard for IT to manage and challenging to scale for large deployments. There are also issues of software compatibility: not everything works through a VPN. Putting it bluntly, almost nobody likes VPNs and there is now a better way to securely connect mobile applications and Industrial Internet of Things (IIoT) devices into the world of datacenter servers and cloud-based applications.

Authenticate Then Connect

The Software Defined Perimeter depends on a rigorous process of identity verification of both client and server using a secure control channel, thereby replacing the VPN. The negotiation for trustworthy identification is based on cryptographic protocols like Transport Layer Security (TLS) which succeeds the old Secure Sockets Layer (SSL).

With identification and trust established by both parties, a secure data channel can be provisioned with specified bandwidth and quality. For example, the data channel might require very low latency and minimal jitter for voice messaging or it might need high bandwidth for streaming video, or alternatively be low-bandwidth and low-cost for data backups.
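In practice, the authenticate-before-connect handshake leans on standard TLS machinery. The sketch below, using Python’s standard ssl module, shows the shape of it: the client refuses to exchange any application data until the server’s identity is verified, and optionally presents its own certificate for mutual trust (the function names and certificate file paths are illustrative):

```python
import socket
import ssl

def make_sdp_client_context(ca_file=None, cert_file=None, key_file=None):
    """Control-channel trust setup: require verification of the server,
    and optionally load a client certificate so both parties can prove
    their identity (mutual TLS)."""
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile=ca_file)
    ctx.verify_mode = ssl.CERT_REQUIRED  # refuse unverified peers
    ctx.check_hostname = True
    if cert_file:
        ctx.load_cert_chain(certfile=cert_file, keyfile=key_file)
    return ctx

def connect_after_authenticating(host, port, ctx):
    """The TLS handshake runs inside wrap_socket; if identity checks
    fail, an exception is raised before any application data flows."""
    raw = socket.create_connection((host, port))
    return ctx.wrap_socket(raw, server_hostname=host)
```

Only once the handshake succeeds does the returned socket become the data channel.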

On the client side, the trust negotiation and data channel can be tied to a specific mobile application, perhaps on an employee’s phone or tablet. The corporate customer account management app needs trusted access to the corporate database server, but no other app on the phone should be granted access.

SDP is based on the notion of authenticate-before-connect, which reminds me of the reverse-charge phone calls of the distant past. Bob would ask the operator to place a reverse-charge call to his aunt Sally at a specified number. The operator placing the call would chat with Sally over the equivalent of the control channel. Only if the operator believed she was talking to Sally, and provided Sally accepted the charges, would the operator establish the Bob-to-Sally connection, which is the equivalent of the SDP data channel.

Read more in my essay for Network Computing, “Forget VPNs: the future is SDP.”

Companies can’t afford downtime. Employees need access to their applications and data 24/7, and so do other business applications, manufacturing and logistics management systems, and security monitoring centers. Anyone who thinks that the brute force effort of their hard-working IT administrators is enough to prevent system downtime just isn’t facing reality.

Traditional systems administrators and their admin tools can’t keep up with the complexity inherent in any modern enterprise. A recent survey of the Oracle Applications Users Group has found that despite significant progress in systems management automation, many customers still report that more than 80% of IT issues are first discovered and reported by users. The number of applications is spiraling up, while data increases at an even more rapid rate.

The boundaries between systems are growing more complex, especially with cloud-based and hybrid-cloud architectures. That reality is why Oracle, after analyzing a survey of its industry-leading customers, recently predicted that by 2020, more than 80% of application infrastructure operations will be managed autonomously.

Autonomously is an important word here. It means not only doing mundane day-to-day tasks including monitoring, tuning, troubleshooting, and applying fixes automatically, but also detecting and rapidly resolving issues. Even when it comes to the most complex problems, machines can simplify the analysis—sifting through the millions of possibilities to present simpler scenarios, to which people then can apply their expertise and judgment of what action to take.

Oracle asked about the kinds of activities that IT system administrators perform on a daily, weekly, and monthly basis—things such as password resets, system reboots, software patches, and the like.

Expect that IT teams will soon reduce by several orders of magnitude the number of situations like those that need manual intervention. If an organization typically has 20,000 human-managed interventions per year, humans will need to touch only 20. The rest will be handled through systems that can apply automation combined with machine learning, which can analyze patterns and react faster than human admins to enable preventive maintenance, performance optimization, and problem resolution.

Read more in my article for Forbes, “Prediction: 80% of Routine IT Operations Will Soon Be Solved Autonomously.”

Not a connected car.

Nobody wants bad guys to be able to hack connected cars. Equally important, they shouldn’t be able to hack any part of the multi-step communications path that leads from the connected car to the Internet to cloud services – and back again. Fortunately, companies are working across the automotive and security industries to make sure that doesn’t happen.

The consequences of cyberattacks against cars range from the bad to the horrific: Hackers might be able to determine that a driver is not home, and sell that information to robbers. Hackers could access accounts and passwords, and be able to leverage that information for identity theft, or steal information from bank accounts. Hackers might be able to immobilize vehicles, or modify/degrade the functionality of key safety features like brakes or steering. Hackers might even be able to seize control of the vehicle, and cause accidents or terrorist incidents.

Horrific. Thankfully, companies like semiconductor leader Micron Technology, along with communication security experts NetFoundry, have a plan – and are partnering with vehicle manufacturers to embed secure, trustworthy hardware into connected cars. The result: Safety. Security. Trust. Vroom.

It starts with the Internet of Things

The IoT consists of autonomous computing units, connected to back-end services via the Internet. Those back-end services are often in the cloud, and in the case of connected cars, might offer anything from navigation to infotainment to preventive maintenance to firmware upgrades for built-in automotive features. Often, the back-end services are offered through the automobile’s manufacturer, though they may be provisioned through third-party providers.

The communications chain for connected cars is lengthy. On the car side, it begins with an embedded component (think stereo head unit, predictive front-facing radar used for adaptive cruise control, or anti-lock brake monitoring system). The component will likely contain or be connected to an ECU – an embedded control unit, a circuit board with a microprocessor, firmware, RAM, and a network connection. The ECU, in turn, is connected to an in-vehicle network, which connects to a communications gateway.

That communications gateway talks to a telecommunications provider, which could change as the vehicle crosses service provider or national boundaries. The telco links to the Internet, the Internet links to a cloud provider (such as Amazon Web Services), and from there, there are services that talk to the automotive systems.

Trust is required at all stages of the communications. The vehicle must be certain that its embedded devices, ECUs, and firmware are not corrupted or hacked. The gateway needs to know that it’s talking to the real car and its embedded systems – not fakes or duplicates offered by hackers. It also needs to know that the cloud services are the genuine article, and not fakes. And of course, the cloud services must be assured that they are talking to the real, authenticated automotive gateway and in-vehicle components.
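One common building block for that kind of device-to-gateway trust is a challenge-response exchange keyed to a secret stored in the device’s secure hardware. This is a simplified sketch of the general technique, not a description of any particular vendor’s scheme:

```python
import hashlib
import hmac
import os

def issue_challenge():
    """The gateway sends a fresh random nonce so a recorded reply
    can't simply be replayed later by an attacker."""
    return os.urandom(16)

def device_response(device_secret, challenge):
    """The in-vehicle component proves it holds the secret (ideally
    kept in tamper-resistant hardware) without ever transmitting it."""
    return hmac.new(device_secret, challenge, hashlib.sha256).digest()

def gateway_verify(device_secret, challenge, response):
    """Constant-time comparison avoids leaking information through
    timing differences."""
    expected = hmac.new(device_secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```

The same pattern works in the other direction, letting the car verify the gateway and cloud services before accepting commands or firmware.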

Read more about this in my feature for Business Continuity, “Building Cybertrust into the Connected Car.”

Farewell, Prius

A sad footnote to this blog post. Our faithful Prius, pictured above, was totaled in a collision. Nobody was injured, which is the most important thing, but the car is gone. May it rust in piece.

We all have heard the usual bold predictions for technology in 2018: Lots of cloud computing, self-driving cars, digital cryptocurrencies, 200-inch flat-screen televisions, and versions of Amazon’s Alexa smart speaker everywhere on the planet. Those types of predictions, however, are low-hanging fruit. They’re not bold. One might as well predict that there will be some sunshine, some rainy days, a big cyber-bank heist, and at least one smartphone catching fire.

Let’s dig for insights beyond the blindingly obvious. I talked to several tech leaders, deep-thinking individuals in California’s Silicon Valley, asking them for their predictions, their idea of new trends, and disruptions in the tech industry. Let’s see what caught their eye.

Gary Singh, VP of marketing, OnDot Systems, believes that 2018 will be the year when mobile banking will transform into digital banking — which is more disruptive than one would expect. “The difference between digital and mobile banking is that mobile banking is informational. You get information about your accounts,” he said. Singh continued, “But in terms of digital banking, it’s really about actionable insights, about how do you basically use your funds in the most appropriate way to get the best value for your dollar or your pound in terms of how you want to use your monies. So that’s one big shift that we would see start to happen from mobile to digital.”

Tom Burns, Vice President and General Manager of Dell EMC Networking, has been following Software-Defined Wide Area Networks. SD-WAN is a technology that allows enterprise WANs to thrive over the public Internet, replacing expensive fixed-point connections provisioned by carriers using technologies like MPLS. “The traditional way of connecting branches in office buildings and providing services to those particular branches is going to change,” Burns observed. “If you look at the traditional router, a proprietary architecture, dedicated lines. SD-WAN is offering a much lower cost but same level of service opportunity for customers to have that data center interconnectivity or branch connectivity providing some of the services, maybe a full even office in the box, but security services, segmentation services, at a much lower cost basis.”

NetFoundry’s co-founder, Mike Hallett, sees a bright future for Application Specific Networks, which link applications directly to cloud or data center applications. The focus is on the application, not on the device. “For 2018, when you think of the enterprise and the way they have to be more agile, flexible and faster to move to markets, particularly going from what I would call channel marketing to, say, direct marketing, they are going to need application-specific networking technologies.” Hallett explains that Application Specific Networks offer the ability to be able to connect from an application, from a cloud, from a device, from a thing, to any application or other device or thing quickly and with agility. Indeed, those connections, which are created using software, not hardware, could be created “within minutes, not within the weeks or months it might take, to bring up a very specific private network, being able to do that. So the year of 2018 will see enterprises move towards software-only networking.”

Mansour Karam, CEO and founder of Apstra, also sees software taking over the network. “I really see massive software-driven automation as a major trend. We saw technologies like intent-based networking emerge in 2017, and in 2018, they’re going to go mainstream,” he said.

There’s more

There are predictions around open networking, augmented reality, artificial intelligence – and more. See my full story in Upgrade Magazine, “From SD-WAN to automation to white-box switching: Five tech predictions for 2018.”

The pattern of cloud adoption moves something like the ketchup bottle effect: You tip the bottle and nothing comes out, so you shake the bottle and suddenly you have ketchup all over your plate.

That’s a great visual from Frank Munz, software architect and cloud evangelist at Munz & More, in Germany. Munz and a few other leaders in the Oracle community were interviewed on a podcast by Bob Rhubart, Architect Community Manager at Oracle, about the most important trends they saw in 2017. The responses covered a wide range of topics, from cloud to blockchain, from serverless to machine learning and deep learning.

During the 44-minute session, “What’s Hot? Tech Trends That Made a Real Difference in 2017,” the panel took some fascinating detours into the future of self-programming computers and the best uses of container technologies like Kubernetes. For those, you’ll need to listen to the podcast.

The panel included: Frank Munz; Lonneke Dikmans, chief product officer of eProseed, Netherlands; Lucas Jellema, CTO, AMIS Services, Netherlands; Pratik Patel, CTO, Triplingo, US; and Chris Richardson, founder and CEO, Eventuate, US. The program was recorded in San Francisco at Oracle OpenWorld and JavaOne.

The cloud’s tipping point

The ketchup quip reflects the cloud passing a tipping point of adoption in 2017. “For the first time in 2017, I worked on projects where large, multinational companies give up their own data center and move 100% to the cloud,” Munz said. These workload shifts are far from a rarity. As Dikmans said, the cloud drove the biggest change and challenge: “[The cloud] changes how we interact with customers, and with software. It’s convenient at times, and difficult at others.”

Security offered another way of looking at this tipping point. “Until recently, organizations had the impression that in the cloud, things were less secure and less well managed, in general, than they could do themselves,” said Jellema. Now, “people have come to realize that they’re not particularly good at specific IT tasks, because it’s not their core business.” They see that cloud vendors, whose core business is managing that type of IT, can often do those tasks better.

In 2017, the idea of shifting workloads en masse to the cloud and decommissioning data centers became mainstream and far less controversial.

But wait, there’s more! See about Blockchain, serverless computing, and pay-as-you-go machine learning, in my essay published in Forbes, “Tech Trends That Made A Real Difference In 2017.”

With lots of inexpensive, abundant computation resources available, nearly anything becomes possible. For example, you can process a lot of network data to identify patterns, gather intelligence, and produce insight that can be used to automate networks. The road to Intent-Based Networking Systems (IBNS) and Application-Specific Networks (ASN) is a journey. That’s the belief of Rajesh Ghai, Research Director of Telecom and Carrier IP Networks at IDC.

Ghai defines IBNS as a closed-loop continuous implementation of several steps:

  • Declaration of intent, where the network administrator defines what the network is supposed to do
  • Translation of intent into network design and configuration
  • Validation of the design, using a model that decides whether that configuration can actually be implemented
  • Propagation of that configuration into the network devices via APIs
  • Gathering and study of real-time telemetry from all the devices
  • Use of machine learning to determine whether the desired state of policy has been achieved—and then repeat
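Those steps can be sketched as a control loop; the function names below are illustrative placeholders, not any vendor’s API:

```python
def ibns_control_loop(intent, translate, validate, propagate,
                      collect_telemetry, evaluate):
    """An illustrative shape for the IBNS closed loop described above.
    Each argument is a pluggable step supplied by the platform."""
    design = translate(intent)            # intent -> design and configuration
    if not validate(design):              # can this design be implemented?
        raise ValueError("design cannot be implemented as specified")
    propagate(design)                     # push config to devices via APIs
    while True:
        telemetry = collect_telemetry()   # real-time state from all devices
        if evaluate(intent, telemetry):   # e.g., a machine-learning model
            return telemetry              # desired state achieved
        propagate(translate(intent))      # adjust and repeat the loop
```

In a real system the evaluate step would run continuously rather than to completion, but the closed-loop shape is the same.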

Related to that concept, Ghai explains, is ASN. “It’s also a concept which is software control and optimization and automation. The only difference is that ASN is more applicable to distributed applications over the internet than IBNS.”

IBNS Operates Networks as One System

“Think of intent-based networking as software that sits on top of your infrastructure and focusing on the networking infrastructure, and enables you to operate your network infrastructure as one system, as opposed to box per box,” explained Mansour Karam, Founder, CEO of Apstra, which offers IBNS solutions for enterprise data centers.

“To achieve this, we have to start with intent,” he continued. “Intent is both the high-level business outcomes that are required by the business, but then also we think of intent as applying to every one of those stages. You may have some requirements in how you want to build.”

Karam added, “Validation includes tests that you would run — we call them expectations — to validate that your network indeed is behaving as you expected, as per intent. So we have to think of a sliding scale of intent and then we also have to collect all the telemetry in order to close the loop and continuously validate that the network does what you want it to do. There is the notion of state at the core of an IBNS that really boils down to managing state at scale and representing it in a way that you can reason about the state of your system, compare it with the desired state and making the right adjustments if you need to.”

The upshot of IBNS, Karam said: “If you have powerful automation you’re taking the human out of the equation, and so you get a much more agile network. You can recoup the revenues that otherwise you would have lost, because you’re unable to deliver your business services on time. You will reduce your outages massively, because 80% of outages are caused by human error. You reduce your operational expenses massively, because organizations spend $4 operating every dollar of CapEx, and 80% of it is manual operations. So if you take that out you should be able to recoup easily your entire CapEx spend on IBNS.”

ASN Gives Each Application Its Own Network

“Application-Specific Networks, like Intent-Based Networking Systems, enable digital transformation, agility, speed, and automation,” explained Galeal Zino, Founder of NetFoundry, which offers an ASN platform.

He continued, “ASN is a new term, so I’ll start with a simple analogy. ASNs are like private clubs — very, very exclusive private clubs — with exactly two members, the application and the network. ASN literally gives each application its own network, one that’s purpose-built and driven by the specific needs of that application. ASN merges the application world and the network world into software which can enable digital transformation with velocity, with scale, and with automation.”

Read more in my new article for Upgrade Magazine, “Manage smarter, more autonomous networks with Intent-Based Networking Systems and Application Specific Networking.”