The trash truck rumbles down the street, and its cameras pour video into the city’s data lake. An AI-powered application mines that image data looking for graffiti—and advises whether to dispatch a fully equipped paint crew or a squad with just soap and brushes.

Meanwhile, cameras on other city vehicles could feed the same data lake so another application detects piles of trash that should be collected. That information is used by an application to send the right clean-up squad. Citizens, too, can get into the act, by sending cell phone pictures of graffiti or litter to the city for AI-driven processing.

Applications like these provide the vision for the Intelligent Internet of Things Integration Consortium (I3), a new initiative launched by the University of Southern California (USC), the City of Los Angeles, and a number of other stakeholders from research and industry. At USC, I3 is jointly managed by three institutes: the Institute for Communication Technology Management (CTM), the Center for Cyber-Physical Systems and the Internet of Things (CCI), and the Integrated Media Systems Center (IMSC).

“We’re trying to make the I3 Consortium a big tent,” says Jerry Power, assistant professor at the USC Marshall School of Business’s Institute for Communication Technology Management (CTM). Power serves as executive director of the consortium. “Los Angeles is a founding member, but we’re talking to other cities and vendors. We want lots of people to participate in the process, whether a startup or a super-large corporation.”

As of now, there are 24 members of the consortium, including USC’s Viterbi School of Engineering and Marshall School of Business. And companies are contributing resources. Oracle’s Startup for Higher Education program, for example, is providing $75,000 a year in cloud infrastructure services to support the I3 Consortium’s first three years of development work.

The I3 Consortium needs a lot of computing power. The consortium helps cities move beyond data silos, where information is confined to individual departments such as transportation and sanitation, to an environment where data flows among departments, can be managed more easily, and can incorporate contributions from residents and even other governmental or commercial data providers. That information is consolidated into a city’s data lake, which AI-powered applications across departments can access.

The I3 Consortium will provide a vehicle to manage the data flow into the data lake. Cyrus Shahabi, a professor at USC’s Viterbi School of Engineering and director of its Integrated Media Systems Center (IMSC), is using Oracle Cloud credits to build computation-intensive applications that train AI-based, deep learning neural networks and draw on real-time, I3-driven data lakes to recognize issues, such as graffiti or garbage, that require action.


Read more about the I3 Consortium in my story for Forbes, “How AI Could Tackle City Problems Like Graffiti, Trash, And Fires.”

Users care passionately about their software being fast and responsive. You need to give your applications both 0-60 speed and the strongest long-term endurance. Here are 14 guidelines for choosing a deployment platform to optimize performance, whether your application runs in the data center or the cloud.

Faster! Faster! Faster! That killer app won’t earn your company a fortune if the software is slow as molasses. Sure, your development team did the best it could to write server software that offers the maximum performance, but that doesn’t mean diddly if those bits end up on a pokey old computer that’s gathering cobwebs in the server closet.

Users don’t care where it runs as long as it runs fast. Your job, in IT, is to make the best choices possible to enhance application speed, including deciding if it’s best to deploy the software in-house or host it in the cloud.

When choosing an application’s deployment platform, there are 14 things you can do to maximize the opportunity for the best overall performance. But first, let’s make two assumptions:

  • These guidelines apply only to choosing the best data center or cloud-based platform, not to choosing the application’s software architecture. The job today is simply to find the best place to run the software.
  • I presume that if you are talking about a cloud deployment, you are choosing infrastructure as a service (IaaS) instead of platform as a service (PaaS). What’s the difference? In PaaS, the host provides the platform, such as Windows or Linux with a .NET or Java stack; all you do is supply the application. In IaaS, you provide, install, and configure the operating system yourself, giving you more control over the installation.

Here’s the checklist

  1. Run the latest software. Whether in your data center or in the IaaS cloud, install the latest version of your preferred operating system, the latest core libraries, and the latest application stack. (That’s one reason to go with IaaS, since you can control updates.) If you can’t control this yourself, because you’re assigned a server in the data center, pick the server that has the latest software foundation.
  2. Run the latest hardware. Assuming we’re talking about the x86 architecture, look for the latest Intel Xeon processors, whether in the data center or in the cloud. As of mid-2018, I’d want servers running the Xeon E5 v3 or later, or the E7 v4 or later. If you use anything older than that, you’re not getting the most out of your applications or taking advantage of the hardware chipset. For example, some E7 v4 chips have significantly improved instructions-per-CPU-cycle processing, which is a huge benefit. Similarly, if you choose AMD or another processor, look for the latest chip architectures.
  3. If you are using virtualization, make sure the server has the best and latest hypervisor. The hypervisor is key to a virtual machine’s (VM) performance—but not all hypervisors are created equal. Many of the top hypervisors have multiple product lines as well as configuration settings that affect performance (and security). There’s no way to know which hypervisor is best for any particular application. So, assuming your organization lets you make the choice, test, test, test. However, in the not-unlikely event you are required to go with the company’s standard hypervisor, make sure it’s the latest version.
  4. Take Spectre and Meltdown into account. The patches for Spectre and Meltdown slow down servers, but the extent of the performance hit depends on the server, the server’s firmware, the hypervisor, the operating system, and your application. It would be nice to give an overall number, such as expect a 15 percent hit (a number that’s been bandied about, though some dispute its accuracy). However, there’s no way to know except by testing. Thus, it’s important to know if your server has been patched. If it hasn’t been yet, expect application performance to drop when the patch is installed. (If it’s not going to be patched, find a different host server!)
  5. Base the number of CPUs and cores and the clock speed on the application requirements. If your application and its core dependencies (such as the LAMP stack or the .NET infrastructure) are heavily threaded, the software will likely perform best on servers with multiple CPUs, each equipped with the greatest number of cores—think 24 cores. However, if the application is not particularly threaded or runs in a not-so-well-threaded environment, you’ll get the biggest bang with the absolute top clock speeds on an 8-core server.
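To see why the thread profile matters, here’s a minimal Java sketch (the workload is hypothetical) that sizes a worker pool to however many logical cores the server exposes. On a 24-core machine it spins up 24 workers; on an 8-core machine, just 8.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolSizing {
    public static void main(String[] args) {
        // Ask the JVM how many logical cores this server exposes.
        int cores = Runtime.getRuntime().availableProcessors();
        System.out.println("Logical cores available: " + cores);

        // A heavily threaded workload benefits from one worker per core;
        // a poorly threaded one would rather have fewer, faster cores.
        ExecutorService pool = Executors.newFixedThreadPool(cores);
        for (int i = 0; i < cores; i++) {
            final int task = i;
            pool.submit(() -> System.out.println(
                    "Task " + task + " on " + Thread.currentThread().getName()));
        }
        pool.shutdown();
    }
}
```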

But wait, there’s more!

Read the full list of 14 recommendations in my story for HPE Enterprise.nxt, “Checklist: Optimizing application performance at deployment.”

You wouldn’t enjoy paying a fine of 4 percent of your company’s annual worldwide revenue. But that’s the potential penalty if your company is found in violation of the European Union’s new General Data Protection Regulation (GDPR), which goes into effect May 25, 2018. As you’ve probably read, organizations anywhere in the world are subject to GDPR if they have customers in the EU and store any of those customers’ personal data.

GDPR compliance is a complex topic. It’s too much for one article — heck, books galore are being written about it, seminars abound, and GDPR consultants are on every street corner.

One challenge is that GDPR is a regulation, not a how-to guide. It’s big on explaining penalties for failing to detect and report a data breach in a sufficiently timely manner. It’s not big on telling you how to detect that breach. Rather than tell you what to do, let’s see what could go wrong with your GDPR plans—to help you avoid that 4 percent penalty.

First, the ground rules: GDPR’s overarching goal is to protect citizens’ privacy. In particular, the regulation pertains to anything that can be used to directly or indirectly identify a person. Such data can be anything: a name, a photo, an email address, bank details, social network posts, medical information, or even a computer IP address. To that end, data breaches that may pose a risk to individuals must be disclosed to the authorities within 72 hours and to the affected individuals soon thereafter.

What does that mean? As part of the regulations, individuals must have the ability to see what data you have about them, correct that data if appropriate, or have that data deleted, again if appropriate. (If someone owes you money, they can’t ask you to delete that record.)
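In engineering terms, those rights need explicit entry points in your systems. Here’s a minimal, hypothetical Java sketch of what such an interface might look like; the names and signatures are mine, purely illustrative, not anything mandated by the regulation.

```java
import java.util.Map;
import java.util.Optional;

// Hypothetical sketch of the data-subject rights GDPR implies.
// Names and signatures are illustrative, not from any standard.
public interface DataSubjectRights {

    // Right of access: show individuals what data you hold about them.
    Map<String, Object> exportPersonalData(String subjectId);

    // Right to rectification: correct a stored field, if appropriate.
    void correctPersonalData(String subjectId, String field, Object newValue);

    // Right to erasure: delete the record unless a legal basis (such as
    // an unpaid invoice) requires retention; return the reason if refused.
    Optional<String> requestErasure(String subjectId);
}
```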

Enough preamble. Let’s get into ten common problems.

First: Your privacy and data retention policies aren’t compliant with GDPR

There’s no specific policy wording required by GDPR. However, the policies must meet the overall objectives of GDPR, as well as the requirements of any other jurisdictions in which you operate (such as the United States). What would Alan do? Look at policies from big multinationals that do business in Europe and copy what they do, working with your legal team. You’ve got to get it right.

Second: Your actual practices don’t match your privacy policy

It’s easy to create a compliant privacy policy but hard to ensure your company actually is following it. Do you claim that you don’t store IP addresses? Make sure you’re not. Do you claim that data about a European customer is never stored in a server in the United States? Make sure that’s truly the case.

For example, let’s say you store information about German customers in Frankfurt. Great. But if that data is backed up to a server in Toronto, maybe not great.

Third: Your third-party providers aren’t honoring your GDPR responsibilities

Let’s take that customer data in Frankfurt. Perhaps you have a third-party provider in San Francisco that does data analytics for you, or that runs credit reports or handles image resizing. In those processes, does your customer data ever leave the EU? Even if it stays within the EU, is it protected in ways that are compliant with GDPR and other regulations? It’s your responsibility to make sure: While you might sue a supplier for a breach, that won’t cancel out your own primary responsibility to protect your customers’ privacy.

A place to start with compliance: Do you have an accurate, up-to-date listing of all third-party providers that ever touch your data? You can’t verify compliance if you don’t know where your data is.

But wait, there’s more

You can read the entire list of common GDPR failures in my story for HPE Enterprise.nxt, “10 ways to fail at GDPR compliance.”

Chapter One: Christine Hall

Should the popular Linux operating system be referred to as “Linux” or “GNU/Linux”? It’s a thing, or at least it used to be, writes my friend Christine Hall in her aptly named article, “Is It Linux or GNU/Linux?,” published in Linux Journal on May 11:

Some may remember that the Linux naming convention was a controversy that raged from the late 1990s until about the end of the first decade of the 21st century. Back then, if you called it “Linux”, the GNU/Linux crowd was sure to start a flame war with accusations that the GNU Project wasn’t being given due credit for its contribution to the OS. And if you called it “GNU/Linux”, accusations were made about political correctness, although operating systems are pretty much apolitical by nature as far as I can tell.

Christine (aka Bride of Linux) quotes a number of learned people. That includes Steven J. Vaughan-Nichols, one of the top experts in the politics of open-source software – and frequent critic of the antics of Richard M. Stallman (aka RMS) who founded the Free Software Foundation, and who insists that everyone call the software GNU/Linux.

Here’s what Steven (aka SJVN) said in the article:

“Enough already”, he said. “RMS tried, and failed, to create an operating system: Hurd. He and the Free Software Foundation’s endless attempts to plaster his GNU name to the work of Linus Torvalds and the other Linux kernel developers is disingenuous and an insult to their work. RMS gets credit for EMACS, GPL, and GCC. Linux? No.”

Another humble luminary sought out by Christine: Yours truly.

“For me it’s always, always, always, always Linux,” said Alan Zeichick, an analyst at Camden Associates who frequently speaks, consults and writes about open-source projects for the enterprise. “One hundred percent. Never GNU/Linux. I follow industry norms.”

To make a long story short: In the article, the consensus was for Linux, not GNU/Linux.

Chapter Two: figosdev

But then someone going by the handle “figosdev” authored a rebuttal, “Debunking the Usual Omission of GNU,” published on Techrights. To make a long story short, he believes that the operating system should be called GNU/Linux. Here’s my favorite part of figosdev’s missive (which was written in all lower-case):

ive heard about gnu and linux about a million times in over a decade. as of today ive heard of alan zeichick once, and camden associates (what do they even do?) once. im just going to call them linux, its the more popular term.

Riiight. figosdev has never heard of me, fine (I’m the founder of SD Times, but figosdev probably never heard of that either). On the other hand, at least figosdev knows my name. I have no idea who figosdev is, except to infer that he/she/it is a developer on the fig component compiler project, since he/she/it is hiding behind a handle. And that brings me to…

Chapter Three: Richi Jennings

Christine Hall’s article sparked a lively debate on Twitter, and my friend Richi Jennings (quoted in the original article) weighed in as part of it.

Let’s end the story here, at least for now. Linux forever!

Microservices are a software architecture that has become quite popular in conjunction with cloud-native applications. Microservices allow companies to add new tech-powered features or update existing ones more easily, and quite frequently they reduce a product’s operating expenses. A microservices approach does this by making it easier to update a large, complex program without revising the entire application, thereby accelerating the process of software updates.

Think about major enterprise software, such as a customer management application. Such programs are often written as a single, monolithic application. Yet some parts of that application could be treated as neatly encapsulated functionality, such as the function that talks to an order-processing database to create a new order.

In the microservices architecture, developers could write that order-processing service, including its state, as its own program: a loosely coupled service. The main customer management application would consist of many such services, interacting through the services’ application programming interfaces (APIs). Here’s what you need to know about the business advantages of a microservices architectural approach, according to Boris Scholl, vice president of development for microservices at Oracle.
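To make that concrete, here’s a minimal sketch of what such a loosely coupled order service could look like, using only the HTTP server built into the JDK. The endpoint and payload are hypothetical; a production service would add persistence, validation, and authentication.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;

// A hypothetical order-processing microservice: one narrow responsibility,
// exposed to the rest of the application through a small HTTP API.
public class OrderService {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/orders", exchange -> {
            // The service's single job: accept an order and acknowledge it.
            byte[] body = "{\"status\":\"created\"}".getBytes();
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(201, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start();
        System.out.println("Order service listening on :8080/orders");
    }
}
```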

1. Microservices can let you add new features faster to your company’s vital applications

Microservices can reduce complexity for your main enterprise applications. By encapsulating each business scenario, such as ordering a product or shopping cart functionality, into its own service, the code base becomes smaller, easier to maintain, and easier to test. When you want to add or update a feature, “you can go faster by updating the service, as opposed to having to change the functionality of a very large project,” says Scholl.

Managing and connecting these many microservices might sound complex. However, “we are lucky that the microservices technology is evolving so fast there are infrastructures and platforms that make the heavy lifting easier,” Scholl says.

2. Microservices can let you embrace new, modern technology like artificial intelligence (AI) more easily

Developers can use the programming languages, tools and frameworks that are best suited for each service. Those may not be the same languages, tools and platforms used for other services. For example, consider how you apply a new idea, like today’s machine learning capability, to a customer management program. If the customer management program is built with a monolithic approach using a specific language, it will be very hard to easily integrate the new functionality—not to mention you are bound to the language, framework and even version used in the monolithic application.

However, with a microservices approach, developers might write a machine learning-focused service in the Scala language. They might run that service in a specialized AI-based cloud service that has the hardware speed needed to process huge datasets. That Scala-based service gets easily integrated with the rest of the customer application that might be written in Java or some other language. “You get cost savings because you can use a different technology stack for each service,” says Scholl. “You can use the best technology for the service.”

3. Each part of an application written with microservices can have its own release cadence

This feature relates to speed and also control and governance, which can be important to highly regulated industries. “Perhaps some parts of your application can be updated yearly and that fits your needs,” says Scholl. “Other parts might need to be updated more often, if you are looking to be agile and react faster to the market or to take advantage of new technology.”

Let’s go back to the data analytics and machine learning example. Perhaps new machine-learning technology has become available, allowing data analytics to run in seconds instead of minutes. That opportunity can be exploited by updating the data-analytics microservices. Or say an order-processing system was moved from an on-premises database to a cloud database. In a microservices architecture, all the developers would need to do is update the service that accesses that order processing; the rest of the customer-management application would not need to be changed.

“Think about where you need to update more frequently,” says Scholl. “If you can identify those components that should be on a different release cycle, break off that functionality into a new service. Then, use an API to let that new service talk to the rest of your application.”

Read more, including about scalability and organizational agility, in my essay for the Wall Street Journal, “Tech Strategy: 5 Things CEOs Should Know About Microservices.”

Is the cloud ready for sensitive data? You bet it is. Some 90% of businesses in a new survey say that at least half of their cloud-based data is indeed sensitive, the kind that cybercriminals would love to get their hands on.

The migration to the cloud can’t come soon enough. About two-thirds of companies in the study say at least one cybersecurity incident has disrupted their operations within the past two years, and 80% say they’re concerned about the threat that cybercriminals pose to their data.

The good news is that 62% of organizations consider the security of cloud-based enterprise applications to be better than the security of their on-premises applications. Another 21% consider it as good. The caveat: Companies must be proactive about their cloud-based data and can’t naively assume that “someone else” is taking care of that security.

Those insights come from a brand-new threat report, the first ever jointly conducted by Oracle and KPMG. The “Oracle and KPMG Cloud Threat Report 2018,” to be released this month at the RSA Conference, fills a unique niche among the vast number of existing threat and security reports, including the well-respected Verizon Data Breach Investigations Report produced annually since 2008.

The difference is the Cloud Threat Report’s emphasis on hybrid cloud, and on organizations lifting and shifting workloads and data into the cloud. “In the threat landscape, you have a wide variety of reports around infrastructure, threat analytics, malware, penetrations, data breaches, and patch management,” says one of the designers of the study, Greg Jensen, senior principal director of Oracle’s Cloud Security Business. “What’s missing is pulling this all together for the journey to the cloud.”

Indeed, 87% of the 450 businesses surveyed say they have a cloud-first orientation. “That’s the kind of trust these organizations have in cloud-based technology,” Jensen says.

Here are data points that break that idea down into more detail:

  • 20% of respondents to the survey say the cloud is much more secure than their on-premises environments; 42% say the cloud is somewhat more secure; and 21% say the cloud is equally secure. Only 21% think the cloud is less secure.
  • 14% say that more than half of their data is in the cloud already, and 46% say that between a quarter and half of their data is in the cloud.

That cloud-based data is increasingly “sensitive,” the survey respondents say. That data includes information collected from customer relationship management systems, personally identifiable information (PII), payment card data, legal documents, product designs, source code, and other types of intellectual property.

Read more, including what cyberattacks say about the “pace gap,” in my essay in Forbes, “Threat Report: Companies Trust Cloud Security.”

Asking “which is the best programming language” is like asking about the most important cooking tool in your kitchen. Mixer? Spatula? Microwave? Cooktop? Measuring cup? Egg timer? Lemon zester? All are critical, depending on what you’re making, and how you like to cook.

The same is true with programming languages. Some are best at coding applications that run natively on mobile devices — think Objective-C or Java. Others are good at encoding logic within a PDF file, or on a web page — think JavaScript. And still others are best at coding fast applications for virtual machines or running directly on the operating system — for many people, that’s C or C++. Want a general-purpose language? Think Python or PHP. Specialized? R and MATLAB are good for statistics and data analytics. And so on.

Last summer, IEEE Spectrum offered its take, surveying its audience and writing up the “2017 Top Programming Languages.” The top 10 languages for the typical reader:

  1. Python
  2. C
  3. Java
  4. C++
  5. C#
  6. R
  7. JavaScript
  8. PHP
  9. Go
  10. Swift

The story’s author, Stephen Cass, noted not much change in the most popular languages: “Python has continued its upward trajectory from last year and jumped two places to the No. 1 slot, though the top four—Python, C, Java, and C++—all remain very close in popularity.”

What Do The PYPL Say?

The IEEE Spectrum annual survey isn’t the only game in town. The PYPL (PopularitY of Programming Language) index uses raw data from Google Trends to see how often people search for language tutorials. The people behind PYPL say, “If you believe in collective wisdom, the PYPL Popularity of Programming Language index can help you decide which language to study, or which one to use in a new software project.”

Here’s their Top 10:

  1. Java
  2. Python
  3. JavaScript
  4. PHP
  5. C#
  6. C
  7. R
  8. Objective-C
  9. Swift
  10. MATLAB

Asking the RedMonk

Stephen O’Grady describes RedMonk’s Programming Language Rankings, as of January 2018, as being based on two key external sources:

We extract language rankings from GitHub and Stack Overflow, and combine them for a ranking that attempts to reflect both code (GitHub) and discussion (Stack Overflow) traction. The idea is not to offer a statistically valid representation of current usage, but rather to correlate language discussion and usage in an effort to extract insights into potential future adoption trends.
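Out of curiosity, here’s a toy Java sketch of one way to combine two ranking sources (averaging each language’s position). RedMonk actually plots and correlates the sources rather than averaging them, so treat this strictly as an illustration, with made-up positions:

```java
import java.util.Comparator;
import java.util.Map;

public class CombinedRanks {
    public static void main(String[] args) {
        // Hypothetical rank positions on each source (1 = most popular).
        Map<String, Integer> github = Map.of("JavaScript", 1, "Java", 3, "Python", 2);
        Map<String, Integer> stackOverflow = Map.of("JavaScript", 2, "Java", 1, "Python", 3);

        // Order languages by their mean rank across both sources.
        github.keySet().stream()
              .sorted(Comparator.comparingDouble((String lang) ->
                      (github.get(lang) + stackOverflow.get(lang)) / 2.0))
              .forEach(lang -> System.out.printf("%s: mean rank %.1f%n",
                      lang, (github.get(lang) + stackOverflow.get(lang)) / 2.0));
    }
}
```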

The top languages found by RedMonk look similar to PYPL and IEEE Spectrum:

  1. JavaScript
  2. Java
  3. Python
  4. PHP
  5. C#
  6. C++
  7. CSS
  8. Ruby
  9. C
  10. Swift & Objective-C (tied)

Use the Best Tool for the Job

It would be tempting to use data like this to say, “From now on, everything we’re doing will be in Java,” or “We’re going to do all web coding in JavaScript and use C++ for applications.” Don’t do that. That would be like saying, “We’re going to make everything in the microwave.” Sometimes you want the microwave, sure, but sometimes you want the crockpot, or the regular oven, or sous vide, or the propane grill in your back yard.

The goal is productivity. Use agile processes like Scrum to determine what your development teams are going to build, where those applications will run, and which features must be included. Then let the developers choose the languages that fit best – and that includes supporting experimentation. Let them use R. Let them do some coding in Python if it improves productivity and gets a better job done faster.


Albert Einstein famously said, “Everything should be made as simple as possible, but not simpler.” Agile development guru Venkat Subramaniam has a knack for taking that insight and illustrating just how desperately the software development process needs the lessons of Professor Einstein.

As the keynote speaker at the Oracle Code event in Los Angeles—the first in a 14-city tour of events for developers—Subramaniam describes the art of simplicity, and why and how complexity becomes the enemy. Few would argue that complex is better, yet complexity is often what we end up creating, because complex applications or source code may make us feel smart. But if someone says our software design or core algorithm looks simple, well, we feel bad—perhaps the problem was too easy and obvious.

Subramaniam, who’s president of Agile Developer and an instructional professor at the University of Houston, urges us instead to take pride in coming up with a simple solution. “It takes a lot of courage to say, ‘we don’t need to make this complex,’” he argues. (See his full keynote, or register for an upcoming Oracle Code event.)

Simplicity Is Not Simple

Simplicity is hard to define, so let’s start by considering what simple is not, says Subramaniam. In most cases, our first attempts at solving a problem won’t be simple at all. The most intuitive solution might be overly verbose, or inefficient, or perhaps difficult to understand, even by its programmers after the fact.

Simple is not clever. Clever software, or clever solutions, may feel worthwhile, and might cause people to pat developers on the back. But ultimately, it’s hard to understand, and can be hard to change later. “Clever code is self-obfuscating,” says Subramaniam, meaning that it can be incomprehensible. “Even programmers can’t understand their clever code a week later.”

Simple is not necessarily familiar. Subramaniam insists that we are drawn to the old, comfortable ways of writing software, even when those methods are terribly inefficient. He mentioned someone who wrote code with 70 “if/then” questions in a series—because it was familiar. But it certainly wasn’t simple, and would be nearly impossible to debug or modify later. Something that we’re not familiar with may actually be simpler than what we’re comfortable with. To fight complexity, Subramaniam recommends learning new approaches and staying up with the latest thinking and the latest paradigms.

Simple is not over-engineered. Sometimes you can overthink the problem. Perhaps that means trying to develop a generalized algorithm that can be reused to solve many problems, when the situation calls for a fast, basic solution to a single problem. Subramaniam cited Occam’s Razor: When choosing between two solutions, the simplest may be the best.

Simple is not terse. Program source code should be concise, which means that it’s small but still clearly communicates the programmer’s intent. By contrast, something that’s terse may still execute correctly when compiled into software, but the human understanding may be lost. “Don’t confuse terse with concise,” warns Subramaniam. “Both are really small, but terse code is waiting to hurt you when you least expect it.”
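Here’s a small Java illustration of the difference, using a made-up task (summing the even numbers in an array); both versions are short, but only one reveals its intent:

```java
import java.util.Arrays;

public class TerseVsConcise {
    // Terse: compressed to the point of hiding what it does.
    static int f(int[] a){int s=0;for(int x:a)s+=x%2==0?x:0;return s;}

    // Concise: just as small in spirit, but the intent is on the surface.
    static int sumOfEvens(int[] values) {
        return Arrays.stream(values).filter(v -> v % 2 == 0).sum();
    }

    public static void main(String[] args) {
        int[] data = {1, 2, 3, 4};
        System.out.println(f(data));          // prints 6
        System.out.println(sumOfEvens(data)); // prints 6
    }
}
```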

Read more in my essay, “Practical Advice To Whip Complexity And Develop Simpler Software.”

As the saying goes, you can’t manage what you don’t measure. In a data-driven organization, the best tools for measuring performance are business intelligence (BI) and analytics engines, and those engines require data. That explains why data warehouses continue to play such a crucial role in business: they often provide the source of that data, by rolling up and summarizing key information from a variety of sources.

Data warehouses, which are themselves relational databases, can be complex to set up and manage on a daily basis, and they typically require significant human involvement from database administrators (DBAs). In a large enterprise, a team of DBAs ensures that the data warehouse is extracting data from those disparate data sources and accommodating new and changed data sources—and that the extracted data is summarized properly and stored in a structured manner that can be handled by other applications, including those BI and analytics tools.

On top of that, the DBAs manage the data warehouse’s infrastructure. That includes server processor utilization, storage efficiency, data security, backups, and more.

However, the labor-intensive nature of data warehouses is about to change, with the advent of Oracle Autonomous Data Warehouse Cloud, announced in October 2017. The self-driving, self-repairing, self-tuning functionality of Oracle’s Data Warehouse Cloud is good for the organization—and good for the DBAs.

Data-driven organizations need timely, up-to-date business intelligence, which can feed instant decision-making, short-term predictions and business adjustments, and long-term strategy. If the data warehouse goes down, slows down, or lacks some information feeds, the impact can be significant: no daily operational dashboards and reports, or inaccurate ones.

For C-level executives, Autonomous Data Warehouse can increase the value of the data warehouse, boosting the responsiveness of business intelligence and other important applications by improving availability and performance.

Stop worrying about uptime. Forget about disk-drive failures. Move beyond performance tuning. DBAs, you have a business to optimize.

Read more in my article, “Autonomous Capabilities Will Make Data Warehouses — And DBAs — More Valuable.”

The “throw it over the wall” problem is familiar to anyone who’s seen designers and builders create something that can’t actually be deployed or maintained out in the real world. In the tech world, avoiding this problem is a big part of what gave rise to DevOps.

DevOps combines “development” and “IT operations.” It refers to a set of practices that help software developers and IT operations staff work better, together. DevOps emerged about a decade ago with the goal of tearing down the silos between the two groups, so that companies can get new apps and features out the door faster, with fewer mistakes and less downtime in production.

DevOps is now widely accepted as a good idea, but that doesn’t mean it’s easy. It requires cultural shifts by two departments that not only have different working styles and toolsets, but where the teams may not even know or respect each other.

When DevOps is properly embraced and implemented, it can help get better software written more quickly. DevOps can make applications easier and less expensive to manage. It can simplify the process of updating software to respond to new requirements. Overall, a DevOps mindset can make your organization more competitive because you can respond quickly to problems, opportunities and industry pressures.

Is DevOps the right strategic fit for your organization? Here are six CEO-level insights about DevOps to help you consider that question:

  1. DevOps can and should drive business agility. DevOps often means supporting a more rapid rate of change in terms of delivering new software or updating existing applications. And it doesn’t just mean programmers knock out code faster. It means getting those new apps or features fully deployed and into customers’ hands. “A DevOps mindset represents development’s best ability to respond to business pressures by quickly bringing new features to market, and we drive that rapid change by leveraging technology that lets us rewire our apps on an ongoing basis,” says Dan Koloski, vice president of product management at Oracle.

For the full story, see my essay for the Wall Street Journal, “Tech Strategy: 6 Things CEOs Should Know About DevOps.”

Simplified Java coding. Less garbage. Faster programs. Those are among the key features in the newly released Java 10, which arrived in developers’ hands only six months after the debut of Java 9 in September.

This pace is a significant change from Java’s previous cycle of one large release every two to three years. With its faster release cadence, Java is poised to provide developers with innovations twice every year, making the language and platform more attractive and competitive. Instead of waiting for a huge omnibus release, the Java community can choose to include new features as soon as those features are ready, in the next six-month Java release train. This gives developers access to the latest APIs, functions, language additions, and JVM updates much faster than ever before.

Java 10 is the first release on the new six-month schedule. It builds incrementally on the significant new functionality that appeared in Java 9, which had a multiyear gestation period.

Java 10 delivers 12 JDK Enhancement Proposals (JEPs). Here’s the complete list:

  • Local-Variable Type Inference
  • Consolidate the JDK Forest into a Single Repository
  • Garbage-Collector Interface
  • Parallel Full GC for G1
  • Application Class-Data Sharing
  • Thread-Local Handshakes
  • Remove the Native-Header Generation Tool (javah)
  • Additional Unicode Language-Tag Extensions
  • Heap Allocation on Alternative Memory Devices
  • Experimental Java-Based JIT Compiler
  • Root Certificates
  • Time-Based Release Versioning

For a deeper look at three of the most significant JEPs (Local-Variable Type Inference, Parallel Full GC for G1, and the Experimental Java-Based JIT Compiler), see my essay for Forbes, “What Java 10 And Java’s New 6-Month Release Cadence Mean For Developers.”
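Local-Variable Type Inference, to pick one, is easy to demonstrate. With Java 10’s new var keyword, the compiler infers a local variable’s type from its initializer. This minimal sketch compiles and runs on Java 10:

```java
import java.util.ArrayList;

public class VarDemo {
    public static void main(String[] args) {
        var names = new ArrayList<String>(); // inferred as ArrayList<String>
        names.add("Java 10");
        names.add("JEP 286");

        for (var name : names) {             // var also works in for loops
            System.out.println(name.toUpperCase());
        }
    }
}
```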

Blockchain is a distributed digital ledger technology in which blocks of transaction records can be added and viewed—but can’t be deleted or changed without detection. Here’s where the name comes from: a blockchain is an ever-growing sequential chain of transaction records, clumped together into blocks. There’s no central repository of the chain, which is replicated in each participant’s blockchain node, and that’s what makes the technology so powerful. Yes, blockchain was originally developed to underpin Bitcoin and is essential to the trust required for users to trade digital currencies, but that is only the beginning of its potential.

Blockchain neatly solves the problem of ensuring the validity of all kinds of digital records. What’s more, blockchain can be used for public transactions as well as for private business, inside a company or within an industry group. “Blockchain lets you conduct transactions securely without requiring an intermediary, and records are secure and immutable,” says Mark Rakhmilevich, product management director at Oracle. “It also can eliminate offline reconciliations that can take hours, days, or even weeks.”

That’s the power of blockchain: an immutable digital ledger for recording transactions. It can be used to power anonymous digital currencies—or farm-to-table vegetable tracking, business contracts, contractor licensing, real estate transfers, digital identity management, and financial transactions between companies or even within a single company.

“Blockchain doesn’t have to just be used for accounting ledgers,” says Rakhmilevich. “It can store any data, and you can use programmable smart contracts to evaluate and operate on this data. It provides nonrepudiation through digitally signed transactions, and the stored results are tamper proof. Because the ledger is replicated, there is no single source of failure, and no insider threat within a single organization can impact its integrity.”

It’s All About Distributed Ledgers

Several simple concepts underpin any blockchain system. The first is the block, which is a batch of one or more transactions, grouped together and hashed. The hashing process produces an error-checking and tamper-resistant code that will let anyone viewing the block see if it has been altered. The block also contains the hash of the previous block, which ties them together in a chain. The backward hashing makes it extremely difficult for anyone to modify a single block without detection.
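Here’s a toy Java sketch of that chaining idea (mine, greatly simplified): each block’s hash covers its transactions plus the previous block’s hash, so changing any earlier block invalidates every hash that follows. Real blockchains add Merkle trees, timestamps, and consensus protocols on top.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

// A toy illustration of block chaining, not a real blockchain.
public class ToyChain {
    static String hashBlock(String transactions, String previousHash) throws Exception {
        MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
        byte[] digest = sha256.digest(
                (previousHash + transactions).getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(digest);
    }

    public static void main(String[] args) throws Exception {
        String genesis = hashBlock("alice pays bob 5", "");
        String block2 = hashBlock("bob pays carol 2", genesis);
        System.out.println("genesis: " + genesis);
        System.out.println("block 2: " + block2);

        // Tampering with the first block's transactions changes its hash,
        // which no longer matches the previousHash recorded in block 2.
        System.out.println("tampered: " + hashBlock("alice pays bob 500", ""));
    }
}
```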

A chain contains collections of blocks, which are stored on decentralized, distributed servers. The more servers, the better, with every server containing the same set of blocks and the latest values of information, such as account balances. Multiple transactions are handled within a single block using an algorithm called a Merkle tree, or hash tree, which provides fault and fraud tolerance: if a server goes down, or if a block or chain is corrupted, the missing data can be reconstructed by polling other servers’ chains.

And while the chain itself should be open for validation by any participant, some chains can be implemented with some form of access control to limit viewing of specific data fields. That way, participants can view relevant data, but not everything in the chain. A customer might be able to verify that a contractor has a valid business license and see the firm’s registered address and list of complaints—but not see the names of other customers. The state licensing board, on the other hand, may be allowed to access the customer list or see which jobs are currently in progress.

When originally conceived, blockchain had a narrow set of protocols. They were designed to govern the creation of blocks, the grouping of hashes into the Merkle tree, the viewing of data encapsulated into the chain, and the validation that data has not been corrupted or tampered with. Over time, creators of blockchain applications (such as the many competing digital currencies) innovated and created their own protocols—which, due to their independent evolutionary processes, weren’t necessarily interoperable. By contrast, the success of general-purpose blockchain services, which might encompass computing services from many technology, government, and business players, created the need for industry standards—such as Hyperledger, a Linux Foundation project.

Read more in my feature article in Oracle Magazine, March/April 2018, “It’s All About Trust.”

Don’t be misled by the name: Serverless cloud computing contains servers. Lots of servers. What makes serverless “serverless” is that developers, IT administrators and business leaders don’t have to think about those servers. Ever.

In the serverless model, online computing power gets tapped automatically only at the moment it’s needed. This can save organizations money and, just as importantly, make the IT organization more agile when it comes to building and launching new applications. That’s why serverless has the potential to be a game-changer for enterprise.

“Serverless is the next logical step for computing,” says Bob Quillin, Oracle vice president of developer relations. “We went from a data center where you own everything, to the cloud with shared servers and centralized infrastructure, to serverless, where you don’t even care about the servers themselves.”

In the serverless model, developers write and deploy what are called “functions.” Those are slimmed-down applications that take one action, such as processing an e-commerce order or recording that a shipment arrived. They run those functions directly on the cloud, using technology that eliminates the need to manage the servers, since it delivers computing power the moment that a function gets called into action.
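Here’s a minimal sketch of the shape of such a function, as plain Java. Each cloud provider wraps something like this in its own handler interface and deployment tooling; the names here are hypothetical.

```java
// A hypothetical serverless function: stateless, one event in, one result out.
// Platforms differ in their handler interfaces; this shows only the shape.
public class RecordShipment {

    public static String handle(String shipmentId) {
        // The function's single action: record that a shipment arrived.
        System.out.println("Shipment " + shipmentId + " marked as arrived");
        return "{\"status\":\"recorded\",\"id\":\"" + shipmentId + "\"}";
    }

    public static void main(String[] args) {
        // Local smoke test; in production the platform invokes handle()
        // only when the event fires, and you pay only for that moment.
        System.out.println(handle("SHP-42"));
    }
}
```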

Both the economics and the speed-of-development benefits of serverless cloud computing are compelling. Here are four CEO-level insights from Quillin for thinking about serverless computing.

First: Serverless can save real money. In the old data center model, says Quillin, organizations had to buy and maintain expensive servers, infrastructure and real estate.

In a traditional cloud model, organizations turn that capital expense into an operating one by provisioning virtualized servers and infrastructure. That saves money compared with the old data center model, Quillin says, but “you are typically paying for compute resources that are running all the time—in increments of CPU hours at least.” If you create a cluster of cloud servers, you don’t typically build it up and break it down every day, and certainly not every hour, as needed. That’s just too much management and orchestration for most organizations.

Serverless, on the other hand, essentially lets you pay only for exactly the time that a workload runs. For closing the books, it may be a once-a-month charge for a few hours of computing time. For handling transactions, it might be a few tenths of a second whenever a customer makes a sale or an Internet of Things (IoT) device sends data.
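The arithmetic behind that claim is easy to sketch. Here’s a toy Java calculation with made-up prices (rates vary widely by provider; both figures below are assumptions for illustration only):

```java
public class ServerlessCost {
    public static void main(String[] args) {
        // Hypothetical prices, for illustration only.
        double vmPerHour = 0.10;             // always-on virtual machine
        double fnPerGbSecond = 0.0000166667; // assumed per-GB-second rate

        // Always-on VM for a 30-day month:
        double vmMonthly = vmPerHour * 24 * 30;

        // Serverless: 100,000 invocations, 200 ms each, 1 GB of memory:
        double fnMonthly = 100_000 * 0.2 * 1.0 * fnPerGbSecond;

        System.out.printf("VM, always on:   $%.2f%n", vmMonthly); // $72.00
        System.out.printf("Functions, used: $%.2f%n", fnMonthly); // about $0.33
    }
}
```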

For the rest of the list, and the full story, see my essay for the Wall Street Journal, “4 Things CEOs Should Know About Serverless Computing.”

DevOps is a technology discipline well suited to cloud-native application development. When it takes only a few mouse clicks to create or manage cloud resources, why wouldn’t developers and IT operations teams work in sync to get new apps out the door and in front of users faster? The DevOps culture and tactics have done much to streamline everything from coding to software testing to application deployment.

Yet far from every organization has embraced DevOps, and not every organization that has tried DevOps has found the experience transformative. Perhaps that’s because the idea is relatively young (the term was coined around 2009), suggests Javed Mohammed, systems community manager at Oracle, or perhaps because different organizations are at such different spots in DevOps’ technology adoption cycle. That idea—about where we are in the adoption of DevOps—became a central theme of a recent podcast discussion among tech experts. Following are some highlights.

Confusion about DevOps can arise because DevOps affects dev and IT teams in many ways. “It can apply to the culture piece, to the technology piece, to the process piece—and even how different teams interact, and how all of the different processes tie together,” says Nicole Forsgren, founder and CEO of DevOps Research and Assessment LLC and co-author of Accelerate: The Science of Lean Software and DevOps.

The adoption and effectiveness of DevOps within a team depends on where each team is, and where organizations are. One team might be narrowly focused on the tech used to automate software deployment to the public, while another is looking at the culture and communication needed to release new features on a weekly or even daily basis. “Everyone is at a very, very different place,” Forsgren says.

Indeed, says Forsgren, some future-thinking organizations are starting to talk about what ‘DevOps Next’ is, extending the concept of developer-led operations beyond common best practices. At the same time, in other companies, there’s no DevOps. “DevOps isn’t even on their radar,” she sighs. Many experts, including Forsgren, see that DevOps is here, is working, and is delivering real value to software teams today—and is helping businesses create and deploy better software faster and less expensively. That’s especially true when it comes to cloud-native development, or when transitioning existing workloads from the data center into the cloud.

Read more in my essay, “DevOps: Sometimes Incredibly Transformative, Sometimes Not So Much.”

You are not the user. If you are the CEO, CTO, chief network architect, or software developer – you aren’t the user of the software or systems that you are building, or at least you aren’t the primary user. What you are looking for isn’t what your customer or employee is looking for. The vocabulary you use isn’t the vocabulary your customer uses, and it may not be what your partners use either.

Two trivial examples:

  1. I recently had my hair cut, and the stylist asked me, “Do you need any product?” Well, I don’t use product. I use shampoo. “Product” is stylist-speak, not customer-speak.
  2. For lunch one day, I stopped at a fast-food chain. Yes, yes, I know, not the healthiest. When my meal was ready, I heard over the speaker, “Order 143, your order is up.” Hmm. Up? In customer-speak, it should have been, “Your order is ready.”

In the essay, “You Are Not the User: The False-Consensus Effect,” Raluca Budiu observes:

While many people who earn a living from developing software will write tons of programs to make their own life easier, much, if not most, of their output will in fact be intended for other people — people who are not working in a cubicle nearby, or not even in the same building. These “users” are usually very different than those who write the code, even in the rare case where they are developers: they have different backgrounds, different experiences with user interfaces, different mindsets, different mental models, and different goals. They are not us.

Budiu defines the false-consensus effect: “The false-consensus effect refers to people’s tendency to assume that others share their beliefs and will behave similarly in a given context.” And that applies to more than designing cool software. Good design, and avoiding a false consensus, requires observing real-life situations with real-life customers or end users.

The way I navigate a grocery store is not the way the store’s designer navigates it. It’s certainly not the way the store’s manager navigates it. Or its chief risk officer. That’s why grocery stores spend a fortune observing shoppers and testing different layouts, not only to maximize sales and profitability but also to maximize the shopper’s satisfaction. A good design often requires a balance between the needs of the designer and the needs of the users.

My wife was recently frustrated when navigating an insurance company’s website. It was clearly not designed for her use case. Frankly, it’s hard to imagine anyone being satisfied with that website. And how about the process of logging into a WiFi network in a hotel, airport, or coffee shop? Could it be more difficult?

Focus on the User Experience

The Nielsen Norman Group, experts in usability, offers a list of “10 Usability Heuristics for User Interface Design.” While Jakob Nielsen is focused on the software user experience, these are rules we should follow in many other situations. Consider this point:

Match between system and the real world: The system should speak the users’ language, with words, phrases and concepts familiar to the user, rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order.

Yes, and how about this one:

Help users recognize, diagnose, and recover from errors: Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution.

That’s so familiar. How many of us have been frustrated by dialog boxes, not knowing exactly what will happen if we press “Cancel” or “Okay”?

Design Thinking

The article “Design Thinking” by Sarah Gibbons talks about what we should do when designing systems. That means getting them in front of real people:

Prototype: Build real, tactile representations for a subset of your ideas. The goal of this phase is to understand what components of your ideas work, and which do not. In this phase you begin to weigh the impact vs. feasibility of your ideas through feedback on your prototypes.

Test: Return to your users for feedback. Ask yourself ‘Does this solution meet users’ needs?’ and ‘Has it improved how they feel, think, or do their tasks?’

Put your prototype in front of real customers and verify that it achieves your goals. Has the users’ perspective during onboarding improved? Does the new landing page increase time or money spent on your site? As you are executing your vision, continue to test along the way.

Never forget, you are not the user.

“The functional style of programming is very charming,” insists Venkat Subramaniam. “The code begins to read like the problem statement. We can relate to what the code is doing and we can quickly understand it.” Not only that, Subramaniam explains in his keynote address for Oracle Code Online, but as implemented in Java 8 and beyond, functional-style code is lazy—and that laziness makes for efficient operations because the runtime isn’t doing unnecessary work.

Subramaniam, president of Agile Developer and an instructional professor at the University of Houston, believes that laziness is the secret to success, both in life and in programming. Pretend that your boss tells you on January 10 that a certain hourlong task must be done before April 15. A go-getter might do that task by January 11.

That’s wrong, insists Subramaniam. Don’t complete that task until April 14. Why? Because the results of the boss’s task aren’t needed yet, and the requirements may change before the deadline, or the task might be canceled altogether. Or you might even leave the job on March 13. This same mindset should apply to your programming: “Efficiency often means not doing unnecessary work.”
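Java’s streams, introduced in Java 8, bake in exactly this laziness: intermediate operations such as filter do no work until a terminal operation demands a result, and short-circuiting operations stop as soon as the answer is known. A small sketch:

```java
import java.util.Optional;
import java.util.stream.Stream;

public class LazyStreams {
    public static void main(String[] args) {
        // peek() lets us watch which elements actually get examined.
        Optional<Integer> firstEven = Stream.of(1, 2, 3, 4, 5)
                .peek(n -> System.out.println("inspecting " + n))
                .filter(n -> n % 2 == 0)
                .findFirst(); // short-circuits: only 1 and 2 are inspected

        System.out.println("result: " + firstEven.orElse(-1));
    }
}
```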

Subramaniam received the JavaOne RockStar award three years in a row and was inducted into the Java Champions program in 2013 for his efforts in motivating and inspiring software developers around the world. In his Oracle Code Online keynote, he explored how functional-style programming is implemented in the latest versions of Java, and why he’s so enthusiastic about using this style for applications that process lots and lots of data—and where it’s important to create code that is easy to read, easy to modify, and easy to test.

Functional Versus Imperative Programming

The old mainstream of imperative programming, which has been a part of the Java language from day one, relies upon developers to explicitly code not only what they want the program to do, but also how to do it. Take software that has a huge amount of data to process; the programmer would normally create a loop that examines each piece of data, and if appropriate, take specific action on that data with each iteration of the loop. It’s up to the developer to create the loop and manage it—in addition to coding the business logic to be performed on the data.

The imperative model, argues Subramaniam, results in what he calls “accidental complexity”—each line of code might perform multiple functions, which makes it hard to understand, modify, test, and verify. And the developer must do a lot of work to set up and manage the data and iterations. “You get bogged down with the details,” he said. This not only introduces complexity but makes code hard to change.

By contrast, when using a functional style of programming, developers can focus almost entirely on what is to be done, while ignoring the how. The how is handled by the underlying library of functions, which are defined separately and applied to the data as required. Subramaniam says that functional-style programming provides highly expressive code, where each line of code does only one thing: “The code becomes easier to work with, and easier to write.”
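Here’s a small made-up example of the contrast. Both versions compute the same sum over the prices above 15; in the imperative version we manage the loop and the accumulator ourselves, while in the functional version the stream handles the how:

```java
import java.util.List;

public class TotalPrices {
    public static void main(String[] args) {
        List<Integer> prices = List.of(10, 20, 30, 40);

        // Imperative: we spell out the loop, the test, and the accumulator.
        int total = 0;
        for (int price : prices) {
            if (price > 15) {
                total += price * 2;
            }
        }
        System.out.println(total); // 180

        // Functional: each line does one thing, and reads like the problem.
        int streamTotal = prices.stream()
                .filter(price -> price > 15)
                .mapToInt(price -> price * 2)
                .sum();
        System.out.println(streamTotal); // 180
    }
}
```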

Subramaniam adds that in functional-style programming, “The code becomes the business logic.” Read more in my essay published in Forbes, “Lazy Java Code Makes Applications Elegant, Sophisticated — And Efficient at Runtime.”


When the little wireless speaker in your kitchen acts on your request to add chocolate milk to your shopping list, there’s artificial intelligence (AI) working in the cloud, to understand your speech, determine what you want to do, and carry out the instruction.

When you send a text message to your HR department explaining that you woke up with a vision-blurring migraine, an AI-powered chatbot knows how to update your status to “out of the office” and notify your manager about the sick day.

When hackers attempt to systematically break into the corporate computer network over a period of weeks, AI sees the subtle patterns in historical log data, recognizes outliers in the packet traffic, raises the alarm, and recommends appropriate countermeasures.

AI is nearly everywhere in today’s society. Sometimes it’s fairly obvious (as with a chatbot), and sometimes AI is hidden under the covers (as with network security monitors). It’s a virtuous cycle: Modern cloud computing and algorithms make AI a fast, efficient, and inexpensive approach to problem-solving. Developers discover those cloud services and algorithms and imagine new ways to incorporate the latest AI functionality into their software. Businesses see the value of those advances (even if they don’t know that AI is involved), and everyone benefits. And quickly, the next wave of emerging technology accelerates the cycle again.

AI can improve the user experience, such as when deciphering spoken or written communications, or inferring actions based on patterns of past behavior. AI techniques are excellent at pattern matching, making it easier for machines to accurately decipher human languages using context. One characteristic of several AI algorithms is flexibility in handling imprecise data, such as human text. Consider chatbots specifically: humans can type messages on their phones, and AI-driven software can understand what they say, carry on a conversation, and provide the desired information or take the appropriate actions.

If you think AI is everywhere today, expect more tomorrow. AI-enhanced software-as-a-service and platform-as-a-service products will continue to incorporate additional AI to help make cloud-delivered and on-prem services more reliable, more performant, and more secure. AI-driven chatbots will find their ways into new, innovative applications, and speech-based systems will continue to get smarter. AI will handle larger and larger datasets and find its way into increasingly diverse industries.

Sometimes you’ll see the AI and know that you’re talking to a bot. Sometimes the AI will be totally hidden, as you marvel at the, well, uncanny intelligence of the software, websites, and even the Internet of Things. If you don’t believe me, ask a chatbot.

Read more in my feature article in the January/February 2018 edition of Oracle Magazine, “It’s Pervasive: AI Is Everywhere.”

Millions of developers are using Artificial Intelligence (AI) or Machine Learning (ML) in their projects, says Evans Data Corp. Evans’ latest Global Development and Demographics Study, released in January 2018, says that 29% of developers worldwide, or 6,452,000 in all, are currently using some form of AI or ML. What’s more, says the study, an additional 5.8 million expect to use AI or ML within the next six months.

ML is actually a subset of AI. To quote expertsystem.com,

In practice, artificial intelligence – also simply defined as AI – has come to represent the broad category of methodologies that teach a computer to perform tasks as an “intelligent” person would. This includes, among others, neural networks or the “networks of hardware and software that approximate the web of neurons in the human brain” (Wired); machine learning, which is a technique for teaching machines to learn; and deep learning, which helps machines learn to go deeper into data to recognize patterns, etc. Within AI, machine learning includes algorithms that are developed to tell a computer how to respond to something by example.

The same site defines ML as,

Machine learning is an application of artificial intelligence (AI) that provides systems the ability to automatically learn and improve from experience without being explicitly programmed. Machine learning focuses on the development of computer programs that can access data and use it [to] learn for themselves.

The process of learning begins with observations or data, such as examples, direct experience, or instruction, in order to look for patterns in data and make better decisions in the future based on the examples that we provide. The primary aim is to allow the computers [to] learn automatically without human intervention or assistance and adjust actions accordingly.
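To make that definition concrete, here is a tiny supervised-learning sketch in Python using scikit-learn. No rules are written; the model infers them from labeled examples (all the feature values here are fabricated):

    # Learning from examples instead of explicit rules (toy illustration).
    # Features: [tumor_size_mm, patient_age]; labels: 0 = benign, 1 = malignant.
    # The data is fabricated purely to show the fit/predict workflow.
    from sklearn.tree import DecisionTreeClassifier

    X = [[2, 35], [3, 42], [4, 50], [18, 61], [22, 58], [25, 70]]
    y = [0, 0, 0, 1, 1, 1]

    model = DecisionTreeClassifier(random_state=0)
    model.fit(X, y)                   # the "learning" step: no rules were written
    print(model.predict([[20, 65]]))  # -> [1], inferred from the examples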

A related and popular AI-derived technology, by the way, is Deep Learning. DL uses simulated neural networks to attempt to mimic the way a human brain learns and reacts. To quote from Rahul Sharma on Techgenix,

Deep learning is a subset of machine learning. The core of deep learning is associated with neural networks, which are programmatic simulations of the kind of decision making that takes place inside the human brain. However, unlike the human brain, where any neuron can establish a connection with some other proximate neuron, neural networks have discrete connections, layers, and data propagation directions.

Just like machine learning, deep learning is also dependent on the availability of massive volumes of data for the technology to “train” itself. For instance, a deep learning system meant to identify objects from images will need to run millions of test cases to be able to build the “intelligence” that lets it fuse together several kinds of analysis together, to actually identify the object from an image.
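Sharma's point about discrete layers and one-way data propagation is easy to see in code. Here is a bare-bones forward pass through a two-layer network with random, untrained weights; real deep learning adds training via backpropagation and vastly more data:

    # Data flows one way: inputs -> hidden layer -> output layer.
    # Weights are random and untrained; this only illustrates the structure.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.random(4)          # 4 input features
    W1 = rng.random((8, 4))    # hidden layer: 8 neurons, each wired to all 4 inputs
    W2 = rng.random((1, 8))    # output layer: 1 neuron wired to all 8 hidden neurons

    hidden = np.maximum(0, W1 @ x)             # ReLU activation in the hidden layer
    output = 1 / (1 + np.exp(-(W2 @ hidden)))  # sigmoid squashes to a 0..1 score
    print(output)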

Why So Many AI Developers? Why Now?

You can find AI, ML and DL everywhere, it seems. There are highly visible projects, like self-driving cars, or the speech recognition software inside Amazon’s Alexa smart speakers. That’s merely the tip of the iceberg. These technologies are embedded into the Internet of Things, into smart analytics and predictive analytics, into systems management, into security scanners, into Facebook, into medical devices.

A modern and highly visible application of AI/ML is the chatbot – software that can communicate with humans via textual interfaces. Some companies use chatbots on their websites or on social media channels (like Twitter) to talk to customers and provide basic customer services. Others use the tech within a company, such as in human-resources applications that let employees make requests (like scheduling vacation) by simply texting the HR chatbot.

AI is also paying off in finance. The technology can help service providers (like banks or payment-card transaction clearinghouses) more accurately review transactions to see if they are fraudulent (see the sketch after the list below), and improve overall efficiency. According to John Rampton, writing for the Huffington Post, AI investment by financial tech companies was more than $23 billion in 2016. The benefits of AI, he writes, include:

  • Increasing Security
  • Reducing Processing Times
  • Reducing Duplicate Expenses and Human Error
  • Increasing Levels of Automation
  • Empowering Smaller Companies
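To picture the fraud-review use case in code, here is a minimal anomaly-detection sketch using scikit-learn's IsolationForest; the transaction features and all the numbers are invented for illustration:

    # Flagging suspicious transactions as statistical outliers (toy sketch).
    # Features: [amount_usd, hour_of_day]; all numbers are invented.
    from sklearn.ensemble import IsolationForest

    normal = [[25, 12], [40, 9], [12, 18], [60, 14], [33, 11], [48, 16]]
    model = IsolationForest(contamination=0.1, random_state=0).fit(normal)

    print(model.predict([[38, 13]]))   # 1 = looks like ordinary activity
    print(model.predict([[9500, 3]]))  # -1 = a large 3 a.m. charge, flagged for review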

Rampton also explains that AI can offer game-changing insights:

One of the most valuable benefits AI brings to organizations of all kinds is data. The future of Fintech is largely reliant on gathering data and staying ahead of the competition, and AI can make that happen. With AI, you can process a huge volume of data which will, in turn, offer you some game-changing insights. These insights can be used to create reports that not only increase productivity and revenue, but also help with complex decision-making processes.

What’s happening in fintech with AI is nothing short of revolutionary. That’s true of other industries as well. Instead of asking why so many developers, 29%, are focusing on AI, we should ask, “Why so few?”

I’m #1! Well, actually #4 and #7. During 2017, I wrote several articles for Hewlett Packard Enterprise’s online magazine, Enterprise.nxt Insights, and two of them were quite successful – ranked #4 and #7 in the site’s list of Top 10 Articles for 2017.

Article #7 was, “4 lessons for modern software developers from 1970s mainframe programming.” Based entirely on my own experiences, the article began,

Eight megabytes of memory is plenty. Or so we believed back in the late 1970s. Our mainframe programs usually ran in 8 MB virtual machines (VMs) that had to contain the program, shared libraries, and working storage. Though these days, you might liken those VMs more to containers, since the timesharing operating system didn’t occupy VM space. In fact, users couldn’t see the OS at all.

In that mainframe environment, we programmers learned how to be parsimonious with computing resources, which were expensive, limited, and not always available on demand. We learned how to minimize the costs of computation, develop headless applications, optimize code up front, and design for zero defects. If the very first compilation and execution of a program failed, I was seriously angry with myself.

Please join me on a walk down memory lane as I revisit four lessons I learned while programming mainframes and teaching mainframe programming in the era of Watergate, disco on vinyl records, and Star Wars—and which remain relevant today.

Article #4 was, “The OWASP Top 10 is killing me, and killing you!” It began,

Software developers and testers must be sick of hearing security nuts rant, “Beware SQL injection! Monitor for cross-site scripting! Watch for hijacked session credentials!” I suspect the developers tune us out. Why? Because we’ve been raving about the same defects for most of their careers. Truth is, though, the same set of major security vulnerabilities persists year after year, decade after decade.

The industry has generated newer tools, better testing suites, Agile methodologies, and other advances in writing and testing software. Despite all that, coders keep making the same dumb mistakes, peer reviews keep missing those mistakes, test tools fail to catch those mistakes, and hackers keep finding ways to exploit those mistakes.

One way to see the repeat offenders is to look at the OWASP Top 10, a sometimes controversial ranking of the 10 primary vulnerabilities, published every three or four years by the Open Web Application Security Project.

The OWASP Top 10 list is not controversial because it’s flawed. Rather, some believe that the list is too limited. By focusing only on the top 10 web code vulnerabilities, they assert, it causes neglect for the long tail. What’s more, there’s often jockeying in the OWASP community about the Top 10 ranking and whether the 11th or 12th belong in the list instead of something else. There’s merit to those arguments, but for now, the OWASP Top 10 is an excellent common ground for discussing security-aware coding and testing practices.

Click the links (or pictures) above and enjoy the articles! And kudos to my prolific friend Steven J. Vaughan-Nichols, whose articles took the #3, #2 and #1 slots. He’s good. Damn good.

Agility – the ability to deliver projects quickly. That applies to new projects, as well as updates to existing projects. The agile software movement began when many smart people became frustrated with the classic model of development, where first the organization went through a complex process to develop requirements (which took months or years), and wrote software to address those requirements (which took months or years, or maybe never finished). By then, not only did the organization miss out on many opportunities, but perhaps the requirements were no longer valid – if they ever were.

With agile methodologies, the goal is to build software (or accomplish some complex task or action) in small incremental iterations. Each iteration delivers some immediate value, and after each iteration, the team evaluates how satisfied those who requested the project (the stakeholders) are, and what they want to do next. No laborious up-front requirements. No years of investment before there is any return on that investment.

One of the best-known agile frameworks is Scrum, developed by Jeff Sutherland and Ken Schwaber in the early 1990s. In my view, Scrum is noteworthy for several innovations, including:

  • The Scrum framework is simple enough for everyone involved to understand.
  • The Scrum framework is not a product.
  • Scrum itself is not tied to a specific vendor’s project-management tools.
  • Work is performed in short, fixed-length iterations – typically two weeks – called Sprints.
  • Every day there is a brief meeting called a Daily Scrum.
  • Development is iterative and incremental, and outcomes are predictable.
  • The work must be transparent, as much as possible, to everyone involved.
  • The roles of participants in the project are defined extremely clearly.
  • The relationship between people in the various roles is also clearly defined.
  • A key participant is the Scrum Master, who helps everyone maximize the value of the team and the project.
  • There is a clear, unambiguous definition of what “Done” means for every action item.

Scrum itself is refined every year or two by Sutherland and Schwaber. The most recent version (if you can call it a version) is Scrum 2017; before that, it was revised in 2016 and 2013. While there aren’t that many significant changes from the original vision unveiled in 1995, here are three recent changes that, in my view, make Scrum better than ever – enough that it might be called Scrum 2.0. Well, maybe Scrum 1.5. You decide:

  1. The latest version acknowledges more clearly that Scrum, like other agile methodologies, is used for all sorts of projects, not merely creating or enhancing software. While the Scrum Guide is still development-focused, Scrum can be used for market research, product development, developing cloud services, and even managing schools and governments.
  2. The Daily Scrum will be more focused on exploring how well the work is progressing toward the Sprint Goal. For example – what work will be done today to drive toward the goal? What impediments are likely to prevent us from meeting the goal? (Previously, the Daily Scrum was often viewed as a glorified status-report meeting.)
  3. Scrum has a set of values, and those are now spelled out: “When the values of commitment, courage, focus, openness and respect are embodied and lived by the Scrum Team, the Scrum pillars of transparency, inspection, and adaptation come to life and build trust for everyone. The Scrum Team members learn and explore those values as they work with the Scrum events, roles and artifacts. Successful use of Scrum depends on people becoming more proficient in living these five values… Scrum Team members respect each other to be capable, independent people.”

The word “agile” is thrown around too often in business and technology, covering everything from planning a business acquisition to planning a network upgrade. Scrum is one of the best-known agile methodologies, and the framework is very well suited for all sorts of projects where it’s not feasible to determine a full set of requirements up front, and there’s a need to immediately begin delivering some functionality (or accomplish parts of the tasks). That Scrum continues to evolve will help ensure its value in the coming years… and decades.

AI is an emerging technology – always has been, always will be. Back in the early 1990s, I was editor of AI Expert Magazine. Looking for something else in my archives, I found this editorial, dated February 1991.

What do you think? Is AI real yet?

The bad news: There are servers used in serverless computing. Real servers, with whirring fans and lots of blinking lights, installed in racks inside data centers inside the enterprise or up in the cloud.

The good news: You don’t need to think about those servers in order to use their functionality to write and deploy enterprise software. Your IT administrators don’t need to provision or maintain those servers, or think about their processing power, memory, storage, or underlying software infrastructure. It’s all invisible, abstracted away.

The whole point of serverless computing is that there are small blocks of code that do one thing very efficiently. Those blocks of code are designed to run in containers so that they are scalable, easy to deploy, and can run in basically any computing environment. The open Docker platform has become the de facto industry standard for containers, and as a general rule, developers are seeing the benefits of writing code that targets Docker containers, instead of, say, Windows servers or Red Hat Linux servers or SuSE Linux servers, or any other specific run-time environment. Docker can be hosted in a data center or in the cloud, and containers can be easily moved from one Docker host to another, adding to its appeal.

Currently, applications written for Docker containers still need to be managed by enterprise IT developers or administrators. That means deciding where to create the containers, ensuring that each container has sufficient resources (like memory and processing power) for the application, actually installing the application into the container, running and monitoring the application, and then adding more resources if required. Helping with all that is Kubernetes, an open-source container management and orchestration system for Docker. So while containers greatly assist developers and admins in creating portable code, the containers still need to be managed.

That’s where serverless comes in. Developers write their bits of code (such as reading or writing a database, encrypting or decrypting data, searching the Internet, authenticating users, or formatting output) to run in a Docker container. However, instead of deploying directly to Docker, or using Kubernetes to handle deployment, they write their code as a function, and then deploy that function onto a serverless platform, like the new Fn project. Other applications can call that function (perhaps using a RESTful API) to do the required operation, and the serverless platform then takes care of everything else automatically behind the scenes, running the code when needed and idling it when not.
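In sketch form, such a function is just a small, stateless handler. The code below is a generic illustration of the shape of a serverless function, not the actual Fn project API, whose handler signature and deployment tooling differ:

    # A single-purpose, stateless function of the kind deployed to a serverless
    # platform. The event/handler shape here is a generic assumption, not the
    # real Fn FDK; each platform wraps functions in its own invocation protocol.
    import json

    def handler(event: dict) -> dict:
        """Do one thing well: format a raw record for display."""
        record = event.get("record", {})
        formatted = f'{record.get("name", "unknown")} <{record.get("email", "n/a")}>'
        return {"status": 200, "body": json.dumps({"formatted": formatted})}

    # The platform invokes the handler per request, spinning a container up on
    # demand and idling it afterward; locally you can simply call it:
    print(handler({"record": {"name": "Ada", "email": "ada@example.com"}}))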

Read my essay, “Serverless Computing: What It Is, Why You Should Care,” to find out more.

Software developers and testers must be sick of hearing security nuts rant, “Beware SQL injection! Monitor for cross-site scripting! Watch for hijacked session credentials!” I suspect the developers tune us out. Why? Because we’ve been raving about the same defects for most of their careers. Truth is, though, the same set of major security vulnerabilities persists year after year, decade after decade.

The industry has generated newer tools, better testing suites, Agile methodologies, and other advances in writing and testing software. Despite all that, coders keep making the same dumb mistakes, peer reviews keep missing those mistakes, test tools fail to catch those mistakes, and hackers keep finding ways to exploit those mistakes.

One way to see the repeat offenders is to look at the OWASP Top 10. That’s a sometimes controversial ranking of the 10 primary vulnerabilities, published every three or four years by the Open Web Application Security Project.

The OWASP Top 10 list is not controversial because it’s flawed. Rather, some believe that the list is too limited. By focusing only on the top 10 web code vulnerabilities, they assert, it causes neglect for the long tail. What’s more, there’s often jockeying in the OWASP community about the Top 10 ranking and whether the 11th or 12th belong in the list instead of something else. There’s merit to those arguments, but for now, the OWASP Top 10 is an excellent common ground for discussing security-aware coding and testing practices.

Note that the Top 10 list doesn’t directly represent the 10 most common attacks. Rather, it’s a ranking of risk. There are four factors used for this calculation. One is the likelihood that applications will have specific vulnerabilities; that’s based on data provided by companies. That’s the only “hard” metric in the OWASP Top 10. The other three risk factors are based on professional judgment.
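As a rough illustration of how a risk ranking can combine such factors, here is a sketch that averages three likelihood factors and multiplies by impact. The 1-to-3 factor values and the weighting scheme are my assumptions for illustration, not OWASP's published methodology:

    # Ranking risks by combining likelihood factors with impact (illustrative only).
    def risk_score(exploitability, prevalence, detectability, impact):
        likelihood = (exploitability + prevalence + detectability) / 3  # 1-3 scales
        return likelihood * impact

    candidates = {
        "Injection": risk_score(3, 2, 3, 3),
        "Broken Authentication": risk_score(3, 2, 2, 3),
        "Security Misconfiguration": risk_score(3, 3, 3, 2),
    }
    for name, score in sorted(candidates.items(), key=lambda kv: -kv[1]):
        print(f"{name}: {score:.1f}")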

It boggles the mind that a majority of top 10 issues appear across the 2007, 2010, 2013, and draft 2017 OWASP lists. That doesn’t mean that these application security vulnerabilities have to remain on your organization’s list of top problems, though—you can swat those flaws.

Read more in my essay, “The OWASP Top 10 is killing me, and killing you!”

There are two popular ways of migrating enterprise assets to the cloud:

  1. Write new cloud-native applications.
  2. Lift-and-shift existing data center applications to the cloud.

Gartner’s definition: “Lift-and-shift means that workloads are migrated to cloud IaaS in as unchanged a manner as possible, and change is done only when absolutely necessary. IT operations management tools from the existing data center are deployed into the cloud environment largely unmodified.”

There’s no wrong answer, no wrong way of proceeding. Some data center applications (including servers and storage) may be easier to move than others. Some cloud-native apps may be easier to write than others. Much depends on how much interconnectivity there is between the applications and other software; that’s why, for example, public-facing websites are often easiest to move to the cloud, while tightly coupled internal software, such as inventory control or factory-floor automation, can be trickier.

That’s why in some cases, a hybrid strategy is best. Some parts of the applications are moved up to the cloud, while others remain in the data centers, with SD-WANs or other connectivity linking everything together in a secure manner.

In other words, no one size fits all. And no one timeframe fits all, especially when it comes to lifting-and-shifting.

SaaS? PaaS? It Depends.

A recent survey from the Oracle Applications User Group (OAUG) showed that 70% of respondents who have plans to adopt Oracle Cloud solutions will do so in the next three years. About 35% plan to implement Software-as-a-Service (SaaS) solutions to run with their existing Oracle on-premises installations, and 29% plan to use Platform-as-a-Service (PaaS) services to accelerate software development efforts in the next 12 months.

Joe Paiva, CIO of the U.S. Commerce Department’s International Trade Administration (ITA), is a fan of lift-and-shift. He said at a cloud conference, “Sometimes it makes sense because it gets you there. That was the key. We had to get there because we would be no worse off or no better off, and we were still spending a lot of money, but it got us to the cloud. Then we started doing rationalization of hardware and applications, and dropped our bill to Amazon by 40 percent compared to what we were spending in our government data center. We were able to rationalize the way we use the service.” Paiva estimates government agencies could save 5%-15% using lift-and-shift.

The benefits of moving existing workloads to the cloud are almost entirely financial. If you can shut down a data center and pay less to run the application in the cloud, it can be a good short-term solution with immediate ROI. Gartner cautions, however, that lift and shift “generally results in little created value. Plus, it can be a more expensive option and does not deliver immediate cost savings.” Much depends on how much it costs to run that application today.
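Back-of-the-envelope math makes that decision concrete. Every dollar figure in this sketch is hypothetical:

    # Hypothetical break-even check for lifting-and-shifting one workload.
    dc_monthly = 12_000     # current data-center run cost (power, space, staff share)
    cloud_monthly = 9_000   # projected monthly IaaS bill for the same workload
    migration = 50_000      # one-time lift-and-shift project cost

    savings = dc_monthly - cloud_monthly
    print(f"Break-even after {migration / savings:.0f} months")  # ~17 months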

A Multi-Track Process for Cloud Migration

The real benefits of new cloud development and deployment architectures take time to realize. For many organizations, there may be a multi-track process:

First track: Lift-and-shift existing workloads that are relatively easy to migrate, while simultaneously writing cloud-native applications for new projects. Those provide the biggest and fastest return on investment, while leaving data center workloads in place and untouched.

Second track: Write cloud-native applications for the remaining data-center workloads, the ones impractical to migrate in their existing form. These projects will be slower, but the payoff is the ability to turn off some or all existing data centers – eliminating their associated expenses, such as power and cooling, bandwidth, and physical space.

Third track: At some point, revisit the lifted-and-shifted workloads to see which would significantly benefit from being rewritten as cloud-native apps. Unless there is an order of magnitude increase in efficiency, or significant added functionality, the financial returns won’t be high – or may be nonexistent. For some applications, it may never make sense to redesign and rewrite them in a cloud-native way. So, those old enterprise applications may live on for years to come.

About a decade ago, I purchased a piece of a mainframe on eBay — the name ID bar. Carved from a big block of aluminum, it says “IBM System/370 168,” and it hangs proudly over my desk.

My time on mainframes was exclusively with the IBM System/370 series. With a beautiful IBM 3278 color display terminal on my desk, and, later, a TeleVideo 925 terminal and an acoustic coupler at home, I was happier than anyone had a right to be.

We refreshed our hardware often. The latest variant I worked on was the System/370 4341, introduced in early 1979, which ran faster and cooler than the very costly 3031 mainframes we had before. I just found this in the IBM archives: “The 4341, under a 24-month contract, can be leased for $5,975 a month with two million characters of main memory and for $6,725 a month with four million characters. Monthly rental prices are $7,021 and $7,902; purchase prices are $245,000 and $275,000, respectively.” And we had three, along with tape drives, disk drives (in IBM-speak, DASD, for Direct Access Storage Devices), and high-speed line printers. Not cheap!

Our operating system on those systems was called Virtual Machine, or VM/370. It consisted of two parts: Control Program and Conversational Monitor System. CP was the timesharing operating system – in modern virtualization terms, the hypervisor running on the bare metal. CMS was the user interface that users logged into; it provided access not only to a text-based command console, but also to file storage and a library of tools, such as compilers. (We often referred to the platform as CP/CMS.)

Thanks to VM/370, each user believed she had access to a 100% dedicated and isolated System/370 mainframe, with every resource available and virtualized. (For example, she appeared to have dedicated access to tape drives, but they appeared non-functional if her tapes weren’t loaded, or if she hadn’t bought access to the drives.)

My story about mainframes isn’t just reminiscing about the time of dinosaurs. When programming those computers, which I did full-time in the late 1970s and early 1980s, I learned a lot of lessons that are very applicable today. Read all about that in my article for HP Enterprise Insights, “4 lessons for modern software developers from 1970s mainframe programming.”

To get the most benefit from the new world of cloud-native server applications, forget about the old way of writing software. In the old model, architects designed the software, programmers wrote the code, and testers tested it on a test server. Once the testing was complete, the code was “thrown over the wall” to administrators, who installed the software on production servers, and who essentially owned the applications moving forward, only going back to the developers if problems occurred.

The new model, which appeared about 10 years ago, is called “DevOps,” or developer operations. In the DevOps model, architects, developers, testers, and administrators collaborate much more closely to create and manage applications. Specifically, developers play a much broader role in the day-to-day administration of deployed applications, and use information about how the applications are running to tune and enhance them.

The involvement of developers in administration made DevOps perfect for cloud computing. Because administrators had fewer responsibilities (i.e., no hardware to worry about), it was less threatening for developers and administrators to collaborate as equals.

Change matters

In that old model of software development and deployment, developers were always change agents. They created new stuff, or added new capabilities to existing stuff. They embraced change, including new technologies – and the faster they created change (i.e., wrote code), the more competitive their business.

By contrast, administrators are tasked with maintaining uptime, while ensuring security. Change is not a virtue to those departments. While admins must accept change as they install new applications, it’s secondary to maintaining stability. Indeed, admins could push back against deploying software if they believed those apps weren’t reliable, or if they might affect the overall stability of the data center as a whole.

With DevOps, everyone can embrace change. One of the ways that works, with cloud computing, is to reduce the risk that an unstable application can damage system reliability. In the cloud, applications can be built and deployed using bare-metal servers (as in a data center), or in virtual machines or containers.

DevOps works best when software is deployed in VMs or containers, since those are isolated from other systems – thereby reducing risk. Turns out that administrators do like change, if there’s minimal risk that changes will negatively affect overall system reliability, performance, and uptime.

Benefits of DevOps

Goodbye, CapEx; hello, OpEx. Cloud computing moves enterprises from capital-expense data centers (which must be built, electrified, cooled, networked, secured, stocked with servers, and refreshed periodically) to operational-expense services (where the business pays monthly for the processors, memory, bandwidth, and storage reserved and/or consumed). When you couple those benefits with virtual machines, containers, and DevOps, you get:

  • Easier Maintenance: It can be faster to apply patches and fixes to software running in virtual machines – and use snapshots to roll back if needed.
  • Better Security: Cloud platform vendors offer some security monitoring tools, and it’s relatively easy to install top-shelf protections like next-generation firewalls – themselves offered as cloud services.
  • Improved Agility: Studies show that the process of designing, coding, testing, and deploying new applications can be 10x faster than traditional data center methods, because the cloud reduces provisioning delays and provides robust resources on demand.
  • Lower Cost: Vendors such as Amazon, Google, Microsoft, and Oracle, are aggressively lowering prices to gain market share — and in many cases, those prices are an order of magnitude below what it could cost to provision an enterprise data center.
  • Massive Scale: Need more power? Need more bandwidth? Need more storage? Push a button, and the resources are live. If those needs are short-term, you can turn the dials back down, to lower the monthly bill. You can’t do that in a data center.

Rapidly evolving

The technologies used in creating cloud-native applications are evolving rapidly. Containers, for example, are relatively new, yet are becoming incredibly popular because they require 4x-10x fewer resources than VMs – thereby slashing OpEx costs even further. Software development and management tools, like Kubernetes (for orchestration of multiple containers), Chef (which makes it easy to manage cloud infrastructure), Puppet (which automates pushing out cloud service configurations), and OpenWhisk (which strips down cloud services to “serverless” basics) push the revolution farther.

DevOps is more important than the meaningless “developer operations” moniker implies. It’s a whole new, faster way of doing computing with cloud-native applications. Because rapid change means everything in achieving business agility, everyone wins.

Despite Elon Musk’s warnings this summer, there’s not a whole lot of reason to lose any sleep worrying about Skynet and the Terminator. Artificial Intelligence (AI) is far from becoming a maleficent, all-knowing force. The only “Apocalypse” on the horizon right now is an overreliance by humans on machine learning and expert systems, as demonstrated by the deaths of Tesla owners who took their hands off the wheel.

Examples of what currently passes for “Artificial Intelligence” — technologies such as expert systems and machine learning — are excellent tools for creating software. AI software provides truly valuable help in contexts that involve pattern recognition, automated decision-making, and human-to-machine conversations. Both types of AI have been around for decades. And both are only as good as the source information they are based on. For that reason, it’s unlikely that AI will replace human beings’ judgment on important tasks requiring decisions more complex than “yes or no” any time soon.

Expert systems, also known as rule-based or knowledge-based systems, are programs in which computers follow explicit rules written down by human experts. The computers can then run the same rules, but much faster and 24×7, to come up with the same conclusions as the human experts. Imagine asking an oncologist how she diagnoses cancer and then programming medical software to follow those same steps. For a particular diagnosis, an oncologist can study which of those rules was activated to validate that the expert system is working correctly.

However, it takes a lot of time and specialized knowledge to create and maintain those rules, and extremely complex rule systems can be difficult to validate. Needless to say, expert systems can’t function beyond their rules.
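In sketch form, an expert system is just explicit rules plus bookkeeping about which rule fired, which is what makes its conclusions auditable. The rules below are invented and medically meaningless:

    # Toy expert system: human-written rules plus a trace of which rule fired.
    RULES = [
        ("R1: large mass, abnormal biopsy",
         lambda p: p["mass_mm"] > 20 and p["biopsy"] == "abnormal",
         "refer to oncology"),
        ("R2: small mass, normal biopsy",
         lambda p: p["mass_mm"] <= 20 and p["biopsy"] == "normal",
         "routine follow-up"),
    ]

    def diagnose(patient: dict):
        for name, condition, conclusion in RULES:
            if condition(patient):
                return conclusion, name  # the conclusion plus the rule behind it
        return "no rule matched", None   # the system cannot reason past its rules

    print(diagnose({"mass_mm": 25, "biopsy": "abnormal"}))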

By contrast, machine learning allows computers to come to a decision—but without being explicitly programmed. Instead, they are shown hundreds or thousands of sample data sets and told how they should be categorized, such as “cancer | no cancer,” or “stage 1 | stage 2 | stage 3 cancer.”

Read more about this, including my thoughts on machine learning, pattern recognition, expert systems, and comparisons to human intelligence, in my story for Ars Technica, “Never mind the Elon—the forecast isn’t that spooky for AI in business.”

HP-35 slide rule calculator

At the current rate of rainfall, when will your local reservoir overflow its banks? If you shoot a rocket at an angle of 60 degrees into a headwind, how far will it fly with 40 pounds of propellant and a 5-pound payload? Assuming a 100-month loan for $75,000 at 5.11 percent, what will the payoff balance be after four years? If a lab culture is doubling every 14 hours, how many viruses will there be in a week?

Those sorts of questions aren’t asked by mathematicians, who are the people who derive equations to solve problems in a general way. Rather, they are asked by working engineers, technicians, military ballistics officers, and financiers, all of whom need an actual number: Given this set of inputs, tell me the answer.
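Take the loan question above. With the standard amortization formula, and assuming 5.11 percent is a nominal annual rate compounded monthly, the "actual number" falls right out:

    # Payoff balance on the loan from the question above: $75,000 principal,
    # 100 monthly payments, 5.11% nominal annual rate (assumed compounded
    # monthly), balance checked after 4 years (48 payments).
    principal, n_months, annual_rate = 75_000, 100, 0.0511
    r = annual_rate / 12                                  # monthly interest rate
    payment = principal * r / (1 - (1 + r) ** -n_months)  # standard amortization
    balance = principal * (1 + r) ** 48 - payment * ((1 + r) ** 48 - 1) / r
    print(f"Payment: ${payment:,.2f}/month; balance after 48 months: ${balance:,.2f}")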

Before the modern era (say, the 1970s), these problems could be hard to solve. They required a lot of pencils and paper, a book of tables, or a slide rule. Mathematicians never carried slide rules, but astronauts did, as their backup computers.

However, slide rules had limitations. They were good to about three digits of accuracy, no more, in the hands of a skilled operator. Three digits was fine for real-world engineering, but not enough for finance. With slide rules, you had to keep track of the decimal point yourself: The slide rule might tell you the answer is 641, but you had to know if that was 64.1 or 0.641 or 641.0. And if you were chaining calculations (needed in all but the simplest problems), accuracy dropped with each successive operation.

Everything the slide rule could do, a so-called slide-rule calculator could do better—and more accurately. Slide rules are really good at a few things. Multiplication and division? Easy. Exponents, like 6¹³? Easy. Doing trig, like sines, cosines, and tangents? Easy. Logarithms? Easy.
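The trick behind all of those operations is the logarithm. A slide rule's scales are logarithmic, so sliding one scale along another adds lengths, and adding logarithms is the same as multiplying numbers:

    % Why sliding logarithmic scales multiplies numbers
    \log(ab) = \log a + \log b, \qquad
    \log\left(\tfrac{a}{b}\right) = \log a - \log b, \qquad
    \log\left(a^{n}\right) = n \log a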

Hewlett-Packard unleashed a monster when it created the HP-9100A desktop calculator, released in 1968 at a price of about $5,000. The HP-9100A did everything a slide rule could do, and more—such as trig, polar/rectangular conversions, and exponents and roots. However, it was big and it was expensive—about $35,900 in 2017 dollars, or the price of a nice car! HP had a market for the HP-9100A, since it already sold test equipment into many labs. However, something better was needed, something affordable, something that could become a mass-market item. And that became the pocket slide-rule calculator revolution, starting off with the amazing HP-35.

If you look at the HP-35 today, it seems laughably simplistic. The calculator app in your smartphone is much more powerful. However, back in 1972, and at a price of only $395 ($2,350 in 2017 dollars), the HP-35 changed the world. Companies like General Electric ordered tens of thousands of units. It was crazy, especially for a device that had a few minor math bugs in its first shipping batch (HP gave everyone a free replacement).

Read more about early slide-rule calculators — and the more advanced card-programmable models like the HP-65 and HP-67, in my story, “The early history of HP calculators.”

HP-65 and HP-67 card-programmable calculators

It’s almost painful to see an issue of SD Times without my name printed in the masthead. From Editor-in-Chief to Editorial Director to Founding Editor to… nothing. However, it’s all good!

My company, BZ Media, is selling our flagship print publication, SD Times, to a startup, D2 Emerge LLC. The deal will formally close in a few weeks. If you’ve been following SD Times, you’ll recognize the two principals of the startup, David Lyman and David Rubinstein. (Thus, the “D2” part of the name.)

BZ Media co-founder Ted Bahr and I wish David, and David, and SD Times, and its staff, readers, and advertisers, nothing but success. (I retired from BZ Media mid-2013, becoming a silent partner with no involvement in day-to-day operations.)

D2 Emerge is ready to roll. Here’s what David Rubinstein wrote in the July 2017 issue (download it here):

The Times, it is a-changin’

There’s a saying that goes ‘when one chapter closes, another one begins.’

This issue of SD Times marks the close of the BZ Media chapter of this publication’s history and opens the chapter on D2 Emerge LLC, a new-age publishing and marketing company founded by two long-time members of the SD Times team: the publisher, David Lyman, and the editor-in-chief … me!

We will work hard to maintain the quality of SD Times and build on the solid foundation that has been built over the past 17 years. Wherever we go, we hear from readers who tell us they look forward to each issue, and they say they’re learning about things they didn’t know they needed to know. And we’re proud of that.

The accolades are certainly nice — and always welcome. Yet, there is nothing more important to us than the stories we tell. Whether putting a spotlight on new trends in the industry and analyzing what they mean, profiling the amazing, brilliant people behind the innovation in our industry, or helping software providers tell their unique stories to the industry, our mission is to inform, enlighten and even entertain.

But, as much as things will stay the same, there will be some changes. We will look to introduce you to different voices and perspectives from the industry, inviting subject matter experts to share their knowledge and vision of changes in our industry. The exchange of ideas and free flow of information are the bedrock of our publishing philosophy.

We will somewhat broaden the scope of our coverage to include topics that might once have been thought of as ancillary to software development but are now important areas for you to follow as silos explode and walls come tumbling down in IT shops around the world.

We will work to improve our already excellent digital offerings by bettering the user experience and the way in which we deliver content to you. So, whether you’re reading SD Times on a desktop at work, or on a tablet at a coffee shop, or even on your cellphone at the beach, we want you [to] have the same wonderful experience.

For our advertisers, we will help guide you toward the best way to reach our readers, whether through white papers, webinars, or strategic ad placement across our platforms. And, we will look to add to an already robust list of services we can provide to help you tailor your messages in a way that best suits our readers.

BZ Media was a traditional publishing company, with a print-first attitude (only because there weren’t any viable digital platforms back in 2000). D2 Emerge offers an opportunity to strike the right balance between a digital-first posture and all that is good about print publishing.

I would be remiss if I didn’t acknowledge BZ Media founders Ted Bahr and Alan Zeichick, who took a cynical, grizzled daily newspaperman and turned him into a cynical, grizzled technology editor. But as I often say, covering this space is never dull. Years ago, I covered sports for a few newspapers, and after a while, I saw that I had basically seen every outcome there was: A walk-off home run, a last-second touchdown, a five-goal hockey game. The only thing that seemed to change were the players. Sure, once in a while a once-in-a-lifetime player comes along, and we all enjoy his feats. But mostly sports do not change.

Technology, on the other hand, changes at breakneck speed. As we worked to acquire SD Times, I had a chance to look back at the first issues we published, and realized just how far we’ve come. Who could have known in 2000, when we were writing about messaging middleware and Enterprise JavaBeans that one day we’d be writing about microservices architectures and augmented reality?

Back then, we covered companies such as Sun Microsystems, Metrowerks, IONA, Rational Software, BEA Systems, Allaire Corp, Bluestone Software and many more that were either acquired or couldn’t keep up with changes in the industry.

The big news at the JavaOne conference in 2000 was extreme clustering of multiple JVMs on a single server, while elsewhere, the creation of an XML Signature specification looked to unify authentication, and Corel Corp. was looking for cash to stay alive after a proposed merger with Borland Corp. (then Inprise) fell apart.

So now, we’re excited to begin the next chapter in the storied (pardon the pun) history of SD Times, and we’re glad you’re coming along with us as OUR story unfolds.

CNN didn’t get the memo. After all the progress that’s been made to eliminate the requirement for using Adobe’s Flash player by so many streaming-media websites, CNNgo still requires the problematic plug-in, as you can see from the screenshot I captured just a few minutes ago.


Have you not heard of HTML5, oh, CNN programmers? Perhaps the techies at CNN should read “Why Adobe Flash is a Security Risk and Why Media Companies Still Use it.” After that, “Gone in a Flash: Top 10 Vulnerabilities Used by Exploit Kits.”

Yes, Adobe keeps patching Flash to make it less insecure. Lots and lots of patches, says the story “Patch Tuesday: Adobe Flash Player receives updates for 13 security issues,” published in January. That comes on the heels of 17 security flaws patched in December 2016.

And yes, there were more critical patches issued on June 13, 2017. Flash. Just say no. Goodbye, CNNgo, until you stop requiring that prospective customers utilize such a buggy, flawed media player.

And no, I didn’t enable the use of Flash. Guess I’ll never see what CNN wanted to show me. No great loss.