Ted Bahr & Alan Zeichick

February 23, 2000 — the debut issue of SD Times hit the stands and changed my world. Launched as a printed semi-monthly newspaper in tabloid size, SD Times grew into the world’s leading publication for software development managers.

Ted Bahr and I formed BZ Media in mid-1999. SD Times was the first of our many publications, conferences, and websites, all B2B for the technology industry. Today, SD Times flourishes as part of D2 Emerge, and we couldn’t be more proud to see our beloved SD Times continue to serve this important audience.

Let’s look back. David Rubinstein — who started out as executive editor of SD Times and is now co-owner of D2 Emerge and editor-in-chief of the magazine — put out a great 20th anniversary issue. (The other D is Dave Lyman.)

The special issue includes essays from me (page 18) and from Ted (page 12). Dave wrote a remembrance column (page 46) and art director Mara Leonardi shares some of her favorite SD Times covers and images (page 20).

Click here to read the anniversary issue or download it as a PDF.

Meanwhile, my favorite part of the special 20th anniversary issue is the old photos.

  • There’s one of Ted and me, doing our silly “‘I’m the B’ and ‘I’m the Z’” schtick based on Saturday Night Live’s Hans and Franz.
  • There’s one of the crazy launch crew celebrating the release of the first issue.
  • There’s one of BZ Media employees standing in the water for some unknown reason.
  • There’s a lot of alcohol being consumed. That’s what happens when your offices are next to a bar.

I love those people, and miss working with every single one of them. Thank you, Ted, Dave, Dave, Mara, Erzi, Eddie, Viena, Pat, Rebecca, Erin, Katie, Alex, Whitney, Adam, Stacy, Yvonne, Christina, Jon, Paula, David, Craig, Marilyn, Robin, LuAnn, Julie, Charlie, PJ, Lindsey, Agnes, Victoria, Catherine, Sabrina, Kathy, Jennifer, Jeff, Brenner, Doug, Dan, Lisa, Brian, Michele, Polina, Anne, Suzanne, Ryan, Jeanie, Josette, Debbie, Michelle, Nicole, Greg, Usman, Robert, Robbie, and so many others for making SD Times and BZ Media a success. Those were among the best years of my life.

Java Magazine home page

I’m back in the saddle again, if by “saddle” you mean editing a magazine. Today, I took over the helm of Oracle’s Java Magazine, one of the world’s leading publications for software developers, with about 260,000 subscribers. The previous editor in chief, Andrew Binstock, moved on after five years to work on other projects; he leaves big shoes to fill.

As Andrew wrote,

This is my last issue of Java Magazine. After five very enjoyable years at the helm, I’m ready to take on other challenges, including getting back to working on my preferred coding projects. I will surely pop up here and there with articles and reviews (likely even in this magazine). If you’ve enjoyed my work, I invite you to follow me on Twitter (@platypusguy) or to reach out to me on LinkedIn, where I accept all invitations. At the moment, I am currently participating in interviewing prospective successors and I’ll make sure that Oracle has a good person in place to carry on. From the bottom of my heart, thank you all for being readers; and to many of you, I send additional gratitude for your thoughtful comments and suggestions over the years. It’s been truly an honor.

I am honored to follow in Andrew’s footsteps.

Solve the puzzle: A company’s critical customer data is in a multiterabyte on-premises database, and the digital marketing application that uses that data to manage and execute campaigns runs in the cloud. How can the cloud-based marketing software quickly access and leverage that on-premises data?

It’s a puzzle that one small consumer-engagement consulting company, Embel Assist, found its clients facing. The traditional solution, perhaps, would be to periodically replicate the on-premises database in the cloud using extract-transform-load (ETL) software, but that may take too much time and bandwidth, especially when processing terabytes of data. What’s more, the replicated data could quickly become out of date.

Using cloud-based development and computing resources, Embel Assist found another way to crack this problem. It created an app called EALink that acts as a smart interface between an organization’s customer data sources and Oracle Eloqua, a cloud-based marketing automation platform. EALink also shows how development using Oracle Cloud Infrastructure creates new opportunities for a small and creative company to take on big enterprise data challenges.

Say the on-premises CRM system for a drugstore chain has 1 million customer records. The chain wants an e-mail campaign to reach customers who made their last purchase more than a month ago, who live within 20 miles of one set of stores, and who purchased products related to a specific condition. Instead of exporting the entire database into Eloqua, EALink runs the record-extraction query on the CRM system and sends Eloqua only the minimum information needed to execute the campaign. And, the query is run when the campaign is being executed, so the campaign information won’t be out of date.
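
Here’s a rough sketch, in plain JDBC, of the kind of targeted extraction that approach implies. The connection details, table names, and column names are hypothetical stand-ins invented for illustration; this isn’t Embel Assist’s actual code.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class CampaignExtract {
    public static void main(String[] args) throws SQLException {
        // Hypothetical Oracle connection string and CRM schema, for illustration only.
        // Requires the Oracle JDBC driver on the classpath.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//crm-host:1521/CRM", "app_user", "app_password");
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT c.email, c.first_name " +
                     "FROM customers c " +
                     "WHERE c.last_purchase_date < SYSDATE - 30 " +  // last purchase over a month ago
                     "  AND c.miles_to_nearest_store <= 20 " +       // lives near the target stores
                     "  AND c.product_category = ?")) {              // bought condition-related products
            ps.setString(1, "CONDITION_X");
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // Only these minimal fields would be handed to the marketing platform.
                    System.out.println(rs.getString("email"));
                }
            }
        }
    }
}
```

The point is the shape of the flow: the selective query runs where the data lives, at campaign time, and only the rows that survive the WHERE clause ever leave the building.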

Learn more about Embel Assist in my story for Forbes, “Embel Assist Links Marketing Apps With Enterprise Data.”

Wayne Rash

A talented programmer is a valued asset to any organization. But that doesn’t mean you shouldn’t take steps to protect yourself and your organization, writes Wayne Rash in his new article for PC Magazine, “Protect Your Business During Custom Coding Projects.”

Wayne begins the story with an uncomfortable anecdote:

On July 19, 2019, contract programmer David Tinley pleaded guilty to charges that he intentionally damaged computers belonging to Siemens Corporation. According to filings in the case, Tinley planted logic bombs into the code he was developing for Siemens at its Monroeville, Pennsylvania location. Those logic bombs, which were sections of code that were timed to create disruption weeks or months after a project was finished, were intended to ensure that Tinley had a constant stream of revenue from having to fix the problems that were assumed to be bugs. When he was called in to fix a problem, Tinley simply changed the date on the logic bomb so that it would go off again later.

Eventually, another programmer was called in to fix Tinley’s code while he was on vacation, and it was then that the plot was uncovered. The 62-year-old Tinley had been working for Siemens for about 12 years before he was caught, but during that time, he was never under any suspicion. Sentencing is set for November 8, 2019, and Tinley could spend up to 10 years in prison and pay fines of up to $250,000.

The article quotes yours truly. Here’s part of it:

“A code review is probably the best way to find out what’s in your code,” said Alan Zeichick, principal analyst at Camden Associates, “including things like logic bombs, security vulnerabilities, or stupid errors [such as hard-wiring the location of a database].”

“There are other reasons to do code reviews,” Zeichick added. “It helps your development team get a better understanding of how development works, and helps junior programmers get a better understanding. Code reviews are also good for helping the team manager get a handle on the quality of the development team and get an estimate of how long it will take to finish the job.”

Zeichick said that there are a couple of ways to conduct code reviews. “You can have a team where there are two people working on it or you can meet in a conference room to review code.”

Teams in which each member reviews someone else’s code are growing in popularity as programmers get harder to find. But in larger organizations, periodic meetings to review code are still useful because then several sets of eyes get to help in the review process. Zeichick said that even the most senior programmers should have their code reviewed.

There’s plenty more, so read Wayne’s article already.

Charles Nutter remembers when, working as a Java architect, he attended a conference and saw the Ruby programming language for the first time. And he was blown away. “I was just stunned that I understood every piece of code, every example, without knowing the language at all. It was super easy for me to understand the code.”

As a Java developer, Nutter began looking for an existing way to run Ruby within a Java runtime environment, specifically a Java virtual machine (JVM). This would let Ruby programs run on any hardware or software platform supported by a JVM, and would facilitate writing polyglot applications that used some Java and some Ruby, with developers free to choose whichever language was best for a particular task.
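
As a minimal sketch of that polyglot idea, here’s how a Java program can evaluate Ruby code through the standard JSR-223 scripting API. It assumes the JRuby jar is on the classpath, which is what registers the “jruby” engine name; nothing here is specific to Nutter’s own work.

```java
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;

public class PolyglotDemo {
    public static void main(String[] args) throws ScriptException {
        // Look up the Ruby engine via the standard Java scripting API (JSR 223).
        ScriptEngine ruby = new ScriptEngineManager().getEngineByName("jruby");
        if (ruby == null) {
            System.err.println("JRuby not found; add the JRuby jar to the classpath.");
            return;
        }
        // Evaluate a Ruby expression from Java; both run in the same JVM.
        Object result = ruby.eval("[1, 2, 3].map { |n| n * 2 }.inject(:+)");
        System.out.println(result); // prints 12
    }
}
```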

Nutter found the existing Ruby-on-JVM project, JRuby. However, “it had not been moving forward very quickly. It had been kind of stalled out for some years.” So, he became involved, helping drive support for a popular web application framework, Ruby on Rails, to run within a JVM.

“We made it work,” says Nutter. “In 2005 and 2006, we got Rails to run on top of the JVM—and it was the first time any major framework from off the Java platform had ever been run on top of the JVM.”

Want to be like Nutter someday? His career advice is direct: Contribute to an open source community, even if it’s a little daunting, and even if some people in that community are, well, rude to newcomers.

“Don’t be afraid to get out into the open source community,” Nutter says. “Get out into the public community, do talks, submit bugs, submit patches. It’s going to be discouraging, and there’s a lot of jerks out there that will scare you away. Don’t let them. Get into the heart of the community and don’t be afraid to help contribute or ask questions.”

For his successful coleadership of JRuby during more than a decade, and for his broader leadership in the software industry, Nutter was recently honored with a Groundbreaker Award. The award was presented at Oracle Code One in San Francisco, where we had a long chat. Read what we talked about in my article for Forbes, “A Java Developer Walks Into A Ruby Conference: Charles Nutter’s Open Source Journey.”

Doug Cutting stands head-and-shoulders above most developers I’ve met—figuratively, as well as literally. As one of the founders of the Hadoop open source project, which allows many Big Data projects to scale to handle huge problems and immense quantities of data, Cutting is revered. Plus, Doug Cutting is tall. Very tall. (Lots taller than I am.)

“Six-foot-eight, or 2 meters 3 centimeters, for the record,” Cutting volunteers when we meet.

In the software industry, Cutting looms large for two major open source successes, proof that innovation lightning sometimes strikes twice. Hadoop, managed by the Apache Software Foundation, is at its heart a framework and file system that manages distributed computing—that is, it allows many computers to work together in a cluster to solve hard problems.

Hadoop provided the initial foundation for many companies’ big data efforts. The software let them pull in data from multiple sources for analysis using clusters of dozens, or hundreds, of servers. The other project, also managed by Apache, is Lucene, a Java library that lets programmers build fast text indexing and searching into their applications.

In his day job, Cutting serves as the chief architect for Cloudera, one of the largest open source software companies. He also serves as an evangelist for the open source movement, inspiring contributions to Hadoop and Lucene and also many other projects.

Cutting was recently honored with a Groundbreaker Award, presented at Oracle Code One in San Francisco. He talked to me about collaborating on open source software, creating a fulfilling career in software, understanding how technology affects society, and the meaning of the word “Hadoop.” Read Cutting’s thoughts about everything from building a career in open source to the meaning of data science in my article for Forbes, “Hadoop Pioneer Says Developers Should Build Open Source Into Their Career Plans.”

Consider an employee who normally fills out his weekly time card on Thursday afternoon, because he doesn’t work most Fridays. Machine learning that’s built into a payroll application could help the app learn the individual working habits of each employee. Having learned this specific pattern, the app could ask him if he meant to fill out the time card when he goes to log out of the system Thursday. There’s no policy there: It’s a behavior pattern that machine learning can pick up on.

In fact, modern-day AI might be able to fill in the time card automatically and present it to the employee for review and approval. That saves even more time, and potentially eliminates errors. This capability, known as “auto defaulting,” could have applications for nearly every form-based application, from accounting to inventory to sales reporting.

Executives wrestle with how to take advantage of artificial intelligence capabilities. That’s especially true now that cloud computing resources have made the technology accessible to companies of all sizes. One of the fastest roads to AI payoff comes from using AI capabilities embedded in applications that your employees use every day—like that time card app.

Smart classification, smart recognition, and smart predictions. Those are three big buckets that encompass many cutting-edge AI and machine learning capabilities.

  • Smart classification involves studying both structured and unstructured data to take action based on what it means, such as to automatically identify unreliable suppliers, properly interpret complex invoices, and categorize consumers based on their current activities and past history.
  • Smart recognition looks for anomalies in the data to find innocent errors—and not-so-innocent ones. Smart recognition can help stop fraud, enforce corporate and compliance policies, and even speed financial reconciliations.
  • Smart predictions go further, such as offering proactive advice to sales reps, making recommendations in e-commerce, or providing suggestions for service reps on how to direct a customer. Pattern-matching can come into play here, such as predicting which add-on product recommendation a customer’s most likely to buy.

Learn more in my story for Forbes, “Want A Bigger Bang From AI? Embed It Into Your Apps.”

Oracle Database is the world’s most popular enterprise database. This year’s addition of autonomous operating capabilities to the cloud version of Oracle Database is one of the most important advances in the database’s history. What does it mean for a database to be “autonomous”? Let’s look under the covers of Oracle Autonomous Database at just a few of the ways it lives up to that name.

Oracle Autonomous Database is a fully managed cloud service. Like all cloud services, the database runs on servers in cloud data centers—in this case, on hardware called Oracle Exadata Database Machine that’s specifically designed and tuned for high-performance, high-availability workloads. The tightly controlled and optimized hardware enables some of the autonomous functionality we’ll discuss shortly.

While the autonomous capability of Oracle Autonomous Database is new, it builds on scores of automation features that Oracle has been building into its Oracle database software and the Exadata database hardware for years. The goals of the autonomous functions are twofold: First, to lower operating costs by reducing costly and tedious manual administration, and second, to improve service levels through automation and fewer human errors.

My essay in Forbes, “What Makes Oracle Autonomous Database Truly ‘Autonomous,’” shows how the capabilities in Oracle Autonomous Database change the game for database administrators (DBAs). The benefit: DBAs are freed from mundane tasks and can focus on higher-value work.

“All aboooooaaaaard!” Whether you love to watch the big freight engines rumble by, or you just ride a commuter train to work, the safety rules around trains are pretty simple for most of us: Look both ways before crossing the track, and never try to beat a train, for example. If you’re a rail operator, however, safety is a much more complicated challenge—such as making sure you always have the right people in the right positions, and ensuring that the crew is properly trained, properly rested, and carrying up-to-date safety certifications.

Helping rail operators tackle that huge challenge is CrewPro, the railroad crew management software from PS Technology, a wholly owned subsidiary of the Union Pacific Railroad. The original versions of this package run on mainframes and are still used by railroads ranging from the largest Class I freight operators to local rail-based passenger transit systems in major US cities.

Those railroad operators use CrewPro to handle complex staffing issues on the engines and on the ground. The demands include scheduling based on availability and seniority; tracking mandatory rest status; and managing certifications and qualifications, including pending certification expirations.

Smaller railroads, though, don’t have the sophisticated IT departments needed to stand up this fully automated crew management system. That’s why PS Technology launched a cloud version, which saw its first railroad customer go online in April. “There are more than 600 short line railroads, and that is our growth area,” says Seenu Chundru, president of PS Technology. “They don’t want to host this type of software on premises.”

Learn more about this in my story for Forbes, “Railroads Roll Ahead With Cloud-Based Crew Management.”

Knowledge is power—and knowledge with the right context at the right moment is the most powerful of all. Emerging technologies will leverage the power of context to help people become more efficient, and one of the first to do so is a new generation of business-oriented digital assistants.

Let’s start by distinguishing a business digital assistant from consumer products such as Apple’s Siri, Amazon’s Echo, and Google’s Home. Those cloud-based technologies have proved themselves at tasks like information retrieval (“How long is my commute today?”) and personal organization (“Add diapers to my shopping list”). Those services have some limited context about you, like your address book, calendar, music library, and shopping cart. What they don’t have is deep knowledge about your job, your employer, and your customers.

In contrast, a business digital assistant needs much richer context to handle the kind of complex tasks we do at work, says Amit Zavery, executive vice president of product development at Oracle. Which sorts of business tasks? How about asking a digital assistant to summarize the recent orders from a company’s three biggest customers in Dallas; set up a conference call with everyone involved with a particular client account; create a report of all employees who haven’t completed information security training; figure out the impact of a canceled meeting on a travel plan; or pull reports on accounts receivable deviations from expected norms?

Those are usually tasks for human associates—often a tech-savvy person in supply chain, sales, finance, or human resources. That’s because so many business tasks require context about the employee making the request and about the organization itself, Zavery says. A digital assistant’s goal should be to reduce the amount of mental energy and physical steps needed to perform such tasks.

Learn more in my article for Forbes, “The One Thing Digital Assistants Need To Become Useful At Work: Context.”

Blockchain and the cloud go together like organic macaroni and cheese. What’s the connection? Choosy shoppers would like to know that their organic food is tracked from farm to shelf, to make sure they’re getting what’s promised on the label. Blockchain provides an immutable ledger perfect for tracking cheese, for example, as it goes from dairy to cheesemaker to distributor to grocer.

Oracle’s new Blockchain Cloud Service provides a platform for each participant in a supply chain to register transactions. Within that blockchain, each participant—and regulators, if appropriate—can review those transactions to ensure that promises are being kept, and that data has not been tampered with. Use cases range from supply chains and financial transactions to data sharing inside a company.

Launched this month, Oracle Blockchain Cloud Service has the features that an enterprise needs to move from experimenting with blockchain to creating production applications. It addresses some of the biggest challenges facing developers and administrators, such as mastering the peer-to-peer protocols used to link blockchain servers, ensuring resiliency and high availability, and ensuring that security is solid. For example, developers previously had to code one-off integrations using complex APIs; Oracle’s Blockchain Cloud Service provides integration accelerators with sample templates and design patterns for many Oracle and third-party applications in the cloud and running on-premises in the data center.

Oracle Blockchain Cloud Service provides the kind of resilience, recoverability, security, and global reach that enterprises require before they’d trust their supply chain and customer experience to blockchain. With blockchain implemented as a managed cloud service, organizations also get a system that’s ready to be integrated with other enterprise applications, and where Oracle handles the back end to ensure availability and security.

Read more about this in my story for Forbes, “Oracle Helps You Put Blockchain Into Real-World Use With New Cloud Service.”

The trash truck rumbles down the street, and its cameras pour video into the city’s data lake. An AI-powered application mines that image data looking for graffiti—and advises whether to dispatch a fully equipped paint crew or a squad with just soap and brushes.

Meanwhile, cameras on other city vehicles could feed the same data lake so another application detects piles of trash that should be collected. That information is used by an application to send the right clean-up squad. Citizens, too, can get into the act, by sending cell phone pictures of graffiti or litter to the city for AI-driven processing.

Applications like these provide the vision for the Intelligent Internet of Things Integration Consortium (I3). This is a new initiative launched by the University of Southern California (USC), the City of Los Angeles, and a number of stakeholders including researchers and industry. At USC, I3 is jointly managed by three institutes: Institute for Communication Technology Management (CTM), Center for Cyber-Physical Systems and the Internet of Things (CCI), and Integrated Media Systems Center (IMSC).

“We’re trying to make the I3 Consortium a big tent,” says Jerry Power, assistant professor at the USC Marshall School of Business’s Institute for Communication Technology Management (CTM). Power serves as executive director of the consortium. “Los Angeles is a founding member, but we’re talking to other cities and vendors. We want lots of people to participate in the process, whether a startup or a super-large corporation.”

As of now, there are 24 members of the consortium, including USC’s Viterbi School of Engineering and Marshall School of Business. And companies are contributing resources. Oracle’s Startup for Higher Education program, for example, is providing $75,000 a year in cloud infrastructure services to support the I3 Consortium’s first three years of development work.

The I3 Consortium needs a lot of computing power. The consortium helps cities move beyond data silos, in which information is confined to individual departments such as transportation and sanitation, to an environment in which data flows among departments, can be more easily managed, and can incorporate contributions from residents and even from other governmental or commercial data providers. That information is consolidated into a city’s data lake, which can be accessed by AI-powered applications across departments.

The I3 Consortium will provide a vehicle to manage the flow of data into the data lake. Cyrus Shahabi, a professor at USC’s Viterbi School of Engineering and director of its Integrated Media Systems Center (IMSC), is using Oracle Cloud credits to build advanced computation applications. These applications supply the vast amounts of processing needed to train deep learning neural networks, which draw on real-time I3-driven data lakes to recognize issues, such as graffiti or garbage, that require action.


Read more about the I3 Consortium in my story for Forbes, “How AI Could Tackle City Problems Like Graffiti, Trash, And Fires.”

Users care passionately about their software being fast and responsive. You need to give your applications both 0-60 speed and the strongest long-term endurance. Here are 14 guidelines for choosing a deployment platform to optimize performance, whether your application runs in the data center or the cloud.

Faster! Faster! Faster! That killer app won’t earn your company a fortune if the software is slow as molasses. Sure, your development team did the best it could to write server software that offers the maximum performance, but that doesn’t mean diddly if those bits end up on a pokey old computer that’s gathering cobwebs in the server closet.

Users don’t care where it runs as long as it runs fast. Your job, in IT, is to make the best choices possible to enhance application speed, including deciding if it’s best to deploy the software in-house or host it in the cloud.

When choosing an application’s deployment platform, there are 14 things you can do to maximize the opportunity for the best overall performance. But first, let’s make two assumptions:

  • These guidelines apply only to choosing the best data center or cloud-based platform, not to choosing the application’s software architecture. The job today is simply to find the best place to run the software.
  • I presume that if you are talking about a cloud deployment, you are choosing infrastructure as a service (IaaS) instead of platform as a service (PaaS). What’s the difference? In PaaS, the host provides the platform: the operating system, such as Windows or Linux, plus the application stack, such as .NET or Java; all you do is provide the application. In IaaS, you can provide, install, and configure the operating system yourself, giving you more control over the installation.

Here’s the checklist

  1. Run the latest software. Whether in your data center or in the IaaS cloud, install the latest version of your preferred operating system, the latest core libraries, and the latest application stack. (That’s one reason to go with IaaS, since you can control updates.) If you can’t control this yourself, because you’re assigned a server in the data center, pick the server that has the latest software foundation.
  2. Run the latest hardware. Assuming we’re talking about the x86 architecture, look for the latest Intel Xeon processors, whether in the data center or in the cloud. As of mid-2018, I’d want servers running the Xeon E5 v3 or later, or E7 v4 or later. If you use anything older than that, you’re not getting the most out of the applications or taking advantage of the hardware chipset. For example, some E7 v4 chips have significantly improved instructions-per-CPU-cycle processing, which is a huge benefit. Similarly, if you choose AMD or another processor, look for the latest chip architectures.
  3. If you are using virtualization, make sure the server has the best and latest hypervisor. The hypervisor is key to a virtual machine’s (VM) performance—but not all hypervisors are created equal. Many of the top hypervisors have multiple product lines as well as configuration settings that affect performance (and security). There’s no way to know which hypervisor is best for any particular application. So, assuming your organization lets you make the choice, test, test, test. However, in the not-unlikely event you are required to go with the company’s standard hypervisor, make sure it’s the latest version.
  4. Take Spectre and Meltdown into account. The patches for Spectre and Meltdown slow down servers, but the extent of the performance hit depends on the server, the server’s firmware, the hypervisor, the operating system, and your application. It would be nice to give an overall number, such as expect a 15 percent hit (a number that’s been bandied about, though some dispute its accuracy). However, there’s no way to know except by testing. Thus, it’s important to know if your server has been patched. If it hasn’t been yet, expect application performance to drop when the patch is installed. (If it’s not going to be patched, find a different host server!)
  5. Base the number of CPUs and cores and the clock speed on the application requirements. If your application and its core dependencies (such as the LAMP stack or the .NET infrastructure) are heavily threaded, the software will likely perform best on servers with multiple CPUs, each equipped with the greatest number of cores—think 24 cores. However, if the application is not particularly threaded or runs in a not-so-well-threaded environment, you’ll get the biggest bang with the absolute top clock speeds on an 8-core server.

But wait, there’s more!

Read the full list of 14 recommendations in my story for HPE Enterprise.nxt, “Checklist: Optimizing application performance at deployment.”

You wouldn’t enjoy paying a fine of 4 percent of your company’s annual worldwide revenue. But that’s the potential penalty if your company is found in violation of the European Union’s new General Data Protection Regulation (GDPR), which goes into effect May 25, 2018. As you’ve probably read, organizations anywhere in the world are subject to GDPR if they have customers in the EU and are storing any of their personal data.

GDPR compliance is a complex topic. It’s too much for one article — heck, books galore are being written about it, seminars abound, and GDPR consultants are on every street corner.

One challenge is that GDPR is a regulation, not a how-to guide. It’s big on explaining penalties for failing to detect and report a data breach in a sufficiently timely manner. It’s not big on telling you how to detect that breach. Rather than tell you what to do, let’s see what could go wrong with your GDPR plans—to help you avoid that 4 percent penalty.

First, the ground rules: GDPR’s overarching goal is to protect citizens’ privacy. In particular, the regulation pertains to anything that can be used to directly or indirectly identify a person. Such data can be anything: a name, a photo, an email address, bank details, social network posts, medical information, or even a computer IP address. To that end, data breaches that may pose a risk to individuals must be disclosed to the authorities within 72 hours and to the affected individuals soon thereafter.

What does that mean? As part of the regulations, individuals must have the ability to see what data you have about them, correct that data if appropriate, or have that data deleted, again if appropriate. (If someone owes you money, they can’t ask you to delete that record.)

Enough preamble. Let’s get into ten common problems.

First: Your privacy and data retention policies aren’t compliant with GDPR

There’s no specific policy wording required by GDPR. However, the policies must meet the overall objectives of GDPR, as well as the requirements in any other jurisdictions in which you operate (such as the United States). What would Alan do? Look at policies from big multinationals that do business in Europe and copy what they do, working with your legal team. You’ve got to get it right.

Second: Your actual practices don’t match your privacy policy

It’s easy to create a compliant privacy policy but hard to ensure your company actually is following it. Do you claim that you don’t store IP addresses? Make sure you’re not. Do you claim that data about a European customer is never stored in a server in the United States? Make sure that’s truly the case.

For example, let’s say you store information about German customers in Frankfurt. Great. But if that data is backed up to a server in Toronto, maybe not great.

Third: Your third-party providers aren’t honoring your GDPR responsibilities

Let’s take that customer data in Frankfurt. Perhaps you have a third-party provider in San Francisco that does data analytics for you, or that runs credit reports or handles image resizing. In those processes, does your customer data ever leave the EU? Even if it stays within the EU, is it protected in ways that are compliant with GDPR and other regulations? It’s your responsibility to make sure: While you might sue a supplier for a breach, that won’t cancel out your own primary responsibility to protect your customers’ privacy.

A place to start with compliance: Do you have an accurate, up-to-date listing of all third-party providers that ever touch your data? You can’t verify compliance if you don’t know where your data is.

But wait, there’s more

You can read the entire list of common GDPR failures in my story for HPE Enterprise.nxt, “10 ways to fail at GDPR compliance.”

Chapter One: Christine Hall

Should the popular Linux operating system be referred to as “Linux” or “GNU/Linux”? It’s a thing, or at least it used to be, writes my friend Christine Hall in her aptly named article, “Is It Linux or GNU/Linux?,” published in Linux Journal on May 11:

Some may remember that the Linux naming convention was a controversy that raged from the late 1990s until about the end of the first decade of the 21st century. Back then, if you called it “Linux”, the GNU/Linux crowd was sure to start a flame war with accusations that the GNU Project wasn’t being given due credit for its contribution to the OS. And if you called it “GNU/Linux”, accusations were made about political correctness, although operating systems are pretty much apolitical by nature as far as I can tell.

Christine (aka Bride of Linux) quotes a number of learned people. That includes Steven J. Vaughan-Nichols, one of the top experts in the politics of open-source software – and frequent critic of the antics of Richard M. Stallman (aka RMS) who founded the Free Software Foundation, and who insists that everyone call the software GNU/Linux.

Here’s what Steven (aka SJVN), said in the article:

“Enough already”, he said. “RMS tried, and failed, to create an operating system: Hurd. He and the Free Software Foundation’s endless attempts to plaster his GNU name to the work of Linus Torvalds and the other Linux kernel developers is disingenuous and an insult to their work. RMS gets credit for EMACS, GPL, and GCC. Linux? No.”

Another humble luminary sought out by Christine: Yours truly.

“For me it’s always, always, always, always Linux,” said Alan Zeichick, an analyst at Camden Associates who frequently speaks, consults and writes about open-source projects for the enterprise. “One hundred percent. Never GNU/Linux. I follow industry norms.”

To make a long story short: In the article, the consensus was for Linux, not GNU/Linux.

Chapter Two: figosdev

But then someone going by the handle “figosdev” authored a rebuttal, “Debunking the Usual Omission of GNU,” published on Techrights. To make a long story short, he believes that the operating system should be called GNU/Linux. Here’s my favorite part of figosdev’s missive (which was written in all lower-case):

ive heard about gnu and linux about a million times in over a decade. as of today ive heard of alan zeichick once, and camden associates (what do they even do?) once. im just going to call them linux, its the more popular term.

Riiight. figosdev never heard of me, fine (founder of SD Times, but figosdev probably never heard of that either). On the other hand, at least figosdev knows my name. I have no idea who figosdev is, since he/she/it is hiding behind a handle, except to infer that he/she/it is a developer on the fig component compiler project. And that brings me to…

Chapter Three: Richi Jennings

Christine Hall’s article sparked a lively debate on Twitter, with my friend Richi Jennings (quoted in the original article) weighing in.

Let’s end the story here, at least for now. Linux forever!

Microservices are a software architecture that has become quite popular in conjunction with cloud-native applications. Microservices allow companies to add or update tech-powered features more easily—and quite frequently even reduce the operating expenses of a product. A microservices approach does this by making it easier to update a large, complex program without revising the entire application, thereby accelerating the process of software updates.

Think about major enterprise software such as a customer management application. Such programs are often written as a single, monolithic application. Yet some parts of that application could be treated as neatly encapsulated functionality, such as the function that talks to an order-processing database to create a new order.

In the microservices architecture, developers could write that order-processing functionality, including its state, as its own program—a loosely coupled service. The main customer management application would then consist of many such services, interacting through each service’s application programming interface (API). Here’s what you need to know about the business advantages of a microservices architectural approach, according to Boris Scholl, vice president of development for microservices at Oracle.
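
To make that concrete, here’s a minimal sketch of a loosely coupled order service, built on nothing but the JDK’s own embedded HTTP server. The endpoint, port, and JSON payload are hypothetical placeholders for illustration, not anything prescribed by Scholl or Oracle.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class OrderService {
    public static void main(String[] args) throws Exception {
        // Stand up a tiny HTTP server to expose the order-processing service.
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/orders", exchange -> {
            // A real service would create an order in its own datastore;
            // here we just return a canned JSON response.
            byte[] body = "{\"orderId\": 1001, \"status\": \"CREATED\"}"
                    .getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        System.out.println("Order service listening on http://localhost:8080/orders");
    }
}
```

The customer management application would call this service over HTTP rather than linking to its code, so the service can be rewritten, redeployed, or rescaled without touching the rest of the application.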

1. Microservices can let you add new features faster to your company’s vital applications

Microservices can reduce complexity for your main enterprise applications. By encapsulating each business scenario, such as ordering a product or shopping cart functionality, into its own service, the code base becomes smaller, easier to maintain, and easier to test. When you want to add or update a feature, “you can go faster by updating the service, as opposed to having to change the functionality of a very large project,” says Scholl.

Managing and connecting these many microservices might sound complex. However, “we are lucky that the microservices technology is evolving so fast there are infrastructures and platforms that make the heavy lifting easier,” Scholl says.

2. Microservices can let you embrace new, modern technology like artificial intelligence (AI) more easily

Developers can use the programming languages, tools, and frameworks that are best suited for each service. Those may not be the same languages, tools, and platforms used for other services. For example, consider how you apply a new idea, like today’s machine learning capability, to a customer management program. If the customer management program is built with a monolithic approach using a specific language, it will be hard to integrate the new functionality easily—not to mention that you are bound to the language, framework, and even version used in the monolithic application.

However, with a microservices approach, developers might write a machine learning-focused service in the Scala language. They might run that service in a specialized AI-based cloud service that has the hardware speed needed to process huge datasets. That Scala-based service gets easily integrated with the rest of the customer application that might be written in Java or some other language. “You get cost savings because you can use a different technology stack for each service,” says Scholl. “You can use the best technology for the service.”

3. Each part of an application written with microservices can have its own release cadence

This feature relates to speed and also control and governance, which can be important to highly regulated industries. “Perhaps some parts of your application can be updated yearly and that fits your needs,” says Scholl. “Other parts might need to be updated more often, if you are looking to be agile and react faster to the market or to take advantage of new technology.”

Let’s go back to the data analytics and machine learning example. Perhaps new machine-learning technology has become available, allowing data analytics to run in seconds instead of minutes. That opportunity can be exploited by updating the data-analytics microservices. Or say an order-processing system was moved from an on-premises database to a cloud database. In a microservices architecture, all the developers would need to do is update the service that accesses that order processing; the rest of the customer-management application would not need to be changed.

“Think about where you need to update more frequently,” says Scholl. “If you can identify those components that should be on a different release cycle, break off that functionality into a new service. Then, use an API to let that new service talk to the rest of your application.”

Read more, including about scalability and organizational agility, in my essay for the Wall Street Journal, “Tech Strategy: 5 Things CEOs Should Know About Microservices.”

Is the cloud ready for sensitive data? You bet it is. Some 90% of businesses in a new survey say that at least half of their cloud-based data is indeed sensitive, the kind that cybercriminals would love to get their hands on.

The migration to the cloud can’t come soon enough. About two-thirds of companies in the study say at least one cybersecurity incident has disrupted their operations within the past two years, and 80% say they’re concerned about the threat that cybercriminals pose to their data.

The good news is that 62% of organizations consider the security of cloud-based enterprise applications to be better than the security of their on-premises applications. Another 21% consider it as good. The caveat: Companies must be proactive about their cloud-based data and can’t naively assume that “someone else” is taking care of that security.

Those insights come from a brand-new threat report, the first ever jointly conducted by Oracle and KPMG. The “Oracle and KPMG Cloud Threat Report 2018,” to be released this month at the RSA Conference, fills a unique niche among the vast number of existing threat and security reports, including the well-respected Verizon Data Breach Investigations Report produced annually since 2008.

The difference is the Cloud Threat Report’s emphasis on hybrid cloud, and on organizations lifting and shifting workloads and data into the cloud. “In the threat landscape, you have a wide variety of reports around infrastructure, threat analytics, malware, penetrations, data breaches, and patch management,” says one of the designers of the study, Greg Jensen, senior principal director of Oracle’s Cloud Security Business. “What’s missing is pulling this all together for the journey to the cloud.”

Indeed, 87% of the 450 businesses surveyed say they have a cloud-first orientation. “That’s the kind of trust these organizations have in cloud-based technology,” Jensen says.

Here are data points that break that idea down into more detail:

  • 20% of respondents to the survey say the cloud is much more secure than their on-premises environments; 42% say the cloud is somewhat more secure; and 21% say the cloud is equally secure. Only 21% think the cloud is less secure.
  • 14% say that more than half of their data is in the cloud already, and 46% say that between a quarter and half of their data is in the cloud.

That cloud-based data is increasingly “sensitive,” the survey respondents say. That data includes information collected from customer relationship management systems, personally identifiable information (PII), payment card data, legal documents, product designs, source code, and other types of intellectual property.

Read more, including what cyberattacks say about the “pace gap,” in my essay in Forbes, “Threat Report: Companies Trust Cloud Security.”

Asking “which is the best programming language” is like asking about the most important cooking tool in your kitchen. Mixer? Spatula? Microwave? Cooktop? Measuring cup? Egg timer? Lemon zester? All are critical, depending on what you’re making, and how you like to cook.

The same is true with programming languages. Some are best at coding applications that run natively on mobile devices — think Objective-C or Java. Others are good at encoding logic within a PDF file, or on a web page — think JavaScript. And still others are best at coding fast applications for virtual machines or running directly on the operating system — for many people, that’s C or C++. Want a general-purpose language? Think Python or PHP. Specialized? R and MATLAB are good for statistics and data analytics. And so on.

Last summer, IEEE Spectrum offered its take, surveying its audience and writing up the “2017 Top Programming Languages.” The top 10 languages for the typical reader:

  1. Python
  2. C
  3. Java
  4. C++
  5. C#
  6. R
  7. JavaScript
  8. PHP
  9. Go
  10. Swift

The story’s author, Stephen Cass, noted not much change in the most popular languages. “Python has continued its upward trajectory from last year and jumped two places to the No. 1 slot, though the top four—Python, C, Java, and C++—all remain very close in popularity.”

What Do The PYPL Say?

The IEEE Spectrum annual survey isn’t the only game in town. The PYPL (PopularitY of Programming Language) index uses raw data from Google Trends to see how often people search for language tutorials. The people behind PYPL say, “If you believe in collective wisdom, the PYPL Popularity of Programming Language index can help you decide which language to study, or which one to use in a new software project.”

Here’s their Top 10:

  1. Java
  2. Python
  3. JavaScript
  4. PHP
  5. C#
  6. C
  7. R
  8. Objective-C
  9. Swift
  10. MATLAB

Asking the RedMonk

Stephen O’Grady describes RedMonk’s Programming Language Rankings, as of January 2018, as being based on two key external sources:

We extract language rankings from GitHub and Stack Overflow, and combine them for a ranking that attempts to reflect both code (GitHub) and discussion (Stack Overflow) traction. The idea is not to offer a statistically valid representation of current usage, but rather to correlate language discussion and usage in an effort to extract insights into potential future adoption trends.

The top languages found by RedMonk look similar to PYPL and IEEE Spectrum:

  1. JavaScript
  2. Java
  3. Python
  4. PHP
  5. C#
  6. C++
  7. CSS
  8. Ruby
  9. C
  10. Swift & Objective-C (tied)

Use the Best Tool for the Job

It would be tempting to use data like this to say, “From now on, everything we’re doing will be in Java,” or “We’re going to do all web coding in JavaScript and use C++ for applications.” Don’t do that. That would be like saying, “We’re going to make everything in the microwave.” Sometimes you want the microwave, sure, but sometimes you want the crockpot, or the regular oven, or sous vide, or the propane grill in your back yard.

The goal is productivity. Use agile processes like Scrum to determine what your development teams are going to build, where those applications will run, and which features must be included. Then let the developers choose the languages that fit best, and that includes supporting experimentation. Let them use R. Let them do some coding in Python if it improves productivity and gets a better job done faster.


Albert Einstein famously said, “Everything should be made as simple as possible, but not simpler.” Agile development guru Venkat Subramaniam has a knack for taking that insight and illustrating just how desperately the software development process needs the lessons of Professor Einstein.

As the keynote speaker at the Oracle Code event in Los Angeles—the first in a 14-city tour of events for developers—Subramaniam describes the art of simplicity, and why and how complexity becomes the enemy. While few would argue that complex is better, that’s what we often end up creating, because complex applications or source code may make us feel smart. But if someone says our software design or core algorithm looks simple, well, we feel bad—perhaps the problem was easy and obvious.

Subramaniam, who’s president of Agile Developer and an instructional professor at the University of Houston, urges us instead to take pride in coming up with a simple solution. “It takes a lot of courage to say, ‘we don’t need to make this complex,’” he argues. (See his full keynote, or register for an upcoming Oracle Code event.)

Simplicity Is Not Simple

Simplicity is hard to define, so let’s start by considering what simple is not, says Subramaniam. In most cases, our first attempts at solving a problem won’t be simple at all. The most intuitive solution might be overly verbose, or inefficient, or perhaps difficult to understand, even by its programmers after the fact.

Simple is not clever. Clever software, or clever solutions, may feel worthwhile, and might cause people to pat developers on the back. But ultimately, it’s hard to understand, and can be hard to change later. “Clever code is self-obfuscating,” says Subramaniam, meaning that it can be incomprehensible. “Even programmers can’t understand their clever code a week later.”

Simple is not necessarily familiar. Subramaniam insists that we are drawn to the old, comfortable ways of writing software, even when those methods are terribly inefficient. He mentioned someone who wrote code with 70 “if/then” questions in a series—because it was familiar. But it certainly wasn’t simple, and would be nearly impossible to debug or modify later. Something that we’re not familiar with may actually be simpler than what we’re comfortable with. To fight complexity, Subramaniam recommends learning new approaches and staying up with the latest thinking and the latest paradigms.

Simple is not over-engineered. Sometimes you can overthink the problem. Perhaps that means trying to develop a generalized algorithm that can be reused to solve many problems, when the situation calls for a fast, basic solution to a single problem. Subramaniam cited Occam’s Razor: When choosing between two solutions, the simplest may be the best.

Simple is not terse. Program source code should be concise, which means that it’s small but also clearly communicates the programmer’s intent. By contrast, something that’s terse may still execute correctly when compiled into software, but the human understanding may be lost. “Don’t confuse terse with concise,” warns Subramaniam. “Both are really small, but terse code is waiting to hurt you when you least expect it.”
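
Here’s a contrived Java illustration of that last distinction (my example, not Subramaniam’s): both methods find the smallest element of an array, but only one of them communicates its intent.

```java
class TerseVersusConcise {
    // Terse: it compiles and runs, but the reader must decode it.
    static int f(int[] a){int r=a[0];for(int x:a)r=x<r?x:r;return r;}

    // Concise: barely longer, and the intent is obvious at a glance.
    static int smallest(int[] values) {
        int min = values[0];
        for (int value : values) {
            min = Math.min(min, value);
        }
        return min;
    }
}
```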

Read more in my essay, “Practical Advice To Whip Complexity And Develop Simpler Software.”

As the saying goes, you can’t manage what you don’t measure. In a data-driven organization, the best tools for measuring performance are business intelligence (BI) and analytics engines, which require data. Data warehouses often provide the source of that data, rolling up and summarizing key information from a variety of sources, which explains why data warehouses continue to play such a crucial role in business.

Data warehouses, which are themselves relational databases, can be complex to set up and manage on a daily basis. They typically require significant human involvement from database administrators (DBAs). In a large enterprise, a team of DBAs ensures that the data warehouse is extracting data from those disparate data sources and accommodating new and changed data sources—and that the extracted data is summarized properly and stored in a structured manner that can be handled by other applications, including those BI and analytics tools.

On top of that, the DBAs manage the data warehouse’s infrastructure. That includes everything from server processor utilization and storage efficiency to data security, backups, and more.

However, the labor-intensive nature of data warehouses is about to change, with the advent of Oracle Autonomous Data Warehouse Cloud, announced in October 2017. The self-driving, self-repairing, self-tuning functionality of Oracle’s Data Warehouse Cloud is good for the organization—and good for the DBAs.

Data-driven organizations need timely, up-to-date business intelligence. This can feed instant decision-making, short-term predictions and business adjustments, and long-term strategy. If the data warehouse goes down, slows down, or lacks some information feeds, the impact can be significant. No data warehouse may mean no daily operational dashboards and reports, or inaccurate dashboards or reports.

For C-level executives, Autonomous Data Warehouse can improve the value of the data warehouse. This boosts the responsiveness of business intelligence and other important applications, by improving availability and performance.

Stop worrying about uptime. Forget about disk-drive failures. Move beyond performance tuning. DBAs, you have a business to optimize.

Read more in my article, “Autonomous Capabilities Will Make Data Warehouses — And DBAs — More Valuable.”

The “throw it over the wall” problem is familiar to anyone who’s seen designers and builders create something that can’t actually be deployed or maintained out in the real world. In the tech world, avoiding this problem is a big part of what gave rise to DevOps.

DevOps combines “development” and “IT operations.” It refers to a set of practices that help software developers and IT operations staff work better, together. DevOps emerged about a decade ago with the goal of tearing down the silos between the two groups, so that companies can get new apps and features out the door, faster and with fewer mistakes and less downtime in production.

DevOps is now widely accepted as a good idea, but that doesn’t mean it’s easy. It requires cultural shifts by two departments that not only have different working styles and toolsets, but where the teams may not even know or respect each other.

When DevOps is properly embraced and implemented, it can help get better software written more quickly. DevOps can make applications easier and less expensive to manage. It can simplify the process of updating software to respond to new requirements. Overall, a DevOps mindset can make your organization more competitive because you can respond quickly to problems, opportunities and industry pressures.

Is DevOps the right strategic fit for your organization? Here are six CEO-level insights about DevOps to help you consider that question:

  1. DevOps can and should drive business agility. DevOps often means supporting a more rapid rate of change in terms of delivering new software or updating existing applications. And it doesn’t just mean programmers knock out code faster. It means getting those new apps or features fully deployed and into customers’ hands. “A DevOps mindset represents development’s best ability to respond to business pressures by quickly bringing new features to market, and we drive that rapid change by leveraging technology that lets us rewire our apps on an ongoing basis,” says Dan Koloski, vice president of product management at Oracle.

For the full story, see my essay for the Wall Street Journal, “Tech Strategy: 6 Things CEOs Should Know About DevOps.”

Simplified Java coding. Less garbage. Faster programs. Those are among the key features in the newly released Java 10, which arrived in developers’ hands only six months after the debut of Java 9 in September.

This pace is a significant change from Java’s previous cycle of one large release every two to three years. With its faster release cadence, Java is poised to provide developers with innovations twice every year, making the language and platform more attractive and competitive. Instead of waiting for a huge omnibus release, the Java community can choose to include new features as soon as those features are ready, in the next six-month Java release train. This gives developers access to the latest APIs, functions, language additions, and JVM updates much faster than ever before.

Java 10 is the first release on the new six-month schedule. It builds incrementally on the significant new functionality that appeared in Java 9, which had a multiyear gestation period.

Java 10 delivers 12 JDK Enhancement Proposals (JEPs). Here’s the complete list, followed by a deeper look at three of the most significant JEPs:

  • Local-Variable Type Inference
  • Consolidate the JDK Forest into a Single Repository
  • Garbage-Collector Interface
  • Parallel Full GC for G1
  • Application Class-Data Sharing
  • Thread-Local Handshakes
  • Remove the Native-Header Generation Tool (javah)
  • Additional Unicode Language-Tag Extensions
  • Heap Allocation on Alternative Memory Devices
  • Experimental Java-Based JIT Compiler
  • Root Certificates
  • Time-Based Release Versioning

See my essay for Forbes, “What Java 10 And Java’s New 6-Month Release Cadence Mean For Developers.” We’ll look at three of the most significant JEPs: Local-Variable Type Inference, Parallel Full GC for G1, and the Experimental Java-Based JIT Compiler.
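
To see what the new cadence delivers in practice, consider the first JEP on that list, local-variable type inference. Here’s a small sketch of my own (the class and variable names are arbitrary) showing how the new var reserved type name trims boilerplate in Java 10:

    import java.util.HashMap;
    import java.util.List;

    public class VarDemo {
        public static void main(String[] args) {
            // Before Java 10, the type had to be spelled out on both sides:
            // HashMap<String, List<Integer>> scores = new HashMap<>();

            // Java 10 infers the type from the initializer. Note that var
            // is a reserved type name, not a keyword, so older code that
            // uses "var" as a variable name still compiles.
            var scores = new HashMap<String, List<Integer>>();
            scores.put("alice", List.of(90, 85));

            // Inference works in for loops, too.
            for (var entry : scores.entrySet()) {
                System.out.println(entry.getKey() + " -> " + entry.getValue());
            }
        }
    }

The inferred types are still fully static; var doesn’t make Java dynamically typed, it simply lets the compiler work out the type at compile time.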

Blockchain is a distributed digital ledger technology in which blocks of transaction records can be added and viewed—but can’t be deleted or changed without detection. Here’s where the name comes from: a blockchain is an ever-growing sequential chain of transaction records, clumped together into blocks. There’s no central repository of the chain, which is replicated in each participant’s blockchain node, and that’s what makes the technology so powerful. Yes, blockchain was originally developed to underpin Bitcoin and is essential to the trust required for users to trade digital currencies, but that is only the beginning of its potential.

Blockchain neatly solves the problem of ensuring the validity of all kinds of digital records. What’s more, blockchain can be used for public transactions as well as for private business, inside a company or within an industry group. “Blockchain lets you conduct transactions securely without requiring an intermediary, and records are secure and immutable,” says Mark Rakhmilevich, product management director at Oracle. “It also can eliminate offline reconciliations that can take hours, days, or even weeks.”

That’s the power of blockchain: an immutable digital ledger for recording transactions. It can be used to power anonymous digital currencies—or farm-to-table vegetable tracking, business contracts, contractor licensing, real estate transfers, digital identity management, and financial transactions between companies or even within a single company.

“Blockchain doesn’t have to just be used for accounting ledgers,” says Rakhmilevich. “It can store any data, and you can use programmable smart contracts to evaluate and operate on this data. It provides nonrepudiation through digitally signed transactions, and the stored results are tamper proof. Because the ledger is replicated, there is no single source of failure, and no insider threat within a single organization can impact its integrity.”

It’s All About Distributed Ledgers

Several simple concepts underpin any blockchain system. The first is the block, which is a batch of one or more transactions, grouped together and hashed. The hashing process produces an error-checking and tamper-resistant code that will let anyone viewing the block see if it has been altered. The block also contains the hash of the previous block, which ties them together in a chain. The backward hashing makes it extremely difficult for anyone to modify a single block without detection.
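
To make that chaining concrete, here’s a toy sketch in Java. It’s my own illustration, not any production blockchain: each block’s hash covers the previous block’s hash, so altering one block invalidates every block after it.

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;

    // A deliberately simplified block; real systems add timestamps,
    // nonces, and Merkle roots over many transactions.
    class Block {
        final String previousHash;  // ties this block to the prior one
        final String data;          // the batch of transactions, simplified
        final String hash;          // SHA-256 over this block's contents

        Block(String previousHash, String data) throws Exception {
            this.previousHash = previousHash;
            this.data = data;
            this.hash = sha256(previousHash + data);
        }

        static String sha256(String input) throws Exception {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            byte[] digest = md.digest(input.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) hex.append(String.format("%02x", b));
            return hex.toString();
        }
    }

    public class ChainDemo {
        public static void main(String[] args) throws Exception {
            Block genesis = new Block("0", "genesis");
            Block next = new Block(genesis.hash, "alice pays bob 5");
            // Tampering with genesis.data would change genesis.hash, and
            // next.previousHash would no longer match: instant detection.
            System.out.println(next.previousHash.equals(genesis.hash)); // true
        }
    }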

A chain contains collections of blocks, which are stored on decentralized, distributed servers. The more servers, the better, with every server containing the same set of blocks and the latest values of information, such as account balances. Multiple transactions are handled within a single block using a data structure called a Merkle tree, or hash tree, which provides fault and fraud tolerance: if a server goes down, or if a block or chain is corrupted, the missing data can be reconstructed by polling other servers’ chains.
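
Here’s a similarly simplified sketch of computing a Merkle root by pairwise hashing. The odd-count convention (pairing the last hash with itself) varies among real implementations; this only shows the idea:

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.util.ArrayList;
    import java.util.List;

    public class MerkleDemo {
        // Hash pairs of transaction hashes repeatedly until one root remains.
        static String merkleRoot(List<String> txHashes) throws Exception {
            List<String> level = new ArrayList<>(txHashes);
            while (level.size() > 1) {
                List<String> next = new ArrayList<>();
                for (int i = 0; i < level.size(); i += 2) {
                    String left = level.get(i);
                    String right = (i + 1 < level.size()) ? level.get(i + 1) : left;
                    next.add(sha256(left + right));
                }
                level = next;
            }
            return level.get(0);
        }

        static String sha256(String s) throws Exception {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            byte[] d = md.digest(s.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : d) hex.append(String.format("%02x", b));
            return hex.toString();
        }

        public static void main(String[] args) throws Exception {
            System.out.println(merkleRoot(List.of(
                sha256("tx1"), sha256("tx2"), sha256("tx3"))));
        }
    }

Change any single transaction and the root changes, which is what lets participants verify a whole block against one short value.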

And while the chain itself should be open for validation by any participant, some chains can be implemented with some form of access control to limit viewing of specific data fields. That way, participants can view relevant data, but not everything in the chain. A customer might be able to verify that a contractor has a valid business license and see the firm’s registered address and list of complaints—but not see the names of other customers. The state licensing board, on the other hand, may be allowed to access the customer list or see which jobs are currently in progress.

When originally conceived, blockchain had a narrow set of protocols. They were designed to govern the creation of blocks, the grouping of hashes into the Merkle tree, the viewing of data encapsulated into the chain, and the validation that data has not been corrupted or tampered with. Over time, creators of blockchain applications (such as the many competing digital currencies) innovated and created their own protocols—which, due to their independent evolutionary processes, weren’t necessarily interoperable. By contrast, the success of general-purpose blockchain services, which might encompass computing services from many technology, government, and business players, created the need for industry standards—such as Hyperledger, a Linux Foundation project.

Read more in my feature article in Oracle Magazine, March/April 2018, “It’s All About Trust.”

Don’t be misled by the name: Serverless cloud computing contains servers. Lots of servers. What makes serverless “serverless” is that developers, IT administrators and business leaders don’t have to think about those servers. Ever.

In the serverless model, online computing power gets tapped automatically only at the moment it’s needed. This can save organizations money and, just as importantly, make the IT organization more agile when it comes to building and launching new applications. That’s why serverless has the potential to be a game-changer for enterprise.

“Serverless is the next logical step for computing,” says Bob Quillin, Oracle vice president of developer relations. “We went from a data center where you own everything, to the cloud with shared servers and centralized infrastructure, to serverless, where you don’t even care about the servers themselves.”

In the serverless model, developers write and deploy what are called “functions.” Those are slimmed-down applications that take one action, such as processing an e-commerce order or recording that a shipment arrived. They run those functions directly on the cloud, using technology that eliminates the need to manage the servers, since it delivers computing power the moment that a function gets called into action.
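
The shape of such a function is strikingly small. The sketch below is illustrative only; each serverless platform supplies its own SDK and event types, so the class and method signature here are my own invention:

    public class RecordShipmentFunction {
        // Illustrative input type; real platforms bind incoming JSON
        // events to handler parameters through their own SDKs.
        public static class ShipmentEvent {
            public String orderId;
            public String depot;
        }

        // Stateless and single-purpose: the platform instantiates this
        // code when an event arrives, runs it, and can scale to zero.
        public String handle(ShipmentEvent event) {
            return "Shipment " + event.orderId + " arrived at " + event.depot;
        }
    }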

Both the economics and the speed-of-development benefits of serverless cloud computing are compelling. Here are four CEO-level insights from Quillin for thinking about serverless computing.

First: Serverless can save real money. In the old data center model, says Quillin, organizations had to buy and maintain expensive servers, infrastructure and real estate.

In a traditional cloud model, organizations turn that capital expense into an operating one by provisioning virtualized servers and infrastructure. That saves money compared with the old data center model, Quillin says, but “you are typically paying for compute resources that are running all the time—in increments of CPU hours at least.” If you create a cluster of cloud servers, you don’t typically build it up and break it down every day, and certainly not every hour, as needed. That’s just too much management and orchestration for most organizations.

Serverless, on the other hand, essentially lets you pay only for exactly the time that a workload runs. For closing the books, it may be a once-a-month charge for a few hours of computing time. For handling transactions, it might be a few tenths of a second whenever a customer makes a sale or an Internet of Things (IoT) device sends data.
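
Back-of-the-envelope arithmetic shows why. The prices below are invented for illustration and are not any vendor’s rate card:

    public class ServerlessCostSketch {
        public static void main(String[] args) {
            // Hypothetical list prices, for illustration only.
            double vmPerHour = 0.10;             // always-on virtual server
            double fnPerGbSecond = 0.0000166667; // pay-per-use function time

            // An always-on VM for a 30-day month:
            double vmMonthly = vmPerHour * 24 * 30;       // $72.00

            // Serverless: 100,000 requests/month, 200 ms each, 0.5 GB RAM:
            double gbSeconds = 100_000 * 0.2 * 0.5;       // 10,000 GB-seconds
            double fnMonthly = gbSeconds * fnPerGbSecond; // about $0.17

            System.out.printf("VM: $%.2f  Functions: $%.2f%n",
                              vmMonthly, fnMonthly);
        }
    }

For spiky or occasional workloads, paying only for execution time can be orders of magnitude cheaper than an always-on server.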

For the rest of the list, and the full story, see my essay for the Wall Street Journal, “4 Things CEOs Should Know About Serverless Computing.”

DevOps is a technology discipline well suited to cloud-native application development. When it takes only a few mouse clicks to create or manage cloud resources, why wouldn’t developers and IT operations teams work in sync to get new apps out the door and in front of users faster? The DevOps culture and tactics have done much to streamline everything from coding to software testing to application deployment.

Yet far from every organization has embraced DevOps, and not every organization that has tried DevOps has found the experience transformative. Perhaps that’s because the idea is relatively young (the term was coined around 2009), suggests Javed Mohammed, systems community manager at Oracle, or perhaps because different organizations are at such different spots in DevOps’ technology adoption cycle. That idea—about where we are in the adoption of DevOps—became a central theme of a recent podcast discussion among tech experts. Following are some highlights.

Confusion about DevOps can arise because DevOps affects dev and IT teams in many ways. “It can apply to the culture piece, to the technology piece, to the process piece—and even how different teams interact, and how all of the different processes tie together,” says Nicole Forsgren, founder and CEO of DevOps Research and Assessment LLC and co-author of Accelerate: The Science of Lean Software and DevOps.

The adoption and effectiveness of DevOps within a team depend on where each team is, and where its organization is. One team might be narrowly focused on the tech used to automate software deployment to the public, while another is looking at the culture and communication needed to release new features on a weekly or even daily basis. “Everyone is at a very, very different place,” Forsgren says.

Indeed, says Forsgren, some future-thinking organizations are starting to talk about what ‘DevOps Next’ is, extending the concept of developer-led operations beyond common best practices. At the same time, in other companies, there’s no DevOps. “DevOps isn’t even on their radar,” she sighs. Many experts, including Forsgren, see that DevOps is here, is working, and is delivering real value to software teams today—and is helping businesses create and deploy better software faster and less expensively. That’s especially true when it comes to cloud-native development, or when transitioning existing workloads from the data center into the cloud.

Read more in my essay, “DevOps: Sometimes Incredibly Transformative, Sometimes Not So Much.”

You are not the user. If you are the CEO, CTO, chief network architect, or software developer – you aren’t the user of the software or systems that you are building; or at least, you aren’t the primary user. What you are looking for isn’t what your customer or employee is looking for. The vocabulary you use isn’t the vocabulary your customer uses, and it may not be what your partners use either.

Two trivial examples:

  1. I recently had my hair cut, and the stylist asked me, “Do you need any product?” Well, I don’t use product. I use shampoo. “Product” is stylist-speak, not customer-speak.
  2. For lunch one day, I stopped at a fast-food chain. Yes, yes, I know, not the healthiest. When my meal was ready, I heard over the speaker, “Order 143, your order is up.” Hmm. Up? In customer-speak, it should have been, “Your order is ready.”

In the essay, “You Are Not the User: The False-Consensus Effect,” Raluca Budiu observes:

While many people who earn a living from developing software will write tons of programs to make their own life easier, much, if not most, of their output will in fact be intended for other people — people who are not working in a cubicle nearby, or not even in the same building. These “users” are usually very different than those who write the code, even in the rare case where they are developers: they have different backgrounds, different experiences with user interfaces, different mindsets, different mental models, and different goals. They are not us.

Budiu defines the false-consensus effect as “people’s tendency to assume that others share their beliefs and will behave similarly in a given context.” Avoiding it takes more than designing cool software. Good design requires testing with real-life customers or end users, in real-life situations.

The way I navigate a grocery store is not the way the store’s designer navigates it. It’s certainly not the way the store’s manager navigates it, or its chief risk officer. That’s why grocery stores spend a fortune observing users and testing different layouts, not only to maximize sales and profitability but also to maximize the user’s satisfaction. A good design often requires a balance between the needs of the designer and the needs of the users.

My wife was recently frustrated when navigating an insurance company’s website. It was clearly not designed for her use case. Frankly, it’s hard to imagine anyone being satisfied with that website. And how about the process of logging into a WiFi network in a hotel, airport, or coffee shop? Could it be more difficult?

Focus on the User Experience

The Nielsen Norman Group, experts in usability, has offered a list of “10 Usability Heuristics for User Interface Design.” While Jakob Nielsen is focused on the software user experience, these are rules that we should follow in many other situations. Consider this point:

Match between system and the real world: The system should speak the users’ language, with words, phrases and concepts familiar to the user, rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order.

Yes, and how about this one:

Help users recognize, diagnose, and recover from errors: Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution.

That’s so familiar. How many of us have been frustrated by dialog boxes, not knowing exactly what will happen if we press “Cancel” or “Okay”?
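
Java’s standard Swing library shows how little it takes to honor that heuristic. A toy sketch using JOptionPane: label the buttons with the actions they perform instead of a generic “OK” and “Cancel”:

    import javax.swing.JOptionPane;

    public class DialogDemo {
        public static void main(String[] args) {
            // Descriptive button labels: the user knows exactly what
            // will happen, with no guessing about "OK" versus "Cancel".
            String[] options = { "Save changes", "Discard changes", "Keep editing" };
            int choice = JOptionPane.showOptionDialog(
                null,
                "You have unsaved changes to report.docx.",
                "Unsaved Changes",
                JOptionPane.YES_NO_CANCEL_OPTION,
                JOptionPane.WARNING_MESSAGE,
                null,           // no custom icon
                options,        // the descriptive labels above
                options[0]);    // default button
            System.out.println(choice >= 0 ? options[choice] : "Dialog closed");
        }
    }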

Design Thinking

The article “Design Thinking” by Sarah Gibbons talks about what we should do when designing systems. That means getting them in front of real people:

Prototype: Build real, tactile representations for a subset of your ideas. The goal of this phase is to understand what components of your ideas work, and which do not. In this phase you begin to weigh the impact vs. feasibility of your ideas through feedback on your prototypes.

Test: Return to your users for feedback. Ask yourself ‘Does this solution meet users’ needs?’ and ‘Has it improved how they feel, think, or do their tasks?’

Put your prototype in front of real customers and verify that it achieves your goals. Has the users’ perspective during onboarding improved? Does the new landing page increase time or money spent on your site? As you are executing your vision, continue to test along the way.

Never forget, you are not the user.

“The functional style of programming is very charming,” insists Venkat Subramaniam. “The code begins to read like the problem statement. We can relate to what the code is doing and we can quickly understand it.” Not only that, Subramaniam explains in his keynote address for Oracle Code Online, but as implemented in Java 8 and beyond, functional-style code is lazy—and that laziness makes for efficient operations because the runtime isn’t doing unnecessary work.

Subramaniam, president of Agile Developer and an instructional professor at the University of Houston, believes that laziness is the secret to success, both in life and in programming. Pretend that your boss tells you on January 10 that a certain hourlong task must be done before April 15. A go-getter might do that task by January 11.

That’s wrong, insists Subramaniam. Don’t complete that task until April 14. Why? Because the results of the boss’s task aren’t needed yet, and the requirements may change before the deadline, or the task might be canceled altogether. Or you might even leave the job on March 13. This same mindset should apply to your programming: “Efficiency often means not doing unnecessary work.”

Subramaniam received the JavaOne RockStar award three years in a row and was inducted into the Java Champions program in 2013 for his efforts in motivating and inspiring software developers around the world. In his Oracle Code Online keynote, he explored how functional-style programming is implemented in the latest versions of Java, and why he’s so enthusiastic about using this style for applications that process lots and lots of data—and where it’s important to create code that is easy to read, easy to modify, and easy to test.

Functional Versus Imperative Programming

The old mainstream of imperative programming, which has been a part of the Java language from day one, relies upon developers to explicitly code not only what they want the program to do, but also how to do it. Take software that has a huge amount of data to process; the programmer would normally create a loop that examines each piece of data and, if appropriate, takes specific action on that data with each iteration of the loop. It’s up to the developer to create the loop and manage it—in addition to coding the business logic to be performed on the data.

The imperative model, argues Subramaniam, results in what he calls “accidental complexity”—each line of code might perform multiple functions, which makes it hard to understand, modify, test, and verify. And the developer must do a lot of work to set up and manage the data and iterations. “You get bogged down with the details,” he says. This not only introduces complexity, but makes the code hard to change.

By contrast, when using a functional style of programming, developers can focus almost entirely on what is to be done, while ignoring the how. The how is handled by the underlying library of functions, which are defined separately and applied to the data as required. Subramaniam says that functional-style programming provides highly expressive code, where each line of code does only one thing: “The code becomes easier to work with, and easier to write.”
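
Here’s a small sketch of the contrast using standard Java streams; the numbers and the doubling rule are arbitrary:

    import java.util.List;

    public class StyleDemo {
        public static void main(String[] args) {
            List<Integer> values = List.of(4, 7, 12, 9, 20, 3);

            // Imperative: we manage the loop, the condition, and the
            // accumulator ourselves; the "how" is tangled with the "what".
            int totalImperative = 0;
            for (int v : values) {
                if (v % 2 == 0) {
                    totalImperative += v * 2;
                }
            }

            // Functional: each step states one intention. The pipeline
            // is lazy, so nothing runs until the terminal sum() call.
            int totalFunctional = values.stream()
                                        .filter(v -> v % 2 == 0)
                                        .mapToInt(v -> v * 2)
                                        .sum();

            System.out.println(totalImperative + " == " + totalFunctional);
        }
    }

Note the laziness Subramaniam praises: filter and mapToInt merely declare intentions, and no work happens until sum() demands a result.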

Subramaniam adds that in functional-style programming, “The code becomes the business logic.” Read more in my essay published in Forbes, “Lazy Java Code Makes Applications Elegant, Sophisticated — And Efficient at Runtime.”


When the little wireless speaker in your kitchen acts on your request to add chocolate milk to your shopping list, there’s artificial intelligence (AI) working in the cloud, to understand your speech, determine what you want to do, and carry out the instruction.

When you send a text message to your HR department explaining that you woke up with a vision-blurring migraine, an AI-powered chatbot knows how to update your status to “out of the office” and notify your manager about the sick day.

When hackers attempt to systematically break into the corporate computer network over a period of weeks, AI sees the subtle patterns in historical log data, recognizes outliers in the packet traffic, raises the alarm, and recommends appropriate countermeasures.

AI is nearly everywhere in today’s society. Sometimes it’s fairly obvious (as with a chatbot), and sometimes AI is hidden under the covers (as with network security monitors). It’s a virtuous cycle: Modern cloud computing and algorithms make AI a fast, efficient, and inexpensive approach to problem-solving. Developers discover those cloud services and algorithms and imagine new ways to incorporate the latest AI functionality into their software. Businesses see the value of those advances (even if they don’t know that AI is involved), and everyone benefits. And quickly, the next wave of emerging technology accelerates the cycle again.

AI can improve the user experience, such as when deciphering spoken or written communications, or inferring actions based on patterns of past behavior. AI techniques are excellent at pattern matching, which makes it easier for machines to accurately decipher human languages using context. One characteristic of several AI algorithms is flexibility in handling imprecise data, such as human text. Consider chatbots: humans can type messages on their phones, and AI-driven software can understand what they say and carry on a conversation, providing desired information or taking the appropriate actions.

If you think AI is everywhere today, expect more tomorrow. AI-enhanced software-as-a-service and platform-as-a-service products will continue to incorporate additional AI to help make cloud-delivered and on-prem services more reliable, more performant, and more secure. AI-driven chatbots will find their way into new, innovative applications, and speech-based systems will continue to get smarter. AI will handle larger and larger datasets and find its way into increasingly diverse industries.

Sometimes you’ll see the AI and know that you’re talking to a bot. Sometimes the AI will be totally hidden, as you marvel at the, well, uncanny intelligence of the software, websites, and even the Internet of Things. If you don’t believe me, ask a chatbot.

Read more in my feature article in the January/February 2018 edition of Oracle Magazine, “It’s Pervasive: AI Is Everywhere.”

Millions of developers are using Artificial Intelligence (AI) or Machine Learning (ML) in their projects, says Evans Data Corp. Evans’ latest Global Development and Demographics Study, released in January 2018, says that 29% of developers worldwide, or 6,452,000 in all, are currently using some form of AI or ML. What’s more, says the study, an additional 5.8 million expect to use AI or ML within the next six months.

ML is actually a subset of AI. To quote expertsystem.com,

In practice, artificial intelligence – also simply defined as AI – has come to represent the broad category of methodologies that teach a computer to perform tasks as an “intelligent” person would. This includes, among others, neural networks or the “networks of hardware and software that approximate the web of neurons in the human brain” (Wired); machine learning, which is a technique for teaching machines to learn; and deep learning, which helps machines learn to go deeper into data to recognize patterns, etc. Within AI, machine learning includes algorithms that are developed to tell a computer how to respond to something by example.

The same site defines ML as,

Machine learning is an application of artificial intelligence (AI) that provides systems the ability to automatically learn and improve from experience without being explicitly programmed. Machine learning focuses on the development of computer programs that can access data and use it [to] learn for themselves.

The process of learning begins with observations or data, such as examples, direct experience, or instruction, in order to look for patterns in data and make better decisions in the future based on the examples that we provide. The primary aim is to allow the computers [to] learn automatically without human intervention or assistance and adjust actions accordingly.

A related and popular AI-derived technology, by the way, is Deep Learning. DL uses simulated neural networks to attempt to mimic the way a human brain learns and reacts. To quote from Rahul Sharma on Techgenix,

Deep learning is a subset of machine learning. The core of deep learning is associated with neural networks, which are programmatic simulations of the kind of decision making that takes place inside the human brain. However, unlike the human brain, where any neuron can establish a connection with some other proximate neuron, neural networks have discrete connections, layers, and data propagation directions.

Just like machine learning, deep learning is also dependent on the availability of massive volumes of data for the technology to “train” itself. For instance, a deep learning system meant to identify objects from images will need to run millions of test cases to be able to build the “intelligence” that lets it fuse together several kinds of analysis together, to actually identify the object from an image.
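
To ground the “learning from examples” idea, here’s a deliberately tiny sketch in plain Java (no framework, just gradient descent fitting a single parameter from example data):

    public class LearnSlope {
        public static void main(String[] args) {
            // Training examples of y = 3x; the "3" is never coded directly.
            double[] xs = { 1, 2, 3, 4 };
            double[] ys = { 3, 6, 9, 12 };

            double w = 0.0;             // the parameter to be learned
            double learningRate = 0.01;

            // Gradient descent on mean squared error: repeatedly nudge w
            // toward whatever value makes predictions match the examples.
            for (int epoch = 0; epoch < 2000; epoch++) {
                double gradient = 0.0;
                for (int i = 0; i < xs.length; i++) {
                    double error = w * xs[i] - ys[i];
                    gradient += 2 * error * xs[i] / xs.length;
                }
                w -= learningRate * gradient;
            }
            System.out.printf("Learned w = %.4f (true value is 3)%n", w);
        }
    }

Nobody tells the program the answer is 3; it converges on that value from the examples alone. Scale the same principle up to millions of parameters and you have the neural networks described above.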

Why So Many AI Developers? Why Now?

You can find AI, ML and DL everywhere, it seems. There are highly visible projects, like self-driving cars, or the speech recognition software inside Amazon’s Alexa smart speakers. That’s merely the tip of the iceberg. These technologies are embedded into the Internet of Things, into smart analytics and predictive analytics, into systems management, into security scanners, into Facebook, into medical devices.

A modern and highly visible application of AI/ML is the chatbot – software that can communicate with humans via textual interfaces. Some companies use chatbots on their websites or on social media channels (like Twitter) to talk to customers and provide basic customer services. Others use the tech within a company, such as in human-resources applications that let employees make requests (like scheduling vacation) by simply texting the HR chatbot.

AI is also paying off in finance. The technology can help service providers (like banks or payment-card transaction clearinghouses) more accurately review transactions to see if they are fraudulent, and improve overall efficiency. According to John Rampton, writing for the Huffington Post, AI investment by financial tech companies was more than $23 billion in 2016. The benefits of AI, he writes, include:

  • Increasing Security
  • Reducing Processing Times
  • Reducing Duplicate Expenses and Human Error
  • Increasing Levels of Automation
  • Empowering Smaller Companies

Rampton also explains that AI can offer game-changing insights:

One of the most valuable benefits AI brings to organizations of all kinds is data. The future of Fintech is largely reliant on gathering data and staying ahead of the competition, and AI can make that happen. With AI, you can process a huge volume of data which will, in turn, offer you some game-changing insights. These insights can be used to create reports that not only increase productivity and revenue, but also help with complex decision-making processes.

What’s happening in fintech with AI is nothing short of revolutionary. That’s true of other industries as well. Instead of asking why so many developers, 29%, are focusing on AI, we should ask, “Why so few?”

I’m #1! Well, actually #4 and #7. During 2017, I wrote several articles for Hewlett Packard Enterprise’s online magazine, Enterprise.nxt Insights, and two of them were quite successful – named as #4 and #7 in the site’s list of Top 10 Articles for 2017.

Article #7 was, “4 lessons for modern software developers from 1970s mainframe programming.” Based entirely on my own experiences, the article began,

Eight megabytes of memory is plenty. Or so we believed back in the late 1970s. Our mainframe programs usually ran in 8 MB virtual machines (VMs) that had to contain the program, shared libraries, and working storage. Though these days, you might liken those VMs more to containers, since the timesharing operating system didn’t occupy VM space. In fact, users couldn’t see the OS at all.

In that mainframe environment, we programmers learned how to be parsimonious with computing resources, which were expensive, limited, and not always available on demand. We learned how to minimize the costs of computation, develop headless applications, optimize code up front, and design for zero defects. If the very first compilation and execution of a program failed, I was seriously angry with myself.

Please join me on a walk down memory lane as I revisit four lessons I learned while programming mainframes and teaching mainframe programming in the era of Watergate, disco on vinyl records, and Star Wars—and which remain relevant today.

Article #4 was, “The OWASP Top 10 is killing me, and killing you!” It began,

Software developers and testers must be sick of hearing security nuts rant, “Beware SQL injection! Monitor for cross-site scripting! Watch for hijacked session credentials!” I suspect the developers tune us out. Why? Because we’ve been raving about the same defects for most of their careers. Truth is, though, the same set of major security vulnerabilities persists year after year, decade after decade.

The industry has generated newer tools, better testing suites, Agile methodologies, and other advances in writing and testing software. Despite all that, coders keep making the same dumb mistakes, peer reviews keep missing those mistakes, test tools fail to catch those mistakes, and hackers keep finding ways to exploit those mistakes.

One way to see the repeat offenders is to look at the OWASP Top 10, a sometimes controversial ranking of the 10 primary vulnerabilities, published every three or four years by the Open Web Application Security Project.

The OWASP Top 10 list is not controversial because it’s flawed. Rather, some believe that the list is too limited: by focusing only on the top 10 web code vulnerabilities, they assert, it encourages neglect of the long tail. What’s more, there’s often jockeying in the OWASP community about the Top 10 ranking and whether the 11th or 12th items belong in the list instead of something else. There’s merit to those arguments, but for now, the OWASP Top 10 is an excellent common ground for discussing security-aware coding and testing practices.
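
Since SQL injection has sat at or near the top of that list for years, it’s worth seeing how small the gap is between the classic mistake and the standard fix. A JDBC sketch, with table and column names invented for illustration:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class InjectionDemo {
        // Vulnerable: user input is concatenated into the SQL text, so
        // input such as  ' OR '1'='1  rewrites the query's meaning.
        static ResultSet findUserUnsafe(Connection conn, String name) throws Exception {
            Statement stmt = conn.createStatement();
            return stmt.executeQuery(
                "SELECT * FROM users WHERE name = '" + name + "'");
        }

        // Safe: a parameterized query keeps data separate from SQL, so
        // the driver never interprets the input as part of the query.
        static ResultSet findUserSafe(Connection conn, String name) throws Exception {
            PreparedStatement stmt = conn.prepareStatement(
                "SELECT * FROM users WHERE name = ?");
            stmt.setString(1, name);
            return stmt.executeQuery();
        }
    }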

Click the links (or pictures) above and enjoy the articles! And kudos to my prolific friend Steven J. Vaughan-Nichols, whose articles took the #3, #2 and #1 slots. He’s good. Damn good.