We all have heard the usual bold predictions for technology in 2018: Lots of cloud computing, self-driving cars, digital cryptocurrencies, 200-inch flat-screen televisions, and versions of Amazon’s Alexa smart speaker everywhere on the planet. Those types of predictions, however, are low-hanging fruit. They’re not bold. One might as well predict that there will be some sunshine, some rainy days, a big cyber-bank heist, and at least one smartphone catching fire.

Let’s dig for insights beyond the blindingly obvious. I talked to several tech leaders, deep-thinking individuals in California’s Silicon Valley, asking them for their predictions, their idea of new trends, and disruptions in the tech industry. Let’s see what caught their eye.

Gary Singh, VP of marketing, OnDot Systems, believes that 2018 will be the year when mobile banking will transform into digital banking — which is more disruptive than one would expect. “The difference between digital and mobile banking is that mobile banking is informational. You get information about your accounts,” he said. Singh continues, “But in terms of digital banking, it’s really about actionable insights, about how do you basically use your funds in the most appropriate way to get the best value for your dollar or your pound in terms of how you want to use your monies. So that’s one big shift that we would see start to happen from mobile to digital.”

Tom Burns, Vice President and General Manager of Dell EMC Networking, has been following Software-Defined Wide Area Networks. SD-WAN is a technology that allows enterprise WANs to thrive over the public Internet, replacing expensive fixed-point connections provisioned by carriers using technologies like MPLS. “The traditional way of connecting branches in office buildings and providing services to those particular branches is going to change,” Burns observed. “If you look at the traditional router, a proprietary architecture, dedicated lines. SD-WAN is offering a much lower cost but same level of service opportunity for customers to have that data center interconnectivity or branch connectivity providing some of the services, maybe a full even office in the box, but security services, segmentation services, at a much lower cost basis.”

NetFoundry’s co-founder, Mike Hallett, sees a bright future for Application Specific Networks, which link applications directly to cloud or data center applications. The focus is on the application, not on the device. “For 2018, when you think of the enterprise and the way they have to be more agile, flexible and faster to move to markets, particularly going from what I would call channel marketing to, say, direct marketing, they are going to need application-specific networking technologies.” Hallett explains that Application Specific Networks offer the ability to connect from an application, a cloud, a device, or a thing to any other application, device, or thing, quickly and with agility. Indeed, those connections, which are created using software, not hardware, could be created “within minutes, not within the weeks or months it might take, to bring up a very specific private network, being able to do that. So the year of 2018 will see enterprises move towards software-only networking.”

Mansour Karam, CEO and founder of Apstra, also sees software taking over the network. “I really see massive software-driven automation as a major trend. We saw technologies like intent-based networking emerge in 2017, and in 2018, they’re going to go mainstream,” he said.

There’s more

There are predictions around open networking, augmented reality, artificial intelligence – and more. See my full story in Upgrade Magazine, “From SD-WAN to automation to white-box switching: Five tech predictions for 2018.”

The pattern of cloud adoption moves something like the ketchup bottle effect: You tip the bottle and nothing comes out, so you shake the bottle and suddenly you have ketchup all over your plate.

That’s a great visual from Frank Munz, software architect and cloud evangelist at Munz & More, in Germany. Munz and a few other leaders in the Oracle community were interviewed on a podcast by Bob Rhubart, Architect Community Manager at Oracle, about the most important trends they saw in 2017. The responses covered a wide range of topics, from cloud to blockchain, from serverless to machine learning and deep learning.

During the 44-minute session, “What’s Hot? Tech Trends That Made a Real Difference in 2017,” the panel took some fascinating detours into the future of self-programming computers and the best uses of container technologies like Kubernetes. For those, you’ll need to listen to the podcast.

The panel included: Frank Munz; Lonneke Dikmans, chief product officer of eProseed, Netherlands; Lucas Jellema, CTO, AMIS Services, Netherlands; Pratik Patel, CTO, Triplingo, US; and Chris Richardson, founder and CEO, Eventuate, US. The program was recorded in San Francisco at Oracle OpenWorld and JavaOne.

The cloud’s tipping point

The ketchup quip reflects the cloud passing a tipping point of adoption in 2017. “For the first time in 2017, I worked on projects where large, multinational companies give up their own data center and move 100% to the cloud,” Munz said. These workload shifts are far from a rarity. As Dikmans said, the cloud drove the biggest change and challenge: “[The cloud] changes how we interact with customers, and with software. It’s convenient at times, and difficult at others.”

Security offered another way of looking at this tipping point. “Until recently, organizations had the impression that in the cloud, things were less secure and less well managed, in general, than they could do themselves,” said Jellema. Now, “people have come to realize that they’re not particularly good at specific IT tasks, because it’s not their core business.” They see that cloud vendors, whose core business is managing that type of IT, can often do those tasks better.

In 2017, the idea of shifting workloads en masse to the cloud and decommissioning data centers became mainstream and far less controversial.

But wait, there’s more! See about Blockchain, serverless computing, and pay-as-you-go machine learning, in my essay published in Forbes, “Tech Trends That Made A Real Difference In 2017.”

“The functional style of programming is very charming,” insists Venkat Subramaniam. “The code begins to read like the problem statement. We can relate to what the code is doing and we can quickly understand it.” Not only that, Subramaniam explains in his keynote address for Oracle Code Online, but as implemented in Java 8 and beyond, functional-style code is lazy—and that laziness makes for efficient operations because the runtime isn’t doing unnecessary work.

Subramaniam, president of Agile Developer and an instructional professor at the University of Houston, believes that laziness is the secret to success, both in life and in programming. Pretend that your boss tells you on January 10 that a certain hourlong task must be done before April 15. A go-getter might do that task by January 11.

That’s wrong, insists Subramaniam. Don’t complete that task until April 14. Why? Because the results of the boss’s task aren’t needed yet, and the requirements may change before the deadline, or the task might be canceled altogether. Or you might even leave the job on March 13. This same mindset should apply to your programming: “Efficiency often means not doing unnecessary work.”

Subramaniam received the JavaOne RockStar award three years in a row and was inducted into the Java Champions program in 2013 for his efforts in motivating and inspiring software developers around the world. In his Oracle Code Online keynote, he explored how functional-style programming is implemented in the latest versions of Java, and why he’s so enthusiastic about using this style for applications that process lots and lots of data—and where it’s important to create code that is easy to read, easy to modify, and easy to test.

Functional Versus Imperative Programming

The old mainstream of imperative programming, which has been a part of the Java language from day one, relies upon developers to explicitly code not only what they want the program to do, but also how to do it. Take software that has a huge amount of data to process; the programmer would normally create a loop that examines each piece of data and, if appropriate, takes specific action on that data with each iteration of the loop. It’s up to the developer to create the loop and manage it—in addition to coding the business logic to be performed on the data.
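To make the contrast concrete, here is a minimal sketch of the imperative approach (my own illustration, not code from Subramaniam’s keynote): the developer owns the loop, the condition, and the mutable result list, all tangled together with the business logic.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ImperativeExample {
    public static void main(String[] args) {
        List<Integer> orders = Arrays.asList(12, 7, 41, 3, 25, 18);

        // Imperative style: we manage the loop, the filtering condition,
        // and the mutable accumulator ourselves.
        List<Integer> bigOrdersDoubled = new ArrayList<>();
        for (Integer order : orders) {
            if (order > 10) {                    // filtering logic
                bigOrdersDoubled.add(order * 2); // transformation logic
            }
        }
        System.out.println(bigOrdersDoubled);    // prints [24, 82, 50, 36]
    }
}
```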

The imperative model, argues Subramaniam, results in what he calls “accidental complexity”—each line of code might perform multiple functions, which makes it hard to understand, modify, test, and verify. And the developer must do a lot of work to set up and manage the data and iterations. “You get bogged down with the details,” he said. This not only introduces complexity, but also makes the code hard to change.

By contrast, when using a functional style of programming, developers can focus almost entirely on what is to be done, while ignoring the how. The how is handled by the underlying library of functions, which are defined separately and applied to the data as required. Subramaniam says that functional-style programming provides highly expressive code, where each line of code does only one thing: “The code becomes easier to work with, and easier to write.”
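Here is the same task in the functional style (again, my own sketch). The loop and the mutable accumulator disappear, each line states one thing to be done, and the intermediate operations are lazy: nothing is computed until the terminal collect() call asks for a result.

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class FunctionalExample {
    public static void main(String[] args) {
        List<Integer> orders = Arrays.asList(12, 7, 41, 3, 25, 18);

        // Functional style: declare what should happen, not how to loop.
        List<Integer> bigOrdersDoubled = orders.stream()
                .filter(order -> order > 10)    // keep only the big orders
                .map(order -> order * 2)        // transform each one
                .collect(Collectors.toList());  // terminal operation triggers the work

        System.out.println(bigOrdersDoubled);   // prints [24, 82, 50, 36]
    }
}
```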

Subramaniam adds that in functional-style programming, “The code becomes the business logic.” Read more in my essay published in Forbes, “Lazy Java Code Makes Applications Elegant, Sophisticated — And Efficient at Runtime.”

 



With lots of inexpensive, abundant computation resources available, nearly anything becomes possible. For example, you can process a lot of network data to identify patterns, gather intelligence, and produce insights that can be used to automate networks. The road to Intent-Based Networking Systems (IBNS) and Application-Specific Networks (ASN) is a journey. That’s the belief of Rajesh Ghai, Research Director of Telecom and Carrier IP Networks at IDC.

Ghai defines IBNS as a closed-loop continuous implementation of several steps:

  • Declaration of intent, where the network administrator defines what the network is supposed to do
  • Translation of the intent into a network design and configuration
  • Validation of the design, using a model that determines whether the configuration can actually be implemented
  • Propagation of the configuration into the network devices via APIs
  • Gathering and studying real-time telemetry from all the devices
  • Using machine learning to determine whether the desired state of the policy has been achieved – and then repeating the loop (a rough sketch in code follows below)
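Here is that closed loop sketched in code. It is purely illustrative – my own stubbed-out outline, not Ghai’s definition or any vendor’s implementation – with each step reduced to a placeholder method and a hypothetical latency intent.

```java
public class IntentLoopSketch {

    public static void main(String[] args) throws InterruptedException {
        // Hypothetical intent declared by the network administrator.
        String intent = "All branches reach the payments app with latency under 50 ms";

        while (true) {                                       // closed loop: repeat continuously
            String design = translate(intent);               // intent -> design and configuration
            if (validate(design)) {                          // can the design actually be implemented?
                propagate(design);                           // push configuration to devices via APIs
            }
            int latencyMs = gatherTelemetry();               // real-time state from all devices
            boolean achieved = evaluate(intent, latencyMs);  // ML/analytics decide whether intent is met
            System.out.println("Intent satisfied: " + achieved);
            Thread.sleep(60_000);                            // wait, then run the loop again
        }
    }

    // Stubs standing in for real controller logic.
    static String translate(String intent)  { return "vlan 42; qos latency<50ms"; }
    static boolean validate(String design)  { return true; }
    static void propagate(String design)    { /* call device REST APIs here */ }
    static int gatherTelemetry()            { return 35; }   // observed latency, in milliseconds
    static boolean evaluate(String intent, int latencyMs) { return latencyMs < 50; }
}
```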

Related to that concept, Ghai explains, is ASN. “It’s also a concept which is software control and optimization and automation. The only difference is that ASN is more applicable to distributed applications over the internet than IBNS.”

IBNS Operates Networks as One System

“Think of intent-based networking as software that sits on top of your infrastructure and focusing on the networking infrastructure, and enables you to operate your network infrastructure as one system, as opposed to box per box,” explained Mansour Karam, founder and CEO of Apstra, which offers IBNS solutions for enterprise data centers.

“To achieve this, we have to start with intent,” he continued. “Intent is both the high-level business outcomes that are required by the business, but then also we think of intent as applying to every one of those stages. You may have some requirements in how you want to build.”

Karam added, “Validation includes tests that you would run — we call them expectations — to validate that your network indeed is behaving as you expected, as per intent. So we have to think of a sliding scale of intent and then we also have to collect all the telemetry in order to close the loop and continuously validate that the network does what you want it to do. There is the notion of state at the core of an IBNS that really boils down to managing state at scale and representing it in a way that you can reason about the state of your system, compare it with the desired state and making the right adjustments if you need to.”

The upshot of IBNS, Karam said: “If you have powerful automation you’re taking the human out of the equation, and so you get a much more agile network. You can recoup the revenues that otherwise you would have lost, because you’re unable to deliver your business services on time. You will reduce your outages massively, because 80% of outages are caused by human error. You reduce your operational expenses massively, because organizations spend $4 operating every dollar of CapEx, and 80% of it is manual operations. So if you take that out you should be able to recoup easily your entire CapEx spend on IBNS.”

ASN Gives Each Application Its Own Network

“Application-Specific Networks, like Intent-Based Networking Systems, enable digital transformation, agility, speed, and automation,” explained Galeal Zino, Founder of NetFoundry, which offers an ASN platform.

He continued, “ASN is a new term, so I’ll start with a simple analogy. ASNs are like private clubs — very, very exclusive private clubs — with exactly two members, the application and the network. ASN literally gives each application its own network, one that’s purpose-built and driven by the specific needs of that application. ASN merges the application world and the network world into software which can enable digital transformation with velocity, with scale, and with automation.”

Read more in my new article for Upgrade Magazine, “Manage smarter, more autonomous networks with Intent-Based Networking Systems and Application Specific Networking.”

When the little wireless speaker in your kitchen acts on your request to add chocolate milk to your shopping list, there’s artificial intelligence (AI) working in the cloud, to understand your speech, determine what you want to do, and carry out the instruction.

When you send a text message to your HR department explaining that you woke up with a vision-blurring migraine, an AI-powered chatbot knows how to update your status to “out of the office” and notify your manager about the sick day.

When hackers attempt to systematically break into the corporate computer network over a period of weeks, AI sees the subtle patterns in historical log data, recognizes outliers in the packet traffic, raises the alarm, and recommends appropriate countermeasures.

AI is nearly everywhere in today’s society. Sometimes it’s fairly obvious (as with a chatbot), and sometimes AI is hidden under the covers (as with network security monitors). It’s a virtuous cycle: Modern cloud computing and algorithms make AI a fast, efficient, and inexpensive approach to problem-solving. Developers discover those cloud services and algorithms and imagine new ways to incorporate the latest AI functionality into their software. Businesses see the value of those advances (even if they don’t know that AI is involved), and everyone benefits. And quickly, the next wave of emerging technology accelerates the cycle again.

AI can improve the user experience, such as when deciphering spoken or written communications, or inferring actions based on patterns of past behavior. AI techniques are excellent at pattern-matching, making it easier for machines to accurately decipher human languages using context. One characteristic of several AI algorithms is flexibility in handling imprecise data, such as human text. Consider chatbots, where humans can type messages on their phones, and AI-driven software can understand what they say, carry on a conversation, and provide the desired information or take the appropriate actions.
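As a toy illustration of that chatbot pattern (my own sketch, nothing like a production natural-language model, which would be trained on real conversations), the basic shape is a mapping from free-form text onto a small set of known intents:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class TinyChatbot {

    public static void main(String[] args) {
        System.out.println(reply("I woke up with a migraine, marking myself out sick"));
        System.out.println(reply("please add chocolate milk to my shopping list"));
    }

    // Hypothetical keyword-based intent matcher; real chatbots use trained
    // language models and conversation context, not simple substring checks.
    static String reply(String message) {
        String text = message.toLowerCase();
        Map<String, String> intents = new LinkedHashMap<>();
        intents.put("sick", "Marking you out of the office and notifying your manager.");
        intents.put("shopping list", "Added that item to your shopping list.");
        intents.put("lights", "Turning the lights off.");

        for (Map.Entry<String, String> intent : intents.entrySet()) {
            if (text.contains(intent.getKey())) {
                return intent.getValue();
            }
        }
        return "Sorry, I didn't understand that.";
    }
}
```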

If you think AI is everywhere today, expect more tomorrow. AI-enhanced software-as-a-service and platform-as-a-service products will continue to incorporate additional AI to help make cloud-delivered and on-prem services more reliable, more performant, and more secure. AI-driven chatbots will find their ways into new, innovative applications, and speech-based systems will continue to get smarter. AI will handle larger and larger datasets and find its way into increasingly diverse industries.

Sometimes you’ll see the AI and know that you’re talking to a bot. Sometimes the AI will be totally hidden, as you marvel at the, well, uncanny intelligence of the software, websites, and even the Internet of Things. If you don’t believe me, ask a chatbot.

Read more in my feature article in the January/February 2018 edition of Oracle Magazine, “It’s Pervasive: AI Is Everywhere.”

Millions of developers are using Artificial Intelligence (AI) or Machine Learning (ML) in their projects, says Evans Data Corp. Evans’ latest Global Development and Demographics Study, released in January 2018, says that 29% of developers worldwide, or 6,452,000 in all, are currently using some form of AI or ML. What’s more, says the study, an additional 5.8 million expect to use AI or ML within the next six months.

ML is actually a subset of AI. To quote expertsystem.com,

In practice, artificial intelligence – also simply defined as AI – has come to represent the broad category of methodologies that teach a computer to perform tasks as an “intelligent” person would. This includes, among others, neural networks or the “networks of hardware and software that approximate the web of neurons in the human brain” (Wired); machine learning, which is a technique for teaching machines to learn; and deep learning, which helps machines learn to go deeper into data to recognize patterns, etc. Within AI, machine learning includes algorithms that are developed to tell a computer how to respond to something by example.

The same site defines ML as,

Machine learning is an application of artificial intelligence (AI) that provides systems the ability to automatically learn and improve from experience without being explicitly programmed. Machine learning focuses on the development of computer programs that can access data and use it learn for themselves.

The process of learning begins with observations or data, such as examples, direct experience, or instruction, in order to look for patterns in data and make better decisions in the future based on the examples that we provide. The primary aim is to allow the computers learn automatically without human intervention or assistance and adjust actions accordingly.
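To make “learning from examples” concrete, here is a deliberately tiny sketch of one of the simplest machine learning techniques, nearest-neighbor classification. It is my own illustration with made-up data: the program is never given explicit rules; it stores labeled examples and classifies new input by similarity.

```java
public class NearestNeighborSketch {

    public static void main(String[] args) {
        // Hypothetical training examples: {feature1, feature2} -> label
        double[][] examples = { {1.0, 1.2}, {0.9, 1.0}, {5.1, 4.8}, {5.0, 5.2} };
        String[] labels     = { "small",    "small",    "large",    "large"    };

        double[] unknown = {4.7, 5.0};
        System.out.println(classify(unknown, examples, labels)); // prints "large"
    }

    // Classify by finding the closest stored example (Euclidean distance).
    static String classify(double[] point, double[][] examples, String[] labels) {
        int best = 0;
        double bestDistance = Double.MAX_VALUE;
        for (int i = 0; i < examples.length; i++) {
            double dx = point[0] - examples[i][0];
            double dy = point[1] - examples[i][1];
            double distance = Math.sqrt(dx * dx + dy * dy);
            if (distance < bestDistance) {
                bestDistance = distance;
                best = i;
            }
        }
        return labels[best];
    }
}
```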

A related and popular AI-derived technology, by the way, is Deep Learning. DL uses simulated neural networks to attempt to mimic the way a human brain learns and reacts. To quote from Rahul Sharma on Techgenix,

Deep learning is a subset of machine learning. The core of deep learning is associated with neural networks, which are programmatic simulations of the kind of decision making that takes place inside the human brain. However, unlike the human brain, where any neuron can establish a connection with some other proximate neuron, neural networks have discrete connections, layers, and data propagation directions.

Just like machine learning, deep learning is also dependent on the availability of massive volumes of data for the technology to “train” itself. For instance, a deep learning system meant to identify objects from images will need to run millions of test cases to be able to build the “intelligence” that lets it fuse together several kinds of analysis together, to actually identify the object from an image.

Why So Many AI Developers? Why Now?

You can find AI, ML and DL everywhere, it seems. There are highly visible projects, like self-driving cars, or the speech recognition software inside Amazon’s Alexa smart speakers. That’s merely the tip of the iceberg. These technologies are embedded into the Internet of Things, into smart analytics and predictive analytics, into systems management, into security scanners, into Facebook, into medical devices.

A modern but highly visible application of AI/ML is chatbots – software that can communicate with humans via textual interfaces. Some companies use chatbots on their websites or on social media channels (like Twitter) to talk to customers and provide basic customer services. Others use the tech within a company, such as in human-resources applications that let employees make requests (like scheduling vacation) by simply texting the HR chatbot.

AI is also paying off in finance. The technology can help service providers (like banks or payment-card transaction clearinghouses) more accurately review transactions to see if they are fraudulent, and improve overall efficiency. According to John Rampton, writing for the Huffington Post, AI investment by financial tech companies was more than $23 billion in 2016. The benefits of AI, he writes, include:

  • Increasing Security
  • Reducing Processing Times
  • Reducing Duplicate Expenses and Human Error
  • Increasing Levels of Automation
  • Empowering Smaller Companies

Rampton also explains that AI can offer game-changing insights:

One of the most valuable benefits AI brings to organizations of all kinds is data. The future of Fintech is largely reliant on gathering data and staying ahead of the competition, and AI can make that happen. With AI, you can process a huge volume of data which will, in turn, offer you some game-changing insights. These insights can be used to create reports that not only increase productivity and revenue, but also help with complex decision-making processes.

What’s happening in fintech with AI is nothing short of revolutionary. That’s true of other industries as well. Instead of asking why so many developers, 29%, are focusing on AI, we should ask, “Why so few?”

A friend insists that “the Internet is down” whenever he can’t get a strong wireless connection on his smartphone. With that type of logic, enjoy this photo found on the aforementioned Internet:

“Wi-Fi” is apparently now synonymous with “Internet” or “network.” It’s clear that we have come a long way from the origins of the Wi-Fi Alliance, which originally defined the term as meaning “Wireless Fidelity.” The vendor-driven alliance was formed in 1999 to jointly promote the broad family of IEEE 802.11 wireless local-area networking standards, as well as to ensure interoperability through certifications.

But that was so last millennium! It’s all Wi-Fi, all the time. In that vein, let me propose a few new acronyms:

  • Wi-Fi-Wi – Wireless networking, i.e., 802.11
  • Wi-Fi-Cu – Any conventionally cabled network
  • Wi-Fi-Fi – Networking over fiber optics (but not Fibre Channel)
  • Wi-Fi-FC – Wireless Fibre Channel, I suppose

You get the idea….

It’s all about the tradeoffs! You can have the chicken or the fish, but not both. You can have the big engine in your new car, but that means a stick shift—you can’t have the V8 and an automatic. Same for that cake you want to have and eat. Your business applications can be easy to use or secure—not both.

But some of those are false dichotomies, especially when it comes to security for data center and cloud applications. You can have it both ways. The systems can be easy to use and maintain, and they can be secure.

On the consumer side, consider two-factor authentication (2FA), whereby users receive a code number, often by text message to their phones, which they must type into a webpage to confirm their identity. There’s no doubt that 2FA makes systems more secure. The problem is that 2FA is a nuisance for the individual end user, because it slows down access to a desired resource or application. Unless you’re protecting your personal bank account, there’s little incentive for you to use 2FA. Thus, services that require 2FA frequently aren’t used, get phased out, are subverted, or are simply loathed.
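For the curious, here is a rough sketch of how the server side of a time-based one-time code (the kind many 2FA apps generate) can work, loosely following the HMAC-based approach of RFC 6238. It is simplified – a hard-coded, hypothetical shared secret and no allowance for clock drift – and is not production code.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class TotpSketch {

    public static void main(String[] args) throws Exception {
        // Hypothetical secret shared between the server and the user's authenticator app.
        byte[] sharedSecret = "hypothetical-shared-secret".getBytes(StandardCharsets.UTF_8);
        long timeWindow = System.currentTimeMillis() / 1000 / 30;  // 30-second windows

        int codeShownToUser = generateCode(sharedSecret, timeWindow);
        System.out.printf("One-time code: %06d%n", codeShownToUser);

        // The server recomputes the code and compares it with what the user typed.
        int userTyped = codeShownToUser;  // pretend the user typed it into the web page
        boolean accepted = (userTyped == generateCode(sharedSecret, timeWindow));
        System.out.println("Accepted: " + accepted);
    }

    // Simplified HMAC-based one-time code, in the spirit of RFC 4226/6238.
    static int generateCode(byte[] secret, long counter) throws Exception {
        byte[] message = ByteBuffer.allocate(8).putLong(counter).array();
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(secret, "HmacSHA1"));
        byte[] hash = mac.doFinal(message);

        int offset = hash[hash.length - 1] & 0x0F;             // dynamic truncation
        int binary = ((hash[offset] & 0x7F) << 24)
                   | ((hash[offset + 1] & 0xFF) << 16)
                   | ((hash[offset + 2] & 0xFF) << 8)
                   |  (hash[offset + 3] & 0xFF);
        return binary % 1_000_000;                             // six-digit code
    }
}
```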

Likewise, security measures specified by corporate policies can be seen as a nuisance or an impediment. Consider dividing an enterprise network into small “trusted” networks, such as by using virtual LANs or other forms of authenticating users, applications, or API calls. This setup can require considerable effort for internal developers to create, and even more effort to modify or update.

When IT decides to migrate an application from a data center to the cloud, the steps required to create API-level authentication across such a hybrid deployment can be substantial. The effort required to debug that security scheme can be horrific. As for audits to ensure adherence to the policy? Forget it. How about we just bypass it, or change the policy instead?

Multiply that simple scenario by 1,000 for all the interlinked applications and users at a typical midsize company. Or 10,000 or 100,000 at big ones. That’s why post-mortem examinations of so many security breaches show what appears to be an obvious lack of “basic” security. However, my guess is that in many of those incidents, the chief information security officer or IT staffers were under pressure to make systems, including applications and data sources, extremely easy for employees to access, and there was no appetite for creating, maintaining, and enforcing strong security measures.

Read more about these tradeoffs in my article on Forbes for Oracle Voice: “You Can Have Your Security Cake And Eat It, Too.”

I’m #1! Well, actually #4 and #7. During 2017, I wrote several articles for Hewlett Packard Enterprise’s online magazine, Enterprise.nxt Insights, and two of them were quite successful – ranked #4 and #7 in the site’s list of Top 10 Articles for 2017.

Article #7 was, “4 lessons for modern software developers from 1970s mainframe programming.” Based entirely on my own experiences, the article began,

Eight megabytes of memory is plenty. Or so we believed back in the late 1970s. Our mainframe programs usually ran in 8 MB virtual machines (VMs) that had to contain the program, shared libraries, and working storage. Though these days, you might liken those VMs more to containers, since the timesharing operating system didn’t occupy VM space. In fact, users couldn’t see the OS at all.

In that mainframe environment, we programmers learned how to be parsimonious with computing resources, which were expensive, limited, and not always available on demand. We learned how to minimize the costs of computation, develop headless applications, optimize code up front, and design for zero defects. If the very first compilation and execution of a program failed, I was seriously angry with myself.

Please join me on a walk down memory lane as I revisit four lessons I learned while programming mainframes and teaching mainframe programming in the era of Watergate, disco on vinyl records, and Star Wars—and which remain relevant today.

Article #4 was, “The OWASP Top 10 is killing me, and killing you!” It began,

Software developers and testers must be sick of hearing security nuts rant, “Beware SQL injection! Monitor for cross-site scripting! Watch for hijacked session credentials!” I suspect the developers tune us out. Why? Because we’ve been raving about the same defects for most of their careers. Truth is, though, the same set of major security vulnerabilities persists year after year, decade after decade.

The industry has generated newer tools, better testing suites, Agile methodologies, and other advances in writing and testing software. Despite all that, coders keep making the same dumb mistakes, peer reviews keep missing those mistakes, test tools fail to catch those mistakes, and hackers keep finding ways to exploit those mistakes.

One way to see the repeat offenders is to look at the OWASP Top 10, a sometimes controversial ranking of the 10 primary vulnerabilities, published every three or four years by the Open Web Application Security Project.

The OWASP Top 10 list is not controversial because it’s flawed. Rather, some believe that the list is too limited. By focusing only on the top 10 web code vulnerabilities, they assert, it causes neglect for the long tail. What’s more, there’s often jockeying in the OWASP community about the Top 10 ranking and whether the 11th or 12th belong in the list instead of something else. There’s merit to those arguments, but for now, the OWASP Top 10 is an excellent common ground for discussing security-aware coding and testing practices.

Click the links (or pictures) above and enjoy the articles! And kudos to my prolific friend Steven J. Vaughan-Nichols, whose articles took the #3, #2 and #1 slots. He’s good. Damn good.

Amazon says that a cloud-connected speaker/microphone was at the top of the charts: “This holiday season was better than ever for the family of Echo products. The Echo Dot was the #1 selling Amazon Device this holiday season, and the best-selling product from any manufacturer in any category across all of Amazon, with millions sold.”

The Echo products are an ever-expanding family of inexpensive consumer electronics from Amazon, which connect to a cloud-based service called Alexa. The devices are always listening for spoken commands, and will respond through conversation, playing music, turning on/off lights and other connected gadgets, making phone calls, and even by showing videos.

While Amazon doesn’t release sales figures for its Echo products, it’s clear that consumers love them. In fact, Echo is about to hit the road, as BMW will integrate the Echo technology (and Alexa cloud service) into some cars beginning this year. Expect other automakers to follow.

Why the Echo – and Apple’s Siri and Google’s Home? Speech.

The traditional way of “talking” to computers has been through the keyboard, augmented with a mouse used to select commands or input areas. Computers initially responded only to typed instructions using a command-line interface (CLI); this was replaced in the era of the Apple Macintosh and the first iterations of Microsoft Windows with windows, icons, menus, and pointing devices (WIMP). Some refer to the modern interface used on standard computers as a graphical user interface (GUI); embedded devices, such as network routers, might be controlled by either a GUI or a CLI.

Smartphones, tablets, and some computers (notably running Windows) also include touchscreens. While touchscreens have been around for decades, it’s only in the past few years that they’ve gone mainstream. Even so, the primary way to input data has been through a keyboard – even if it’s a “soft” keyboard implemented on a touchscreen, as on a smartphone.

Talk to me!

Enter speech. Sometimes it’s easier to talk, simply talk, to a device than to use a physical interface. Speech can be used for commands (“Alexa, turn up the thermostat” or “Hey Google, turn off the kitchen lights”) or for dictation.

Speech recognition is not easy for computers; in fact, it’s pretty difficult. However, improved microphones and powerful artificial-intelligence algorithms make speech recognition a lot easier. Helping the process: Cloud computing, which can throw nearly unlimited resources at speech recognition, including predictive analytics. Another helper: Constrained inputs, which means that when it comes to understanding commands, there are only so many words for the speech recognition system to decode. (Free-form dictation, like writing an essay using speech recognition, is a far harder problem.)


It’s a big market

Speech recognition is only going to get better – and bigger. According to one report, “The speech and voice recognition market is expected to be valued at USD 6.19 billion in 2017 and is likely to reach USD 18.30 billion by 2023, at a CAGR of 19.80% between 2017 and 2023. The growing impact of artificial intelligence (AI) on the accuracy of speech and voice recognition and the increased demand for multifactor authentication are driving the market growth.” The report continues:

“The speech recognition technology is expected to hold the largest share of the market during the forecast period due to its growing use in multiple applications owing to the continuously decreasing word error rate (WER) of speech recognition algorithm with the developments in natural language processing and neural network technology. The speech recognition technology finds applications mainly across healthcare and consumer electronics sectors to produce health data records and develop intelligent virtual assistant devices, respectively.

“The market for the consumer vertical is expected to grow at the highest CAGR during the forecast period. The key factor contributing to this growth is the ability to integrate speech and voice recognition technologies into other consumer devices, such as refrigerators, ovens, mixers, and thermostats, with the growth of Internet of Things.”

Right now, many of us are talking to Alexa, talking to Siri, and talking to Google Home. Back in 2009, I owned a Ford car that had a primitive (and laughably inaccurate) infotainment system – today, a new car might do a lot better, perhaps due to embedded Alexa. Will we soon be talking to our ovens, to our laser printers and photocopiers, to our medical implants, to our assembly-line equipment, and to our network infrastructure? It wouldn’t surprise Alexa in the least.

Agility – the ability to deliver projects quickly. That applies to new projects, as well as updates to existing projects. The agile software movement began when many smart people became frustrated with the classic model of development, where the organization first went through a complex process to develop requirements (which took months or years), and then wrote software to address those requirements (which took more months or years, or was never finished). By then, not only did the organization miss out on many opportunities, but perhaps the requirements were no longer valid – if they ever were.

With agile methodologies, the goal is to build software (or accomplish some complex task or action) in small, incremental iterations. Each iteration delivers some immediate value, and after each iteration there is an evaluation of how satisfied those who requested the project (the stakeholders) are, and what they want to do next. No laborious up-front requirements. No years of investment before there is any return on that investment.

One of the best-known agile frameworks is Scrum, developed by Jeff Sutherland and Ken Schwaber in the early 1990s. In my view, Scrum is noteworthy for several innovations, including:

  • The Scrum framework is simple enough for everyone involved to understand.
  • The Scrum framework is not a product.
  • Scrum itself is not tied to any specific vendor’s project-management tools.
  • Work is performed in two-week increments, called Sprints.
  • Every day there is a brief meeting called a Daily Scrum.
  • Development is iterative and incremental, and outcomes are predictable.
  • The work must be transparent, as much as possible, to everyone involved.
  • The roles of participants in the project are defined extremely clearly.
  • The relationship between people in the various roles is also clearly defined.
  • A key participant is the Scrum Master, who helps everyone maximize the value of the team and the project.
  • There is a clear, unambiguous definition of what “Done” means for every action item.

Scrum itself is refined every year or two by Sutherland and Schwaber. The most recent version (if you can call it a version) is Scrum 2017; before that, it was revised in 2016 and 2013. While there aren’t that many significant changes from the original vision unveiled in 1995, here are three recent changes that, in my view, make Scrum better than ever – enough that it might be called Scrum 2.0. Well, maybe Scrum 1.5. You decide:

  1. The latest version acknowledges more clearly that Scrum, like other agile methodologies, is used for all sorts of projects, not merely creating or enhancing software. While the Scrum Guide is still development-focused, Scrum can be used for market research, product development, developing cloud services, and even managing schools and governments.
  2. The Daily Scrum will be more focused on exploring how well the work is driving toward the Sprint Goal planned for the two-week Sprint. For example: What work will be done today to drive toward the goal? What impediments are likely to prevent us from meeting the goal? (Previously, the Daily Scrum was often viewed as a glorified status-report meeting.)
  3. Scrum has a set of values, and those are now spelled out: “When the values of commitment, courage, focus, openness and respect are embodied and lived by the Scrum Team, the Scrum pillars of transparency, inspection, and adaptation come to life and build trust for everyone. The Scrum Team members learn and explore those values as they work with the Scrum events, roles and artifacts. Successful use of Scrum depends on people becoming more proficient in living these five values… Scrum Team members respect each other to be capable, independent people.”

The word “agile” is thrown around too often in business and technology, covering everything from planning a business acquisition to planning a network upgrade. Scrum is one of the best-known agile methodologies, and the framework is very well suited for all sorts of projects where it’s not feasible to determine a full set of requirements up front, and there’s a need to immediately begin delivering some functionality (or accomplish parts of the tasks). That Scrum continues to evolve will help ensure its value in the coming years… and decades.

Criminals steal money from banks. Nothing new there: As Willie Sutton famously said, “I rob banks because that’s where the money is.”

Criminals steal money from other places too. While many cybercriminals target banks, the reality is that there are better places to steal money, or at least, steal information that can be used to steal money. That’s because banks are generally well-protected – and gas stations, convenience stores, smaller on-line retailers, and even payment processors are likely to have inadequate defenses — or make stupid mistakes that aren’t caught by security professionals.

Take TIO Networks, a bill-payment service purchased by PayPal for US$233 million in July 2017. TIO processed more than $7 billion in bill payments last year, serving more than 10,000 vendors and 16 million consumers.

Hackers now know critical information about all 16 million TIO customers. According to PYMNTS.com, “… the data that may have been impacted included names, addresses, bank account details, Social Security numbers and login information. How much of those details fell into the hands of cybercriminals depends on how many of TIO’s services the consumers used.”

PayPal has said,

“The ongoing investigation has uncovered evidence of unauthorized access to TIO’s network, including locations that stored personal information of some of TIO’s customers and customers of TIO billers. TIO has begun working with the companies it services to notify potentially affected individuals. We are working with a consumer credit reporting agency to provide free credit monitoring memberships. Individuals who are affected will be contacted directly and receive instructions to sign up for monitoring.”

Card Skimmers and EMV Chips

Another common place where money changes hands: The point-of-purchase device. Consider payment-card skimmers – that is, a hardware device secretly installed into a retail location’s card reader, often at an unattended location like a gasoline pump.

The amount of fraud caused by skimmers copying information on payment cards is expected to rise from $3.1 billion in 2015 to $6.4 billion in 2018, affecting about 16 million cardholders. Those are for payment cards that don’t have the integrated EMV chip, or for transactions that don’t use the EMV system.

EMV chips, also known as chip-and-PIN or chip-and-signature, are named for the three companies behind the technology standards – Europay, MasterCard, and Visa. Chip technology, which is seen as a nuisance by consumers, has dramatically reduced the amount of fraud by generating a unique, non-repeatable transaction code for each purchase.
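The core idea is that the chip stores a secret key and computes a cryptogram over the transaction details plus an ever-incrementing counter, so a captured code is useless for any other purchase. The sketch below illustrates only that concept; the real EMV cryptogram algorithms, keys, and message formats are considerably more involved.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class ChipCryptogramSketch {

    public static void main(String[] args) throws Exception {
        // Hypothetical secret key; on a real card it never leaves the chip.
        byte[] cardKey = "hypothetical-key-stored-in-chip".getBytes(StandardCharsets.UTF_8);

        // The transaction counter increments with every purchase, so even
        // identical purchase details never produce the same cryptogram twice.
        for (int counter = 1; counter <= 3; counter++) {
            String transaction = "amount=25.00;merchant=FUEL-STATION-42;counter=" + counter;
            System.out.println(cryptogram(cardKey, transaction));
        }
    }

    // Conceptual stand-in for an EMV application cryptogram: an HMAC over
    // the transaction data, keyed by the card's secret.
    static String cryptogram(byte[] key, String transactionData) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        byte[] tag = mac.doFinal(transactionData.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(tag).substring(0, 16);
    }
}
```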

The rollout of EMV, especially in the United States, has been painfully slow. Many merchants still haven’t upgraded to the new card-reader devices or the back-end financial services needed to handle those transactions. For example, very few fuel stations use chips to validate transactions, so pay-at-the-pump in the U.S. still depends almost universally on the mag-stripe reader. That presents numerous opportunities for thieves to install skimmers on the stripe reader and steal payment card information.

For an excellent, well-illustrated primer on skimmers and skimmer-related fraud at gas stations, see “As gas station skimmer card fraud increases, here’s how to cut your risk.” Theft at the point of purchase, or at payment processors, will continue as long as companies fail to execute solid security practices – and continue to accept non-EMV payment card transactions, including allowing customers to type their credit- or debit-card numbers onto websites. Those are both threats for the foreseeable future, especially since desktops, notebooks, and mobile devices don’t have built-in EMV chip readers.

Crooks are clever, and are everywhere. They always have been. Money theft and fraud – no matter how secure the banks are, it’s not going away any time soon.

SysSecOps is a new phrase, still unfamiliar to many IT and security administrators – but it’s being discussed within the industry, by analysts, and at technical conferences. SysSecOps, or Systems & Security Operations, describes the practice of combining security teams and IT operations teams so they can ensure the health of enterprise technology – and have the tools to respond effectively when issues happen.

SysSecOps concentrates on taking down the walls, and breaking up the silos, that separate security teams and IT administrators. IT operations personnel are there to make sure that end users can access applications and that important infrastructure is running at all times. They want to optimize access and availability, and they need data to do that job – knowing, for example, that a new employee needs to be provisioned, that a hard drive in a RAID array has failed, that a new partner needs access to a secure document repository, or that an Oracle database is ready to be moved to the cloud. It’s all about using technology to drive the business.

Same Data, Different Use Cases

Endpoint and network monitoring details and analytics are clearly customized to fit the diverse needs of IT and security. However, the underlying raw data is exactly the same. The IT and security teams are simply looking at their own domain’s issues and scenarios – and acting based on those use cases.

Yet in some cases the IT and security teams have to work together. Like provisioning that brand-new business partner: it must touch all the right systems, and it must be done securely. Or if there is a problem with a remote endpoint, such as a mobile phone or a machine on the Industrial Internet of Things, IT and security might have to work together to identify exactly what’s going on. When IT and security share the same data sources, and have access to the same tools, the job becomes a lot easier – hence SysSecOps.

Imagine that an IT administrator spots that a server hard drive is nearing full capacity – and this was not anticipated. Perhaps the network has been breached, and the server is now being used to stream pirated films across the Web. It happens, and finding and resolving that issue is a task for both IT and security. The data gathered by endpoint instrumentation, and surfaced through a SysSecOps-ready monitoring platform, can help both sides work together more effectively than they would with conventional, separate IT and security tools.

SysSecOps: It’s a new term, and a new idea, and it’s resonating with both IT and security teams. You can learn more in a brief nine-minute video, where I talk with several industry experts about the subject: “What is SysSecOps?”

Ransomware is real, and it is threatening individuals, businesses, schools, hospitals, and governments – and there’s no sign of ransomware stopping. In fact, it’s probably increasing. Why? Let’s be honest: Ransomware is probably the single most effective attack that hackers have ever created. Anybody can develop ransomware using readily available tools; any cash received is likely in untraceable Bitcoin; and if something goes wrong with decrypting someone’s disk drive, the hacker isn’t affected.

A business is hit with ransomware every 40 seconds, according to some sources, and ransomware accounts for 60% of all malware. It strikes all sectors. No industry is safe. And with the rise of RaaS (Ransomware-as-a-Service), it’s going to get worse.

Fortunately, we can fight back. Here’s a four-step battle plan.

Four steps to good fundamental hygiene

  1. Train employees to handle malicious e-mails. There are falsified messages from business partners. There’s phishing and targeted spearphishing. Some will get past email spam/malware filters; employees need to be taught not to click links in those messages and, of course, not to give permission for plugins or apps to be installed. Even so, some malware, like ransomware, will get through, typically exploiting outdated software or unpatched systems, as in the Equifax breach.
  2. Patch everything. Ensure that endpoints are fully patched and fully updated with the current, most secure OS, applications, utilities, device drivers, and code libraries. That way, if there is an attack, the endpoint is healthy and best able to fight off the infection.
  3. Treat ransomware as a business problem, not merely a technology or security problem. It’s about a lot more than the ransom demanded; that’s peanuts compared to the loss of productivity caused by downtime, bad public relations, angry customers if service is disrupted, and the cost of rebuilding lost data. (And that assumes that valuable intellectual property, or protected financial or customer health data, isn’t stolen.)
  4. Back up, back up, back up – and safeguard those backups. If you don’t have safe, protected backups, you can’t restore data and core infrastructure in a timely fashion. That includes making daily snapshots of virtual machines, databases, applications, source code, and configuration files.

By the way, businesses need tools to detect, identify, and prevent malware like ransomware from spreading. This requires continuous visibility into, and reporting on, what’s happening in the environment – including “zero day” attacks that haven’t been seen before. Part of that is keeping an eye on endpoints, from the smartphone to the PC to the server to the cloud, to make sure that endpoints are up to date and secure, and that no unexpected changes have been made to their underlying configuration. That way, if a machine is infected by ransomware or other malware, the breach can be detected quickly, and the device isolated and shut down pending forensics and recovery. If an endpoint is breached, quick containment is critical.

Read more in my guest story for Chuck Leaver’s blog, “Prevent And Manage Ransomware With These 4 Steps.”

AI is an emerging technology – always has been, always will be. Back in the early 1990s, I was editor of AI Expert Magazine. Looking for something else in my archives, I found this editorial, dated February 1991.

What do you think? Is AI real yet?

In The Terminator, the Skynet artificial intelligence was turned on to track down a hacker who had penetrated a military computer network. Turns out the hacker was Skynet itself. Is there a lesson there? Could AI turn against us, especially as it relates to the security domain?

That was one of the points I made while moderating a discussion of cybersecurity and AI back in October 2017. Here’s the start of a blog post written by my friend Tami Casey about the panel:

Mention artificial intelligence (AI) and security and a lot of people think of Skynet from The Terminator movies. Sure enough, at a recent Bay Area Cyber Security Meetup group panel on AI and machine learning, it was moderator Alan Zeichick – technology analyst, journalist and speaker – who first brought it up. But that wasn’t the only lively discussion during the panel, which focused on AI and cybersecurity.

I found two areas of discussion particularly interesting, which drew varying opinions from the panelists. One, around the topic of AI eliminating jobs and thoughts on how AI may change a security practitioner’s job, and two, about the possibility that AI could be misused or perhaps used by malicious actors with unintended negative consequences.

It was a great panel. I enjoyed working with the Meetup folks, and the participants: Allison Miller (Google), Ali Mesdaq (Proofpoint), Terry Ray (Imperva), Randy Dean (Launchpad.ai & Fellowship.ai).

You can read the rest of Tami’s blog here, and also watch a video of the panel.

Smart televisions, talking home assistants, consumer wearables – that’s not the real story of the Internet of Things. While those are fun and get great stories on blogs and morning news reports, the real IoT is the Industrial IoT. That’s where businesses will truly be transformed, with intelligent, connected devices working together to improve services, reduce friction, and disrupt everything. Everything.

According to Grand View Research, the Industrial IoT (IIoT) market will be $933.62 billion by 2025. “The ability of IoT to reduce costs has been the prime factor for its adoption in the industrial sector. However, several significant investment incentives, such as increased productivity, process automation, and time-to-market, have also been boosting this adoption. The falling prices of sensors have reduced the overall cost associated with data collection and analytics,” says the report.

The report continues,

An emerging trend among enterprises worldwide is the transformation of technical focus to improving connectivity in order to undertake data collection with the right security measures in place and with improved connections to the cloud. The emergence of low-power hardware devices, cloud integration, big data analytics, robotics & automation, and smart sensors are also driving IIoT market growth.

Markets and Markets

Markets & Markets predicts that IIoT will be worth $195.47 billion by 2022. The company says,

A key influencing factor for the growth of the IIoT market is the need to implement predictive maintenance techniques in industrial equipment to monitor their health and avoid unscheduled downtimes in the production cycle. Factors driving the IIoT market include technological advancements in the semiconductor and electronics industry and the evolution of cloud computing technologies.

The manufacturing vertical is witnessing a transformation through the implementation of the smart factory concept and factory automation technologies. Government initiatives such as Industrie 4.0 in Germany and Plan Industriel in France are expected to promote the implementation of the IIoT solutions in Europe. Moreover, leading countries in the manufacturing vertical such as U.S., China, and India are expected to further expand their manufacturing industries and deploy smart manufacturing technologies to increase the contribution of this vertical to their national GDPs.

The IIoT market for camera systems is expected to grow at the highest rate between 2016 and 2022. Camera systems are mainly used in the retail and transportation verticals. The need of security and surveillance in these sectors is the key reason for the high growth rate of the market for camera systems. In the retail sector, the camera systems are used for capturing customer behavior, moment tracking, people counting, and heat mapping. The benefits of installation of surveillance systems include the safety at the workplace, and the prevention of theft and other losses, sweet hearting, and other retail crimes. Video analytics plays a vital role for security purpose in various areas in transportation sector including airports, railway stations, and large public places. Also, intelligent camera systems are used for traffic monitoring, and incident detection and reporting.

Accenture

The huge research firm Accenture says that the IIoT will add $14.2 trillion to the global economy by 2030. That’s not talking about the size of the market, but the overall lift that IIoT will have. By any measure, that’s staggering. Accenture reports,

Today, the IIoT is helping to improve productivity, reduce operating costs and enhance worker safety. For example, in the petroleum industry, wearable devices sense dangerous chemicals and unmanned aerial vehicles can inspect remote pipelines.

However, the longer-term economic and employment potential will require companies to establish entirely new product and service hybrids that disrupt their own markets and generate fresh revenue streams. Many of these will underpin the emergence of the “outcome economy,” where organizations shift from selling products to delivering measurable outcomes. These may range from guaranteed energy savings in commercial buildings to guaranteed crop yields in a specific parcel of farmland.

IIoT Is a Work in Progress

The IIoT is going to have huge impact. But it hasn’t yet, not on any large scale. As Accenture says,

When Accenture surveyed more than 1,400 C-suite decision makers—including 736 CEOs—from some of the world’s largest companies, the vast majority (84 percent) believe their organizations have the capability to create new, service-based income streams from the IIoT.

But scratch beneath the surface and the gloss comes off. Seventy-three percent confess that their companies have yet to make any concrete progress. Just 7 percent have developed a comprehensive strategy with investments to match.

Challenge and opportunity: That’s the Industrial Internet of Things. Watch this space.

The bad news: There are servers used in serverless computing. Real servers, with whirring fans and lots of blinking lights, installed in racks inside data centers inside the enterprise or up in the cloud.

The good news: You don’t need to think about those servers in order to use their functionality to write and deploy enterprise software. Your IT administrators don’t need to provision or maintain those servers, or think about their processing power, memory, storage, or underlying software infrastructure. It’s all invisible, abstracted away.

The whole point of serverless computing is that there are small blocks of code that do one thing very efficiently. Those blocks of code are designed to run in containers so that they are scalable, easy to deploy, and can run in basically any computing environment. The open Docker platform has become the de facto industry standard for containers, and as a general rule, developers are seeing the benefits of writing code that targets Docker containers, instead of, say, Windows servers or Red Hat Linux servers or SuSE Linux servers, or any specific run-time environment. Docker can be hosted in a data center or in the cloud, and containers can be easily moved from one Docker host to another, adding to its appeal.

Currently, applications written for Docker containers still need to be managed by enterprise IT developers or administrators. That means deciding where to create the containers, ensuring that the container has sufficient resources (like memory and processing power) for the application, actually installing the application into the container, running/monitoring the application while it’s running, and then adding more resources if required. Helping do that is Kubernetes, an open container management and orchestration system for Docker. So while containers greatly assist developers and admins in creating portable code, the containers still need to be managed.

That’s where serverless comes in. Developers write their bits of code (such as code to read from or write to a database, encrypt or decrypt data, search the Internet, authenticate users, or format output) to run in a Docker container. However, instead of deploying directly to Docker, or using Kubernetes to handle deployment, they write their code as a function, and then deploy that function onto a serverless platform, like the new Fn project. Other applications can call that function (perhaps using a RESTful API) to do the required operation, and the serverless platform takes care of everything else automatically behind the scenes, running the code when needed and idling it when not.
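
Here is a minimal Python sketch of what such a function might look like. The handler signature and payload are illustrative only, not the exact Fn project FDK API; the point is simply that the developer writes one small, single-purpose function and lets the platform decide when and where it runs.

    # Illustrative sketch of a single-purpose serverless function.
    # The handler signature is generic, not the exact Fn FDK interface.
    import json

    def handler(ctx, data=None):
        """Do one thing well: format a greeting for the caller."""
        payload = json.loads(data) if data else {}
        name = payload.get("name", "world")
        # The serverless platform runs this on demand and idles it otherwise.
        return json.dumps({"message": "Hello, " + name + "!"})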

Read my essay, “Serverless Computing: What It Is, Why You Should Care,” to find out more.

Critical information about 46 million Malaysians was leaked onto the Dark Web. The stolen data included mobile phone numbers from telcos and mobile virtual network operators (MVNOs), prepaid phone numbers, customer details including physical addresses – and even the unique IMEI and IMSI registration numbers associated with SIM cards.

Isolated instance from one rogue carrier? No. The carriers included Altel, Celcom, DiGi, Enabling Asia, Friendimobile, Maxis, MerchantTradeAsia, PLDT, RedTone, TuneTalk, Umobile and XOX; news about the breach was first published on 19 October 2017 by a Malaysian online community.

When did the breach occur? According to lowyat.net, “Time stamps on the files we downloaded indicate the leaked data was last updated between May and July 2014 between the various telcos.”

That’s more than three years between theft of the information and its discovery. We have no idea if the carriers had already discovered the losses, and chose not to disclose the breaches.

A huge delay between a breach and its disclosure is not unusual. Perhaps things will change once the General Data Protection Regulation (GDPR) kicks in next year, when organizations must report a breach within 72 hours of discovering it. That still leaves the question of discovery. It simply takes too long!

According to Mandiant, the global average dwell time (time between compromise and detection) is 146 days. In some areas, it’s far worse: the EMEA region has a dwell time of 469 days. Research from the Ponemon Institute says that it takes an average of 98 days for financial services companies to detect intrusion on their networks, and 197 days in retail. It’s not surprising that the financial services folks do a better job – but three months seems like a very long time.

An article headline from InfoSecurity Magazine says it all: “Hackers Spend 200+ Days Inside Systems Before Discovery.” Verizon’s Data Breach Investigations Report for 2017 has some depressing news: “Breach timelines continue to paint a rather dismal picture — with time-to-compromise being only seconds, time-to-exfiltration taking days, and times to discovery and containment staying firmly in the months camp. Not surprisingly, fraud detection was the most prominent discovery method, accounting for 85% of all breaches, followed by law enforcement which was seen in 4% of cases.”

What Can You Do?

There are two relevant statistics. The first is time-to-discovery, and the other is time-to-disclosure, whether to regulators or customers.

  • Time-to-disclosure is a matter of policy, not technology. There are legal aspects, public-relations aspects, financial aspects (what if the breach happens during a “quiet period” prior to announcing results?), regulatory aspects, and even law-enforcement aspects (what if investigators are laying a trap, and don’t want to tip off the attackers that the breach has been discovered?).
  • Time-to-discovery, on the other hand, is a matter of technology (and the willingness to use it). What doesn’t work? Scanning log files using manual or semi-automated methods. Excel spreadsheets won’t save you here!

What’s needed are comprehensive endpoint monitoring capabilities, coupled with excellent threat intelligence and real-time analytics driven by machine learning. Nothing else can correlate huge quantities of data from such widely disparate sources, and hope to discover outliers based on patterns.
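
As an illustration of the approach (not a description of any particular vendor’s product), here is a minimal Python sketch assuming scikit-learn is available. The telemetry features and numbers are invented; the idea is to learn what “normal” endpoint behavior looks like, then flag the outliers for analysts.

    # Minimal sketch: learn "normal" endpoint telemetry, then flag outliers.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Each row is one endpoint-hour: [logins, failed_logins, MB_out, new_processes]
    baseline = np.array([
        [5, 1, 2.0, 3],
        [7, 0, 1.5, 4],
        [6, 2, 3.0, 2],
        [4, 1, 2.5, 3],
    ])

    model = IsolationForest(contamination=0.01, random_state=42).fit(baseline)

    # A burst of failed logins plus a large outbound transfer should stand out.
    suspect = np.array([[6, 40, 900.0, 25]])
    print(model.predict(suspect))  # -1 means outlier, 1 means normal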

Discovery and containment take months, says Verizon. You can’t have containment without discovery. With current methods, we’ve seen that discovery takes months or years, if the breach is ever detected at all. Endpoint monitoring technology, coupled with machine learning and with 24×7 managed security service providers, can reduce that to seconds or minutes.

There is no excuse for breaches staying hidden for three years or longer. None. That’s no way to run a business.

Humans can’t keep up. At least, not when it comes to meeting the rapidly expanding challenges inherent to enterprise cybersecurity. There are too many devices, too many applications, too many users, and too many megabytes of log files for humans to make sense of it all. Moving forward, effective cybersecurity is going to be a “Battle of the Bots,” or to put it less dramatically, machine versus machine.

Consider the 2015 breach at the U.S. Government’s Office of Personnel Management (OPM). According to a story in Wired, “The Office of Personnel Management repels 10 million attempted digital intrusions per month—mostly the kinds of port scans and phishing attacks that plague every large-scale Internet presence.” Yet despite sophisticated security mechanisms, hackers managed to steal millions of records on applications for security clearances, personnel files, and even 5.6 million digital images of government employee fingerprints. (In August 2017, the FBI arrested a Chinese national in connection with that breach.)

Traditional security measures are often slow, and potentially ineffective. Take the practice of applying patches and updates to address newly found software vulnerabilities. Companies now have too many systems in play for the process of finding and installing patches to be handled manually.

Another practice that can’t be handled manually: scanning log files to identify abnormalities and outliers in data traffic. While there are many excellent tools for reviewing those files, they are often slow and aren’t good at aggregating logs across disparate silos (such as a firewall, a web application server, and an Active Directory user authentication system). Thus, results may not be comprehensive, patterns may be missed, and the results of deep analysis may not be returned in real time.
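
Here is a minimal Python sketch of the cross-silo idea, using pandas; the sources, field names, and counts are all hypothetical. The point is that the suspicious pattern only emerges once the silos are joined.

    # Minimal sketch: correlate events from separate silos by user account.
    import pandas as pd

    firewall = pd.DataFrame({"user": ["alice", "bob"], "blocked_conns": [0, 42]})
    webapp   = pd.DataFrame({"user": ["alice", "bob"], "http_500s":     [1, 65]})
    ad_auth  = pd.DataFrame({"user": ["alice", "bob"], "failed_logins": [0, 31]})

    merged = firewall.merge(webapp, on="user").merge(ad_auth, on="user")

    # A spike across all three silos for the same account is exactly the kind of
    # pattern a single-silo review would miss.
    suspicious = merged[(merged.blocked_conns > 10) &
                        (merged.http_500s > 10) &
                        (merged.failed_logins > 10)]
    print(suspicious)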

Read much more about this in my new essay, “Machine Versus Machine: The New Battle For Enterprise Cybersecurity.”

Still no pastrami sandwich. Still no guinea pig. What’s the deal with the cigarette?

I installed iOS 11.1 yesterday, tantalized by Apple’s boasting of tons of new emoji. Confession: Emoji are great fun. Guess what I looked for right after the software install completed?

Many of the 190 new emoji are skin-tone variations on new or existing people or body parts. That’s good: Not everyone is yellow, like the Simpsons. (If you don’t count the different skin-tone versions, there are about 70 new graphics.)

New emoji that I like:

  • Steak. Yum!
  • Shushing finger face. Shhhh!
  • Cute hedgehog. Awww!
  • Scottish flag. Och aye!

What’s still stupidly missing:

  • Pastrami sandwich. Sure, there’s a new sandwich emoji, but it’s not a pastrami sandwich. Boo.
  • There’s a cheeseburger (don’t get me started on the cheese top/bottom debate), but nothing for those who don’t put cheese on their burgers at all. Grrrr.
  • Onion rings. They’ve got fries, but no rings. Waah.
  • Coffee with creamer. I don’t drink my coffee black. Bleh.
  • Guinea pig. That’s our favorite pet, but no cute little caviidae in the emoji. Wheek!

I still don’t like the cigarette emoji, but I guess once they added it in 2015, they couldn’t delete it.

Here is a complete list of all the emoji, according to PopSugar. What else is missing?

You want to read Backlinko’s “The Definitive Guide To SEO In 2018.” Backlinko is an SEO consultancy founded by Brian Dean. The “Definitive Guide” is a cheerfully illustrated infographic – a lengthy infographic – broken up into several useful chapters:

  • RankBrain & User Experience Signals
  • Become a CTR Jedi
  • Comprehensive, In-Depth Content Wins
  • Get Ready for Google’s Mobile-first Index
  • Go All-In With Video (Or Get Left Behind)
  • Pay Attention to Voice Search
  • Don’t Forget: Content and Links Are Key
  • Quick Tips for SEO in 2018

Some of these sections had advice that I already knew; others were pretty much new to me, such as the voice search section. I’ll also admit to being very out-of-date on how Google’s ranking system works; it changes often, and my last deep dive was circa 2014. Oops.

The advice in this document is excellent and well-explained. For example, on RankBrain:

Last year Google announced that RankBrain was their third most important ranking factor: “In the few months it has been deployed, RankBrain has become the third-most important signal contributing to the result of a search query.”

And as Google refines its algorithm, RankBrain is going to become even MORE important in 2018. The question is: What is RankBrain, exactly? And how can you optimize for it?

RankBrain is a machine learning system that helps Google sort their search results. That might sound complicated, but it isn’t. RankBrain simply measures how users interact with the search results… and ranks them accordingly.

The document then goes into a very helpful example, digging into the concept of Dwell Time (that is, how long someone spends on the page). The “Definitive Guide” also provides some very useful metrics about targets for click-through rate (CTR), dwell time, length and depth of content, and more. For example, the document says,

One industry study found that organic CTR is down 37% since 2015. It’s no secret why: Google is crowding out the organic search results with Answer Boxes, Ads, Carousels, “People also ask” sections, and more. And to stand out, your result needs to scream “click on me!”…or else it’ll be ignored.

All of the advice is good, but of course, I’m not always going to follow it. For example, the “Definitive Guide” says:

How can you write the type of in-depth content that Google wants to see? First, publish content that’s at least 2,000 words. That way, you can cover everything a Google searcher needs to know about that topic. In fact, our ranking factors study found that longer content (like ultimate guides and long-form blog posts) outranked short articles in Google.

Well, this post isn’t even close to 2,000 words. Oops. Read the “Definitive Guide”; you’ll be glad you did.

Software developers and testers must be sick of hearing security nuts rant, “Beware SQL injection! Monitor for cross-site scripting! Watch for hijacked session credentials!” I suspect the developers tune us out. Why? Because we’ve been raving about the same defects for most of their careers. Truth is, though, the same set of major security vulnerabilities persists year after year, decade after decade.

The industry has generated newer tools, better testing suites, Agile methodologies, and other advances in writing and testing software. Despite all that, coders keep making the same dumb mistakes, peer reviews keep missing those mistakes, test tools fail to catch those mistakes, and hackers keep finding ways to exploit those mistakes.
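
To underline how long-solved some of these mistakes are, here is a minimal Python sketch of the standard defense against SQL injection: parameterized queries, shown with the built-in sqlite3 module and a hypothetical table. The vulnerable pattern is left commented out for contrast.

    # Minimal sketch: parameterized queries neutralize SQL injection.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

    user_input = "alice' OR '1'='1"   # a classic injection attempt

    # Vulnerable: string concatenation lets the input rewrite the query.
    # rows = conn.execute("SELECT role FROM users WHERE name = '" + user_input + "'")

    # Safe: the driver treats the input strictly as data, never as SQL.
    rows = conn.execute("SELECT role FROM users WHERE name = ?", (user_input,))
    print(rows.fetchall())  # [] -- the injection attempt matches nothing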

One way to see the repeat offenders is to look at the OWASP Top 10. That’s a sometimes controversial ranking of the 10 primary vulnerabilities, published every three or four years by the Open Web Application Security Project.

The OWASP Top 10 list is not controversial because it’s flawed. Rather, some believe that the list is too limited: by focusing only on the top 10 web code vulnerabilities, they assert, it encourages neglect of the long tail. What’s more, there’s often jockeying in the OWASP community about the Top 10 ranking and whether the 11th or 12th items belong in the list instead of something else. There’s merit to those arguments, but for now, the OWASP Top 10 is an excellent common ground for discussing security-aware coding and testing practices.

Note that the top 10 list doesn’t directly represent the 10 most common attacks. Rather, it’s a ranking of risk. There are four factors used for this calculation. One is the likelihood that applications would have specific vulnerabilities; that’s based on data provided by companies. That’s the only “hard” metric in the OWASP Top 10. The other three risk factors are based on professional judgement.

It boggles the mind that a majority of top 10 issues appear across the 2007, 2010, 2013, and draft 2017 OWASP lists. That doesn’t mean that these application security vulnerabilities have to remain on your organization’s list of top problems, though—you can swat those flaws.

Read more in my essay, “The OWASP Top 10 is killing me, and killing you!”

Apply patches. Apply updates. Those are considered to be among the lowest-hanging of the low-hanging fruit for IT cybersecurity. When commercial products release patches, download and install the code right away. When open-source projects disclose a vulnerability, do the appropriate update as soon as you can, everyone says.

A problem is that there are so many patches and updates. They’re found in everything from device firmware to operating systems to back-end server software to mobile apps. Even discovering all the applicable patches is a huge effort. You have to know:

  • All the hardware and software in your organization — so you can scan the vendors’ websites or emails for update notices. This may include the data center, the main office, remote offices, and employees’ homes. Oh, and rogue software installed without IT’s knowledge.
  • The versions of all the hardware and software instances — so you can tell which updates apply to you, and which don’t. Sometimes there may be an old version somewhere that’s never been patched.
  • The dependencies. Installing a new operating system may break some software. Installing a new version of a database may require changes on a web application server.
  • The location of each of those instances — so you can know which ones need patching. Sometimes this can be done remotely, but other times may require a truck roll.
  • The administrator access links, usernames, and passwords — hopefully, those are not set to “admin/admin.” The downside of changing default admin passwords is that you have to remember the new ones. Sure, sometimes you can make changes with, say, any Active Directory user account that has the proper privileges. That won’t help you, though, with most firmware or mobile devices.

The above steps are merely for discovery of the issue and the existence of a patch. You haven’t protected anything until you’ve installed the patch, which often (but not always) requires taking the hardware, software, or service offline for minutes or hours. This requires scheduling. And inconvenience. Even if you have patch-management tools (and there are many available), too many low-hanging fruit can be overlooked.
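
As one small, hedged illustration of that discovery step, here is a Python sketch that covers only Debian-style Linux hosts, with hypothetical advisory data: enumerate what is installed, then compare versions against the ones known to be vulnerable. A real enterprise would need the same pass for firmware, Windows, network gear, mobile devices, and more.

    # Minimal sketch (Debian/Ubuntu hosts only): flag installed packages with
    # versions listed in an advisory feed. The advisory data here is made up.
    import subprocess

    advisories = {  # package -> versions known to be vulnerable
        "openssl": {"1.1.0f-3"},
    }

    out = subprocess.run(
        ["dpkg-query", "-W", "-f", "${Package} ${Version}\n"],
        capture_output=True, text=True, check=True,
    ).stdout

    for line in out.splitlines():
        package, _, version = line.partition(" ")
        if version in advisories.get(package, set()):
            print("PATCH NEEDED:", package, version)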

You Can’t Wait for That Downtime Window

Oracle Executive Chairman and CTO Larry Ellison made important points about patching during his keynote at OpenWorld 2017:

Our data centers are enormously complicated. There are lots of servers and storage and operating systems, virtual machines, containers and databases, data stores, file systems. And there are thousands of them, tens of thousands, hundreds of thousands of them. It’s hard for people to locate all these things and patch them. They have to be aware there’s a vulnerability. It’s got to be an automated process.

You can’t wait for a downtime window, where you say, “Oh, I can’t take the system down. I know I’ve got to patch this, but we have scheduled downtime middle of next month.” Well, that’s wrong thinking and that’s kind of lack of priority for security.

All that said, patching and updating must be a priority. Dr. Ron Layton, Deputy Assistant Director of the U.S. Secret Service, said at the NetEvents Global Press Summit, September 2017:

Most successful hacks and breaches – most of them – were because low-level controls were not in place. That’s it. That’s it. Patch management. It’s the low-level stuff that will get you to the extent that the bad guys will say, I’m not going to go here. I’m going to go somewhere else. That’s it.

The Scale of Security Issues Is Huge

I receive many regular emails from various defect-tracking and patch-awareness lists. Here’s one weekly sample from the CERT teams at the U.S. Dept. of Homeland Security. IT pros won’t be surprised at how large it is: https://www.us-cert.gov/ncas/bulletins/SB17-296

There are 25 high-severity vulnerabilities on this list, most from Microsoft, some from Oracle. There are lots of medium-severity vulnerabilities from Microsoft, OpenText, Oracle, and WPA – the latter being the widely reported bug in Wi-Fi Protected Access. In addition, there are a few low-severity vulnerabilities, and then page after page of those labeled “severity not yet assigned.” The list goes on and on, even hitting infrastructure products from Cisco and F5. And lots more WiFi issues.

This is a typical week – and not all the vulnerabilities in the CERT report have patches yet. CERT is only one source, by the way. Want more? Here’s a list of security-related updates from Apple. Here is a list of security updates from Juniper Networks. A list from Microsoft. And one from Red Hat, too.

So: When security analysts say that enterprises merely need to keep up with patches and fixes, well, yes, that’s the low-hanging fruit. However, nobody talks about how much of that low-hanging fruit there is. The amount is overwhelming in an enterprise. No wonder some rotten fruit slips through the cracks.

Open source software (OSS) offers many benefits for organizations large and small—not the least of which is the price tag, which is often zero. Zip. Nada. Free-as-in-beer. Beyond that compelling price tag, what you often get with OSS is a lack of a hidden agenda. You can see the project, you can see the source code, you can see the communications, you can see what’s going on in the support forums.

When OSS goes great, everyone is happy, from techies to accounting teams. Yes, the legal department may want to scrutinize the open source license to make sure your business is compliant, but in most well-performing scenarios, the lawyers are the only ones frowning. (But then again, the lawyers frown when scrutinizing commercial closed-source software license agreements too, so you can’t win.)

The challenge with OSS is that it can be hard to manage, especially when something goes wrong. Depending on the open source package, there can be a lot of mysteries, which can make ongoing support, including troubleshooting and performance tuning, a real challenge. That’s because OSS is complex.

It’s not like you can say, well, here’s my Linux distribution on my server. Oh, and here’s my open source application server, and my open source NoSQL database, and my open source log suite. In reality, those bits of OSS may be from separate OSS projects, which may (or may not) have been tested for how well they work together.

A separate challenge is that because OSS is often free-as-in-beer, the software may not be in the corporate inventory. That’s especially common if the OSS is in the form of a library or an API that might be built into other applications you’ve written yourself. The OSS might be invisible but with the potential to break or cause problems down the road.

You can’t manage what you don’t know about

When it comes to OSS, there may be a lot you don’t know about, such as those license terms or interoperability gotchas. Worse, there can be maintenance issues — and security issues. Ask yourself: Does your organization know all the OSS it has installed on servers on-prem or in the cloud? Coded into custom applications? Are you sure that all patches and fixes have been installed (and installed correctly), even on virtual machine templates, and that there are no security vulnerabilities?
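
A minimal Python sketch of what even a partial inventory looks like, covering just one slice of the stack: the open source Python packages installed in a single environment, enumerated with the standard library’s importlib.metadata. Firmware, OS packages, JavaScript dependencies, and embedded libraries would each need their own pass.

    # Minimal sketch: list installed Python packages with version and declared license.
    from importlib.metadata import distributions

    for dist in sorted(distributions(), key=lambda d: (d.metadata["Name"] or "").lower()):
        name = dist.metadata["Name"]
        license_info = dist.metadata.get("License", "unknown") or "unknown"
        print(name, dist.version, "license:", license_info)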

In my essay “The six big gotchas: The impact of open source on data centers,” we’ll dig into the key topics: License management, security, patch management, maximizing uptime, maximizing performance, and supporting the OSS.

There are two popular ways of migrating enterprise assets to the cloud:

  1. Write new cloud-native applications.
  2. Lift-and-shift existing data center applications to the cloud.

Gartner’s definition: “Lift-and-shift means that workloads are migrated to cloud IaaS in as unchanged a manner as possible, and change is done only when absolutely necessary. IT operations management tools from the existing data center are deployed into the cloud environment largely unmodified.”

There’s no wrong answer, no wrong way of proceeding. Some data center applications (including servers and storage) may be easier to move than others. Some cloud-native apps may be easier to write than others. Much depends on how much interconnectivity there is between the applications and other software; that’s why, for example, public-facing websites are often the easiest workloads to move to the cloud, while tightly coupled internal software, such as inventory control or factory-floor automation, can be trickier.

That’s why in some cases, a hybrid strategy is best. Some parts of the applications are moved up to the cloud, while others remain in the data centers, with SD-WANs or other connectivity linking everything together in a secure manner.

In other words, no one size fits all. And no one timeframe fits all, especially when it comes to lifting-and-shifting.

SaaS? PaaS? It Depends.

A recent survey from the Oracle Applications User Group (OAUG) showed that 70% of respondents who have plans to adopt Oracle Cloud solutions will do so in the next three years. About 35% plan to implement Software-as-a-Service (SaaS) solutions to run with their existing Oracle on-premises installations, and 29% plan to use Platform-as-a-Service (PaaS) services to accelerate software development efforts in the next 12 months.

Joe Paiva, CIO of the U.S. Commerce Department’s International Trade Administration (ITA), is a fan of lift-and-shift. He said at a cloud conference that “Sometimes it makes sense because it gets you there. That was the key. We had to get there because we would be no worse off or no better off, and we were still spending a lot of money, but it got us to the cloud. Then we started doing rationalization of hardware and applications, and dropped our bill to Amazon by 40 percent compared to what we were spending in our government data center. We were able to rationalize the way we use the service.” Paiva estimates government agencies could save 5%-15% using lift-and-shift.

The benefits of moving existing workloads to the cloud are almost entirely financial. If you can shut down a data center and pay less to run the application in the cloud, it can be a good short-term solution with immediate ROI. Gartner cautions, however, that lift and shift “generally results in little created value. Plus, it can be a more expensive option and does not deliver immediate cost savings.” Much depends on how much it costs to run that application today.

A Multi-Track Process for Cloud Migration

The real benefits of new cloud development and deployment architectures take time to realize. For many organizations, there may be a multi-track process:

First track: Lift-and-shift existing workloads that are relatively easy to migrate, while simultaneously writing cloud-native applications for new projects. Those provide the biggest and fastest return on investment, while leaving data center workloads in place and untouched.

Second track: Write cloud-native applications for the remaining data-center workloads, the ones impractical to migrate in their existing form. These projects will be slower, but the payoff is the ability to turn off some or all existing data centers – and eliminate their associated expenses, such as power and cooling, bandwidth, and physical space.

Third track: At some point, revisit the lifted-and-shifted workloads to see which would significantly benefit from being rewritten as cloud-native apps. Unless there is an order of magnitude increase in efficiency, or significant added functionality, the financial returns won’t be high – or may be nonexistent. For some applications, it may never make sense to redesign and rewrite them in a cloud-native way. So, those old enterprise applications may live on for years to come.

About a decade ago, I purchased a piece of a mainframe on eBay — the name ID bar. Carved from a big block of aluminum, it says “IBM System/370 168,” and it hangs proudly over my desk.

My time on mainframes was exclusively with the IBM System/370 series. With a beautiful IBM 3278 color display terminal on my desk, and, later, a TeleVideo 925 terminal and an acoustic coupler at home, I was happier than anyone had a right to be.

We refreshed our hardware often. The latest variant I worked on was the System/370 4341, introduced in early 1979, which ran faster and cooler than the very costly 3031 mainframes we had before. I just found this in the IBM archives: “The 4341, under a 24-month contract, can be leased for $5,975 a month with two million characters of main memory and for $6,725 a month with four million characters. Monthly rental prices are $7,021 and $7,902; purchase prices are $245,000 and $275,000, respectively.” And we had three, along with tape drives, disk drives (in IBM-speak, DASD, for Direct Access Storage Devices), and high-speed line printers. Not cheap!

Our operating system on those systems was called Virtual Machine, or VM/370. It consisted of two parts, Control Program and Conversational Monitor System. CP was the timesharing operating system – in modern virtualization terms, the hypervisor running on the bare metal. CMS was the user interface that users logged into; it provided access to not only a text-based command console, but also file storage and a library of tools, such as compilers. (We often referred to the platform as CP/CMS.)

Thanks to VM/370, each user believed she had access to a 100% dedicated and isolated System/370 mainframe, with every resource available and virtualized. (For example, she appeared to have dedicated access to tape drives, but they appeared non-functional if her tapes weren’t loaded, or if she hadn’t bought access to the drives.)

My story about mainframes isn’t just reminiscing about the time of dinosaurs. When programming those computers, which I did full-time in the late 1970s and early 1980s, I learned a lot of lessons that are very applicable today. Read all about that in my article for HP Enterprise Insights, “4 lessons for modern software developers from 1970s mainframe programming.”

To get the most benefit from the new world of cloud-native server applications, forget about the old way of writing software. In the old model, architects designed software. Programmers wrote the code, and testers tested it on a test server. Once the testing was complete, the code was “thrown over the wall” to administrators, who installed the software on production servers, and who essentially owned the applications moving forward, only going back to the developers if problems occurred.

The new model, which appeared about 10 years ago, is called “DevOps,” or developer operations. In the DevOps model, architects, developers, testers, and administrators collaborate much more closely to create and manage applications. Specifically, developers play a much broader role in the day-to-day administration of deployed applications, and use information about how the applications are running to tune and enhance them.

The involvement of developers in administration made DevOps perfect for cloud computing. Because administrators had fewer responsibilities (i.e., no hardware to worry about), it was less threatening for those developers and administrators to collaborate as equals.

Change matters

In that old model of software development and deployment, developers were always change agents. They created new stuff, or added new capabilities to existing stuff. They embraced change, including new technologies – and the faster they created change (i.e., wrote code), the more competitive their business.

By contrast, administrators are tasked with maintaining uptime, while ensuring security. Change is not a virtue to those departments. While admins must accept change as they install new applications, it’s secondary to maintaining stability. Indeed, admins could push back against deploying software if they believed those apps weren’t reliable, or might affect the stability of the data center as a whole.

With DevOps, everyone can embrace change. One of the ways that works, with cloud computing, is to reduce the risk that an unstable application can damage system reliability. In the cloud, applications can be built and deployed on bare-metal servers (as in a data center), or in virtual machines or containers.

DevOps works best when software is deployed in VMs or containers, since those are isolated from other systems – thereby reducing risk. Turns out that administrators do like change, if there’s minimal risk that changes will negatively affect overall system reliability, performance, and uptime.

Benefits of DevOps

Goodbye, CapEx; hello, OpEx. Cloud computing moves enterprises from capital-expense data centers (which must be built, electrified, cooled, networked, secured, stocked with servers, and refreshed periodically) to operational-expense services (where the business pays monthly for the processors, memory, bandwidth, and storage reserved and/or consumed). When you couple those benefits with virtual machines, containers, and DevOps, you get:

  • Easier Maintenance: It can be faster to apply patches and fixes to software running in virtual machines – and to use snapshots to roll back if needed.
  • Better Security: Cloud platform vendors offer some security monitoring tools, and it’s relatively easy to install top-shelf protections like next-generation firewalls – themselves offered as cloud services.
  • Improved Agility: Studies show that the process of designing, coding, testing, and deploying new applications can be 10x faster than traditional data center methods, because the cloud reduces provisioning delays and provides robust resources.
  • Lower Cost: Vendors such as Amazon, Google, Microsoft, and Oracle are aggressively lowering prices to gain market share — and in many cases, those prices are an order of magnitude below what it would cost to provision an enterprise data center.
  • Massive Scale: Need more power? Need more bandwidth? Need more storage? Push a button, and the resources are live. If those needs are short-term, you can turn the dials back down, to lower the monthly bill. You can’t do that in a data center.

Rapidly evolving

The technologies used in creating cloud-native applications are evolving rapidly. Containers, for example, are relatively new, yet are becoming incredibly popular because they require 4x-10x fewer resources than VMs – thereby slashing OpEx costs even further. Software development and management tools, like Kubernetes (for orchestration of multiple containers), Chef (which makes it easy to manage cloud infrastructure), Puppet (which automates pushing out cloud service configurations), and OpenWhisk (which strips down cloud services to “serverless” basics) push the revolution farther.

DevOps is more important than the meaningless “developer operations” moniker implies. It’s a whole new, faster way of doing computing with cloud-native applications. Because rapid change means everything in achieving business agility, everyone wins.

“One of these things is not like the others,” the television show Sesame Street taught generations of children. Easy. Let’s move to the next level: “One or more of these things may or may not be like the others, and those variances may or may not represent systems vulnerabilities, failed patches, configuration errors, compliance nightmares, or imminent hardware crashes.” That’s a lot harder than distinguishing cookies from brownies.

Looking through gigabytes of log files and transactions records to spot patterns or anomalies is hard for humans: it’s slow, tedious, error-prone, and doesn’t scale. Fortunately, it’s easy for artificial intelligence (AI) software, such as the machine learning algorithms built into Oracle Management Cloud. What’s more, the machine learning algorithms can be used to direct manual or automated remediation efforts to improve security, compliance, and performance.

Consider how large-scale systems gradually drift away from their required (or desired) configuration, a key area of concern in the large enterprise. In his Monday, October 2 Oracle OpenWorld session on managing and securing systems at scale using AI, Prakash Ramamurthy, senior vice president of systems management at Oracle, talked about how drift happens. Imagine that you’ve applied a patch, but then later you spool up a virtual server that is running an old version of a critical service or contains an obsolete library with a known vulnerability. That server is out of compliance, Ramamurthy said. Drift.

Drift is bad, said Ramamurthy, and detecting and stopping drift is a core competency of Oracle Management Cloud. It starts with monitoring cloud and on-premises servers, services, applications, and logs, using machine learning to automatically understand normal behavior and identify anomalies. No training necessary here: A variety of machine learning algorithms teach themselves how to play the “one of these things is not like the others” game with your data, your systems, and your configuration, and also to classify the systems in ways that are operationally relevant. Even if those logs contain gigabytes of information on hundreds of thousands of transactions each second.
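
Here is a deliberately tiny Python sketch of the underlying idea, with hypothetical server names and settings; Oracle Management Cloud’s machine learning goes far beyond this, but the core question is the same: does each system still match its approved baseline?

    # Minimal sketch of drift detection: fingerprint each server's reported
    # configuration and compare it to the approved baseline. Data is hypothetical.
    import hashlib
    import json

    def fingerprint(config):
        return hashlib.sha256(json.dumps(config, sort_keys=True).encode()).hexdigest()

    baseline = fingerprint({"openssl": "1.1.1g", "tls_min_version": "1.2"})

    fleet = {
        "web-01": {"openssl": "1.1.1g", "tls_min_version": "1.2"},
        "web-02": {"openssl": "1.0.2k", "tls_min_version": "1.0"},  # spun up from an old template
    }

    for server, config in fleet.items():
        if fingerprint(config) != baseline:
            print("DRIFT:", server, "is out of compliance")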

Learn more in my article for Forbes, “Catch The Drift With Machine Learning — Before The Drift Catches You.”