Users care passionately about their software being fast and responsive. You need to give your applications both 0-60 speed and the strongest long-term endurance. Here are 14 guidelines for choosing a deployment platform to optimize performance, whether your application runs in the data center or the cloud.

Faster! Faster! Faster! That killer app won’t earn your company a fortune if the software is slow as molasses. Sure, your development team did the best it could to write server software that offers the maximum performance, but that doesn’t mean diddly if those bits end up on a pokey old computer that’s gathering cobwebs in the server closet.

Users don’t care where it runs as long as it runs fast. Your job, in IT, is to make the best choices possible to enhance application speed, including deciding if it’s best to deploy the software in-house or host it in the cloud.

When choosing an application’s deployment platform, there are 14 things you can do to maximize the opportunity for the best overall performance. But first, let’s make two assumptions:

  • These guidelines apply only to choosing the best data center or cloud-based platform, not to choosing the application’s software architecture. The job today is simply to find the best place to run the software.
  • I presume that if you are talking about a cloud deployment, you are choosing infrastructure as a service (IaaS) instead of platform as a service (PaaS). What’s the difference? In PaaS, the host provides the operating system and application platform, such as Windows or Linux with .NET or Java; all you provide is the application. In IaaS, you can provide, install, and configure the operating system yourself, giving you more control over the installation.

Here’s the checklist

  1. Run the latest software. Whether in your data center or in the IaaS cloud, install the latest version of your preferred operating system, the latest core libraries, and the latest application stack. (That’s one reason to go with IaaS, since you can control updates.) If you can’t control this yourself, because you’re assigned a server in the data center, pick the server that has the latest software foundation.
  2. Run the latest hardware. Assuming we’re talking about the x86 architecture, look for the latest Intel Xeon processors, whether in the data center or in the cloud. As of mid-2018, I’d want servers running the Xeon E5 v3 or later, or E7 v4 or later. If you use anything older than that, you’re not getting the most out of the applications or taking advantage of the hardware chipset. For example, some E7 v4 chips have significantly improved instructions-per-CPU-cycle processing, which is a huge benefit. Similarly, if you choose AMD or another processor, look for the latest chip architectures.
  3. If you are using virtualization, make sure the server has the best and latest hypervisor. The hypervisor is key to a virtual machine’s (VM) performance—but not all hypervisors are created equal. Many of the top hypervisors have multiple product lines as well as configuration settings that affect performance (and security). There’s no way to know which hypervisor is best for any particular application. So, assuming your organization lets you make the choice, test, test, test. However, in the not-unlikely event you are required to go with the company’s standard hypervisor, make sure it’s the latest version.
  4. Take Spectre and Meltdown into account. The patches for Spectre and Meltdown slow down servers, but the extent of the performance hit depends on the server, the server’s firmware, the hypervisor, the operating system, and your application. It would be nice to give an overall number, such as expect a 15 percent hit (a number that’s been bandied about, though some dispute its accuracy). However, there’s no way to know except by testing. Thus, it’s important to know if your server has been patched. If it hasn’t been yet, expect application performance to drop when the patch is installed. (If it’s not going to be patched, find a different host server!)
  5. Base the number of CPUs and cores and the clock speed on the application requirements. If your application and its core dependencies (such as the LAMP stack or the .NET infrastructure) are heavily threaded, the software will likely perform best on servers with multiple CPUs, each equipped with the greatest number of cores—think 24 cores. However, if the application is not particularly threaded or runs in a not-so-well-threaded environment, you’ll get the biggest bang with the absolute top clock speeds on an 8-core server.
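
One way to act on item 5 is to measure how your workload actually scales with parallelism before you commit to hardware. Here is a minimal, illustrative Python sketch (the busy_work function is a stand-in for your own CPU-bound task, not a real benchmark suite): if throughput stops improving well below the core counts you are considering, favor top clock speed over core count.

```python
# Rough parallel-scaling probe: times a stand-in CPU-bound task at
# increasing process counts. Substitute your own workload for busy_work().
import time
from multiprocessing import Pool

def busy_work(n: int) -> int:
    # Placeholder CPU-bound task; replace with a slice of your real workload.
    total = 0
    for i in range(n):
        total += i * i
    return total

def timed_run(workers: int, tasks: int = 16, size: int = 2_000_000) -> float:
    start = time.perf_counter()
    with Pool(processes=workers) as pool:
        pool.map(busy_work, [size] * tasks)
    return time.perf_counter() - start

if __name__ == "__main__":
    baseline = timed_run(1)
    for workers in (2, 4, 8, 16):
        elapsed = timed_run(workers)
        print(f"{workers:>2} workers: {elapsed:6.2f}s "
              f"(speedup {baseline / elapsed:4.1f}x)")
```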

But wait, there’s more!

Read the full list of 14 recommendations in my story for HPE Enterprise.nxt, “Checklist: Optimizing application performance at deployment.”

You wouldn’t enjoy paying a fine of 4 percent of your company’s total revenue. But that’s the potential penalty if your company is found in violation of the European Union’s new General Data Protection Regulation (GDPR), which goes into effect May 25, 2018. As you’ve probably read, organizations anywhere in the world are subject to GDPR if they have customers in the EU and are storing any of their personal data.

GDPR compliance is a complex topic. It’s too much for one article — heck, books galore are being written about it, seminars abound, and GDPR consultants are on every street corner.

One challenge is that GDPR is a regulation, not a how-to guide. It’s big on explaining penalties for failing to detect and report a data breach in a sufficiently timely manner. It’s not big on telling you how to detect that breach. Rather than tell you what to do, let’s see what could go wrong with your GDPR plans—to help you avoid that 4 percent penalty.

First, the ground rules: GDPR’s overarching goal is to protect citizens’ privacy. In particular, the regulation pertains to anything that can be used to directly or indirectly identify a person. Such data can be anything: a name, a photo, an email address, bank details, social network posts, medical information, or even a computer IP address. To that end, data breaches that may pose a risk to individuals must be disclosed to the authorities within 72 hours and to the affected individuals soon thereafter.

What does that mean? As part of the regulations, individuals must have the ability to see what data you have about them, correct that data if appropriate, or have that data deleted, again if appropriate. (If someone owes you money, they can’t ask you to delete that record.)
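
As a thought experiment only (not a compliance recipe), those access, rectification, and erasure rights can be modeled as three operations over wherever personal data lives. The in-memory store and field names below are hypothetical placeholders; a real implementation would have to span every system that touches personal data.

```python
# Illustrative-only model of GDPR data-subject requests: access, rectify, erase.
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class CustomerRecord:
    email: str
    name: str
    outstanding_balance: float = 0.0
    marketing_profile: dict = field(default_factory=dict)

class PersonalDataStore:
    """Hypothetical stand-in for wherever personal data actually lives."""

    def __init__(self) -> None:
        self._records: Dict[str, CustomerRecord] = {}

    def add(self, record: CustomerRecord) -> None:
        self._records[record.email] = record

    def access_request(self, email: str) -> Optional[CustomerRecord]:
        # Right of access: show the person everything you hold about them.
        return self._records.get(email)

    def rectify(self, email: str, **corrections) -> None:
        # Right to rectification: correct inaccurate fields.
        record = self._records[email]
        for key, value in corrections.items():
            setattr(record, key, value)

    def erasure_request(self, email: str) -> bool:
        # Right to erasure, with a carve-out: a record backing an unpaid
        # debt is not deleted (the "they can't ask you to delete that" case).
        record = self._records.get(email)
        if record is None:
            return False
        if record.outstanding_balance > 0:
            record.marketing_profile.clear()  # erase what you legitimately can
            return False
        del self._records[email]
        return True

store = PersonalDataStore()
store.add(CustomerRecord("anna@example.eu", "Anna Schmidt", outstanding_balance=120.0))
print(store.access_request("anna@example.eu"))
print(store.erasure_request("anna@example.eu"))  # False: open balance remains
```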

Enough preamble. Let’s get into ten common problems.

First: Your privacy and data retention policies aren’t compliant with GDPR

There’s no specific policy wording required by GDPR. However, the policies must meet the overall objectives of GDPR, as well as the requirements in any other jurisdictions in which you operate (such as the United States). What would Alan do? Look at policies from big multinationals that do business in Europe and copy what they do, working with your legal team. You’ve got to get it right.

Second: Your actual practices don’t match your privacy policy

It’s easy to create a compliant privacy policy but hard to ensure your company actually is following it. Do you claim that you don’t store IP addresses? Make sure you’re not. Do you claim that data about a European customer is never stored in a server in the United States? Make sure that’s truly the case.

For example, let’s say you store information about German customers in Frankfurt. Great. But if that data is backed up to a server in Toronto, maybe not great.
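
One way to catch exactly this kind of mismatch is a residency audit over an inventory of your data stores, backups included. A minimal sketch, assuming a hypothetical inventory tagged with region and whether each store holds EU personal data:

```python
# Minimal residency audit: flag any store (including backups) holding EU
# personal data outside an approved region list. The inventory is hypothetical.
EU_APPROVED_REGIONS = {"eu-frankfurt", "eu-paris", "eu-dublin"}

data_stores = [
    {"name": "customers-primary", "region": "eu-frankfurt", "holds_eu_personal_data": True},
    {"name": "customers-backup",  "region": "ca-toronto",   "holds_eu_personal_data": True},
    {"name": "web-logs",          "region": "us-east",      "holds_eu_personal_data": False},
]

violations = [
    store for store in data_stores
    if store["holds_eu_personal_data"] and store["region"] not in EU_APPROVED_REGIONS
]

for store in violations:
    print(f"POLICY MISMATCH: {store['name']} holds EU personal data in {store['region']}")
```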

Third: Your third-party providers aren’t honoring your GDPR responsibilities

Let’s take that customer data in Frankfurt. Perhaps you have a third-party provider in San Francisco that does data analytics for you, or that runs credit reports or handles image resizing. In those processes, does your customer data ever leave the EU? Even if it stays within the EU, is it protected in ways that are compliant with GDPR and other regulations? It’s your responsibility to make sure: While you might sue a supplier for a breach, that won’t cancel out your own primary responsibility to protect your customers’ privacy.

A place to start with compliance: Do you have an accurate, up-to-date listing of all third-party providers that ever touch your data? You can’t verify compliance if you don’t know where your data is.

But wait, there’s more

You can read the entire list of common GDPR failures in my story for HPE Enterprise.nxt, “10 ways to fail at GDPR compliance.”

The public cloud is part of your network. But it’s also not part of your network. That can make security tricky, and sometimes a nightmare.

The cloud represents resources that your business rents: computational resources, like CPU and memory; infrastructure resources, like Internet bandwidth and internal networks; storage resources; and management platforms, like the tools needed to provision and configure services.

Whether it’s Amazon Web Services, Microsoft Azure or Google Cloud Platform, it’s like an empty apartment that you rent for a year or maybe a few months. You start out with empty space, put in there whatever you want and use it however you want. Is such a short-term rental apartment your home? That’s a big question, especially when it comes to security. By the way, let’s focus on platform-as-a-service (PaaS) and infrastructure-as-a-service (IaaS), where your business has a great deal of control over how the resource is used — like an empty rental apartment.

We are not talking about software-as-a-service (SaaS), like Office 365 or Salesforce.com. That’s where you show up, pay your bill and use the resources as configured. That’s more like a hotel room: you sleep there, but you can’t change the furniture. Security is almost entirely the responsibility of the hotel; your security responsibility is to ensure that you don’t lose your key, and to refuse to open the door for strangers. The SaaS equivalent: Protect your user accounts and passwords, and ensure users only have the least necessary access privileges.

Why PaaS/IaaS are part of your network

As Peter Parker knows, Spider-Man’s great powers require great responsibility. That’s true in the enterprise data center — and it’s true in PaaS/IaaS networks. The customer is responsible for provisioning servers, storage and virtual machines. Not only that, but the customer also is responsible for creating connections between the cloud service and other resources, such as an enterprise data center (in a hybrid cloud architecture) or other cloud providers (in a multi-cloud architecture).

The cloud provider sets terms for use of the PaaS/IaaS, and allows inbound and outbound connections. There are service level guarantees for availability of the cloud, and of servers that the cloud provider owns. Otherwise, everything is on the enterprise. Think of the PaaS/IaaS cloud as being a remote data center that the enterprise rents, but where you can’t physically visit and see your rented servers and infrastructure.

Why PaaS/IaaS are not part of your network

In short, except for the few areas that the cloud provider handles — availability, cabling, power supplies, connections to carrier networks, physical security — you own it. That means installing patches and fixes. That means instrumenting servers and virtual machines. That means protecting them with software-based firewalls. That means doing backups, whether using the cloud provider’s value-added services or someone else’s. That means anti-malware.

That’s not to minimize the benefits the cloud provider offers you. Power and cooling are a big deal. So are racks and cabling. So is that physical security, and having 24×7 on-site staffing in the event of hardware failures. Also, there’s click-of-a-button ability to provision and spool up new servers to handle demand, and then shut them down again when not needed. Cloud providers can also provide firewall services, communications encryption, and of course, consulting on security.

The word elastic is often used for cloud services. That’s what makes the cloud much more agile than an on-premises data center, or renting an equipment cage in a colocation center. It’s like renting an apartment where if you need a couple of extra bedrooms for a few months, you can upsize.

For many businesses, that’s huge. Read more about how great cloud power requires great responsibility in my essay for SecurityNow, “Public Cloud, Part of the Network or Not, Remains a Security Concern.”

It’s standard practice for a company to ask its tech suppliers to fill out detailed questionnaires about their security practices. Companies use that information when choosing a supplier. Too much is at stake, in terms of company reputation and customer trust, to be anything but thorough with information security.

But how can a company’s IT security teams be most effective in that technology buying process? How do they get all the information they need, while also staying focused on what really matters and not wasting their time? Oracle Chief Security Officer Mary Ann Davidson at the recent RSA Conference offered her tips on this IT security risk assessment process. Drawing on her extensive experience as both supplier and buyer of technology and cloud services in her role at Oracle, Davidson shared advice from both points of view.

Advice on business risk assessments

It’s time to put out an RFP to engage new technology providers or to conduct an annual assessment of existing service providers. What do you ask in such a vendor security assessment questionnaire? There are many existing documents and templates, some focused on specific industries, others on regulated sectors or regulated information. Those should guide any assessment process, but aren’t the only factors, says Davidson. Consider these practical tips to get the crucial data you need, and avoid gathering a lot of information that will only distract you from issues that are important for keeping your data secure.

  1. Have a clear objective in mind. The purpose of the vendor security assessment questionnaire should be to assess the security performance of the vendor in light of the organization’s tolerance for risk on a given project.
  2. Limit the scope of an assessment to the potential security risks for services that the supplier is offering you. Those services are obviously critical, because they could affect your data, operations, and security. There is no value in focusing on a supplier’s purely internal systems if they don’t contain or connect to your data. By analogy, “you care about the security of a childcare provider’s facility,” says Davidson. “It’s not relevant to ask about the security of the facility owner’s vacation home in Lake Tahoe.”
  3. When possible, align the questions with internationally recognized, relevant, independently developed standards. It’s reasonable to expect service providers to offer open services that conform to true industry standards. Be wary of faux standards, which are the opposite of open—they could be designed to encourage tech buyers to trust what they think are specifications designed around industry consensus, but which are really pushing one tech supplier’s agenda or that of a third-party certification business.

There are a lot more tips in my story for Forbes, “IT Security Risk Assessments: Tips For Streamlining Supplier-Customer Communication.”

Chapter One: Christine Hall

Should the popular Linux operating system be referred to as “Linux” or “GNU/Linux”? It’s a thing, or at least it used to be, writes my friend Christine Hall in her aptly named article, “Is It Linux or GNU/Linux?” published in Linux Journal on May 11:

Some may remember that the Linux naming convention was a controversy that raged from the late 1990s until about the end of the first decade of the 21st century. Back then, if you called it “Linux”, the GNU/Linux crowd was sure to start a flame war with accusations that the GNU Project wasn’t being given due credit for its contribution to the OS. And if you called it “GNU/Linux”, accusations were made about political correctness, although operating systems are pretty much apolitical by nature as far as I can tell.

Christine (aka Bride of Linux) quotes a number of learned people. That includes Steven J. Vaughan-Nichols, one of the top experts in the politics of open-source software – and frequent critic of the antics of Richard M. Stallman (aka RMS) who founded the Free Software Foundation, and who insists that everyone call the software GNU/Linux.

Here’s what Steven (aka SJVN) said in the article:

“Enough already”, he said. “RMS tried, and failed, to create an operating system: Hurd. He and the Free Software Foundation’s endless attempts to plaster his GNU name to the work of Linus Torvalds and the other Linux kernel developers is disingenuous and an insult to their work. RMS gets credit for EMACS, GPL, and GCC. Linux? No.”

Another humble luminary sought out by Christine: Yours truly.

“For me it’s always, always, always, always Linux,” said Alan Zeichick, an analyst at Camden Associates who frequently speaks, consults and writes about open-source projects for the enterprise. “One hundred percent. Never GNU/Linux. I follow industry norms.”

To make a long story short: In the article, the consensus was for Linux, not GNU/Linux.

Chapter Two: figosdev

But then someone going by the handle “figosdev” authored a rebuttal, “Debunking the Usual Omission of GNU,” published on Techrights. To make a long story short, he believes that the operating system should be called GNU/Linux. Here’s my favorite part of figosdev’s missive (which was written in all lower-case):

ive heard about gnu and linux about a million times in over a decade. as of today ive heard of alan zeichick once, and camden associates (what do they even do?) once. im just going to call them linux, its the more popular term.

Riiight. figosdev never heard of me, fine (founder of SD Times, but figosdev probably never heard of that either). On the other hand, at least figosdev knows my name. I have no idea who figosdev is, except to infer that he/she/it is a developer on the fig component compiler project, since he/she/it is hiding behind a handle. And that brings me to…

Chapter Three: Richi Jennings

Christine Hall’s article sparked a lively debate on Twitter. As part of it, my friend Richi Jennings (quoted in the original article) tweeted:

Let’s end the story here, at least for now. Linux forever!

Oracle CEO Mark Hurd is known as an avid tennis fan and supporter of the sport’s development, having played in college at Baylor University. At the Collision Conference last week in New Orleans, Hurd discussed the similar challenges facing tennis players and top corporate executives.

“I like this sport because tennis teaches that you’re out there by yourself,” said Hurd, who was interviewed on stage by CNBC reporter Aditi Roy. “Tennis is like being CEO: You can’t call time out, you can’t bring in a substitute,” Hurd said. “Tennis is a space where you have to go out every day, rain or shine, and you’ve got to perform. It’s just like the business world.”

Performance returned to the center of the conversation when Roy asked about Oracle’s acquisition strategy. Hurd noted that Oracle’s leadership team gives intense scrutiny to acquisitions of any size. “We don’t go out of our way to spend money — it’s our shareholders’ money,” he said. “We also think about dividends and buying stock back.”

When it comes to mergers and acquisitions, Oracle is driven by three top criteria, Hurd said. “First, the company has to fit strategically with where we are going,” he said. “Second, it has to make fiscal sense. And third, we have to be able to effectively run the acquisition.”

Hurd emphasized that he’s focused on the future, not a company’s past performance. “We are looking for companies that will be part of things 5 or 10 years from now, not 5 or 10 years ago,” he said. “We want to move forward, in platforms and applications.”

To a large extent, that future includes artificial intelligence. Hurd was quick to say, “I’m not looking for someone to say, ‘I have an AI solution in the cloud, come to me.’” Rather, Oracle wants to be able to integrate AI directly into its applications, in a way that gives customers clear business returns.

He used the example of employee recruitment. “We recruit 2,000 college students today. It used to be done manually, but now we use machine learning and algorithms to figure out where to source people.” Not only does the AI help find potential employees, but it can help evaluate whether the person would be successful at Oracle. “We could never have done that before,” Hurd added.

Read more about what Hurd said at Collision, including his advice for aspiring CEOs, in my story for Forbes, “Mark Hurd On The Perfect Sport For CEOs — And Other Leadership Insights.”

You can also watch the entire 20-minute interview here.

No doubt you’ve heard about blockchain. It’s a distributed digital ledger technology that lets participants add and view blocks of transaction records, but not delete or change them without being detected.

Most of us know blockchain as the foundation of Bitcoin and other digital currencies. But blockchain is starting to enter the business mainstream as the trusted ledger for farm-to-table vegetable tracking, real estate transfers, digital identity management, financial transactions and all manner of contracts. Blockchain can be used for public transactions as well as for private business, inside a company or within an industry group.

What makes the technology so powerful is that there’s no central repository for this ever-growing sequential chain of transaction records, clumped together into blocks. Because that repository is replicated in each participant’s blockchain node, there is no single source of failure, and no insider threat within a single organization can impact its integrity.

“Blockchain lets you conduct transactions securely without requiring an intermediary, and records are secure and immutable,” says Mark Rakhmilevich, product management director at Oracle. “It also can eliminate offline reconciliations that can take hours, days or even weeks.”

And while the chain itself should be open for validation by any participant, some chains can be implemented with some form of access control to limit viewing of specific data fields. That way, participants can be permitted to view relevant data, but not everything in the chain.

A customer, for instance, might be able to verify that a contractor has a valid business license. The customer might also see the firm’s registered address and list of complaints—but not see the names of other customers. The state licensing board, on the other hand, may be allowed to access the customer list or see which jobs are currently in progress.
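
To make that concrete, here is a small, illustrative sketch of field-level visibility rules layered over a ledger record. The roles and fields are hypothetical; permissioned blockchain platforms implement this with their own access-control mechanisms.

```python
# Illustrative field-level visibility over a ledger record (roles and fields are hypothetical).
CONTRACTOR_RECORD = {
    "license_valid": True,
    "registered_address": "123 Main St, Springfield",
    "complaints": 2,
    "customer_list": ["Acme Corp", "Globex"],
    "jobs_in_progress": ["warehouse retrofit"],
}

VISIBLE_FIELDS = {
    "customer": {"license_valid", "registered_address", "complaints"},
    "licensing_board": {"license_valid", "registered_address", "complaints",
                        "customer_list", "jobs_in_progress"},
}

def view(record: dict, role: str) -> dict:
    # Return only the fields the role is permitted to see.
    allowed = VISIBLE_FIELDS.get(role, set())
    return {k: v for k, v in record.items() if k in allowed}

print(view(CONTRACTOR_RECORD, "customer"))         # no customer list
print(view(CONTRACTOR_RECORD, "licensing_board"))  # everything
```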

Business models and use cases

Blockchain is well-suited for managing transactions between companies or organizations that may not know each other well and where there’s no implicit or explicit trust. “Blockchain works because it’s peer-to-peer…and it provides an easy-to-track history, which can serve as an audit trail,” Rakhmilevich explains.

What’s more, blockchain smart contracts are ideal for automating manual or semi-automated processes prone to errors or fraud. “Blockchain can help when there might be challenges in proving that the data has not been tampered with or when verifying the source of a particular update or transaction is important,” Rakhmilevich says.

Blockchain has uses in many industries, including banking, securities, government, retail, healthcare, manufacturing and transportation. Take healthcare: Blockchain can provide immutable records on clinical trials. Think about all the data being collected and flowing to the pharmaceutical companies and regulators, all available instantly and from verified participants.

Read more about blockchain in my article for the Wall Street Journal, “Blockchain: It’s All About Business—and Trust.”

Blame people for the SOC scalability challenge. On the other hand, don’t blame your people. It’s not their fault.

The security operations center (SOC) team is frequently overwhelmed, particularly the Tier 1 security analysts tasked with triage. As companies grow and add more technology — including the Internet of Things (IoT) — that means more alerts.

As the enterprise adds more sophisticated security tools, such as Endpoint Detection and Response (EDR), that means more alerts. And more complex alerts. You’re not going to see a blinking red light that says: “You’re being hacked.” Or if you do see such an alert, it’s not very helpful.

The problem is people, say experts at the 2018 RSA Conference, which wrapped up last week. Your SOC team — or teams — simply can’t scale fast enough to keep up with the ever-increasing demand. Let’s talk about the five biggest problems challenging SOC scalability.

Reason #1: You can’t afford to hire enough analysts

You certainly can’t afford to hire enough Tier 2 analysts who respond to real — or almost certainly real — incidents. According to sites like Glassdoor and Indeed, be prepared to pay over $100,000 per year, per person.

Reason #2: You can’t even find enough analysts

“We’ve created a growing demand for labor, and thus, we’ve created this labor shortage,” said Malcolm Harkins, chief security and trust officer of Cylance. There are huge numbers of open positions at all levels of information security, and that includes in-enterprise SOC team members. Sure, you could pay more, or do competitive recruiting, but go back to the previous point: You can’t afford that. Perhaps a managed security service provider can afford to keep raising salaries, because an MSSP can monetize that expense. An ordinary enterprise can’t, because security is an expense.

Reason #3: You can’t train the analysts

Even with the best security tools, analysts require constant training on threats and techniques — which is expensive to offer, especially for a smaller organization. And wouldn’t you know it, as soon as you get a group of triage specialists or incident responders trained up nicely, off they go for a better job.

Read more, including two more reasons, in my essay for SecurityNow, “It’s the People: 5 Reasons Why SOC Can’t Scale.”

Got Terminator? Microsoft is putting artificial intelligence in charge of automatically responding to detected threats, with a forthcoming update to Windows Defender ATP.

Microsoft is expanding its use of artificial intelligence and big data analytics beyond the current level of machine learning in its security platform. Today, AI is used for incident detection and investigation, filtering out false positives and making it easier for humans in the security operations center (SOC) team to determine the correct response to an incident.

Soon, customers will be able to allow the AI to respond to some incidents automatically. Redmond claims this will cut time-to-remediation down to minutes. In a blog post released April 17, Moti Gindi, general manager for Windows Cyber Defense, wrote: “Threat investigation and remediation decisions can be taken automatically by Windows Defender ATP based on extensive historical data collected, stored and analyzed in our cloud (‘time travel’).”

What type of remediation? No, robots won’t teleport from the future and shoot lasers at the cybercriminals. At least, that’s not an announced capability. Rather, Windows Defender ATP will signal the Azure Active Directory user management and Microsoft Intune mobile device management platforms to temporarily revoke access privileges to cloud storage and enterprise applications, such as Office 365.

After the risk has been evaluated — or after the CEO has yelled at the CISO from her sales trip overseas — the access revocation can be reversed. Another significant part of the Windows Defender ATP announcements: Threat signal sharing between Microsoft’s various cloud platforms, which up until now have operated pretty much autonomously in terms of security.

In the example Microsoft offered, threats coming via a phishing email detected by Outlook 365 will be correlated with malware blocked by OneDrive for Business. In this incarnation, signal sharing will bring together Office 365, Azure, and Windows Defender ATP.
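
The workflow described above (detect, temporarily revoke access, review, then restore) can be sketched generically. To be clear, the helper functions below are hypothetical placeholders, not actual Windows Defender ATP, Azure Active Directory, or Intune APIs.

```python
# Generic sketch of detect -> revoke -> review -> restore. All helper calls are
# hypothetical placeholders, not real Microsoft APIs.
from enum import Enum

class Verdict(Enum):
    MALICIOUS = "malicious"
    BENIGN = "benign"

def revoke_cloud_access(user_id: str) -> None:
    print(f"[identity] temporarily revoking cloud app access for {user_id}")

def restore_cloud_access(user_id: str) -> None:
    print(f"[identity] restoring cloud app access for {user_id}")

def review_incident(user_id: str) -> Verdict:
    # Stand-in for analyst or automated investigation; benign here so the example completes.
    print(f"[soc] reviewing incident for {user_id}")
    return Verdict.BENIGN

def handle_alert(user_id: str, risk_score: float, threshold: float = 0.8) -> None:
    # Automated containment first, review second, restoration if the alert was benign.
    if risk_score < threshold:
        print(f"[soc] alert on {user_id} below threshold; queue for analyst triage")
        return
    revoke_cloud_access(user_id)
    if review_incident(user_id) is Verdict.BENIGN:
        restore_cloud_access(user_id)

handle_alert("jdoe@example.com", risk_score=0.93)
```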

Read more, including about Microsoft’s Mac support for security, in my essay for SecurityNow, “Microsoft Security Is Channeling the Terminator.”

Ransomware rules the cybercrime world – perhaps because ransomware attacks are often successful and financially remunerative for criminals. Ransomware features prominently in Verizon’s fresh-off-the-press 2018 Data Breach Investigations Report (DBIR). As the report says, although ransomware is still a relatively new type of attack, it’s growing fast:

Ransomware was first mentioned in the 2013 DBIR and we referenced that these schemes could “blossom as an effective tool of choice for online criminals”. And blossom they did! Now we have seen this style of malware overtake all others to be the most prevalent variety of malicious code for this year’s dataset. Ransomware is an interesting phenomenon that, when viewed through the mind of an attacker, makes perfect sense.

The DBIR explains that ransomware can be attempted with little risk or cost to the attacker. It can be successful because the attacker doesn’t need to monetize stolen data, only ransom the return of that data; and can be deployed across numerous devices in organizations to inflict more damage, and potentially justify bigger ransoms.

Botnets Are Also Hot

Ransomware wasn’t the only prominent attack; the 2018 DBIR also talks extensively about botnet-based infections. Verizon cites more than 43,000 breaches using customer credentials stolen from botnet-infected clients. It’s a global problem, says the DBIR, and can affect organizations in two primary ways:

The first way, you never even see the bot. Instead, your users download the bot, it steals their credentials, and then uses them to log in to your systems. This attack primarily targeted banking organizations (91%) though Information (5%) and Professional Services organizations (2%) were victims as well.

The second way organizations are affected involves compromised hosts within your network acting as foot soldiers in a botnet. The data shows that most organizations clear most bots in the first month (give or take a couple of days).

However, the report says, some bots may be missed during the disinfection process. This could result in a re-infection later.

Insiders Are Still Significant Threats

Overall, says Verizon, outsiders perpetrated most breaches, 73%. But don’t get too complacent about employees or contractors: Many breaches involved internal actors, 28%. Yes, that adds to more than 100% because some outside attacks had inside help. Here’s who Verizon says is behind breaches:

  • 73% perpetrated by outsiders
  • 28% involved internal actors
  • 2% involved partners
  • 2% featured multiple parties
  • 50% of breaches were carried out by organized criminal groups
  • 12% of breaches involved actors identified as nation-state or state-affiliated

Email is still the delivery vector of choice for malware and other attacks. Many of those attacks were financially motivated, says the DBIR. Most worrying, a significant number of breaches took a long time to discover.

  • 49% of non-point-of-sale malware was installed via malicious email
  • 76% of breaches were financially motivated
  • 13% of breaches were motivated by the gain of strategic advantage (espionage)
  • 68% of breaches took months or longer to discover

Taking Months to Discover the Breach

To that previous point: Attackers can move fast, but defenders can take a while. To use a terrible analogy: If someone breaks into your car and steals your designer sunglasses, the time from their initial penetration (picking the lock or smashing the window) to compromising the asset (grabbing the glasses) might be a minute or less. The time to discovery (when you see the broken window or realize your glasses are gone) could be minutes if you parked at the mall – or days, if the car was left at the airport parking garage. The DBIR makes the same point about enterprise data breaches:

When breaches are successful, the time to compromise continues to be very short. While we cannot determine how much time is spent in intelligence gathering or other adversary preparations, the time from first action in an event chain to initial compromise of an asset is most often measured in seconds or minutes. The discovery time is likelier to be weeks or months. The discovery time is also very dependent on the type of attack, with payment card compromises often discovered based on the fraudulent use of the stolen data (typically weeks or months) as opposed to a stolen laptop which is discovered when the victim realizes they have been burglarized.

Good News, Bad News on Phishing

Let’s end on a positive note, or a sort of positive note. The 2018 DBIR notes that most people never click phishing emails: “When analyzing results from phishing simulations the data showed that in the normal (median) organization, 78% of people don’t click a single phish all year.”

The less good news: “On average 4% of people in any given phishing campaign will click it.” The DBIR notes that the more phishing emails someone has clicked, the more they are likely to click on phishing emails in the future. The report’s advice: “Part of your overall strategy to combat phishing could be that you can try and find those 4% of people ahead of time and plan for them to click.”

Good luck with that.

The purchase order looks legitimate, yet does it have all the proper approvals? Many lawyers reviewed this draft contract, so is this the latest version? Can we prove that this essential document hasn’t been tampered with before I sign it? Can we prove that these two versions of a document are absolutely identical?

Blockchain might be able to help solve these kinds of everyday trust issues related to documents, especially when they are PDFs—data files created using the Portable Document Format. Blockchain technology is best known for securing financial transactions, including powering new financial instruments such as Bitcoin. But blockchain’s ability to increase trust will likely find enterprise use cases in common, non-financial information exchanges like these document workflows.

Joris Schellekens, a software engineer and PDF expert at iText Software in Ghent, Belgium, recently presented his ideas for blockchain-supported documents at Oracle Code Los Angeles. Oracle Code is a series of free events around the world created to bring developers together to share fresh thinking and collaborate on ideas like these.

PDF’s Power and Limitations

The PDF file format was created in the early 1990s by Adobe Systems. PDF was a way to share richly formatted documents whose visual layout, text, and graphics would look the same, no matter which software created them or where they were viewed or printed. The PDF specification became an international standard in 2008.

Early on, Adobe and other companies implemented security features into PDF files. That included password protection, encryption, and digital signatures. In theory, the digital signatures should be able to prove who created, or at least who encrypted, a PDF document. However, depending on the hashing algorithm used, it’s not so difficult to subvert those protections to, for example, change a date/time stamp, or even the document content, says Schellekens. His company, iText Software, markets a software development kit and APIs for creating and manipulating PDFs.

“The PDF specification contains the concept of an ID tuple,” or an immutable sequence of data, says Schellekens. “This ID tuple contains timestamps for when the file was created and when it was revised. However, the PDF spec is vague about how to implement these when creating the PDF.”

Even in the case of an unaltered PDF, the protections apply to the entire document, not to various parts of it. Consider a document that must be signed by multiple parties. Since not all certificate authorities store their private keys with equal vigilance, you might lack confidence about who really modified the document (e.g. signed it), at which times, and in which order. Or, you might not be confident that there were no modifications before or after someone signed it.
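
The underlying trust primitive is easy to illustrate: record a cryptographic fingerprint of the file at a known moment, then recompute it later to prove the bytes have not changed. The sketch below shows only that comparison; it is not iText’s API or a blockchain client, and the file name is hypothetical.

```python
# Compute and verify a SHA-256 fingerprint of a document file.
# Recording that fingerprint somewhere tamper-evident (for example, a ledger entry)
# is what lets you later prove the file is byte-for-byte unchanged.
import hashlib
from pathlib import Path

def fingerprint(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify(path: str, recorded_hash: str) -> bool:
    # True only if the file matches the hash recorded earlier.
    return fingerprint(path) == recorded_hash

if __name__ == "__main__":
    sample = Path("contract.pdf")           # hypothetical file name
    if sample.exists():
        recorded = fingerprint(str(sample))  # e.g., anchored at signing time
        print("unchanged:", verify(str(sample), recorded))
```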

A related challenge: Signatures to a digital document generally must be made serially, one at a time. The PDF specification doesn’t allow for a document to be signed in parallel by several people (as is common with contract reviews and signatures) and then merged together.

Blockchain has the potential to solve such document problems, and several others besides. Read more in my story for Forbes, “Can Blockchain Solve Your Document And Digital Signature Headaches?”

Asking “which is the best programming language” is like asking about the most important cooking tool in your kitchen. Mixer? Spatula? Microwave? Cooktop? Measuring cup? Egg timer? Lemon zester? All are critical, depending on what you’re making, and how you like to cook.

The same is true with programming languages. Some are best at coding applications that run natively on mobile devices — think Objective-C or Java. Others are good at encoding logic within a PDF file, or on a web page — think JavaScript. And still others are best at coding fast applications for virtual machines or running directly on the operating system — for many people, that’s C or C++. Want a general purpose language? Think Python or PHP. Specialized? R and MATLAB are good for statistics and data analytics. And so on.

Last summer, IEEE Spectrum offered its take, surveying its audience and writing up the “2017 Top Programming Languages.” The top 10 languages for the typical reader:

  1. Python
  2. C
  3. Java
  4. C++
  5. C#
  6. R
  7. JavaScript
  8. PHP
  9. Go
  10. Swift

The story’s author, Stephen Cass, noted not much change in the most popular languages. “Python has continued its upward trajectory from last year and jumped two places to the No. 1 slot, though the top four—Python, C, Java, and C++—all remain very close in popularity.”

What Do The PYPL Say?

The IEEE Spectrum annual survey isn’t the only game in town. The PYPL (PopularitY of Programming Language) index uses raw data from Google Trends to see how often people search for language tutorials. The people behind PYPL say, “If you believe in collective wisdom, the PYPL Popularity of Programming Language index can help you decide which language to study, or which one to use in a new software project.”

Here’s their Top 10:

  1. Java
  2. Python
  3. JavaScript
  4. PHP
  5. C#
  6. C
  7. R
  8. Objective-C
  9. Swift
  10. MATLAB

Asking the RedMonk

Stephen O’Grady describes RedMonk’s Programming Language Rankings, as of January 2018, as being based on two key external sources:

We extract language rankings from GitHub and Stack Overflow, and combine them for a ranking that attempts to reflect both code (GitHub) and discussion (Stack Overflow) traction. The idea is not to offer a statistically valid representation of current usage, but rather to correlate language discussion and usage in an effort to extract insights into potential future adoption trends.

The top languages found by RedMonk look similar to PYPL and IEEE Spectrum:

  1. JavaScript
  2. Java
  3. Python
  4. PHP
  5. C#
  6. C++
  7. CSS
  8. Ruby
  9. C
  10. Swift & Objective-C (tied)
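
To see how a combined index like RedMonk’s can work in principle, here is a toy rank-averaging calculation. The sample ranks are invented for illustration; they are not RedMonk’s actual data.

```python
# Toy version of combining two popularity signals by averaging ranks.
# Sample ranks are invented for illustration, not RedMonk's actual data.
github_rank = {"JavaScript": 1, "Python": 2, "Java": 3, "PHP": 4, "C#": 5}
stackoverflow_rank = {"JavaScript": 1, "Java": 2, "Python": 3, "C#": 4, "PHP": 5}

combined = {
    lang: (github_rank[lang] + stackoverflow_rank[lang]) / 2
    for lang in github_rank
}

for lang, score in sorted(combined.items(), key=lambda kv: kv[1]):
    print(f"{lang:<11} combined rank score {score:.1f}")
```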

Use the Best Tool for the Job

It would be tempting to use data like this to say, “From now on, everything we’re doing will be in Java,” or “We’re going to do all web coding in JavaScript and use C++ for applications.” Don’t do that. That would be like saying, “We’re going to make everything in the microwave.” Sometimes you want the microwave, sure, but sometimes you want the crockpot, or the regular oven, or sous vide, or the propane grill in your backyard.

The goal is productivity. Use agile processes like Scrum to determine what your development teams are going to build, where those applications will run, and which features must be included. Then, let the developers choose languages that fit best – and that includes supporting experimentation. Let them use R. Let them do some coding in Python, if it improves productivity, and gets a better job done faster.

 

As the saying goes, you can’t manage what you don’t measure. In a data-driven organization, the best tools for measuring performance are business intelligence (BI) and analytics engines, which require data. That explains why data warehouses continue to play such a crucial role in business: they often provide the source of that data, rolling up and summarizing key information from a variety of sources.
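
“Rolling up and summarizing” is easiest to picture with a tiny example: aggregate raw transactional rows into the figures a BI dashboard would chart. The rows below are made up for illustration.

```python
# Tiny illustration of a warehouse-style rollup: summarize raw order rows
# by region and month. The rows are made up for demonstration.
from collections import defaultdict

orders = [
    {"region": "EMEA", "month": "2018-04", "amount": 1200.0},
    {"region": "EMEA", "month": "2018-04", "amount": 800.0},
    {"region": "AMER", "month": "2018-04", "amount": 2500.0},
    {"region": "AMER", "month": "2018-05", "amount": 1900.0},
]

rollup = defaultdict(lambda: {"orders": 0, "revenue": 0.0})
for row in orders:
    key = (row["region"], row["month"])
    rollup[key]["orders"] += 1
    rollup[key]["revenue"] += row["amount"]

for (region, month), summary in sorted(rollup.items()):
    print(f"{region} {month}: {summary['orders']} orders, ${summary['revenue']:,.0f}")
```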

Data warehouses, which are themselves relational databases, can be complex to set up and manage on a daily basis. They typically require significant human involvement from database administrators (DBAs). In a large enterprise, a team of DBAs ensures that the data warehouse is extracting data from those disparate data sources, as well as accommodating new and changed data sources—and making sure the extracted data is summarized properly and stored in a structured manner that can be handled by other applications, including those BI and analytics tools.

On top of that, the DBAs are managing the data warehouse’s infrastructure. That includes everything from server processor utilization and storage efficiency to data security, backups, and more.

However, the labor-intensive nature of data warehouses is about to change, with the advent of Oracle Autonomous Data Warehouse Cloud, announced in October 2017. The self-driving, self-repairing, self-tuning functionality of Oracle’s Data Warehouse Cloud is good for the organization—and good for the DBAs.

Data-driven organizations need timely, up-to-date business intelligence. This can feed instant decision-making, short-term predictions and business adjustments, and long-term strategy. If the data warehouse goes down, slows down, or lacks some information feeds, the impact can be significant. No data warehouse may mean no daily operational dashboards and reports, or inaccurate dashboards or reports.

For C-level executives, Autonomous Data Warehouse can improve the value of the data warehouse. This boosts the responsiveness of business intelligence and other important applications, by improving availability and performance.

Stop worrying about uptime. Forget about disk-drive failures. Move beyond performance tuning. DBAs, you have a business to optimize.

Read more in my article, “Autonomous Capabilities Will Make Data Warehouses — And DBAs — More Valuable.”

“We estimate that malicious cyber activity cost the U.S. economy between $57 billion and $109 billion in 2016.” That’s from a February 2018 report by the Council of Economic Advisers, part of the Office of the President. It’s a big deal.

The White House is concerned about a number of sources of cyber threats. Those include attacks from nation-states, corporate competitors, hacktivists, organized criminal groups, opportunists, and company insiders.

It’s not always easy to tell exactly who is behind some event, or even how to categorize those events. Still, the report says that incidents break down as roughly 25% insiders, 75% outsiders. “Overall, 18 percent of threat actors were state-affiliated groups, and 51 percent involved organized criminal groups,” it says.

It’s More Than Stolen Valuables

The report points out that the economic cost includes many factors, including the stolen property, the costs of repairs – and opportunity lost costs. For example, the report says, “Consider potential costs of a DDoS attack. A DDoS attack interferes with a firm’s online operations, causing a loss of sales during the period of disruption. Some of the firm’s customers may permanently switch to a competing firm due to their inability to access online services, imposing additional costs in the form of the firm’s lost future revenue. Furthermore, a high-visibility attack may tarnish the firm’s brand name, reducing its future revenues and business opportunities.”

However, it’s not always that cut-and-dried, as intellectual property theft shows:

The costs incurred by a firm in the wake of IP theft are somewhat different. As the result of IP theft, the firm no longer has a monopoly on its proprietary findings because the stolen IP may now potentially be held and utilized by a competing firm. If the firm discovers that its IP has been stolen (and there is no guarantee of such discovery), attempting to identify the perpetrator or obtain relief via legal process could result in sizable costs without being successful, especially if the IP was stolen by a foreign actor. Hence, expected future revenues of the firm could decline. The cost of capital is likely to increase because investors will conclude that the firm’s IP is both sought-after and not sufficiently protected.

Indeed, this last example is particularly worrisome. Why? “IP theft is the costliest type of malicious cyber activity. Moreover, security breaches that enable IP theft via cyber may go undetected for years, allowing the periodic pilfering of corporate IP.”

Affecting the Economy

Do investors worry about cyber incidents? You bet. And it hits the share price of companies. According to the White House report, “We find that the stock price reaction to the news of an adverse cyber event is significantly negative. Firms on average lost about 0.8 percent of their market value in the seven days following news of an adverse cyber event.”

How much is that? Given that the study looked at large companies, “We estimate that, on average, the firms in our sample lost $498 million per adverse cyber event. The distribution of losses is highly right-skewed. When we trim the sample of estimated losses at 1 percent on each side of the distribution, the average loss declines to $338 million per event.” That’s significant.

Small and mid-sized companies can be harder hit by incidents, because they are less resilient. “Smaller firms, and especially those with few product lines, can easily go out of business if they are attacked or breached.”

Overall, cyber incidents cost the U.S. economy between $57 billion and $109 billion in 2016. That’s between 0.31% and 0.58% of that year’s gross domestic product (GDP), says the report. That’s a lot, but it could be worse. Let’s hope this amount isn’t driven up by, say, a full-fledged cyberwar or a significant terrorist incident.

The “throw it over the wall” problem is familiar to anyone who’s seen designers and builders create something that can’t actually be deployed or maintained out in the real world. In the tech world, avoiding this problem is a big part of what gave rise to DevOps.

DevOps combines “development” and “IT operations.” It refers to a set of practices that help software developers and IT operations staff work better, together. DevOps emerged about a decade ago with the goal of tearing down the silos between the two groups, so that companies can get new apps and features out the door, faster and with fewer mistakes and less downtime in production.

DevOps is now widely accepted as a good idea, but that doesn’t mean it’s easy. It requires cultural shifts by two departments that not only have different working styles and toolsets, but where the teams may not even know or respect each other.

When DevOps is properly embraced and implemented, it can help get better software written more quickly. DevOps can make applications easier and less expensive to manage. It can simplify the process of updating software to respond to new requirements. Overall, a DevOps mindset can make your organization more competitive because you can respond quickly to problems, opportunities and industry pressures.

Is DevOps the right strategic fit for your organization? Here are six CEO-level insights about DevOps to help you consider that question:

  1. DevOps can and should drive business agility. DevOps often means supporting a more rapid rate of change in terms of delivering new software or updating existing applications. And it doesn’t just mean programmers knock out code faster. It means getting those new apps or features fully deployed and into customers’ hands. “A DevOps mindset represents development’s best ability to respond to business pressures by quickly bringing new features to market and we drive that rapid change by leveraging technology that lets us rewire our apps on an ongoing basis,” says Dan Koloski, vice president of product management at Oracle.

For the full story, see my essay for the Wall Street Journal, “Tech Strategy: 6 Things CEOs Should Know About DevOps.”

Simplified Java coding. Less garbage. Faster programs. Those are among the key features in the newly released Java 10, which arrived in developers’ hands only six months after the debut of Java 9 in September.

This pace is a significant change from Java’s previous cycle of one large release every two to three years. With its faster release cadence, Java is poised to provide developers with innovations twice every year, making the language and platform more attractive and competitive. Instead of waiting for a huge omnibus release, the Java community can choose to include new features as soon as those features are ready, in the next six-month Java release train. This gives developers access to the latest APIs, functions, language additions, and JVM updates much faster than ever before.

Java 10 is the first release on the new six-month schedule. It builds incrementally on the significant new functionality that appeared in Java 9, which had a multiyear gestation period.

Java 10 delivers 12 Java Enhancement Proposals (JEPs). Here’s the complete list, followed by a deeper look at three of the most significant JEPs:

  • Local-Variable Type Inference
  • Consolidate the JDK Forest into a Single Repository
  • Garbage-Collector Interface
  • Parallel Full GC for G1
  • Application Class-Data Sharing
  • Thread-Local Handshakes
  • Remove the Native-Header Generation Tool (javah)
  • Additional Unicode Language-Tag Extensions
  • Heap Allocation on Alternative Memory Devices
  • Experimental Java-Based JIT Compiler
  • Root Certificates
  • Time-Based Release Versioning

See my essay for Forbes, “What Java 10 And Java’s New 6-Month Release Cadence Mean For Developers.” We’ll look at three of the most significant JEPs: Local-Variable Type Inference, Parallel Full GC for G1, and the Experimental Java-Based JIT Compiler.

Blockchain is a distributed digital ledger technology in which blocks of transaction records can be added and viewed—but can’t be deleted or changed without detection. Here’s where the name comes from: a blockchain is an ever-growing sequential chain of transaction records, clumped together into blocks. There’s no central repository of the chain, which is replicated in each participant’s blockchain node, and that’s what makes the technology so powerful. Yes, blockchain was originally developed to underpin Bitcoin and is essential to the trust required for users to trade digital currencies, but that is only the beginning of its potential.

Blockchain neatly solves the problem of ensuring the validity of all kinds of digital records. What’s more, blockchain can be used for public transactions as well as for private business, inside a company or within an industry group. “Blockchain lets you conduct transactions securely without requiring an intermediary, and records are secure and immutable,” says Mark Rakhmilevich, product management director at Oracle. “It also can eliminate offline reconciliations that can take hours, days, or even weeks.”

That’s the power of blockchain: an immutable digital ledger for recording transactions. It can be used to power anonymous digital currencies—or farm-to-table vegetable tracking, business contracts, contractor licensing, real estate transfers, digital identity management, and financial transactions between companies or even within a single company.

“Blockchain doesn’t have to just be used for accounting ledgers,” says Rakhmilevich. “It can store any data, and you can use programmable smart contracts to evaluate and operate on this data. It provides nonrepudiation through digitally signed transactions, and the stored results are tamper proof. Because the ledger is replicated, there is no single source of failure, and no insider threat within a single organization can impact its integrity.”

It’s All About Distributed Ledgers

Several simple concepts underpin any blockchain system. The first is the block, which is a batch of one or more transactions, grouped together and hashed. The hashing process produces an error-checking and tamper-resistant code that will let anyone viewing the block see if it has been altered. The block also contains the hash of the previous block, which ties them together in a chain. The backward hashing makes it extremely difficult for anyone to modify a single block without detection.

A chain contains collections of blocks, which are stored on decentralized, distributed servers. The more the better, with every server containing the same set of blocks and the latest values of information, such as account balances. Multiple transactions are handled within a single block using an algorithm called a Merkle tree, or hash tree, which provides fault and fraud tolerance: if a server goes down, or if a block or chain is corrupted, the missing data can be reconstructed by polling other servers’ chains.
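
Here is a minimal sketch of the chaining idea just described: each block commits to its transactions and to the previous block’s hash, so altering any earlier block breaks every hash that follows. Real platforms layer Merkle trees, consensus, and replication on top; this is illustrative only.

```python
# Minimal hash-chained ledger: each block commits to its transactions and to
# the previous block's hash, so tampering anywhere breaks the chain downstream.
import hashlib
import json

def block_hash(block: dict) -> str:
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain: list, transactions: list) -> None:
    previous = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "transactions": transactions, "prev_hash": previous}
    block["hash"] = block_hash({k: v for k, v in block.items() if k != "hash"})
    chain.append(block)

def verify(chain: list) -> bool:
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["hash"] != block_hash(body):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain: list = []
add_block(chain, [{"from": "A", "to": "B", "amount": 10}])
add_block(chain, [{"from": "B", "to": "C", "amount": 4}])
print("valid:", verify(chain))                     # True
chain[0]["transactions"][0]["amount"] = 1_000_000  # tamper with an early block
print("valid after tampering:", verify(chain))     # False
```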

And while the chain itself should be open for validation by any participant, some chains can be implemented with some form of access control to limit viewing of specific data fields. That way, participants can view relevant data, but not everything in the chain. A customer might be able to verify that a contractor has a valid business license and see the firm’s registered address and list of complaints—but not see the names of other customers. The state licensing board, on the other hand, may be allowed to access the customer list or see which jobs are currently in progress.

When originally conceived, blockchain had a narrow set of protocols. They were designed to govern the creation of blocks, the grouping of hashes into the Merkle tree, the viewing of data encapsulated into the chain, and the validation that data has not been corrupted or tampered with. Over time, creators of blockchain applications (such as the many competing digital currencies) innovated and created their own protocols—which, due to their independent evolutionary processes, weren’t necessarily interoperable. By contrast, the success of general-purpose blockchain services, which might encompass computing services from many technology, government, and business players, created the need for industry standards—such as Hyperledger, a Linux Foundation project.

Read more in my feature article in Oracle Magazine, March/April 2018, “It’s All About Trust.”

DevOps is a technology discipline well-suited to cloud-native application development. When it only takes a few mouse clicks to create or manage cloud resources, why wouldn’t developers and IT operations teams work in sync to get new apps out the door and in front of users faster? The DevOps culture and tactics have done much to streamline everything from coding to software testing to application deployment.

Yet far from every organization has embraced DevOps, and not every organization that has tried DevOps has found the experience transformative. Perhaps that’s because the idea is relatively young (the term was coined around 2009), suggests Javed Mohammed, systems community manager at Oracle, or perhaps because different organizations are at such different spots in DevOps’ technology adoption cycle. That idea—about where we are in the adoption of DevOps—became a central theme of a recent podcast discussion among tech experts. Following are some highlights.

Confusion about DevOps can arise because DevOps affects dev and IT teams in many ways. “It can apply to the culture piece, to the technology piece, to the process piece—and even how different teams interact, and how all of the different processes tie together,” says Nicole Forsgren, founder and CEO of DevOps Research and Assessment LLC and co-author of Accelerate: The Science of Lean Software and DevOps.

The adoption and effectiveness of DevOps within a team depends on where each team is, and where organizations are. One team might be narrowly focused on the tech used to automate software deployment to the public, while another is looking at the culture and communication needed to release new features on a weekly or even daily basis. “Everyone is at a very, very different place,” Forsgren says.

Indeed, says Forsgren, some future-thinking organizations are starting to talk about what ‘DevOps Next’ is, extending the concept of developer-led operations beyond common best practices. At the same time, in other companies, there’s no DevOps. “DevOps isn’t even on their radar,” she sighs. Many experts, including Forsgren, see that DevOps is here, is working, and is delivering real value to software teams today—and is helping businesses create and deploy better software faster and less expensively. That’s especially true when it comes to cloud-native development, or when transitioning existing workloads from the data center into the cloud.

Read more in my essay, “DevOps: Sometimes Incredibly Transformative, Sometimes Not So Much.”

New phones are arriving nearly every day. Samsung unveiled its latest Galaxy S9 flagship. Google is selling lots of its Pixel 2 handset. Apple continues to push its iPhone X. The Vivo Apex concept phone, out of China, has a pop-up selfie camera. And Nokia has reintroduced its famous 8110 model – the slide-down keyboard model featured in the 1999 movie, “The Matrix.”

Yet there is a slowdown happening. Hard to say whether it’s merely seasonal, or an indication that despite the latest and newest features, it’s getting harder to distinguish a new phone from its predecessors.

According to the 451 report, “Consumer Smartphones: 90 Day Outlook: Smartphone Buying Slows but Apple and Samsung Demand Strong,” released February 2018: “Demand for smartphones is showing a seasonal downtick, with 12.7% of respondents from 451 Research’s Leading Indicator panel saying they plan on buying a smartphone in the next 90 days.” However, “Despite a larger than expected drop from the September survey, next 90 day smartphone demand is at its highest December level in three years.”

451 reports that over the next 90 days,

Apple (58%) leads in planned smartphone buying but is down 11 points. Samsung (15%) is up 2 points, as consumer excitement builds around next-gen Galaxy S9 and S9+ devices, scheduled to be released in March. Google (3%) is showing a slight improvement, buoyed by the October release of its Pixel 2 and 2 XL handsets. Apple’s latest releases are the most in-demand among planned iPhone buyers: iPhone X (37%; down 6 points), iPhone 8 (21%; up 5 points) and iPhone 8 Plus (18%; up 4 points).

Interestingly, Apple’s famous brand loyalty may be slipping. Says 451, “Google leads in customer satisfaction with 61% of owners saying they’re Very Satisfied. Apple is close behind, with 59% of iPhone owners saying they’re Very Satisfied. That said, it’s important to keep in mind that iPhone owners comprise 57% of smartphone owners in this survey vs. 2% who own a Google Pixel smartphone.”

Everyone Loves the Galaxy S9

Cnet was positively gushing over the new Samsung phone, writing,

A bold new camera, cutting-edge processor and a fix to a galling ergonomic pitfall — all in a body that looks nearly identical to last year’s model. That, in a nutshell, is the Samsung Galaxy S9 (with a 5.8-inch screen) and its larger step-up model, the Galaxy S9 Plus, which sports an even bigger 6.2-inch screen.

Cnet calls out two features. First, a camera upgrade that includes variable aperture designed to capture better low-light images – which is where most phones really fall down.

The other? “The second improvement is more of a fix. Samsung moved the fingerprint reader from the side of the rear camera to the center of the phone’s back, fixing what was without a doubt the Galaxy S8’s most maddening design flaw. Last year’s model made you stretch your finger awkwardly to hit the fingerprint target. No more.”

The Verge agrees with that assessment:

… the Galaxy S9 is actually a pretty simple device to explain. In essence, it’s the Galaxy S8, with a couple of tweaks (like moving the fingerprint sensor to a more sensible location), and all the specs jacked up to the absolute max for the most powerful device on the market — at least, on paper.

Pop Goes the Camera

The Vivo concept phone, the Apex, has a little pop-up front-facing camera designed for selfies. Says TechCrunch, this is part of a trend:

With shrinking bezels, gadget makers have to look for new solutions like the iPhone X notch. Others still, like Vivo and Huawei, are looking at more elegant solutions than carving out a bit of the screen.

For Huawei, this means using a false key within the keyboard to house a hidden camera. Press the key and it pops up like a trapdoor. We tried it out and though the housing is clever, the placement makes for awkward photos — just make sure you trim those nose hairs before starting your conference call.

Vivo has a similar take to Huawei, though the camera is embedded on a sliding tray that pops up out of the top of the phone.

So, there’s still room for innovation. A little room. Beyond cameras, and some minor ergonomic improvements, it’s getting harder and harder to differentiate one phone from another – and possibly, to convince buyers to shell out for upgrades. At least, that is, until 5G handsets hit the market.

Spectre and Meltdown are two separate computer security problems. They are often lumped together because they were revealed around the same time – and both exploit vulnerabilities in many modern microprocessors. The website MeltdownAttack, from the Graz University of Technology, explains both Spectre and Meltdown very succinctly – and also links to official security advisories from the industry:

Meltdown breaks the most fundamental isolation between user applications and the operating system. This attack allows a program to access the memory, and thus also the secrets, of other programs and the operating system. If your computer has a vulnerable processor and runs an unpatched operating system, it is not safe to work with sensitive information without the chance of leaking the information. This applies both to personal computers as well as cloud infrastructure. Luckily, there are software patches against Meltdown.

Spectre breaks the isolation between different applications. It allows an attacker to trick error-free programs, which follow best practices, into leaking their secrets. In fact, the safety checks of said best practices actually increase the attack surface and may make applications more susceptible to Spectre. Spectre is harder to exploit than Meltdown, but it is also harder to mitigate. However, it is possible to prevent specific known exploits based on Spectre through software patches.

For now, nearly everyone is dependent on microprocessor makers and operating system vendors to develop, test, and distribute patches to mitigate both flaws. In the future, new microprocessors should be immune to these exploits – but because of the long development cycle for new processors, we are unlikely to see computers built on such next-generation chips for several years.

So, expect Spectre and Meltdown to be around for many years to come. Some devices will remain unpatched — because some devices always remain unpatched. Even after new computers become available, it will take years to replace all the old machines.
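If you run Linux servers, one quick way to see what the kernel itself reports is to read the files it exposes under /sys/devices/system/cpu/vulnerabilities, which patched kernels have published since early 2018. Here is a minimal sketch in Java; treat it as an illustration, since older or unpatched kernels simply won’t have that directory, and the exact file names and status strings vary by kernel version.

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Minimal sketch: print the kernel's own report of CPU vulnerability mitigation status.
// Requires a Linux kernel new enough to expose /sys/devices/system/cpu/vulnerabilities.
public class MitigationCheck {
    public static void main(String[] args) throws IOException {
        Path dir = Paths.get("/sys/devices/system/cpu/vulnerabilities");
        if (!Files.isDirectory(dir)) {
            System.out.println("Kernel does not expose vulnerability status (likely unpatched or too old).");
            return;
        }
        try (DirectoryStream<Path> files = Files.newDirectoryStream(dir)) {
            for (Path f : files) {
                String status = new String(Files.readAllBytes(f)).trim();
                System.out.printf("%-16s %s%n", f.getFileName(), status);
            }
        }
    }
}
```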

Wide-Ranging Effects

Just about everything is affected by these flaws. Says the Graz University website:

Which systems are affected by Meltdown? Desktop, Laptop, and Cloud computers may be affected by Meltdown. More technically, every Intel processor which implements out-of-order execution is potentially affected, which is effectively every processor since 1995 (except Intel Itanium and Intel Atom before 2013). We successfully tested Meltdown on Intel processor generations released as early as 2011. Currently, we have only verified Meltdown on Intel processors. At the moment, it is unclear whether AMD processors are also affected by Meltdown. According to ARM, some of their processors are also affected.

 Which systems are affected by Spectre? Almost every system is affected by Spectre: Desktops, Laptops, Cloud Servers, as well as Smartphones. More specifically, all modern processors capable of keeping many instructions in flight are potentially vulnerable. In particular, we have verified Spectre on Intel, AMD, and ARM processors.

Ignore Spectre and Meltdown at your peril.

Patch. Sue. Repeat.

Many techies are involved in trying to handle the Spectre and Meltdown issues. So are attorneys. Intel alone has disclosed dozens of lawsuits in its annual report filing with the U.S. Securities and Exchange Commission:

As of February 15, 2018, 30 customer class action lawsuits and two securities class action lawsuits have been filed. The customer class action plaintiffs, who purport to represent various classes of end users of our products, generally claim to have been harmed by Intel’s actions and/or omissions in connection with the security vulnerabilities and assert a variety of common law and statutory claims seeking monetary damages and equitable relief.

Given that there are many microprocessor makers involved (it’s not only Intel, remember), expect lots more patches. And lots more lawsuits.

Companies can’t afford downtime. Employees need access to their applications and data 24/7, and so do other business applications, manufacturing and logistics management systems, and security monitoring centers. Anyone who thinks that the brute force effort of their hard-working IT administrators is enough to prevent system downtime just isn’t facing reality.

Traditional systems administrators and their admin tools can’t keep up with the complexity inherent in any modern enterprise. A recent survey of the Oracle Applications Users Group has found that despite significant progress in systems management automation, many customers still report that more than 80% of IT issues are first discovered and reported by users. The number of applications is spiraling up, while data increases at an even more rapid rate.

The boundaries between systems are growing more complex, especially with cloud-based and hybrid-cloud architectures. That reality is why Oracle, after analyzing a survey of its industry-leading customers, recently predicted that by 2020, more than 80% of application infrastructure operations will be managed autonomously.

Autonomously is an important word here. It means not only doing mundane day-to-day tasks including monitoring, tuning, troubleshooting, and applying fixes automatically, but also detecting and rapidly resolving issues. Even when it comes to the most complex problems, machines can simplify the analysis—sifting through the millions of possibilities to present simpler scenarios, to which people then can apply their expertise and judgment of what action to take.

Oracle asked about the kinds of activities that IT system administrators perform on a daily, weekly, and monthly basis—things such as password resets, system reboots, software patches, and the like.

Expect that IT teams will soon reduce by several orders of magnitude the number of situations like those that need manual intervention. If an organization typically has 20,000 human-managed interventions per year, humans will need to touch only 20. The rest will be handled through systems that can apply automation combined with machine learning, which can analyze patterns and react faster than human admins to enable preventive maintenance, performance optimization, and problem resolution.
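The pattern analysis behind that kind of automation doesn’t have to be exotic to be useful. As a toy illustration (my own sketch, not how any vendor’s product is built), here is a sliding-window z-score check in Java that flags a metric sample deviating sharply from its recent history: the sort of building block an autonomous monitoring pipeline could use to trigger preventive action before a human would notice.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy sketch of pattern-based alerting: flag a metric sample that deviates
// strongly from its recent history (z-score over a sliding window).
public class MetricWatcher {
    private final Deque<Double> window = new ArrayDeque<>();
    private final int windowSize;
    private final double threshold;

    MetricWatcher(int windowSize, double threshold) {
        this.windowSize = windowSize;
        this.threshold = threshold;
    }

    /** Returns true if the new sample looks anomalous relative to the recent window. */
    boolean observe(double sample) {
        boolean anomalous = false;
        if (window.size() == windowSize) {
            double mean = window.stream().mapToDouble(Double::doubleValue).average().orElse(0);
            double var = window.stream().mapToDouble(v -> (v - mean) * (v - mean)).average().orElse(0);
            double stdDev = Math.sqrt(var);
            anomalous = stdDev > 0 && Math.abs(sample - mean) / stdDev > threshold;
            window.removeFirst();
        }
        window.addLast(sample);
        return anomalous;
    }

    public static void main(String[] args) {
        MetricWatcher cpu = new MetricWatcher(20, 3.0);
        for (int i = 0; i < 30; i++) {
            double load = 40 + Math.random() * 5;   // normal load
            if (i == 25) load = 95;                 // simulated spike
            if (cpu.observe(load)) {
                System.out.printf("Sample %d: load %.1f%% flagged for automated remediation%n", i, load);
            }
        }
    }
}
```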

Read more in my article for Forbes, “Prediction: 80% of Routine IT Operations Will Soon Be Solved Autonomously.”

On February 7, 2018, the carrier Swisscom admitted that a security lapse had exposed sensitive information about 800,000 customers. The security failure was at one of Swisscom’s sales partners.

This is what can happen when a business gives its partners access to critical data. The security chain is only as good as the weakest link – and it can be difficult to ensure that partners are taking sufficient care, even if they pass an onboarding audit. Swisscom says,

In autumn of 2017, unknown parties misappropriated the access rights of a sales partner, gaining unauthorised access to customers’ name, address, telephone number and date of birth.

That’s pretty bad, but what came next was even worse, in my opinion. “Under data protection law this data is classed as ‘non-sensitive’,” said Swisscom. That’s distressing, because that’s exactly the sort of data needed for identity theft. But we digress.

Partners and Trust

Partners can be the way into an organization. Swisscom claims that new restrictions, such as preventing high-volume queries and using two-factor authentication, mean such an event can never occur again, which seems optimistic: “Swisscom also made a number of changes to better protect access to such non-sensitive personal data by third-party companies… These measures mean that there is no chance of such a breach happening again in the future.”
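Swisscom hasn’t published its implementation, but the “preventing high-volume queries” control is conceptually simple. Here is a hedged, minimal token-bucket sketch in Java of the general idea: throttle each partner credential so that bulk harvesting of customer records becomes impractical. The class name, capacity, and refill rate are all illustrative assumptions.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative token-bucket limiter: each partner credential gets a budget of
// queries that refills slowly, so bulk harvesting of records is blocked.
public class PartnerQueryLimiter {
    private static class Bucket {
        double tokens;
        long lastRefillNanos;
        Bucket(double tokens) { this.tokens = tokens; this.lastRefillNanos = System.nanoTime(); }
    }

    private final Map<String, Bucket> buckets = new ConcurrentHashMap<>();
    private final double capacity;        // maximum burst size
    private final double refillPerSecond; // sustained query rate

    public PartnerQueryLimiter(double capacity, double refillPerSecond) {
        this.capacity = capacity;
        this.refillPerSecond = refillPerSecond;
    }

    /** Returns true if the partner may run one more query right now. */
    public synchronized boolean allowQuery(String partnerId) {
        Bucket b = buckets.computeIfAbsent(partnerId, id -> new Bucket(capacity));
        long now = System.nanoTime();
        double elapsedSeconds = (now - b.lastRefillNanos) / 1_000_000_000.0;
        b.tokens = Math.min(capacity, b.tokens + elapsedSeconds * refillPerSecond);
        b.lastRefillNanos = now;
        if (b.tokens >= 1.0) {
            b.tokens -= 1.0;
            return true;
        }
        return false; // over budget: deny, log, and alert
    }
}
```

A real deployment would also log and alert on denials, since a partner credential that keeps hitting its limit is itself a warning sign worth investigating.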

Let’s hope Swisscom is correct. But in the meantime, what can organizations do? First, ensure that all third parties that have access to sensitive data, such as intellectual property, financial information, and customer information, go through a rigorous security audit.

Tricia C. Bailey’s article, “Managing Third-Party Vendor Risk,” makes good recommendations for how to vet vendors – and also how to prepare at your end. For example, do you know what (and where) your sensitive data is? Do vendor contracts spell out your rights and responsibilities for security and data protection – and your vendor’s rights and responsibilities? Do you have a strong internal security policy? If your own house isn’t in order, you can’t expect a vendor to improve your security. After all, you might be the weakest link.

Unaccustomed to performing security audits on partners? Organizations like CA Veracode offer audit-as-a-service, such as with their Vendor Application Security Testing service. There are also vertical industry services: the HITRUST Alliance, for example, offers a standardized security audit process for vendors serving the U.S. healthcare industry with its Third Party Assurance Program.

Check the Back Door

Many vendors and partners require back doors into enterprise data systems. Those back doors, or remote-access APIs, can be essential for vendors to perform their line-of-business functions. Take the Swisscom sales partner: It needs to be able to query Swisscom customers and add or update customer information in order to serve effectively as a sales organization.

Yet if the partner is breached, that back door can fall under the control of hackers, using the partner’s systems or credentials. In its 2017 Data Breach Investigations Report, Verizon reported that in regard to Point-of-Sale (POS) systems, “Almost 65% of breaches involved the use of stolen credentials as the hacking variety, while a little over a third employed brute force to compromise POS systems. Following the same trend as last year, 95% of breaches featuring the use of stolen credentials leveraged vendor remote access to hack into their customer’s POS environments.”

A Handshake Isn’t Good Enough

How secure is your business partner, your vendor, your contractor? If you don’t know, then you don’t know. If something goes wrong at your partners’ end, never forget that it may be your IP, your financials, and your customers’ data that is exposed. After all, whether or not you can recover damages from the partner in a lawsuit, your organization is the one that will pay the long-term price in the marketplace.

Savvy businesses have policies that prevent on-site viewing of pornography, in part to avoid creating a hostile work environment — and to avoid sexual harassment lawsuits. For security professionals, porn sites are also a dangerous source of malware.

That’s why human-resources policies should be backed up with technological measures. Those include blocking porn sites at the firewall and using on-device controls to stop browsers from accessing such sites.

Even that may not be enough, says Kaspersky Lab in its report, “Naked online: cyberthreats facing users of adult websites and applications.” Why? Because naughty content and videos have gone mainstream, says the report:

Today, porn can be found not only on specialist websites, but also in social media networks and on social platforms like Twitter. Meanwhile, the ‘classic’ porn websites are turning into content-sharing platforms, creating loyal communities willing to share their videos with others in order to get ‘likes’ and ‘shares’.

This problem is not new, but it’s increasingly dangerous, thanks to the criminal elements on the Dark Web, which are advertising tools for weaponizing porn content. Says Kaspersky, “While observing underground and semi-underground market places on the dark web, looking for information on the types of legal and illegal goods sold there, we found that among the drugs, weapons, malware and more, credentials to porn websites were often offered for sale.”

So, what’s the danger? There are concerns about attacks on both desktop/notebook and mobile users. In the latter case, says Kaspersky,

  • In 2017, at least 1.2 million users encountered malware with adult content at least once. That is 25.4% of all users who encountered any type of Android malware.
  • Mobile malware is making extensive use of porn to attract users: Kaspersky Lab researchers identified 23 families of mobile malware that use porn content to hide their real functionality.
  • Malicious clickers, rooting malware, and banking Trojans are the types of malware that are most often found inside porn apps for Android.

That’s the type of malware that’s dangerous on a home network. It’s potentially ruinous if it provides a foothold onto an enterprise network not protected by intrusion detection/prevention systems or other anti-malware tech. The Kaspersky report goes into a lot of detail, and you should read it.

For another take on the magnitude of the problem: The Nielsen Company reported that more than 21 million Americans accessed adult websites on work computers – that is, 29% of working adults. Bosses are in on it too. In 2013, Time Magazine said that a survey of 200 U.S.-based data security analysts revealed that 40 percent had removed malware from a senior executive’s computer, phone, or tablet after the executive visited a porn website.

What Can You Do?

Getting rid of pornography isn’t easy, but it’s not rocket science either. Start with a strong policy. Work with your legal team to make sure the policy is both legal and comprehensive. Get employee feedback on the policy, to help generate buy-in from executives and the rank-and-file.

Once the policy is finalized, communicate it clearly. Train employees on what to do, what not to do… and the employment ramifications for violating the policy. Explain that this policy is not just about harassment, but also about information security.

Block, block, block. Block at the firewall, block at proxy servers, block on company-owned devices. Block on social media. Make sure that antivirus is up to date. Review log files.
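“Review log files” is easy to say and tedious to do, so script it. Here is a deliberately simple, hypothetical sketch in Java that scans a proxy access log for blocklisted hosts; the log path, log format, and blocklist entries are assumptions you would replace with whatever your proxy and policy actually use.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Set;
import java.util.stream.Stream;

// Hypothetical sketch: flag proxy log lines that mention blocklisted domains.
// Assumes one request per line with the destination host somewhere on the line.
public class ProxyLogScan {
    public static void main(String[] args) throws IOException {
        Set<String> blocklist = Set.of("blocked-site-1.test", "blocked-site-2.test"); // illustrative entries
        Path log = Paths.get(args.length > 0 ? args[0] : "/var/log/proxy/access.log"); // assumed path

        try (Stream<String> lines = Files.lines(log)) {
            lines.filter(line -> blocklist.stream().anyMatch(line::contains))
                 .forEach(hit -> System.out.println("Policy hit: " + hit));
        }
    }
}
```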

Finally, take this seriously. This isn’t a case of giggling (or eye-rolling) about boys-being-boys, or harmless diversions comparable to work-time shopping on eBay. Porn isn’t only offensive in the workplace, but it’s also a gateway to the Dark Web, criminals, and hackers. Going after porn isn’t only about being Victorian about naughty content. It’s about protecting your business from hackers.

We all have heard the usual bold predictions for technology in 2018: Lots of cloud computing, self-driving cars, digital cryptocurrencies, 200-inch flat-screen televisions, and versions of Amazon’s Alexa smart speaker everywhere on the planet. Those types of predictions, however, are low-hanging fruit. They’re not bold. One might as well predict that there will be some sunshine, some rainy days, a big cyber-bank heist, and at least one smartphone catching fire.

Let’s dig for insights beyond the blindingly obvious. I talked to several tech leaders, deep-thinking individuals in California’s Silicon Valley, asking them for their predictions, their idea of new trends, and disruptions in the tech industry. Let’s see what caught their eye.

Gary Singh, VP of marketing, OnDot Systems, believes that 2018 will be the year when mobile banking will transform into digital banking — which is more disruptive than one would expect. “The difference between digital and mobile banking is that mobile banking is informational. You get information about your accounts,” he said. Singh continues, “But in terms of digital banking, it’s really about actionable insights, about how do you basically use your funds in the most appropriate way to get the best value for your dollar or your pound in terms of how you want to use your monies. So that’s one big shift that we would see start to happen from mobile to digital.”

Tom Burns, Vice President and General Manager of Dell EMC Networking, has been following Software-Defined Wide Area Networks. SD-WAN is a technology that allows enterprise WANs to thrive over the public Internet, replacing expensive fixed-point connections provisioned by carriers using technologies like MPLS. “The traditional way of connecting branches in office buildings and providing services to those particular branches is going to change,” Burns observed. “If you look at the traditional router, a proprietary architecture, dedicated lines. SD-WAN is offering a much lower cost but same level of service opportunity for customers to have that data center interconnectivity or branch connectivity providing some of the services, maybe a full even office in the box, but security services, segmentation services, at a much lower cost basis.”

NetFoundry’s co-founder, Mike Hallett, sees a bright future for Application Specific Networks, which link applications directly to cloud or data center applications. The focus is on the application, not on the device. “For 2018, when you think of the enterprise and the way they have to be more agile, flexible and faster to move to markets, particularly going from what I would call channel marketing to, say, direct marketing, they are going to need application-specific networking technologies.” Hallett explains that Application Specific Networks offer the ability to be able to connect from an application, from a cloud, from a device, from a thing, to any application or other device or thing quickly and with agility. Indeed, those connections, which are created using software, not hardware, could be created “within minutes, not within the weeks or months it might take, to bring up a very specific private network, being able to do that. So the year of 2018 will see enterprises move towards software-only networking.”

Mansour Karam, CEO and founder of Apstra, also sees software taking over the network. “I really see massive software-driven automation as a major trend. We saw technologies like intent-based networking emerge in 2017, and in 2018, they’re going to go mainstream,” he said.

There’s more

There are predictions around open networking, augmented reality, artificial intelligence – and more. See my full story in Upgrade Magazine, “From SD-WAN to automation to white-box switching: Five tech predictions for 2018.”

The pattern of cloud adoption moves something like the ketchup bottle effect: You tip the bottle and nothing comes out, so you shake the bottle and suddenly you have ketchup all over your plate.

That’s a great visual from Frank Munz, software architect and cloud evangelist at Munz & More, in Germany. Munz and a few other leaders in the Oracle community were interviewed on a podcast by Bob Rhubart, Architect Community Manager at Oracle, about the most important trends they saw in 2017. The responses covered a wide range of topics, from cloud to blockchain, from serverless to machine learning and deep learning.

During the 44-minute session, “What’s Hot? Tech Trends That Made a Real Difference in 2017,” the panel took some fascinating detours into the future of self-programming computers and the best uses of container technologies like Kubernetes. For those, you’ll need to listen to the podcast.

The panel included: Frank Munz; Lonneke Dikmans, chief product officer of eProseed, Netherlands; Lucas Jellema, CTO, AMIS Services, Netherlands; Pratik Patel, CTO, Triplingo, US; and Chris Richardson, founder and CEO, Eventuate, US. The program was recorded in San Francisco at Oracle OpenWorld and JavaOne.

The cloud’s tipping point

The ketchup quip reflects the cloud passing a tipping point of adoption in 2017. “For the first time in 2017, I worked on projects where large, multinational companies give up their own data center and move 100% to the cloud,” Munz said. These workload shifts are far from a rarity. As Dikmans said, the cloud drove the biggest change and challenge: “[The cloud] changes how we interact with customers, and with software. It’s convenient at times, and difficult at others.”

Security offered another way of looking at this tipping point. “Until recently, organizations had the impression that in the cloud, things were less secure and less well managed, in general, than they could do themselves,” said Jellema. Now, “people have come to realize that they’re not particularly good at specific IT tasks, because it’s not their core business.” They see that cloud vendors, whose core business is managing that type of IT, can often do those tasks better.

In 2017, the idea of shifting workloads en masse to the cloud and decommissioning data centers became mainstream and far less controversial.

But wait, there’s more! See about Blockchain, serverless computing, and pay-as-you-go machine learning, in my essay published in Forbes, “Tech Trends That Made A Real Difference In 2017.”

“The functional style of programming is very charming,” insists Venkat Subramaniam. “The code begins to read like the problem statement. We can relate to what the code is doing and we can quickly understand it.” Not only that, Subramaniam explains in his keynote address for Oracle Code Online, but as implemented in Java 8 and beyond, functional-style code is lazy—and that laziness makes for efficient operations because the runtime isn’t doing unnecessary work.

Subramaniam, president of Agile Developer and an instructional professor at the University of Houston, believes that laziness is the secret to success, both in life and in programming. Pretend that your boss tells you on January 10 that a certain hourlong task must be done before April 15. A go-getter might do that task by January 11.

That’s wrong, insists Subramaniam. Don’t complete that task until April 14. Why? Because the results of the boss’s task aren’t needed yet, and the requirements may change before the deadline, or the task might be canceled altogether. Or you might even leave the job on March 13. This same mindset should apply to your programming: “Efficiency often means not doing unnecessary work.”

Subramaniam received the JavaOne RockStar award three years in a row and was inducted into the Java Champions program in 2013 for his efforts in motivating and inspiring software developers around the world. In his Oracle Code Online keynote, he explored how functional-style programming is implemented in the latest versions of Java, and why he’s so enthusiastic about using this style for applications that process lots and lots of data—and where it’s important to create code that is easy to read, easy to modify, and easy to test.

Functional Versus Imperative Programming

The old mainstream of imperative programming, which has been a part of the Java language from day one, relies upon developers to explicitly code not only what they want the program to do, but also how to do it. Take software that has a huge amount of data to process; the programmer would normally create a loop that examines each piece of data and, if appropriate, takes specific action on that data with each iteration of the loop. It’s up to the developer to create the loop and manage it—in addition to coding the business logic to be performed on the data.

The imperative model, argues Subramaniam, results in what he calls “accidental complexity”—each line of code might perform multiple functions, which makes it hard to understand, modify, test, and verify. And, the developer must do a lot of work to set up and manage the data and iterations. “You get bogged down with the details,” he said. “This not only introduces complexity, but makes code hard to change.”

By contrast, when using a functional style of programming, developers can focus almost entirely on what is to be done, while ignoring the how. The how is handled by the underlying library of functions, which are defined separately and applied to the data as required. Subramaniam says that functional-style programming provides highly expressive code, where each line of code does only one thing: “The code becomes easier to work with, and easier to write.”
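Here is a small sketch of the contrast Subramaniam draws, written against the Java 8 stream API. The imperative version manages the loop, the test, and a mutable result list itself; the functional-style version declares what is wanted, and the filter and map steps stay lazy until the terminal collect call forces them. The data and the “short names” rule are just example material.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class StyleComparison {
    public static void main(String[] args) {
        List<String> names = Arrays.asList("Ada", "Grace", "Linus", "Barbara", "Edsger");

        // Imperative: we spell out the loop, the test, and the mutable result list.
        List<String> shortNamesImperative = new ArrayList<>();
        for (String name : names) {
            if (name.length() <= 5) {
                shortNamesImperative.add(name.toUpperCase());
            }
        }

        // Functional style: declare what we want; filter/map stay lazy until collect runs.
        List<String> shortNamesFunctional = names.stream()
                .filter(name -> name.length() <= 5)
                .map(String::toUpperCase)
                .collect(Collectors.toList());

        System.out.println(shortNamesImperative);
        System.out.println(shortNamesFunctional);
    }
}
```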

Subramaniam adds that in functional-style programming, “The code becomes the business logic.” Read more in my essay published in Forbes, “Lazy Java Code Makes Applications Elegant, Sophisticated — And Efficient at Runtime.”

 



With lots of inexpensive, abundant computation resources available, nearly anything becomes possible. For example, you can process a lot of network data to identify patterns, extract intelligence, and produce insights that can be used to automate networks. The road to Intent-Based Networking Systems (IBNS) and Application-Specific Networks (ASN) is a journey. That’s the belief of Rajesh Ghai, Research Director of Telecom and Carrier IP Networks at IDC.

Ghai defines IBNS as a continuous, closed-loop implementation of several steps (sketched in code after the list):

  • Declaration of intent, in which the network administrator defines what the network is supposed to do
  • Translation of that intent into a network design and configuration
  • Validation of the design against a model that decides whether the configuration can actually be implemented
  • Propagation of the configuration to the network devices via APIs
  • Gathering and study of real-time telemetry from all the devices
  • Use of machine learning to determine whether the desired policy state has been achieved, and then the loop repeats
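In code, the loop Ghai describes might look something like the skeleton below. This is purely my illustration: every interface and method name is invented for the sketch rather than drawn from any vendor’s product.

```java
import java.util.Map;

// Skeletal, hypothetical illustration of the closed loop described above.
// Every type and method name here is invented for the sketch.
public class IntentLoopSketch {

    interface Intent { String describe(); }
    interface NetworkDesign { boolean isImplementable(); }
    interface Telemetry { Map<String, Double> metrics(); }

    interface IbnsController {
        NetworkDesign translate(Intent intent);               // intent -> design and configuration
        void propagate(NetworkDesign design);                 // push configuration to devices via APIs
        Telemetry collect();                                  // gather real-time telemetry
        boolean satisfied(Intent intent, Telemetry current);  // ML/analytics decide if the desired state holds
    }

    static void runLoop(IbnsController controller, Intent intent) throws InterruptedException {
        while (true) {
            NetworkDesign design = controller.translate(intent);
            if (!design.isImplementable()) {
                throw new IllegalStateException("Intent cannot be realized: " + intent.describe());
            }
            controller.propagate(design);
            Telemetry telemetry = controller.collect();
            if (controller.satisfied(intent, telemetry)) {
                Thread.sleep(60_000);  // desired state reached: keep observing, then repeat the loop
            }
            // If not satisfied, fall through and re-translate/re-apply immediately.
        }
    }
}
```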

Related to that concept, Ghai explains, is ASN. “It’s also a concept which is software control and optimization and automation. The only difference is that ASN is more applicable to distributed applications over the internet than IBNS.”

IBNS Operates Networks as One System

“Think of intent-based networking as software that sits on top of your infrastructure and focusing on the networking infrastructure, and enables you to operate your network infrastructure as one system, as opposed to box per box,” explained Mansour Karam, founder and CEO of Apstra, which offers IBNS solutions for enterprise data centers.

“To achieve this, we have to start with intent,” he continued. “Intent is both the high-level business outcomes that are required by the business, but then also we think of intent as applying to every one of those stages. You may have some requirements in how you want to build.”

Karam added, “Validation includes tests that you would run — we call them expectations — to validate that your network indeed is behaving as you expected, as per intent. So we have to think of a sliding scale of intent and then we also have to collect all the telemetry in order to close the loop and continuously validate that the network does what you want it to do. There is the notion of state at the core of an IBNS that really boils down to managing state at scale and representing it in a way that you can reason about the state of your system, compare it with the desired state and making the right adjustments if you need to.”

The upshot of IBNS, Karam said: “If you have powerful automation you’re taking the human out of the equation, and so you get a much more agile network. You can recoup the revenues that otherwise you would have lost, because you’re unable to deliver your business services on time. You will reduce your outages massively, because 80% of outages are caused by human error. You reduce your operational expenses massively, because organizations spend $4 operating every dollar of CapEx, and 80% of it is manual operations. So if you take that out you should be able to recoup easily your entire CapEx spend on IBNS.”

ASN Gives Each Application Its Own Network

“Application-Specific Networks, like Intent-Based Networking Systems, enable digital transformation, agility, speed, and automation,” explained Galeal Zino, Founder of NetFoundry, which offers an ASN platform.

He continued, “ASN is a new term, so I’ll start with a simple analogy. ASNs are like private clubs — very, very exclusive private clubs — with exactly two members, the application and the network. ASN literally gives each application its own network, one that’s purpose-built and driven by the specific needs of that application. ASN merges the application world and the network world into software which can enable digital transformation with velocity, with scale, and with automation.”

Read more in my new article for Upgrade Magazine, “Manage smarter, more autonomous networks with Intent-Based Networking Systems and Application Specific Networking.”

When the little wireless speaker in your kitchen acts on your request to add chocolate milk to your shopping list, there’s artificial intelligence (AI) working in the cloud, to understand your speech, determine what you want to do, and carry out the instruction.

When you send a text message to your HR department explaining that you woke up with a vision-blurring migraine, an AI-powered chatbot knows how to update your status to “out of the office” and notify your manager about the sick day.

When hackers attempt to systematically break into the corporate computer network over a period of weeks, AI sees the subtle patterns in historical log data, recognizes outliers in the packet traffic, raises the alarm, and recommends appropriate countermeasures.

AI is nearly everywhere in today’s society. Sometimes it’s fairly obvious (as with a chatbot), and sometimes AI is hidden under the covers (as with network security monitors). It’s a virtuous cycle: Modern cloud computing and algorithms make AI a fast, efficient, and inexpensive approach to problem-solving. Developers discover those cloud services and algorithms and imagine new ways to incorporate the latest AI functionality into their software. Businesses see the value of those advances (even if they don’t know that AI is involved), and everyone benefits. And quickly, the next wave of emerging technology accelerates the cycle again.

AI can improve the user experience, such as when deciphering spoken or written communications, or inferring actions based on patterns of past behavior. AI techniques are excellent at pattern-matching, making it easier for machines to accurately decipher human languages using context. One characteristic of several AI algorithms is flexibility in handling imprecise data, such as human text. Consider chatbots: humans can type messages on their phones, and AI-driven software understands what they say, carries on a conversation, and provides the desired information or takes the appropriate action.
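Here is a trivial sketch of that intent-matching layer in Java. It is nowhere near real natural-language processing, but it shows the shape of the mapping from free-form text to a back-end action; the keywords and actions are made up for illustration.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy illustration of chatbot intent matching: map free-form text to an action.
// Real chatbots use trained language models; this keyword version only shows the shape.
public class TinyIntentMatcher {
    private final Map<String, Runnable> intents = new LinkedHashMap<>();

    void register(String keyword, Runnable action) {
        intents.put(keyword.toLowerCase(), action);
    }

    void handle(String message) {
        String text = message.toLowerCase();
        for (Map.Entry<String, Runnable> entry : intents.entrySet()) {
            if (text.contains(entry.getKey())) {
                entry.getValue().run();
                return;
            }
        }
        System.out.println("Sorry, I didn't understand that. A human will follow up.");
    }

    public static void main(String[] args) {
        TinyIntentMatcher bot = new TinyIntentMatcher();
        bot.register("sick", () -> System.out.println("Marking you out of office and notifying your manager."));
        bot.register("vacation", () -> System.out.println("Opening a vacation request for you."));
        bot.handle("I woke up with a migraine and feel too sick to come in");
    }
}
```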

If you think AI is everywhere today, expect more tomorrow. AI-enhanced software-as-a-service and platform-as-a-service products will continue to incorporate additional AI to help make cloud-delivered and on-prem services more reliable, more performant, and more secure. AI-driven chatbots will find their ways into new, innovative applications, and speech-based systems will continue to get smarter. AI will handle larger and larger datasets and find its way into increasingly diverse industries.

Sometimes you’ll see the AI and know that you’re talking to a bot. Sometimes the AI will be totally hidden, as you marvel at the, well, uncanny intelligence of the software, websites, and even the Internet of Things. If you don’t believe me, ask a chatbot.

Read more in my feature article in the January/February 2018 edition of Oracle Magazine, “It’s Pervasive: AI Is Everywhere.”

Millions of developers are using Artificial Intelligence (AI) or Machine Learning (ML) in their projects, says Evans Data Corp. Evans’ latest Global Development and Demographics Study, released in January 2018, says that 29% of developers worldwide, or 6,452,000 in all, are currently using some form of AI or ML. What’s more, says the study, an additional 5.8 million expect to use AI or ML within the next six months.

ML is actually a subset of AI. To quote expertsystem.com,

In practice, artificial intelligence – also simply defined as AI – has come to represent the broad category of methodologies that teach a computer to perform tasks as an “intelligent” person would. This includes, among others, neural networks or the “networks of hardware and software that approximate the web of neurons in the human brain” (Wired); machine learning, which is a technique for teaching machines to learn; and deep learning, which helps machines learn to go deeper into data to recognize patterns, etc. Within AI, machine learning includes algorithms that are developed to tell a computer how to respond to something by example.

The same site defines ML as,

Machine learning is an application of artificial intelligence (AI) that provides systems the ability to automatically learn and improve from experience without being explicitly programmed. Machine learning focuses on the development of computer programs that can access data and use it learn for themselves.

The process of learning begins with observations or data, such as examples, direct experience, or instruction, in order to look for patterns in data and make better decisions in the future based on the examples that we provide. The primary aim is to allow the computers learn automatically without human intervention or assistance and adjust actions accordingly.

A related and popular AI-derived technology, by the way, is Deep Learning. DL uses simulated neural networks to attempt to mimic the way a human brain learns and reacts. To quote from Rahul Sharma on Techgenix,

Deep learning is a subset of machine learning. The core of deep learning is associated with neural networks, which are programmatic simulations of the kind of decision making that takes place inside the human brain. However, unlike the human brain, where any neuron can establish a connection with some other proximate neuron, neural networks have discrete connections, layers, and data propagation directions.

Just like machine learning, deep learning is also dependent on the availability of massive volumes of data for the technology to “train” itself. For instance, a deep learning system meant to identify objects from images will need to run millions of test cases to be able to build the “intelligence” that lets it fuse together several kinds of analysis together, to actually identify the object from an image.

Why So Many AI Developers? Why Now?

You can find AI, ML and DL everywhere, it seems. There are highly visible projects, like self-driving cars, or the speech recognition software inside Amazon’s Alexa smart speakers. That’s merely the tip of the iceberg. These technologies are embedded into the Internet of Things, into smart analytics and predictive analytics, into systems management, into security scanners, into Facebook, into medical devices.

A familiar and highly visible application of AI/ML is the chatbot – software that can communicate with humans via textual interfaces. Some companies use chatbots on their websites or on social media channels (like Twitter) to talk to customers and provide basic customer services. Others use the tech within a company, such as in human-resources applications that let employees make requests (like scheduling vacation) by simply texting the HR chatbot.

AI is also paying off in finance. The technology can help service providers (like banks or payment-card transaction clearinghouses) more accurately review transactions to see if they are fraudulent, and improve overall efficiency. According to John Rampton, writing for the Huffington Post, AI investment by financial tech companies was more than $23 billion in 2016. The benefits of AI, he writes, include:

  • Increasing Security
  • Reducing Processing Times
  • Reducing Duplicate Expenses and Human Error
  • Increasing Levels of Automation
  • Empowering Smaller Companies

Rampton also explains that AI can offer game-changing insights:

One of the most valuable benefits AI brings to organizations of all kinds is data. The future of Fintech is largely reliant on gathering data and staying ahead of the competition, and AI can make that happen. With AI, you can process a huge volume of data which will, in turn, offer you some game-changing insights. These insights can be used to create reports that not only increase productivity and revenue, but also help with complex decision-making processes.

What’s happening in fintech with AI is nothing short of revolutionary. That’s true of other industries as well. Instead of asking why so many developers, 29%, are focusing on AI, we should ask, “Why so few?”