Can you name that Top 40 pop song in 10 seconds? Sure, that sounds easy. Can you name that pop song—even if it’s played slightly out of tune? Uh oh, that’s a lot harder. However, if you can guess 10 in a row, you might share in a cash prize.

That’s the premise of “Out of Tune,” an online music trivia game where players mostly in their teens and 20s compete to win small cash prizes–just enough to make the game more fun. And fun is the point: “Out of Tune” was launched in August by FTW Studios, a startup based in New York. What’s different about “Out of Tune” is that it’s designed for group play in real time. The intent is that players will get together in groups, and play together using their Android or Apple iOS phones.

Unlike in first-person shooter games, or other activities where a game player is interacting with the game’s internal logic, “Out of Tune” emphasizes the human-to-human aspect. Each game is broadcast live from New York — sometimes from FTW Studios’ facilities, sometimes from a live venue. Each game is hosted by a DJ, and is enjoyed through streaming video. “We’re not in the game show business or the music business,” says Avner Ronen, FTW Studios’ founder and CEO. “We’re in the shared experiences business.”

Because of all that human interaction, game players should feel like they’re part of something big, part of a group. “It’s social,” says Ronen, noting that 70% of the game’s participants today are female. “The audience is younger, and people play with their friends.”

How does the game work? Twice a day, at 8 p.m. and 11 p.m. Eastern time, a DJ launches the game live from New York City. The game consists of 10 pop songs played slightly out of tune—and players, using a mobile app on their phones, have 10 seconds to guess the song. Players who guess all the songs correctly share in that event’s prize money.

Learn more about FTW Studios – and how the software works – in my story in Forbes, “This Online Game Features Out-Of-Tune Pop Songs. The End Game Is About Much More.”

Every new graduate from Central New Mexico Community College leaves school with a beautiful paper diploma covered in fine calligraphy, colorful seals, and official signatures. This summer, every new graduate also left with the same information authenticated and recorded in blockchain.

What’s the point of recording diplomas using blockchain technology? Blockchain creates a list of immutable records—grouped in blocks—that are linked cryptographically to form a tamper-evident chain. Those blocks are replicated on multiple servers across the participating organizations, so if a school went out of business, or somehow lost certain records to disaster or other mayhem, a student’s credentials are still preserved in other organizations’ ledger copies. Anyone authorized to access information on that blockchain (which might include, for example, prospective employers) could verify whether the student’s diploma and its details, such as the year, degree, and honors, match what the student claims.
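
To make the tamper-evidence idea concrete, here’s a minimal Python sketch of a hash-linked ledger, using hypothetical diploma records. It illustrates the general mechanism, not the specific system CNM deployed:

```python
import hashlib
import json

def block_hash(prev_hash: str, records: list) -> str:
    """Hash the block contents together with the previous block's hash."""
    payload = json.dumps({"prev_hash": prev_hash, "records": records}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append_block(chain: list, records: list) -> None:
    """Link a new block of records to the tip of the chain."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"prev_hash": prev_hash, "records": records,
                  "hash": block_hash(prev_hash, records)})

def verify(chain: list) -> bool:
    """Recompute every link; any altered record breaks the chain."""
    prev_hash = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev_hash:
            return False
        if block["hash"] != block_hash(block["prev_hash"], block["records"]):
            return False
        prev_hash = block["hash"]
    return True

chain = []
append_block(chain, [{"student": "J. Doe", "degree": "AAS", "year": 2018}])
append_block(chain, [{"student": "R. Roe", "degree": "AA", "year": 2018}])
assert verify(chain)
chain[0]["records"][0]["degree"] = "PhD"  # tamper with a diploma record
assert not verify(chain)                  # ...and the tampering is detected
```

Because every participating organization holds a replica of the chain, a tamperer would have to alter all the copies at once to escape detection.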

Today, using blockchain for diplomas or certifications is uncommon. But it’s one of a growing number of blockchain use cases being tested—cases where information needs to be both shared and trusted across many parties, and preserved against loss or tampering.

Academic credentials are important to adults looking for jobs or applying to study for advanced degrees. Those records are also vital for refugees fleeing natural disasters or war-torn countries, such as those leaving Syria. “There are refugees who are medical doctors who can no longer practice medicine because they don’t have those certificates anymore,” says Feng Hou, CIO and chief digital learning officer at Central New Mexico Community College (CNM).

CNM is the largest higher-education institution in the state in terms of undergraduate enrollment, serving more than 23,000 students this fall. Nationally accredited, with eight locations in and around Albuquerque, CNM offers more than 150 associate degrees and certificates, as well as non-credit job training programs.

A benefit of blockchain is that there’s no single point of failure. “Given the decentralized nature of blockchain technology, it will prevent the single point of failure for any identity crisis, such as Syrian refugees, because on blockchain the ID is secure, shareable and verifiable anywhere in the world,” says Hou.

Read more in my story for the Wall Street Journal, “New Mexico College Deploys Blockchain for Digital Diplomas.”

Oracle Database is the world’s most popular enterprise database. This year’s addition of autonomous operating capabilities to the cloud version of Oracle Database is one of the most important advances in the database’s history. What does it mean for a database to be “autonomous”? Let’s look under the covers of Oracle Autonomous Database to see just a few of the ways it manages itself.

Oracle Autonomous Database is a fully managed cloud service. Like all cloud services, the database runs on servers in cloud data centers—in this case, on hardware called Oracle Exadata Database Machine that’s specifically designed and tuned for high-performance, high-availability workloads. The tightly controlled and optimized hardware enables some of the autonomous functionality we’ll discuss shortly.

While the autonomous capability of Oracle Autonomous Database is new, it builds on scores of automation features that Oracle has been building into its database software and Exadata hardware for years. The goals of the autonomous functions are twofold: first, to lower operating costs by reducing costly and tedious manual administration, and second, to improve service levels through automation and fewer human errors.

My essay in Forbes, “What Makes Oracle Autonomous Database Truly ‘Autonomous,’” shows how the capabilities in Oracle Autonomous Database change the game for database administrators (DBAs). The benefit: DBAs are freed from mundane tasks and can focus on higher-value work.

Knowledge is power—and knowledge with the right context at the right moment is the most powerful of all. Emerging technologies will leverage the power of context to help people become more efficient, and one of the first to do so is a new generation of business-oriented digital assistants.

Let’s start by distinguishing a business digital assistant from consumer products such as Apple’s Siri, Amazon’s Echo, and Google’s Home. Those cloud-based technologies have proved themselves at tasks like information retrieval (“How long is my commute today?”) and personal organization (“Add diapers to my shopping list”). Those services have some limited context about you, like your address book, calendar, music library, and shopping cart. What they don’t have is deep knowledge about your job, your employer, and your customers.

In contrast, a business digital assistant needs much richer context to handle the kind of complex tasks we do at work, says Amit Zavery, executive vice president of product development at Oracle. Which sorts of business tasks? How about asking a digital assistant to summarize the recent orders from a company’s three biggest customers in Dallas; set up a conference call with everyone involved with a particular client account; create a report of all employees who haven’t completed information security training; figure out the impact of a canceled meeting on a travel plan; or pull reports on accounts receivable deviations from expected norms?

Those are usually tasks for human associates—often a tech-savvy person in supply chain, sales, finance, or human resources. That’s because so many business tasks require context about the employee making the request and about the organization itself, Zavery says. A digital assistant’s goal should be to reduce the amount of mental energy and physical steps needed to perform such tasks.

Learn more in my article for Forbes, “The One Thing Digital Assistants Need To Become Useful At Work: Context.”

At too many government agencies and companies, the security mindset, even though it’s never spoken, is that “We’re not a prime target, our data isn’t super-sensitive.” Wrong. The reality is that every piece of personal data adds to the picture that potential criminals or state-sponsored actors are painting of individuals.

And that makes your data a target. “Just because you think your data isn’t useful, don’t assume it’s not valuable to someone, because they’re looking for columns, not rows,” says Hayri Tarhan, Oracle regional vice president for public sector security.

Here’s what Tarhan means by columns not rows: Imagine that the bad actors are storing information in a database (which they probably are). What hackers want in many data breaches is more information about people already in that database. They correlate new data with the old, using big data techniques to fill in the columns, matching up data stolen from different sources to form a more-complete picture.

That picture is potentially much more important and more lucrative than finding out about new people and creating new, sparsely populated data rows. So, every bit of data, no matter how trivial it might seem, is important when it comes to filling the empty squares.
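
Here’s a toy Python illustration of that column-filling, with entirely made-up names and fields; real attackers do the same thing at big-data scale:

```python
# Two hypothetical stolen datasets, keyed on a shared identifier.
breach_a = {"jane@example.com": {"name": "Jane Q.", "dob": "1985-03-12"}}
breach_b = {"jane@example.com": {"employer": "Acme Corp", "card_last4": "6789"}}

profiles = {}
for stolen in (breach_a, breach_b):
    for email, fields in stolen.items():
        # Same row (person), more columns (attributes) filled in.
        profiles.setdefault(email, {}).update(fields)

print(profiles["jane@example.com"])
# {'name': 'Jane Q.', 'dob': '1985-03-12', 'employer': 'Acme Corp', 'card_last4': '6789'}
```

Each new breach makes every previously stolen record more valuable, which is exactly why “trivial” data is worth stealing.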

Read more about this – and how machine learning can help – in my article in Forbes, “Data Thieves Want Your Columns—Not Your Rows.”

Blockchain and the cloud go together like organic macaroni and cheese. What’s the connection? Choosy shoppers would like to know that their organic food is tracked from farm to shelf, to make sure they’re getting what’s promised on the label. Blockchain provides an immutable ledger perfect for tracking cheese, for example, as it goes from dairy to cheesemaker to distributor to grocer.

Oracle’s new Blockchain Cloud Service provides a platform for each participant in a supply chain to register transactions. Within that blockchain, each participant—and regulators, if appropriate—can review those transactions to ensure that promises are being kept, and that data has not been tampered with. Use cases range from supply chains and financial transactions to data sharing inside a company.

Launched this month, Oracle Blockchain Cloud Service has the features that an enterprise needs to move from experimenting with blockchain to creating production applications. It addresses some of the biggest challenges facing developers and administrators, such as mastering the peer-to-peer protocols used to link blockchain servers, ensuring resiliency and high availability, and ensuring that security is solid. For example, developers previously had to code one-off integrations using complex APIs; Oracle’s Blockchain Cloud Service provides integration accelerators with sample templates and design patterns for many Oracle and third-party applications in the cloud and running on-premises in the data center.

Oracle Blockchain Cloud Service provides the kind of resilience, recoverability, security, and global reach that enterprises require before they’d trust their supply chain and customer experience to blockchain. With blockchain implemented as a managed cloud service, organizations also get a system that’s ready to be integrated with other enterprise applications, and where Oracle handles the back end to ensure availability and security.

Read more about this in my story for Forbes, “Oracle Helps You Put Blockchain Into Real-World Use With New Cloud Service.”

If you saw the 2013 Sandra Bullock-George Clooney science-fiction movie Gravity, then you know about the silent but deadly damage that even a small object can do if it hits something like the Hubble telescope, a satellite, or even the International Space Station as it hurtles through space. If you didn’t see Gravity, a non-spoiler, one-word summary would be “disaster.” Given the thousands of satellites and pieces of man-made debris circling our planet, plus new, emerging threats from potentially hostile satellites, you don’t need to be a rocket scientist to know that it’s important to keep track of what’s around you up there.

It all starts with the basic physics of motion and managing the tens of thousands of data points associated with those objects, says Paul Graziani, CEO and cofounder of Analytical Graphics (AGI). The Exton, Pennsylvania-based software company develops four-dimensional software that analyzes and visualizes objects based on their physical location over time and their position relative to each other or to other known locations. AGI has leveraged its software models to build the ComSpOC – its Commercial Space Operations Center. ComSpOC is the first and only commercial Space Situational Awareness center, and since 2014 it has helped space agencies and satellite operators keep track of space objects, including satellites and spacecraft.

ComSpOC uses data from sensors that AGI owns around the globe, plus data from other organizations, to track objects in space. These sensors include optical telescopes, radar systems, and passive RF (radio frequency) sensors. “A telescope gathers reflections of sunlight that come off of objects in space,” Graziani says. “And a radar broadcasts radio signals that reflect off of those objects and then times how long it takes for those signals to get back to the antenna.”

The combination of these measurements helps pinpoint the position of each object. The optical measurements of the telescopes provide directional accuracy, while the time measurements of the radar systems provide the distance of that object from the surface of the Earth. Passive RF sensors, meanwhile, use communications antennas that receive the broadcast information from operational satellites to measure satellite position and velocity.
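
The radar side of that is simple time-of-flight arithmetic: distance is the speed of light times the round-trip time, divided by two. A quick back-of-the-envelope Python sketch (the sample timing value is illustrative):

```python
C = 299_792_458  # speed of light, meters per second

def radar_range_km(round_trip_seconds: float) -> float:
    """Distance to the object: half the round trip at light speed."""
    return C * round_trip_seconds / 2 / 1000

# An echo returning in ~2.7 milliseconds puts the object at roughly
# 400 km up, about the altitude of the International Space Station.
print(f"{radar_range_km(0.0027):.0f} km")  # -> 405 km
```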

Read more in my story for Forbes, “How Satellites Avoid Attacks And Space Junk While Circling The Earth.”

Users care passionately about their software being fast and responsive. You need to give your applications both 0-60 speed and the strongest long-term endurance. Here are 14 guidelines for choosing a deployment platform to optimize performance, whether your application runs in the data center or the cloud.

Faster! Faster! Faster! That killer app won’t earn your company a fortune if the software is slow as molasses. Sure, your development team did the best it could to write server software that offers the maximum performance, but that doesn’t mean diddly if those bits end up on a pokey old computer that’s gathering cobwebs in the server closet.

Users don’t care where it runs as long as it runs fast. Your job, in IT, is to make the best choices possible to enhance application speed, including deciding if it’s best to deploy the software in-house or host it in the cloud.

When choosing an application’s deployment platform, there are 14 things you can do to maximize the opportunity for the best overall performance. But first, let’s make two assumptions:

  • These guidelines apply only to choosing the best data center or cloud-based platform, not to choosing the application’s software architecture. The job today is simply to find the best place to run the software.
  • I presume that if you are talking about a cloud deployment, you are choosing infrastructure as a service (IaaS) instead of platform as a service (PaaS). What’s the difference? In PaaS, the host provides the platform: the operating system, such as Windows or Linux, plus the runtime stack, such as .NET or Java; all you do is provide the application. In IaaS, you can provide, install, and configure the operating system yourself, giving you more control over the installation.

Here’s the checklist

  1. Run the latest software. Whether in your data center or in the IaaS cloud, install the latest version of your preferred operating system, the latest core libraries, and the latest application stack. (That’s one reason to go with IaaS, since you can control updates.) If you can’t control this yourself, because you’re assigned a server in the data center, pick the server that has the latest software foundation.
  2. Run the latest hardware. Assuming we’re talking about the x86 architecture, look for the latest Intel Xeon processors, whether in the data center or in the cloud. As of mid-2018, I’d want servers running the Xeon E5 v3 or later, or E7 v4 or later. If you use anything older than that, you’re not getting the most out of the applications or taking advantage of the hardware chipset. For example, some E7 v4 chips have significantly improved instructions-per-CPU-cycle processing, which is a huge benefit. Similarly, if you choose AMD or another processor, look for the latest chip architectures.
  3. If you are using virtualization, make sure the server has the best and latest hypervisor. The hypervisor is key to a virtual machine’s (VM) performance—but not all hypervisors are created equal. Many of the top hypervisors have multiple product lines as well as configuration settings that affect performance (and security). There’s no way to know which hypervisor is best for any particular application. So, assuming your organization lets you make the choice, test, test, test. However, in the not-unlikely event you are required to go with the company’s standard hypervisor, make sure it’s the latest version.
  4. Take Spectre and Meltdown into account. The patches for Spectre and Meltdown slow down servers, but the extent of the performance hit depends on the server, the server’s firmware, the hypervisor, the operating system, and your application. It would be nice to give an overall number, such as expect a 15 percent hit (a number that’s been bandied about, though some dispute its accuracy). However, there’s no way to know except by testing. Thus, it’s important to know if your server has been patched. If it hasn’t been yet, expect application performance to drop when the patch is installed. (If it’s not going to be patched, find a different host server!)
  5. Base the number of CPUs and cores and the clock speed on the application requirements. If your application and its core dependencies (such as the LAMP stack or the .NET infrastructure) are heavily threaded, the software will likely perform best on servers with multiple CPUs, each equipped with the greatest number of cores—think 24 cores. However, if the application is not particularly threaded or runs in a not-so-well-threaded environment, you’ll get the biggest bang with the absolute top clock speeds on an 8-core server. (One quick way to gauge how well your workload scales across cores is sketched below.)
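
On that last point, here’s a rough Python probe of how well a CPU-bound task scales across cores, one hypothetical input into the “many cores vs. top clock speed” decision. Substitute your application’s real workload for busy_work:

```python
import os
import time
from concurrent.futures import ProcessPoolExecutor

def busy_work(n: int) -> int:
    """Stand-in for one unit of your application's CPU-bound work."""
    return sum(i * i for i in range(n))

def timed_run(workers: int, tasks: int = 16, n: int = 2_000_000) -> float:
    """Time how long it takes `workers` processes to finish all tasks."""
    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=workers) as pool:
        list(pool.map(busy_work, [n] * tasks))
    return time.perf_counter() - start

if __name__ == "__main__":
    cores = os.cpu_count() or 1
    t1, tn = timed_run(1), timed_run(cores)
    print(f"1 worker: {t1:.1f}s | {cores} workers: {tn:.1f}s | speedup: {t1 / tn:.1f}x")
    # Near-linear speedup favors many cores; little speedup favors
    # the highest clock speed on fewer cores.
```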

But wait, there’s more!

Read the full list of 14 recommendations in my story for HPE Enterprise.nxt, “Checklist: Optimizing application performance at deployment.”

You wouldn’t enjoy paying a fine of 4 percent of your company’s total revenue. But that’s the potential penalty if your company is found in violation of the European Union’s new General Data Protection Regulation (GDPR), which goes into effect May 25, 2018. As you’ve probably read, organizations anywhere in the world are subject to GDPR if they have customers in the EU and are storing any of their personal data.

GDPR compliance is a complex topic. It’s too much for one article — heck, books galore are being written about it, seminars abound, and GDPR consultants are on every street corner.

One challenge is that GDPR is a regulation, not a how-to guide. It’s big on explaining penalties for failing to detect and report a data breach in a sufficiently timely manner. It’s not big on telling you how to detect that breach. Rather than tell you what to do, let’s see what could go wrong with your GDPR plans—to help you avoid that 4 percent penalty.

First, the ground rules: GDPR’s overarching goal is to protect citizens’ privacy. In particular, the regulation pertains to anything that can be used to directly or indirectly identify a person. Such data can be anything: a name, a photo, an email address, bank details, social network posts, medical information, or even a computer IP address. To that end, data breaches that may pose a risk to individuals must be disclosed to the authorities within 72 hours and to the affected individuals soon thereafter.

What does that mean? As part of the regulations, individuals must have the ability to see what data you have about them, correct that data if appropriate, or have that data deleted, again if appropriate. (If someone owes you money, they can’t ask you to delete that record.)

Enough preamble. Let’s get into ten common problems.

First: Your privacy and data retention policies aren’t compliant with GDPR

There’s no specific policy wording required by GDPR. However, the policies must meet the overall objectives of GDPR, as well as the requirements in any other jurisdictions in which you operate (such as the United States). What would Alan do? Look at policies from big multinationals that do business in Europe and copy what they do, working with your legal team. You’ve got to get it right.

Second: Your actual practices don’t match your privacy policy

It’s easy to create a compliant privacy policy but hard to ensure your company actually is following it. Do you claim that you don’t store IP addresses? Make sure you’re not. Do you claim that data about a European customer is never stored in a server in the United States? Make sure that’s truly the case.

For example, let’s say you store information about German customers in Frankfurt. Great. But if that data is backed up to a server in Toronto, maybe not great.

Third: Your third-party providers aren’t honoring your GDPR responsibilities

Let’s take that customer data in Frankfurt. Perhaps you have a third-party provider in San Francisco that does data analytics for you, or that runs credit reports or handles image resizing. In those processes, does your customer data ever leave the EU? Even if it stays within the EU, is it protected in ways that are compliant with GDPR and other regulations? It’s your responsibility to make sure: While you might sue a supplier for a breach, that won’t cancel out your own primary responsibility to protect your customers’ privacy.

A place to start with compliance: Do you have an accurate, up-to-date listing of all third-party providers that ever touch your data? You can’t verify compliance if you don’t know where your data is.

But wait, there’s more

You can read the entire list of common GDPR failures in my story for HPE Enterprise.nxt, “10 ways to fail at GDPR compliance.”

The public cloud is part of your network. But it’s also not part of your network. That can make security tricky, and sometimes a nightmare.

The cloud represents resources that your business rents: computational resources, like CPU and memory; infrastructure resources, like Internet bandwidth and internal networks; storage resources; and management platforms, like the tools needed to provision and configure services.

Whether it’s Amazon Web Services, Microsoft Azure or Google Cloud Platform, it’s like an empty apartment that you rent for a year or maybe a few months. You start out with empty space, put in there whatever you want and use it however you want. Is such a short-term rental apartment your home? That’s a big question, especially when it comes to security. By the way, let’s focus on platform-as-a-service (PaaS) and infrastructure-as-a-service (IaaS), where your business has a great deal of control over how the resource is used — like an empty rental apartment.

We are not talking about software-as-a-service (SaaS), like Office 365 or Salesforce.com. That’s where you show up, pay your bill and use the resources as configured. That’s more like a hotel room: you sleep there, but you can’t change the furniture. Security is almost entirely the responsibility of the hotel; your security responsibility is to ensure that you don’t lose your key, and to refuse to open the door for strangers. The SaaS equivalent: Protect your user accounts and passwords, and ensure users only have the least necessary access privileges.

Why PaaS/IaaS are part of your network

As Peter Parker knows, Spider-Man’s great powers require great responsibility. That’s true in the enterprise data center — and it’s true in PaaS/IaaS networks. The customer is responsible for provisioning servers, storage and virtual machines. Not only that, but the customer also is responsible for creating connections between the cloud service and other resources, such as an enterprise data center (in a hybrid cloud architecture) or other cloud providers (in a multi-cloud architecture).

The cloud provider sets terms for use of the PaaS/IaaS, and allows inbound and outbound connections. There are service level guarantees for availability of the cloud, and of servers that the cloud provider owns. Otherwise, everything is on the enterprise. Think of the PaaS/IaaS cloud as being a remote data center that the enterprise rents, but where you can’t physically visit and see your rented servers and infrastructure.

Why PaaS/IaaS are not part of your network

In short, except for the few areas that the cloud provider handles — availability, cabling, power supplies, connections to carrier networks, physical security — you own it. That means installing patches and fixes. That means instrumenting servers and virtual machines. That means protecting them with software-based firewalls. That means doing backups, whether using the cloud provider’s value-added services or someone else’s. That means anti-malware.

That’s not to minimize the benefits the cloud provider offers you. Power and cooling are a big deal. So are racks and cabling. So is that physical security, and having 24×7 on-site staffing in the event of hardware failures. Also, there’s click-of-a-button ability to provision and spool up new servers to handle demand, and then shut them down again when not needed. Cloud providers can also provide firewall services, communications encryption, and of course, consulting on security.

The word elastic is often used for cloud services. That’s what makes the cloud much more agile than an on-premises data center, or renting an equipment cage in a colocation center. It’s like renting an apartment where, if you need a couple of extra bedrooms for a few months, you can upsize.

For many businesses, that’s huge. Read more about how great cloud power requires great responsibility in my essay for SecurityNow, “Public Cloud, Part of the Network or Not, Remains a Security Concern.”

It’s standard practice for a company to ask its tech suppliers to fill out detailed questionnaires about their security practices. Companies use that information when choosing a supplier. Too much is at stake, in terms of company reputation and customer trust, to be anything but thorough with information security.

But how can a company’s IT security teams be most effective in that technology buying process? How do they get all the information they need, while also staying focused on what really matters and not wasting their time? At the recent RSA Conference, Oracle Chief Security Officer Mary Ann Davidson offered her tips on this IT security risk assessment process. Drawing on her extensive experience as both supplier and buyer of technology and cloud services in her role at Oracle, Davidson shared advice from both points of view.

Advice on business risk assessments

It’s time to put out an RFP to engage new technology providers or to conduct an annual assessment of existing service providers. What do you ask in such a vendor security assessment questionnaire? There are many existing documents and templates, some focused on specific industries, others on regulated sectors or regulated information. Those should guide any assessment process, but aren’t the only factors, says Davidson. Consider these practical tips to get the crucial data you need, and avoid gathering a lot of information that will only distract you from issues that are important for keeping your data secure.

  1. Have a clear objective in mind. The purpose of the vendor security assessment questionnaire should be to assess the security performance of the vendor in light of the organization’s tolerance for risk on a given project.
  2. Limit the scope of an assessment to the potential security risks for services that the supplier is offering you. Those services are obviously critical, because they could affect your data, operations, and security. There is no value in focusing on a supplier’s purely internal systems if they don’t contain or connect to your data. By analogy, “you care about the security of a childcare provider’s facility,” says Davidson. “It’s not relevant to ask about the security of the facility owner’s vacation home in Lake Tahoe.”
  3. When possible, align the questions with internationally recognized, relevant, independently developed standards. It’s reasonable to expect service providers to offer open services that conform to true industry standards. Be wary of faux standards, which are the opposite of open—they could be designed to encourage tech buyers to trust what they think are specifications designed around industry consensus, but which are really pushing one tech supplier’s agenda or that of a third-party certification business.

There are a lot more tips in my story for Forbes, “IT Security Risk Assessments: Tips For Streamlining Supplier-Customer Communication.”

Chapter One: Christine Hall

Should the popular Linux operating system be referred to as “Linux” or “GNU/Linux”? It’s a thing, or at least it used to be, writes my friend Christine Hall in her aptly named article, “Is It Linux or GNU/Linux?,” published in Linux Journal on May 11:

Some may remember that the Linux naming convention was a controversy that raged from the late 1990s until about the end of the first decade of the 21st century. Back then, if you called it “Linux”, the GNU/Linux crowd was sure to start a flame war with accusations that the GNU Project wasn’t being given due credit for its contribution to the OS. And if you called it “GNU/Linux”, accusations were made about political correctness, although operating systems are pretty much apolitical by nature as far as I can tell.

Christine (aka Bride of Linux) quotes a number of learned people. That includes Steven J. Vaughan-Nichols, one of the top experts in the politics of open-source software – and frequent critic of the antics of Richard M. Stallman (aka RMS), who founded the Free Software Foundation and insists that everyone call the software GNU/Linux.

Here’s what Steven (aka SJVN), said in the article:

“Enough already”, he said. “RMS tried, and failed, to create an operating system: Hurd. He and the Free Software Foundation’s endless attempts to plaster his GNU name to the work of Linus Torvalds and the other Linux kernel developers is disingenuous and an insult to their work. RMS gets credit for EMACS, GPL, and GCC. Linux? No.”

Another humble luminary sought out by Christine: Yours truly.

“For me it’s always, always, always, always Linux,” said Alan Zeichick, an analyst at Camden Associates who frequently speaks, consults and writes about open-source projects for the enterprise. “One hundred percent. Never GNU/Linux. I follow industry norms.”

To make a long story short: In the article, the consensus was for Linux, not GNU/Linux.

Chapter Two: figosdev

But then someone going by the handle “figosdev” authored a rebuttal, “Debunking the Usual Omission of GNU,” published on Techrights. To make a long story short, he believes that the operating system should be called GNU/Linux. Here’s my favorite part of figosdev’s missive (which was written in all lower-case):

ive heard about gnu and linux about a million times in over a decade. as of today ive heard of alan zeichick once, and camden associates (what do they even do?) once. im just going to call them linux, its the more popular term.

Riiight. figosdev never heard of me, fine (founder of SD Times, but figosdev probably never heard of that either). On the other hand, at least figosdev knows my name. I have no idea who figosdev is, except to infer that he/she/it is a developer on the fig component compiler project, since he/she/it is hiding behind a handle. And that brings me to…

Chapter Three: Richi Jennings

Christine Hall’s article sparked a lively debate on Twitter, with my friend Richi Jennings (quoted in the original article) weighing in as well.

Let’s end the story here, at least for now. Linux forever!

Oracle CEO Mark Hurd is known as an avid tennis fan and supporter of the sport’s development, having played in college at Baylor University. At the Collision Conference last week in New Orleans, Hurd discussed the similar challenges facing tennis players and top corporate executives.

“I like this sport because tennis teaches that you’re out there by yourself,” said Hurd, who was interviewed on stage by CNBC reporter Aditi Roy. “Tennis is like being CEO: You can’t call time out, you can’t bring in a substitute,” Hurd said. “Tennis is a space where you have to go out every day, rain or shine, and you’ve got to perform. It’s just like the business world.”

Performance returned to the center of the conversation when Roy asked about Oracle’s acquisition strategy. Hurd noted that Oracle’s leadership team gives intense scrutiny to acquisitions of any size. “We don’t go out of our way to spend money — it’s our shareholders’ money,” he said. “We also think about dividends and buying stock back.”

When it comes to mergers and acquisitions, Oracle is driven by three top criteria, Hurd said. “First, the company has to fit strategically with where we are going,” he said. “Second, it has to make fiscal sense. And third, we have to be able to effectively run the acquisition.”

Hurd emphasized that he’s focused on the future, not a company’s past performance. “We are looking for companies that will be part of things 5 or 10 years from now, not 5 or 10 years ago,” he said. “We want to move forward, in platforms and applications.”

To a large extent, that future includes artificial intelligence. Hurd was quick to say, “I’m not looking for someone to say, ‘I have an AI solution in the cloud, come to me.’” Rather, Oracle wants to be able to integrate AI directly into its applications, in a way that gives customers clear business returns.

He used the example of employee recruitment. “We recruit 2,000 college students today. It used to be done manually, but now we use machine learning and algorithms to figure out where to source people.” Not only does the AI help find potential employees, but it can help evaluate whether the person would be successful at Oracle. “We could never have done that before,” Hurd added.

Read more about what Hurd said at Collision, including his advice for aspiring CEOs, in my story for Forbes, “Mark Hurd On The Perfect Sport For CEOs — And Other Leadership Insights.”

You can also watch the entire 20-minute interview here.

No doubt you’ve heard about blockchain. It’s a distributed digital ledger technology that lets participants add and view blocks of transaction records, but not delete or change them without being detected.

Most of us know blockchain as the foundation of Bitcoin and other digital currencies. But blockchain is starting to enter the business mainstream as the trusted ledger for farm-to-table vegetable tracking, real estate transfers, digital identity management, financial transactions and all manner of contracts. Blockchain can be used for public transactions as well as for private business, inside a company or within an industry group.

What makes the technology so powerful is that there’s no central repository for this ever-growing sequential chain of transaction records, clumped together into blocks. Because that repository is replicated in each participant’s blockchain node, there is no single point of failure, and no insider threat within a single organization can impact its integrity.

“Blockchain lets you conduct transactions securely without requiring an intermediary, and records are secure and immutable,” says Mark Rakhmilevich, product management director at Oracle. “It also can eliminate offline reconciliations that can take hours, days or even weeks.”

And while the chain itself should be open for validation by any participant, some chains can be implemented with some form of access control to limit viewing of specific data fields. That way, participants can be permitted to view relevant data, but not everything in the chain.

A customer, for instance, might be able to verify that a contractor has a valid business license. The customer might also see the firm’s registered address and list of complaints—but not see the names of other customers. The state licensing board, on the other hand, may be allowed to access the customer list or see which jobs are currently in progress.
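
As a simple sketch of that field-level permissioning idea, here’s a toy Python version with a hypothetical record and roles; real permissioned blockchains enforce this in the platform’s access-control layer:

```python
# One shared contractor record; what each participant may see differs.
RECORD = {
    "license_valid": True,
    "registered_address": "123 Main St, Springfield",
    "complaints": ["late completion (2017)"],
    "customer_list": ["A. Smith", "B. Jones"],
    "jobs_in_progress": ["kitchen remodel"],
}

VISIBLE = {
    "customer": {"license_valid", "registered_address", "complaints"},
    "licensing_board": set(RECORD),  # the regulator sees every field
}

def view(record: dict, role: str) -> dict:
    """Return only the fields this role is permitted to read."""
    return {k: v for k, v in record.items() if k in VISIBLE[role]}

print(view(RECORD, "customer"))         # no customer list or active jobs
print(view(RECORD, "licensing_board"))  # the full record
```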

Business models and use cases

Blockchain is well-suited for managing transactions between companies or organizations that may not know each other well and where there’s no implicit or explicit trust. Rakhmilevich explains: “Blockchain works because it’s peer-to-peer…and it provides an easy-to-track history, which can serve as an audit trail.”

What’s more, blockchain smart contracts are ideal for automating manual or semi-automated processes prone to errors or fraud. “Blockchain can help when there might be challenges in proving that the data has not been tampered with or when verifying the source of a particular update or transaction is important,” Rakhmilevich says.

Blockchain has uses in many industries, including banking, securities, government, retail, healthcare, manufacturing and transportation. Take healthcare: Blockchain can provide immutable records on clinical trials. Think about all the data being collected and flowing to the pharmaceutical companies and regulators, all available instantly and from verified participants.

Read more about blockchain in my article for the Wall Street Journal, “Blockchain: It’s All About Business—and Trust.”

Blame people for the SOC scalability challenge. On the other hand, don’t blame your people. It’s not their fault.

The security operations center (SOC) team is frequently overwhelmed, particularly the Tier 1 security analysts tasked with triage. As companies grow and add more technology — including the Internet of Things (IoT) — that means more alerts.

As the enterprise adds more sophisticated security tools, such as Endpoint Detection and Response (EDR), that means more alerts. And more complex alerts. You’re not going to see a blinking red light that says: “You’re being hacked.” Or if you do see such an alert, it’s not very helpful.

The problem is people, say experts at the 2018 RSA Conference, which wrapped up last week. Your SOC team — or teams — simply can’t scale fast enough to keep up with the ever-increasing demand. Let’s talk about the five biggest problems challenging SOC scalability.

Reason #1: You can’t afford to hire enough analysts

You certainly can’t afford to hire enough Tier 2 analysts who respond to real — or almost certainly real — incidents. According to sites like Glassdoor and Indeed, be prepared to pay over $100,000 per year, per person.

Reason #2: You can’t even find enough analysts

“We’ve created a growing demand for labor, and thus, we’ve created this labor shortage,” said Malcolm Harkins, chief security and trust officer of Cylance. There are huge numbers of open positions at all levels of information security, and that includes in-enterprise SOC team members. Sure, you could pay more, or do competitive recruiting, but go back to the previous point: You can’t afford that. Perhaps a managed security service provider can afford to keep raising salaries, because an MSSP can monetize that expense. An ordinary enterprise can’t, because security is an expense.

Reason #3: You can’t train the analysts

Even with the best security tools, analysts require constant training on threats and techniques — which is expensive to offer, especially for a smaller organization. And wouldn’t you know it, as soon as you get a group of triage specialists or incident responders trained up nicely, off they go for a better job.

Read more, including two more reasons, in my essay for SecurityNow, “It’s the People: 5 Reasons Why SOC Can’t Scale.”

Got Terminator? Microsoft is putting artificial intelligence in charge of automatically responding to detected threats, with a forthcoming update to Windows Defender ATP.

Microsoft is expanding its use of artificial intelligence and big data analytics beyond the current levels of machine learning in its security platform. Today, AI is used for incident detection and investigation, filtering out false positives and making it easier for humans in the security operations center (SOC) team to determine the correct response to an incident.

Soon, customers will be able to allow the AI to respond to some incidents automatically. Redmond claims this will cut time-to-remediation down to minutes. In a blog post released April 17, Moti Gindi, general manager for Windows Cyber Defense, wrote: “Threat investigation and remediation decisions can be taken automatically by Windows Defender ATP based on extensive historical data collected, stored and analyzed in our cloud (‘time travel’).”

What type of remediation? No, robots won’t teleport from the future and shoot lasers at the cybercriminals. At least, that’s not an announced capability. Rather, Windows Defender ATP will signal the Azure Active Directory user management and Microsoft Intune mobile device management platforms to temporarily revoke access privileges to cloud storage and enterprise applications, such as Office 365.

After the risk has been evaluated — or after the CEO has yelled at the CISO from her sales trip overseas — the access revocation can be reversed. Another significant part of the Windows Defender ATP announcements: Threat signal sharing between Microsoft’s various cloud platforms, which up until now have operated pretty much autonomously in terms of security.

In the example Microsoft offered, threats coming via a phishing email detected by Outlook 365 will be correlated with malware blocked by OneDrive for Business. In this incarnation, signal sharing will bring together Office 365, Azure and Windows Defender ATP.

Read more, including about Microsoft’s Mac support for security, in my essay for SecurityNow, “Microsoft Security Is Channeling the Terminator.”

Ransomware rules the cybercrime world – perhaps because ransomware attacks are often successful and financially remunerative for criminals. Ransomware features prominently in Verizon’s fresh-off-the-press 2018 Data Breach Investigations Report (DBIR). As the report says, although ransomware is still a relatively new type of attack, it’s growing fast:

Ransomware was first mentioned in the 2013 DBIR and we referenced that these schemes could “blossom as an effective tool of choice for online criminals”. And blossom they did! Now we have seen this style of malware overtake all others to be the most prevalent variety of malicious code for this year’s dataset. Ransomware is an interesting phenomenon that, when viewed through the mind of an attacker, makes perfect sense.

The DBIR explains that ransomware can be attempted with little risk or cost to the attacker. It can be successful because the attacker doesn’t need to monetize stolen data, only ransom the return of that data; and can be deployed across numerous devices in organizations to inflict more damage, and potentially justify bigger ransoms.

Botnets Are Also Hot

Ransomware wasn’t the only prominent attack; the 2018 DBIR also talks extensively about botnet-based infections. Verizon cites more than 43,000 breaches using customer credentials stolen from botnet-infected clients. It’s a global problem, says the DBIR, and can affect organizations in two primary ways:

The first way, you never even see the bot. Instead, your users download the bot, it steals their credentials, and then uses them to log in to your systems. This attack primarily targeted banking organizations (91%) though Information (5%) and Professional Services organizations (2%) were victims as well.

The second way organizations are affected involves compromised hosts within your network acting as foot soldiers in a botnet. The data shows that most organizations clear most bots in the first month (give or take a couple of days).

However, the report says, some bots may be missed during the disinfection process. This could result in a re-infection later.

Insiders Are Still Significant Threats

Overall, says Verizon, outsiders perpetrated most breaches, 73%. But don’t get too complacent about employees or contractors: Many breaches involved internal actors, 28%. Yes, that adds to more than 100% because some outside attacks had inside help. Here’s who Verizon says is behind breaches:

  • 73% perpetrated by outsiders
  • 28% involved internal actors
  • 2% involved partners
  • 2% featured multiple parties
  • 50% of breaches were carried out by organized criminal groups
  • 12% of breaches involved actors identified as nation-state or state-affiliated

Email is still the delivery vector of choice for malware and other attacks. Many of those attacks were financially motivated, says the DBIR. Most worrying, a significant number of breaches took a long time to discover.

  • 49% of non-point-of-sale malware was installed via malicious email
  • 76% of breaches were financially motivated
  • 13% of breaches were motivated by the gain of strategic advantage (espionage)
  • 68% of breaches took months or longer to discover

Taking Months to Discover the Breach

To that previous point: Attackers can move fast, but defenders can take a while. To use a terrible analogy: If someone breaks into your car and steals your designer sunglasses, the time from their initial penetration (picking the lock or smashing the window) to compromising the asset (grabbing the glasses) might be a minute or less. The time to discovery (when you see the broken window or realize your glasses are gone) could be minutes if you parked at the mall – or days, if the car was left at the airport parking garage. The DBIR makes the same point about enterprise data breaches:

When breaches are successful, the time to compromise continues to be very short. While we cannot determine how much time is spent in intelligence gathering or other adversary preparations, the time from first action in an event chain to initial compromise of an asset is most often measured in seconds or minutes. The discovery time is likelier to be weeks or months. The discovery time is also very dependent on the type of attack, with payment card compromises often discovered based on the fraudulent use of the stolen data (typically weeks or months) as opposed to a stolen laptop which is discovered when the victim realizes they have been burglarized.

Good News, Bad News on Phishing

Let’s end on a positive note, or a sort of positive note. The 2018 DBIR notes that most people never click phishing emails: “When analyzing results from phishing simulations the data showed that in the normal (median) organization, 78% of people don’t click a single phish all year.”

The less good news: “On average 4% of people in any given phishing campaign will click it.” The DBIR notes that the more phishing emails someone has clicked, the more they are likely to click on phishing emails in the future. The report’s advice: “Part of your overall strategy to combat phishing could be that you can try and find those 4% of people ahead of time and plan for them to click.”

Good luck with that.

The purchase order looks legitimate, yet does it have all the proper approvals? Many lawyers reviewed this draft contract, so is this the latest version? Can we prove that this essential document hasn’t been tampered with before I sign it? Can we prove that these two versions of a document are absolutely identical?

Blockchain might be able to help solve these kinds of everyday trust issues related to documents, especially when they are PDFs—data files created using the Portable Document Format. Blockchain technology is best known for securing financial transactions, including powering new financial instruments such as Bitcoin. But blockchain’s ability to increase trust will likely find enterprise use cases in common, non-financial information exchanges like these.

Joris Schellekens, a software engineer and PDF expert at iText Software in Ghent, Belgium, recently presented his ideas for blockchain-supported documents at Oracle Code Los Angeles. Oracle Code is a series of free events around the world created to bring developers together to share fresh thinking and collaborate on ideas like these.

PDF’s Power and Limitations

The PDF file format was created in the early 1990s by Adobe Systems. PDF was a way to share richly formatted documents whose visual layout, text, and graphics would look the same, no matter which software created them or where they were viewed or printed. The PDF specification became an international standard in 2008.

Early on, Adobe and other companies implemented security features into PDF files. That included password protection, encryption, and digital signatures. In theory, the digital signatures should be able to prove who created, or at least who encrypted, a PDF document. However, depending on the hashing algorithm used, it’s not so difficult to subvert those protections to, for example, change a date/time stamp, or even the document content, says Schellekens. His company, iText Software, markets a software development kit and APIs for creating and manipulating PDFs.

“The PDF specification contains the concept of an ID tuple,” or an immutable sequence of data, says Schellekens. “This ID tuple contains timestamps for when the file was created and when it was revised. However, the PDF spec is vague about how to implement these when creating the PDF.”

Even in the case of an unaltered PDF, the protections apply to the entire document, not to various parts of it. Consider a document that must be signed by multiple parties. Since not all certificate authorities store their private keys with equal vigilance, you might lack confidence about who really modified the document (e.g. signed it), at which times, and in which order. Or, you might not be confident that there were no modifications before or after someone signed it.
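
The underlying mechanism for proving two versions identical is a cryptographic digest over the document’s bytes. Here’s a minimal Python sketch with hypothetical file names, not iText’s actual implementation:

```python
import hashlib

def digest(path: str) -> str:
    """SHA-256 over the file's raw bytes, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical file names, for illustration only: if the two digests
# match, the files are byte-for-byte identical. Any change at all,
# even to a date/time stamp, yields a completely different digest,
# and publishing a digest to an immutable ledger timestamps the
# document's exact contents.
# digest("contract_v12.pdf") == digest("contract_signed.pdf")
```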

A related challenge: Signatures to a digital document generally must be made serially, one at a time. The PDF specification doesn’t allow for a document to be signed in parallel by several people (as is common with contract reviews and signatures) and then merged together.

Blockchain has the potential to solve such document problems, and several others besides. Read more in my story for Forbes, “Can Blockchain Solve Your Document And Digital Signature Headaches?”

Asking “which is the best programming language” is like asking about the most important cooking tool in your kitchen. Mixer? Spatula? Microwave? Cooktop? Measuring cup? Egg timer? Lemon zester? All are critical, depending on what you’re making, and how you like to cook.

The same is true with programming languages. Some are best at coding applications that run natively on mobile devices — think Objective-C or Java. Others are good at encoding logic within a PDF file, or on a web page — think JavaScript. And still others are best at coding fast applications for virtual machines or running directly on the operating system — for many people, that’s C or C++. Want a general-purpose language? Think Python or PHP. Specialized? R and Matlab are good for statistics and data analytics. And so on.

Last summer, IEEE Spectrum offered its take, surveying its audience and writing up the “2017 Top Programming Languages.” The top 10 languages for the typical reader:

  1. Python
  2. C
  3. Java
  4. C++
  5. C#
  6. R
  7. JavaScript
  8. PHP
  9. Go
  10. Swift

The story’s author, Stephen Cass, noted not much change in the most popular languages: “Python has continued its upward trajectory from last year and jumped two places to the No. 1 slot, though the top four—Python, C, Java, and C++—all remain very close in popularity.”

What Do The PYPL Say?

The IEEE Spectrum annual survey isn’t the only game in town. The PYPL (PopularitY of Programming Language) index uses raw data from Google Trends to see how often people search for language tutorials. The people behind PYPL say, “If you believe in collective wisdom, the PYPL Popularity of Programming Language index can help you decide which language to study, or which one to use in a new software project.”

Here’s their Top 10:

  1. Java
  2. Python
  3. JavaScript
  4. PHP
  5. C#
  6. C
  7. R
  8. Objective-C
  9. Swift
  10. MATLAB

Asking the RedMonk

Stephen O’Grady describes RedMonk’s Programming Language Rankings, as of January 2018, as being based on two key external sources:

We extract language rankings from GitHub and Stack Overflow, and combine them for a ranking that attempts to reflect both code (GitHub) and discussion (Stack Overflow) traction. The idea is not to offer a statistically valid representation of current usage, but rather to correlate language discussion and usage in an effort to extract insights into potential future adoption trends.

The top languages found by RedMonk look similar to PYPL and IEEE Spectrum:

  1. JavaScript
  2. Java
  3. Python
  4. PHP
  5. C#
  6. C++
  7. CSS
  8. Ruby
  9. C
  10. Swift & Objective-C (tied)

Use the Best Tool for the Job

It would be tempting to use data like this to say, “From now on, everything we’re doing will be in Java,” or “We’re going to do all web coding in JavaScript and use C++ for applications.” Don’t do that. That would be like saying, “We’re going to make everything in the microwave.” Sometimes you want the microwave, sure, but sometimes you want the crockpot, or the regular oven, or sous vide, or the propane grill in your back yard.

The goal is productivity. Use agile processes like Scrum to determine what your development teams are going to build, where those applications will run, and which features must be included. Then, let the developers choose languages that fit best – and that includes supporting experimentation. Let them use R. Let them do some coding in Python, if it improves productivity, and gets a better job done faster.

As the saying goes, you can’t manage what you don’t measure. In a data-driven organization, the best tools for measuring performance are business intelligence (BI) and analytics engines, which require data. And that explains why data warehouses continue to play such a crucial role in business: They often provide the source of that data, by rolling up and summarizing key information from a variety of sources.
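
For a feel of what “rolling up and summarizing” means in practice, here’s a toy Python/pandas sketch with made-up sales data, standing in for the aggregation a data warehouse performs at far larger scale:

```python
import pandas as pd

# Made-up transaction-level detail from an operational system.
orders = pd.DataFrame({
    "region":  ["West", "West", "East", "East", "East"],
    "product": ["A", "B", "A", "A", "B"],
    "revenue": [1200.0, 450.0, 980.0, 310.0, 770.0],
})

# Roll up to the aggregates a BI dashboard would actually query.
summary = orders.groupby(["region", "product"], as_index=False).agg(
    total_revenue=("revenue", "sum"),
    order_count=("revenue", "size"),
)
print(summary)
```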

Data warehouses, which are themselves relational databases, can be complex to set up and manage on a daily basis. They typically require significant human involvement from database administrators (DBAs). In a large enterprise, a team of DBAs ensures that the data warehouse is extracting data from those disparate data sources, as well as accommodating new and changed data sources—and making sure the extracted data is summarized properly and stored in a structured manner that can be handled by other applications, including those BI and analytics tools.

On top of that, the DBAs are managing the data warehouse’s infrastructure. That includes server processor utilization, storage efficiency, data security, backups, and more.

However, the labor-intensive nature of data warehouses is about to change, with the advent of Oracle Autonomous Data Warehouse Cloud, announced in October 2017. The self-driving, self-repairing, self-tuning functionality of Oracle’s Data Warehouse Cloud is good for the organization—and good for the DBAs.

Data-driven organizations need timely, up-to-date business intelligence. This can feed instant decision-making, short-term predictions and business adjustments, and long-term strategy. If the data warehouse goes down, slows down, or lacks some information feeds, the impact can be significant. No data warehouse may mean no daily operational dashboards and reports, or inaccurate dashboards or reports.

For C-level executives, Autonomous Data Warehouse can improve the value of the data warehouse. This boosts the responsiveness of business intelligence and other important applications, by improving availability and performance.

Stop worrying about uptime. Forget about disk-drive failures. Move beyond performance tuning. DBAs, you have a business to optimize.

Read more in my article, “Autonomous Capabilities Will Make Data Warehouses — And DBAs — More Valuable.”

“We estimate that malicious cyber activity cost the U.S. economy between $57 billion and $109 billion in 2016.” That’s from a February 2018 report, “The Cost of Malicious Cyber Activity to the U.S. Economy,” by the Council of Economic Advisers – part of the Office of the President. It’s a big deal.

The White House is concerned about a number of sources of cyber threats. Those include attacks from nation-states, corporate competitors, hacktivists, organized criminal groups, opportunists, and company insiders.

It’s not always easy to tell exactly who is behind a given event, or even how to categorize those events. Still, the report says that incidents break down as roughly 25% insiders and 75% outsiders. “Overall, 18 percent of threat actors were state-affiliated groups, and 51 percent involved organized criminal groups,” it says.

It’s More Than Stolen Valuables

The report points out that the economic cost includes many factors: the stolen property, the cost of repairs, and lost-opportunity costs. For example, the report says, “Consider potential costs of a DDoS attack. A DDoS attack interferes with a firm’s online operations, causing a loss of sales during the period of disruption. Some of the firm’s customers may permanently switch to a competing firm due to their inability to access online services, imposing additional costs in the form of the firm’s lost future revenue. Furthermore, a high-visibility attack may tarnish the firm’s brand name, reducing its future revenues and business opportunities.”

However, it’s not always that cut-and-dried. Consider intellectual property theft; the report explains:

The costs incurred by a firm in the wake of IP theft are somewhat different. As the result of IP theft, the firm no longer has a monopoly on its proprietary findings because the stolen IP may now potentially be held and utilized by a competing firm. If the firm discovers that its IP has been stolen (and there is no guarantee of such discovery), attempting to identify the perpetrator or obtain relief via legal process could result in sizable costs without being successful, especially if the IP was stolen by a foreign actor. Hence, expected future revenues of the firm could decline. The cost of capital is likely to increase because investors will conclude that the firm’s IP is both sought-after and not sufficiently protected.

Indeed, this last example is particularly worrisome. Why? “IP theft is the costliest type of malicious cyber activity. Moreover, security breaches that enable IP theft via cyber may go undetected for years, allowing the periodic pilfering of corporate IP.”

Affecting the Economy

Do investors worry about cyber incidents? You bet. And it hits the share price of companies. According to the White House report, “We find that the stock price reaction to the news of an adverse cyber event is significantly negative. Firms on average lost about 0.8 percent of their market value in the seven days following news of an adverse cyber event.”

How much is that? Given that the study looked at large companies, “We estimate that, on average, the firms in our sample lost $498 million per adverse cyber event. The distribution of losses is highly right-skewed. When we trim the sample of estimated losses at 1 percent on each side of the distribution, the average loss declines to $338 million per event.” That’s significant.
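That trimming step is a standard guard against outliers in a right-skewed sample: sort the per-event losses, drop the extremes at each end, and average what remains. Here is a toy sketch in Java with made-up numbers, showing how a single huge event drags up the plain mean. (The toy sample is small, so it trims 10 percent a side rather than the report’s 1 percent.)

    import java.util.Arrays;

    public class TrimmedMean {
        // Average the values after dropping the lowest and highest 'frac' of the sample.
        static double trimmedMean(double[] values, double frac) {
            double[] sorted = values.clone();
            Arrays.sort(sorted);
            int cut = (int) Math.floor(sorted.length * frac);
            return Arrays.stream(sorted, cut, sorted.length - cut).average().orElse(Double.NaN);
        }

        public static void main(String[] args) {
            // Hypothetical per-event losses in $M; one enormous outlier skews the plain mean.
            double[] losses = {10, 12, 15, 20, 25, 30, 40, 55, 80, 5000};
            System.out.printf("plain mean: %.1f%n", Arrays.stream(losses).average().orElse(0)); // 528.7
            System.out.printf("trimmed mean: %.1f%n", trimmedMean(losses, 0.10));               // 34.6
        }
    }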

Small and mid-sized companies can be harder hit by incidents, because they are less resilient. “Smaller firms, and especially those with few product lines, can easily go out of business if they are attacked or breached.”

Overall, cyber incidents cost the U.S. economy between $57 billion and $109 billion in 2016. That’s between 0.31% and 0.58% of that year’s gross domestic product (GDP), says the report. That’s a lot, but it could be worse. Let’s hope this amount isn’t driven up by, say, a full-fledged cyberwar or a significant terrorist incident.

The “throw it over the wall” problem is familiar to anyone who’s seen designers and builders create something that can’t actually be deployed or maintained out in the real world. In the tech world, avoiding this problem is a big part of what gave rise to DevOps.

DevOps combines “development” and “IT operations.” It refers to a set of practices that help software developers and IT operations staff work better, together. DevOps emerged about a decade ago with the goal of tearing down the silos between the two groups, so that companies can get new apps and features out the door faster, with fewer mistakes and less downtime in production.

DevOps is now widely accepted as a good idea, but that doesn’t mean it’s easy. It requires cultural shifts by two departments that not only have different working styles and toolsets, but where the teams may not even know or respect each other.

When DevOps is properly embraced and implemented, it can help get better software written more quickly. DevOps can make applications easier and less expensive to manage. It can simplify the process of updating software to respond to new requirements. Overall, a DevOps mindset can make your organization more competitive because you can respond quickly to problems, opportunities and industry pressures.

Is DevOps the right strategic fit for your organization? Here are six CEO-level insights about DevOps to help you consider that question:

  1. DevOps can and should drive business agility. DevOps often means supporting a more rapid rate of change in terms of delivering new software or updating existing applications. And it doesn’t just mean programmers knock out code faster. It means getting those new apps or features fully deployed and into customers’ hands. “A DevOps mindset represents development’s best ability to respond to business pressures by quickly bringing new features to market and we drive that rapid change by leveraging technology that lets us rewire our apps on an ongoing basis,” says Dan Koloski, vice president of product management at Oracle.

For the full story, see my essay for the Wall Street Journal, “Tech Strategy: 6 Things CEOs Should Know About DevOps.”

Simplified Java coding. Less garbage. Faster programs. Those are among the key features in the newly released Java 10, which arrived in developers’ hands only six months after the debut of Java 9 in September.

This pace is a significant change from Java’s previous cycle of one large release every two to three years. With its faster release cadence, Java is poised to provide developers with innovations twice every year, making the language and platform more attractive and competitive. Instead of waiting for a huge omnibus release, the Java community can choose to include new features as soon as those features are ready, in the next six-month Java release train. This gives developers access to the latest APIs, functions, language additions, and JVM updates much faster than ever before.

Java 10 is the first release on the new six-month schedule. It builds incrementally on the significant new functionality that appeared in Java 9, which had a multiyear gestation period.

Java 10 delivers 12 JDK Enhancement Proposals (JEPs). Here’s the complete list:

  • Local-Variable Type Inference
  • Consolidate the JDK Forest into a Single Repository
  • Garbage-Collector Interface
  • Parallel Full GC for G1
  • Application Class-Data Sharing
  • Thread-Local Handshakes
  • Remove the Native-Header Generation Tool (javah)
  • Additional Unicode Language-Tag Extensions
  • Heap Allocation on Alternative Memory Devices
  • Experimental Java-Based JIT Compiler
  • Root Certificates
  • Time-Based Release Versioning

For a deeper look at three of the most significant JEPs – Local-Variable Type Inference, Parallel Full GC for G1, and the Experimental Java-Based JIT Compiler – see my essay for Forbes, “What Java 10 And Java’s New 6-Month Release Cadence Mean For Developers.”
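To give a taste of the first of those, here is a small example of Local-Variable Type Inference (JEP 286) in Java 10; the class and variable names are mine, for illustration. The new var keyword lets the compiler infer a local variable’s static type from its initializer. The variables remain strongly typed: this is inference, not dynamic typing.

    import java.util.ArrayList;
    import java.util.HashMap;

    public class VarDemo {
        public static void main(String[] args) {
            // Before Java 10, the type appeared on both sides of the assignment:
            ArrayList<String> oldStyle = new ArrayList<String>();
            oldStyle.add("legacy");

            // With var, the compiler infers ArrayList<String> from the initializer:
            var names = new ArrayList<String>();
            names.add("Duke");

            var counts = new HashMap<String, Integer>(); // inferred as HashMap<String, Integer>
            counts.put("Duke", 1);

            // Inference works in enhanced for loops, too;
            // entry is inferred as Map.Entry<String, Integer>.
            for (var entry : counts.entrySet()) {
                System.out.println(entry.getKey() + " -> " + entry.getValue());
            }
        }
    }

Note that var works only for local variables with initializers (plus for-loop variables); fields, method parameters, and return types still require explicit types.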

Blockchain is a distributed digital ledger technology in which blocks of transaction records can be added and viewed—but can’t be deleted or changed without detection. Here’s where the name comes from: a blockchain is an ever-growing sequential chain of transaction records, clumped together into blocks. There’s no central repository of the chain, which is replicated in each participant’s blockchain node, and that’s what makes the technology so powerful. Yes, blockchain was originally developed to underpin Bitcoin and is essential to the trust required for users to trade digital currencies, but that is only the beginning of its potential.

Blockchain neatly solves the problem of ensuring the validity of all kinds of digital records. What’s more, blockchain can be used for public transactions as well as for private business, inside a company or within an industry group. “Blockchain lets you conduct transactions securely without requiring an intermediary, and records are secure and immutable,” says Mark Rakhmilevich, product management director at Oracle. “It also can eliminate offline reconciliations that can take hours, days, or even weeks.”

That’s the power of blockchain: an immutable digital ledger for recording transactions. It can be used to power anonymous digital currencies—or farm-to-table vegetable tracking, business contracts, contractor licensing, real estate transfers, digital identity management, and financial transactions between companies or even within a single company.

“Blockchain doesn’t have to just be used for accounting ledgers,” says Rakhmilevich. “It can store any data, and you can use programmable smart contracts to evaluate and operate on this data. It provides nonrepudiation through digitally signed transactions, and the stored results are tamper proof. Because the ledger is replicated, there is no single source of failure, and no insider threat within a single organization can impact its integrity.”

It’s All About Distributed Ledgers

Several simple concepts underpin any blockchain system. The first is the block, which is a batch of one or more transactions, grouped together and hashed. The hashing process produces an error-checking and tamper-resistant code that will let anyone viewing the block see if it has been altered. The block also contains the hash of the previous block, which ties them together in a chain. The backward hashing makes it extremely difficult for anyone to modify a single block without detection.
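As a concrete illustration of that backward hashing, here is a deliberately tiny sketch in Java. It is not any production blockchain, just the core idea: each block stores its predecessor’s hash, so altering an earlier block’s contents changes that block’s hash and visibly breaks every later link.

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;

    public class TinyBlock {
        final String prevHash;      // hash of the previous block: the "chain" link
        final String transactions;  // the batched transaction records
        final String hash;          // SHA-256 over prevHash + transactions

        TinyBlock(String prevHash, String transactions) throws Exception {
            this.prevHash = prevHash;
            this.transactions = transactions;
            this.hash = sha256(prevHash + transactions);
        }

        static String sha256(String input) throws Exception {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            byte[] digest = md.digest(input.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) hex.append(String.format("%02x", b));
            return hex.toString();
        }

        public static void main(String[] args) throws Exception {
            TinyBlock genesis = new TinyBlock("0", "Alice pays Bob 10");
            TinyBlock second  = new TinyBlock(genesis.hash, "Bob pays Carol 4");
            // Tampering with the genesis block's data would change genesis.hash,
            // so second.prevHash would no longer match: the chain is tamper-evident.
            System.out.println(second.prevHash.equals(genesis.hash)); // true
        }
    }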

A chain contains collections of blocks, which are stored on decentralized, distributed servers. The more the better, with every server containing the same set of blocks and the latest values of information, such as account balances. Multiple transactions are handled within a single block using an algorithm called a Merkle tree, or hash tree, which provides fault and fraud tolerance: if a server goes down, or if a block or chain is corrupted, the missing data can be reconstructed by polling other servers’ chains.
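Here is a similarly hedged sketch of the Merkle tree idea: transaction hashes are paired and re-hashed, level by level, until a single root remains, so one short value commits to every transaction in the block. Duplicating the last hash when a level has an odd count is one common convention (Bitcoin uses it), not a universal rule.

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.util.ArrayList;
    import java.util.List;

    public class MerkleSketch {
        static String sha256(String s) throws Exception {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            StringBuilder hex = new StringBuilder();
            for (byte b : md.digest(s.getBytes(StandardCharsets.UTF_8)))
                hex.append(String.format("%02x", b));
            return hex.toString();
        }

        // Pair up hashes and re-hash until a single root commits to every transaction.
        static String merkleRoot(List<String> hashes) throws Exception {
            List<String> level = new ArrayList<>(hashes);
            while (level.size() > 1) {
                List<String> next = new ArrayList<>();
                for (int i = 0; i < level.size(); i += 2) {
                    String left = level.get(i);
                    // Odd hash out: pair it with itself.
                    String right = (i + 1 < level.size()) ? level.get(i + 1) : left;
                    next.add(sha256(left + right));
                }
                level = next;
            }
            return level.get(0);
        }

        public static void main(String[] args) throws Exception {
            List<String> txs = List.of(sha256("tx1"), sha256("tx2"), sha256("tx3"));
            System.out.println("Merkle root: " + merkleRoot(txs));
        }
    }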

And while the chain itself should be open for validation by any participant, some chains can be implemented with some form of access control to limit viewing of specific data fields. That way, participants can view relevant data, but not everything in the chain. A customer might be able to verify that a contractor has a valid business license and see the firm’s registered address and list of complaints—but not see the names of other customers. The state licensing board, on the other hand, may be allowed to access the customer list or see which jobs are currently in progress.

When originally conceived, blockchain had a narrow set of protocols. They were designed to govern the creation of blocks, the grouping of hashes into the Merkle tree, the viewing of data encapsulated into the chain, and the validation that data has not been corrupted or tampered with. Over time, creators of blockchain applications (such as the many competing digital currencies) innovated and created their own protocols—which, due to their independent evolutionary processes, weren’t necessarily interoperable. By contrast, the success of general-purpose blockchain services, which might encompass computing services from many technology, government, and business players, created the need for industry standards—such as Hyperledger, a Linux Foundation project.

Read more in my feature article in Oracle Magazine, March/April 2018, “It’s All About Trust.”

DevOps is a technology discipline well suited to cloud-native application development. When it takes only a few mouse clicks to create or manage cloud resources, why wouldn’t developers and IT operations teams work in sync to get new apps out the door and in front of users faster? The DevOps culture and tactics have done much to streamline everything from coding to software testing to application deployment.

Yet far from every organization has embraced DevOps, and not every organization that has tried DevOps has found the experience transformative. Perhaps that’s because the idea is relatively young (the term was coined around 2009), suggests Javed Mohammed, systems community manager at Oracle, or perhaps because different organizations are at such different spots in DevOps’ technology adoption cycle. That idea—about where we are in the adoption of DevOps—became a central theme of a recent podcast discussion among tech experts. Following are some highlights.

Confusion about DevOps can arise because DevOps affects dev and IT teams in many ways. “It can apply to the culture piece, to the technology piece, to the process piece—and even how different teams interact, and how all of the different processes tie together,” says Nicole Forsgren, founder and CEO of DevOps Research and Assessment LLC and co-author of Accelerate: The Science of Lean Software and DevOps.

The adoption and effectiveness of DevOps within a team depends on where each team is, and where organizations are. One team might be narrowly focused on the tech used to automate software deployment to the public, while another is looking at the culture and communication needed to release new features on a weekly or even daily basis. “Everyone is at a very, very different place,” Forsgren says.

Indeed, says Forsgren, some future-thinking organizations are starting to talk about what ‘DevOps Next’ is, extending the concept of developer-led operations beyond common best practices. At the same time, in other companies, there’s no DevOps. “DevOps isn’t even on their radar,” she sighs. Many experts, including Forsgren, see that DevOps is here, is working, and is delivering real value to software teams today—and is helping businesses create and deploy better software faster and less expensively. That’s especially true when it comes to cloud-native development, or when transitioning existing workloads from the data center into the cloud.

Read more in my essay, “DevOps: Sometimes Incredibly Transformative, Sometimes Not So Much.”

New phones are arriving nearly every day. Samsung unveiled its latest Galaxy S9 flagship. Google is selling lots of its Pixel 2 handset. Apple continues to push its iPhone X. The Vivo Apex concept phone, out of China, has a pop-up selfie camera. And Nokia has reintroduced its famous 8110 model – the slide-down keyboard model featured in the 1999 movie, “The Matrix.”

Yet there is a slowdown happening. It’s hard to say whether it’s merely seasonal, or an indication that, despite the latest features, it’s getting harder to distinguish a new phone from its predecessors.

According to the 451 report, “Consumer Smartphones: 90 Day Outlook: Smartphone Buying Slows but Apple and Samsung Demand Strong,” released February 2018: “Demand for smartphones is showing a seasonal downtick, with 12.7% of respondents from 451 Research’s Leading Indicator panel saying they plan on buying a smartphone in the next 90 days.” However, “Despite a larger than expected drop from the September survey, next 90 day smartphone demand is at its highest December level in three years.”

451 reports that over the next 90 days,

Apple (58%) leads in planned smartphone buying but is down 11 points. Samsung (15%) is up 2 points, as consumer excitement builds around next-gen Galaxy S9 and S9+ devices, scheduled to be released in March. Google (3%) is showing a slight improvement, buoyed by the October release of its Pixel 2 and 2 XL handsets. Apple’s latest releases are the most in-demand among planned iPhone buyers: iPhone X (37%; down 6 points), iPhone 8 (21%; up 5 points) and iPhone 8 Plus (18%; up 4 points).

Interestingly, Apple’s famous brand loyalty may be slipping. Says 451, “Google leads in customer satisfaction with 61% of owners saying they’re Very Satisfied. Apple is close behind, with 59% of iPhone owners saying they’re Very Satisfied. That said, it’s important to keep in mind that iPhone owners comprise 57% of smartphone owners in this survey vs. 2% who own a Google Pixel smartphone.”

Everyone Loves the Galaxy S9

Cnet was positively gushing over the new Samsung phone, writing,

A bold new camera, cutting-edge processor and a fix to a galling ergonomic pitfall — all in a body that looks nearly identical to last year’s model. That, in a nutshell, is the Samsung Galaxy S9 (with a 5.8-inch screen) and its larger step-up model, the Galaxy S9 Plus, which sports an even bigger 6.2-inch screen.

Cnet calls out two features. First, a camera upgrade that includes a variable aperture designed to capture better low-light images – which is where most phones really fall down.

The other? “The second improvement is more of a fix. Samsung moved the fingerprint reader from the side of the rear camera to the center of the phone’s back, fixing what was without a doubt the Galaxy S8’s most maddening design flaw. Last year’s model made you stretch your finger awkwardly to hit the fingerprint target. No more.”

The Verge agrees with that assessment:

… the Galaxy S9 is actually a pretty simple device to explain. In essence, it’s the Galaxy S8, with a couple of tweaks (like moving the fingerprint sensor to a more sensible location), and all the specs jacked up to the absolute max for the most powerful device on the market — at least, on paper.

Pop Goes the Camera

The Vivo concept phone, the Apex, has a little pop-up front-facing camera designed for selfies. Says TechCrunch, this is part of a trend:

With shrinking bezels, gadget makers have to look for new solutions like the iPhone X notch. Others still, like Vivo and Huawei, are looking at more elegant solutions than carving out a bit of the screen.

For Huawei, this means using a false key within the keyboard to house a hidden camera. Press the key and it pops up like a trapdoor. We tried it out and though the housing is clever, the placement makes for awkward photos — just make sure you trim those nose hairs before starting your conference call.

Vivo has a similar take to Huawei, though the camera is embedded on a sliding tray that pops up out of the top of the phone.

So, there’s still room for innovation. A little room. Beyond cameras, and some minor ergonomic improvements, it’s getting harder and harder to differentiate one phone from another – and possibly, to convince buyers to shell out for upgrades. At least, that is, until 5G handsets hit the market.

Spectre and Meltdown are two separate computer security problems. They are often lumped together because they were revealed around the same time – and both exploit vulnerabilities in many modern microprocessors. The website MeltdownAttack, from the Graz University of Technology, explains both Spectre and Meltdown very succinctly – and also links to official security advisories from the industry:

Meltdown breaks the most fundamental isolation between user applications and the operating system. This attack allows a program to access the memory, and thus also the secrets, of other programs and the operating system. If your computer has a vulnerable processor and runs an unpatched operating system, it is not safe to work with sensitive information without the chance of leaking the information. This applies both to personal computers as well as cloud infrastructure. Luckily, there are software patches against Meltdown.

Spectre breaks the isolation between different applications. It allows an attacker to trick error-free programs, which follow best practices, into leaking their secrets. In fact, the safety checks of said best practices actually increase the attack surface and may make applications more susceptible to Spectre. Spectre is harder to exploit than Meltdown, but it is also harder to mitigate. However, it is possible to prevent specific known exploits based on Spectre through software patches.

For now, nearly everyone is dependent on microprocessor makers and operating system vendors to develop, test, and distribute patches to mitigate both flaws. In the future, new microprocessors should be immune to those exploits – but because of the long process of developing new processors, we are unlikely to see computers using such next-generation chips for several years.

So, expect Spectre and Meltdown to be around for many years to come. Some devices will remain unpatched — because some devices always remain unpatched. Even after new computers become available, it will take years to replace all the old machines.

Wide-Ranging Effects

Just about everything is affected by these flaws. Says the Graz University website:

Which systems are affected by Meltdown? Desktop, Laptop, and Cloud computers may be affected by Meltdown. More technically, every Intel processor which implements out-of-order execution is potentially affected, which is effectively every processor since 1995 (except Intel Itanium and Intel Atom before 2013). We successfully tested Meltdown on Intel processor generations released as early as 2011. Currently, we have only verified Meltdown on Intel processors. At the moment, it is unclear whether AMD processors are also affected by Meltdown. According to ARM, some of their processors are also affected.

 Which systems are affected by Spectre? Almost every system is affected by Spectre: Desktops, Laptops, Cloud Servers, as well as Smartphones. More specifically, all modern processors capable of keeping many instructions in flight are potentially vulnerable. In particular, we have verified Spectre on Intel, AMD, and ARM processors.

Ignore Spectre and Meltdown at your peril.

Patch. Sue. Repeat.

Many techies are involved in trying to handle the Spectre and Meltdown issues. So are attorneys. Intel alone has disclosed dozens of lawsuits in its annual report filing with the U.S. Securities and Exchange Commission:

As of February 15, 2018, 30 customer class action lawsuits and two securities class action lawsuits have been filed. The customer class action plaintiffs, who purport to represent various classes of end users of our products, generally claim to have been harmed by Intel’s actions and/or omissions in connection with the security vulnerabilities and assert a variety of common law and statutory claims seeking monetary damages and equitable relief.

Given that there are many microprocessor makers involved (it’s not only Intel, remember), expect lots more patches. And lots more lawsuits.

Companies can’t afford downtime. Employees need access to their applications and data 24/7, and so do other business applications, manufacturing and logistics management systems, and security monitoring centers. Anyone who thinks that the brute force effort of their hard-working IT administrators is enough to prevent system downtime just isn’t facing reality.

Traditional systems administrators and their admin tools can’t keep up with the complexity inherent in any modern enterprise. A recent survey of the Oracle Applications Users Group has found that despite significant progress in systems management automation, many customers still report that more than 80% of IT issues are first discovered and reported by users. The number of applications is spiraling up, while data increases at an even more rapid rate.

The boundaries between systems are growing more complex, especially with cloud-based and hybrid-cloud architectures. That reality is why Oracle, after analyzing a survey of its industry-leading customers, recently predicted that by 2020, more than 80% of application infrastructure operations will be managed autonomously.

Autonomously is an important word here. It means not only doing mundane day-to-day tasks including monitoring, tuning, troubleshooting, and applying fixes automatically, but also detecting and rapidly resolving issues. Even when it comes to the most complex problems, machines can simplify the analysis—sifting through the millions of possibilities to present simpler scenarios, to which people then can apply their expertise and judgment of what action to take.

Oracle asked about the kinds of activities that IT system administrators perform on a daily, weekly, and monthly basis: things such as password resets, system reboots, software patches, and the like.

Expect that IT teams will soon reduce by several orders of magnitude the number of situations like those that need manual intervention. If an organization typically has 20,000 human-managed interventions per year, humans will need to touch only 20. The rest will be handled through systems that can apply automation combined with machine learning, which can analyze patterns and react faster than human admins to enable preventive maintenance, performance optimization, and problem resolution.
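What does that pattern analysis look like in miniature? Here is a deliberately simplified, hypothetical sketch, not Oracle’s implementation: flag any metric reading that strays more than a few standard deviations from its recent history, so that a human only ever sees the rare exceptions.

    import java.util.ArrayDeque;
    import java.util.Deque;

    public class MetricWatch {
        private final Deque<Double> window = new ArrayDeque<>();
        private final int capacity;
        private final double threshold; // how many standard deviations counts as anomalous

        MetricWatch(int capacity, double threshold) {
            this.capacity = capacity;
            this.threshold = threshold;
        }

        // Returns true if the new reading is an outlier versus the recent window.
        boolean observe(double reading) {
            boolean anomalous = false;
            if (window.size() == capacity) {
                double mean = window.stream().mapToDouble(d -> d).average().orElse(0);
                double var = window.stream()
                    .mapToDouble(d -> (d - mean) * (d - mean)).average().orElse(0);
                double stddev = Math.sqrt(var);
                anomalous = stddev > 0 && Math.abs(reading - mean) > threshold * stddev;
                window.removeFirst();
            }
            window.addLast(reading);
            return anomalous;
        }

        public static void main(String[] args) {
            MetricWatch cpu = new MetricWatch(60, 3.0); // last 60 samples, 3-sigma rule
            for (int i = 0; i < 60; i++) cpu.observe(40 + Math.random() * 2); // normal load
            System.out.println(cpu.observe(95.0)); // true: spike flagged for automated response
        }
    }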

Read more in my article for Forbes, “Prediction: 80% of Routine IT Operations Will Soon Be Solved Autonomously.”

On February 7, 2018, the carrier Swisscom admitted that a security lapse exposed sensitive information about 800,000 customers. The security failure was at one of Swisscom’s sales partners.

This is what can happen when a business gives its partners access to critical data. The security chain is only as good as the weakest link – and it can be difficult to ensure that partners are taking sufficient care, even if they pass an onboarding audit. Swisscom says,

In autumn of 2017, unknown parties misappropriated the access rights of a sales partner, gaining unauthorised access to customers’ name, address, telephone number and date of birth.

That’s pretty bad, but what came next was even worse, in my opinion. “Under data protection law this data is classed as ‘non-sensitive’,” said Swisscom. That’s distressing, because that’s exactly the sort of data needed for identity theft. But we digress.

Partners and Trust

Partners can be the way into an organization. Swisscom claims that new restrictions, such as preventing high-volume queries and using two-factor authentication, mean such an event can never occur again, which seems optimistic: “Swisscom also made a number of changes to better protect access to such non-sensitive personal data by third-party companies… These measures mean that there is no chance of such a breach happening again in the future.”
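One of those countermeasures, throttling high-volume queries, is straightforward in principle. Here is a minimal token-bucket sketch of the idea, hypothetical and not Swisscom’s actual system: each partner account gets a sustained query rate plus a bounded burst, so a compromised account cannot bulk-harvest customer records.

    public class TokenBucket {
        private final long capacity;        // maximum burst size
        private final double refillPerSec;  // sustained query rate allowed
        private double tokens;
        private long lastRefillNanos;

        TokenBucket(long capacity, double refillPerSec) {
            this.capacity = capacity;
            this.refillPerSec = refillPerSec;
            this.tokens = capacity;
            this.lastRefillNanos = System.nanoTime();
        }

        // Returns true if the partner's query may proceed, false if it should be throttled.
        synchronized boolean tryQuery() {
            long now = System.nanoTime();
            tokens = Math.min(capacity, tokens + (now - lastRefillNanos) / 1e9 * refillPerSec);
            lastRefillNanos = now;
            if (tokens >= 1) { tokens -= 1; return true; }
            return false;
        }

        public static void main(String[] args) {
            TokenBucket partnerApi = new TokenBucket(100, 5.0); // 100-query burst, 5/sec sustained
            int allowed = 0;
            for (int i = 0; i < 1000; i++) if (partnerApi.tryQuery()) allowed++;
            System.out.println(allowed + " of 1000 rapid-fire queries allowed"); // roughly 100
        }
    }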

Let’s hope they are correct. But in the meantime, what can organizations do? First, ensure that all third parties that have access to sensitive data, such as intellectual property, financial information, and customer information, go through a rigorous security audit.

Tricia C. Bailey’s article, “Managing Third-Party Vendor Risk,” makes good recommendations for how to vet vendors – and also how to prepare at your end. For example, do you know what (and where) your sensitive data is? Do vendor contracts spell out your rights and responsibilities for security and data protection – and your vendor’s rights and responsibilities? Do you have a strong internal security policy? If your own house isn’t in order, you can’t expect a vendor to improve your security. After all, you might be the weakest link.

Unaccustomed to performing security audits on partners? Organizations like CA Veracode offer audit-as-a-service, such as with their Vendor Application Security Testing service. There are also vertical industry services: the HITRUST Alliance, for example, offers a standardized security audit process for vendors serving the U.S. healthcare industry with its Third Party Assurance Program.

Check the Back Door

Many vendors and partners require back doors into enterprise data systems. Those back doors, or remote access APIs, can be essential for the vendors’ line-of-business function. Take the Swisscom sales partner: It needs to be able to query Swisscom customers and add or update customer information in order to serve effectively as a sales organization.

Yet if the partner is breached, that back door can fall under the control of hackers, using the partner’s systems or credentials. In its 2017 Data Breach Investigations Report, Verizon reported that in regard to Point-of-Sale (POS) systems, “Almost 65% of breaches involved the use of stolen credentials as the hacking variety, while a little over a third employed brute force to compromise POS systems. Following the same trend as last year, 95% of breaches featuring the use of stolen credentials leveraged vendor remote access to hack into their customer’s POS environments.”

A Handshake Isn’t Good Enough

How secure is your business partner, your vendor, your contractor? If you don’t know, then you don’t know. If something goes wrong at your partners’ end, never forget that it may be your IP, your financials, and your customers’ data that is exposed. After all, whether or not you can recover damages from the partner in a lawsuit, your organization is the one that will pay the long-term price in the marketplace.

Savvy businesses have policies that prevent on-site viewing of pornography, in part to avoid creating a hostile work environment — and to avoid sexual harassment lawsuits. For security professionals, porn sites are also a dangerous source of malware.

That’s why human-resources policies should be backed up with technological measures. Those include blocking porn sites at the firewall, and using on-device controls to stop browsers from accessing such sites.
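At their simplest, such controls are just lookups against a category blocklist. Here is a toy sketch in Java; real products use large, vendor-maintained category feeds, not a hard-coded set. The check walks up the domain labels, so subdomains of a blocked domain are caught too.

    import java.util.Set;

    public class BlocklistFilter {
        // Hypothetical category feed; commercial filters ship far larger, updated lists.
        private static final Set<String> BLOCKED = Set.of("badsite.example", "adult.example");

        // Walk up the domain labels so subdomains of a blocked domain are caught too.
        static boolean isBlocked(String host) {
            String h = host.toLowerCase();
            while (!h.isEmpty()) {
                if (BLOCKED.contains(h)) return true;
                int dot = h.indexOf('.');
                if (dot < 0) break;
                h = h.substring(dot + 1);
            }
            return false;
        }

        public static void main(String[] args) {
            System.out.println(isBlocked("cdn.badsite.example")); // true
            System.out.println(isBlocked("news.example"));        // false
        }
    }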

Even that may not be enough, says Kaspersky Lab, in its report, “Naked online: cyberthreats facing users of adult websites and applications.” Why? Because naughty content and videos have gone mainstream, says the report:

Today, porn can be found not only on specialist websites, but also in social media networks and on social platforms like Twitter. Meanwhile, the ‘classic’ porn websites are turning into content-sharing platforms, creating loyal communities willing to share their videos with others in order to get ‘likes’ and ‘shares’.

This problem is not new, but it’s increasingly dangerous, thanks to the criminal elements on the Dark Web, which are advertising tools for weaponizing porn content. Says Kaspersky, “While observing underground and semi-underground market places on the dark web, looking for information on the types of legal and illegal goods sold there, we found that among the drugs, weapons, malware and more, credentials to porn websites were often offered for sale.”

So, what’s the danger? There are concerns about attacks on both desktop/notebook and mobile users. In the latter case, says Kaspersky,

  • In 2017, at least 1.2 million users encountered malware with adult content at least once. That is 25.4% of all users who encountered any type of Android malware.
  • Mobile malware is making extensive use of porn to attract users: Kaspersky Lab researchers identified 23 families of mobile malware that use porn content to hide their real functionality.
  • Malicious clickers, rooting malware, and banking Trojans are the types of malware that are most often found inside porn apps for Android.

That’s the type of malware that’s dangerous on a home network. It’s potentially ruinous if it provides a foothold onto an enterprise network not protected by intrusion detection/prevention systems or other anti-malware tech. The Kaspersky report goes into a lot of detail, and you should read it.

For another take on the magnitude of the problem: The Nielsen Company reported that more than 21 million Americans accessed adult websites on work computers – that is, 29% of working adults. Bosses are in on it too. In 2013, Time Magazine said that a survey of 200 U.S.-based data security analysts revealed that 40 percent had removed malware from a senior executive’s computer, phone, or tablet after the executive visited a porn website.

What Can You Do?

Getting rid of pornography isn’t easy, but it’s not rocket science either. Start with a strong policy. Work with your legal team to make sure the policy is both legal and comprehensive. Get employee feedback on the policy, to help generate buy-in from executives and the rank-and-file.

Once the policy is finalized, communicate it clearly. Train employees on what to do, what not to do… and the employment ramifications for violating the policy. Explain that this policy is not just about harassment, but also about information security.

Block, block, block. Block at the firewall, block at proxy servers, block on company-owned devices. Block on social media. Make sure that antivirus is up to date. Review log files.

Finally, take this seriously. This isn’t a case of giggling (or eye-rolling) about boys-being-boys, or harmless diversions comparable to work-time shopping on eBay. Porn isn’t only offensive in the workplace, but it’s also a gateway to the Dark Web, criminals, and hackers. Going after porn isn’t only about being Victorian about naughty content. It’s about protecting your business from hackers.