David McLeod, CISO, Cox Enterprises

“Training and recovery.” That’s where David McLeod, chief information security officer of Cox Enterprises, says that CISOs should spend their money in 2020.

Training often focuses on making employees less of a security risk. That includes teaching them what not to click on and how to proactively protect the information that is a part of their everyday work. McLeod sees employees as something more powerful.

“Train everyone so you have a wall of passionate people surrounding your business. I’m talking about creating a neighborhood watch,” McLeod says. “I find people who are eager to know what they can do, and they help expand our culture of proactive protection akin to a neighborhood watch. So if I’m going to drive security for the least cost and the highest effectiveness, I’m always increasing my neighborhood watch.”

Recovery isn’t far behind, though, because sooner or later, there will likely be a security incident, such as a breach, ransomware attack, or worse. “Some hacker’s going to get in. It’s all about recovery. It’s all about keeping the business going. You can do a lot of harm to a business if you have to shut down your revenue systems for three days,” McLeod says.

Read more from David McLeod and from other top experts in my story for Forbes, “Chief Information Security Officer Priorities For 2020.”

Java Magazine home page

I’m back in the saddle again, if by “saddle” you mean editing a magazine. Today, I took over the helm of Oracle’s Java Magazine, one of the world’s leading publications for software developers, with about 260,000 subscribers. The previous editor-in-chief, Andrew Binstock, moved on after five years to work on other projects; he leaves big shoes to fill.

As Andrew wrote,

This is my last issue of Java Magazine. After five very enjoyable years at the helm, I’m ready to take on other challenges, including getting back to working on my preferred coding projects. I will surely pop up here and there with articles and reviews (likely even in this magazine). If you’ve enjoyed my work, I invite you to follow me on Twitter (@platypusguy) or to reach out to me on LinkedIn, where I accept all invitations. I am currently participating in interviewing prospective successors and I’ll make sure that Oracle has a good person in place to carry on. From the bottom of my heart, thank you all for being readers; and to many of you, I send additional gratitude for your thoughtful comments and suggestions over the years. It’s been truly an honor.

I am honored to follow in Andrew’s footsteps.

I was written about in “P2P Payments Go Mainstream in Canada,” by Pete Reville in PaymentsJournal:

One of the biggest hurdles in the adoption of mobile payments is consumer comfort. That is to say, in order for consumers to adopt digital payments there has to be a level of trust, familiarity, and acceptance of digital payments that will entice consumers to use digital methods over older, more “engrained” methods. In fact, one of the biggest barriers to digital payments has always been a perception that the “current method works just fine” or “I see no need to switch”.

Courtesy of PaymentsJournal

Well, things are changing. We’ve all read the stories of digital payment successes in places like Kenya and China. Alas, in economies that have had card-based systems in place, adoption of digital – phone based – payments has been slower to gain popularity.

With all this in mind, I was very interested to read a commentary in Forbes about the adoption of P2P payments in Canada. In this piece, Canada Embraces Digital Payments, With Some Behind-The-Scenes Help, by Alan Zeichick from Oracle, he points to the rapid rise of one P2P solution, Interac e-Transfer.

Person-to-person (P2P) payments are one of the fastest-growing segments of business for Interac. Its P2P service, called Interac e-Transfer, saw 371.4 million transactions in 2018, representing a 54% increase over 2017. The amount of money involved is significant, too: CAN$132.8 billion in 2018, a 45% increase over 2017.

There’s more. It’s a nice piece. Thank you, PaymentsJournal.

Canadian banknote

Like consumers and merchants all around the world, Canadians have embraced digital payments instead of cash and checks. The growth rate is staggering, as evidenced by statistics provided by Interac, which processes many of those payments.

Digital payments are used for payments from an individual or business to another individual or business. For example, a person might buy artwork from a gallery in Montréal, using a mobile wallet or an app that person’s bank provides. Interac provides behind-the-scenes technology that facilitates these payments with a high degree of security.

Person-to-person (P2P) payments are one of the fastest-growing segments of business for Interac. Its P2P service, called Interac e-Transfer, saw 371.4 million transactions in 2018, representing a 54% increase over 2017. The amount of money involved is significant, too: CAN$132.8 billion in 2018, a 45% increase over 2017.
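Those growth rates also imply the 2017 baselines. As a quick arithmetic check (the 2017 figures below are derived from the numbers above, not reported separately by Interac):

```python
# Back out the implied 2017 baselines from the 2018 figures and growth rates.
txns_2018, txn_growth = 371.4e6, 0.54       # 371.4 million transactions, +54%
value_2018, value_growth = 132.8e9, 0.45    # CAN$132.8 billion, +45%

txns_2017 = txns_2018 / (1 + txn_growth)        # ~241 million transactions
value_2017 = value_2018 / (1 + value_growth)    # ~CAN$91.6 billion
```

In other words, Interac e-Transfer added roughly 130 million transactions and more than CAN$41 billion in volume in a single year.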

Interac overall processes about 16 million transactions per day, the bulk of which are debit card transactions made at the point of sale. That growth has back-office technology implications—the rapid increase in online transactions is prompting Interac to move its core software to the cloud. A shift to cloud-based services ensures it can handle future growth and will strengthen the always-on resiliency of its platform.

Why the fast growth in Interac e-Transfer use? It starts with more consumer and business acceptance of digital payments in place of cash and checks, owing in part to their convenience, reliability, and security. With that interest, more financial institutions have signed up as partners, so they can offer customers the ability to transfer money and make digital payments right from their bank accounts.

Also, more businesses are relying on digital payments for business-to-business transfers. Approximately one in six Interac e-Transfer transactions is conducted by a business, letting those companies eliminate their reliance on checks and settle invoices in real time.

Read more about this in my story in Forbes, “Canada Embraces Digital Payments, With Some Behind-The-Scenes Help.”

Where will you find CARE? Think of trouble spots around the world where there are humanitarian disasters tied to extreme poverty, conflict, hunger, or a lack of basic healthcare or education. CARE is on the ground in these places, addressing survival needs, running clinics, and helping individuals, families, and communities rebuild their lives.

CARE’s scope is truly global. In 2018, the organization reached 56 million needy people through 965 programs in 95 countries, in places such as Mali, Jordan, Bangladesh, Brazil, the Democratic Republic of the Congo, Yemen, India, the Dominican Republic, and Niger.

CARE didn’t start out as a huge global charity, though. Founded in 1945, CARE provided a way for Americans to send lifesaving food and supplies to survivors of World War II — “CARE packages.” Today, it responds to dozens of disasters each year, reaching nearly 12 million people through its emergency programs. The rest of CARE’s work is through longer-term engagements, such as its work in Bihar State, in northern India.

Bihar, with a population of more than 110 million people, is one of India’s poorest states. Bihar has some of the country’s highest rates of infant and maternal mortality as well as childhood malnutrition. Since 2011, CARE has been working with the Bihar state government and other nongovernmental organizations (NGOs) to address those problems and to increase immunization rates for mothers and children.

The results to date have been significant. In Bihar, the percentage of 1-year-olds with completed immunization schedules increased from 12% to 84% between 2005 and 2018; there were nearly 20,000 fewer newborn deaths in 2016 than in 2011; and the maternal mortality rate fell by nearly half, from 312 to 165 maternal deaths per 100,000 live births between 2005 and 2018. How? Some of CARE’s initiatives involved improving healthcare facilities, mentoring nurses, supporting local social workers and midwives, and tracking the care given to weak and low-weight newborns.

Read more in my story for Forbes, “CARE’s Work In Bihar Shows Progress Is Possible Against The Toughest Problems.”

Solve the puzzle: A company’s critical customer data is in a multiterabyte on-premises database, and the digital marketing application that uses that data to manage and execute campaigns runs in the cloud. How can the cloud-based marketing software quickly access and leverage that on-premises data?

It’s a puzzle that one small consumer-engagement consulting company, Embel Assist, found its clients facing. The traditional solution, perhaps, would be to periodically replicate the on-premises database in the cloud using extract-transform-load (ETL) software, but that may take too much time and bandwidth, especially when processing terabytes of data. What’s more, the replicated data could quickly become out of date.

Using cloud-based development and computing resources, Embel Assist found another way to crack this problem. It created an app called EALink that acts as a smart interface between an organization’s customer data sources and Oracle Eloqua, a cloud-based marketing automation platform. EALink also shows how development using Oracle Cloud Infrastructure creates new opportunities for a small and creative company to take on big enterprise data challenges.

Say the on-premises CRM system for a drugstore chain has 1 million customer records. The chain wants an e-mail campaign to reach customers who made their last purchase more than a month ago, who live within 20 miles of one set of stores, and who purchased products related to a specific condition. Instead of exporting the entire database into Eloqua, EALink runs the record-extraction query on the CRM system and sends Eloqua only the minimum information needed to execute the campaign. And because the query runs when the campaign executes, the campaign information won’t be out of date.
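A sketch of that record-extraction step might look like the following. The table names, columns, and query shape are hypothetical assumptions for illustration only, not EALink’s actual implementation (a production version would also use parameterized queries rather than string formatting):

```python
from datetime import date, timedelta

def build_campaign_query(days_since_purchase=30, radius_miles=20,
                         condition="allergy"):
    """Build a record-extraction query to run on the on-premises CRM.

    Only the minimal fields the marketing platform needs are selected;
    the full customer records never leave the on-premises database.
    All table and column names here are invented for the sketch.
    """
    cutoff = (date.today() - timedelta(days=days_since_purchase)).isoformat()
    return (
        "SELECT c.customer_id, c.email, c.first_name "
        "FROM customers c JOIN purchases p ON p.customer_id = c.customer_id "
        f"WHERE p.purchase_date < '{cutoff}' "
        f"AND c.distance_to_store_miles <= {radius_miles} "
        f"AND p.product_condition = '{condition}'"
    )

# Built at campaign-execution time, so the criteria reflect current data.
query = build_campaign_query()
```

The key design point is that the query is constructed and run at execution time, so the extracted slice of data is always current.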

Learn more about Embel Assist in my story for Forbes, “Embel Assist Links Marketing Apps With Enterprise Data.”

When a microprocessor vulnerability rocked the tech industry last year, companies scrambled to patch nearly every server they had. In Oracle’s case, that meant patching the operating system on about 1.5 million Linux-based servers.

Oracle finished the job in just 4 hours, without taking down the applications the servers ran, by using Oracle’s own automation technology. The technology involved is at the heart of Oracle Autonomous Linux, which the company announced at Oracle OpenWorld 2019 in San Francisco last month. Oracle has been using Autonomous Linux to run its own Generation 2 Cloud infrastructure, and now it is available at no cost to Oracle Cloud customers.

The last thing most CIOs, CTOs, chief information security officers, and even developers want to worry about is patching their server operating systems. Whether they have a hundred servers or hundreds of thousands, that type of maintenance can slow down a business, especially if it requires shutting down the software running on those servers.

A delay is doubly worrying when the reason for the patch is to handle a software or hardware vulnerability. In those instances, delays create an opportunity for malicious operators to strike. If an organization traditionally applies updates to its servers every three months, for example, and a zero-day vulnerability comes out just after that update, the company is vulnerable for months. When updates require a lengthy process, companies are reluctant to do it more frequently.

Not so with Autonomous Linux, which patches itself soon after a vulnerability is found, with the patch supplied by Oracle. Combined with Oracle Cloud Infrastructure’s other cost advantages, customers can expect significant total-cost-of-ownership savings compared with other Linux versions running either on-premises or in the cloud.

Underneath the Autonomous Linux service is Oracle Linux, which remains binary compatible with Red Hat Enterprise Linux. Therefore, software that runs on RHEL will run on Oracle Autonomous Linux in Oracle Cloud Infrastructure without change.

Learn more in my story for Forbes, “With Autonomous Linux, Oracle Keeps Server Apps Running During Patching.”

Phoenix City Hall

U.S. government agencies needing high levels of information security can upgrade to use the latest cloud technologies to run their applications. That’s thanks to a pair of new cloud infrastructure regions from Oracle. The cloud data center complexes are authorized against stringent FedRAMP and Department of Defense requirements.

The two new cloud infrastructure regions are in Ashburn, Virginia, outside Washington, D.C., and Phoenix, Arizona. They are part of Oracle’s goal to have 36 Generation 2 Cloud data center regions, offering services such as Oracle Autonomous Database, live by the end of 2020, including three additional dedicated regions to support Department of Defense agencies and contractors.

FedRAMP, more formally known as the Federal Risk and Authorization Management Program, provides a standard approach to federal security assessments, authorizations, and monitoring of cloud services. With FedRAMP, once a cloud provider is approved to provide a set of services or applications to one branch of government, other departments can use that service without getting a new security authorization.

With FedRAMP authorization in place, a federal agency can more quickly move an application or database workload that’s running in a government-run data center into Oracle Cloud Infrastructure. Agencies can also build and launch new cloud-native applications directly on Oracle’s cloud.

The cloud also lets federal agencies tap the latest technology and analytics strategies, including applying artificial intelligence and machine learning. Those techniques often rely on GPU-based computing—graphics processing units—which are used for math-heavy tasks such as high-performance scientific computing, data analytics, and machine learning.

Learn more about FedRAMP in my article for Forbes, “With FedRAMP Clearance, Oracle Brings Its Gen 2 Cloud Infrastructure To Government.”

My short essay, “You can’t secure what you can’t see,” was published in the 2019/2020 edition of Commerce Trends, from Manhattan Associates (page 18). The essay begins with,

When your company’s name appears in the press, the story should be about your fantastic third-quarter earnings, improved year-on-year same-store results, and the efficiency of your supply chain. You never, never, never want to see a news story about a huge data breach that exposes private, GDPR-regulated information about your employees – or your customers.

Yet such breaches happen far too often, as we all can see by reading our favorite newspaper or website. What can you do to prevent this? The first step is to know what you have in terms of data, systems, applications, users – and third-party actors like suppliers, customers, partners, consultants, and contractors.

This can be particularly complicated in retail, because of the complexity of managing stores and e-commerce, as well as a v-e-r-y long supply chain with complicated logistics. However, there are no excuses. Every company needs to keep its confidential data out of the hands of competitors, while assuring customers and partners that it is safe to do business with.

Please download the magazine, read my story, and share your thoughts.

Want better enterprise cybersecurity? It may seem counter-intuitive, but the answer probably isn’t a surge in employee training or hiring of cybersecurity talent. That’s because humans will always make errors, and humans can’t cope with the scale and stealth of today’s cyberattacks. To best protect information systems, including data, applications, networks, and mobile devices, look to more automation and artificial intelligence-based software to give the defense-in-depth required to reduce risk and stop attacks.

That’s one of the key conclusions of “Security in the Age of AI,” a new report from Oracle released in May. The report draws on a survey of 775 respondents based in the US, including 341 CISOs, CSOs, and other CXOs at firms with at least $100 million in annual revenue; 110 federal or state government policy influencers; and 324 technology-engaged workers in non-managerial roles.

Looking at the CXO responses in the report shows that corporate executives see human error as one of the biggest risks to information security. The most common response (47%) is to invest more in people via training and hiring than in technology in the next two years. Less common is to invest in new types of software with enhanced security, upgrade infrastructure, or buy artificial intelligence and machine learning to use for security, all of which could contribute to minimizing human error.

Learn more about this in my article, “You Can’t Improve Cybersecurity By Throwing People At The Problem,” published in Forbes.

Charles Nutter remembers when, working as a Java architect, he attended a conference and saw the Ruby programming language for the first time. And he was blown away. “I was just stunned that I understood every piece of code, every example, without knowing the language at all. It was super easy for me to understand the code.”

As a Java developer, Nutter began looking for an existing way to run Ruby within a Java runtime environment, specifically a Java virtual machine (JVM). This would let Ruby programs run on any hardware or software platform supported by a JVM, and would facilitate writing polyglot applications that used some Java and some Ruby, with developers free to choose whichever language was best for a particular task.

Nutter found the existing Ruby-on-JVM project, JRuby. However, “it had not been moving forward very quickly. It had been kind of stalled out for some years.” So, he became involved, helping drive support for a popular web application framework, Ruby on Rails, to run within a JVM.

“We made it work,” says Nutter. “In 2005 and 2006, we got Rails to run on top of the JVM—and it was the first time any major framework from off the Java platform had ever been run on top of the JVM.”

Want to be like Nutter someday? His career advice is direct: Contribute to an open source community, even if it’s a little daunting, and even if some people in that community are, well, rude to newcomers.

“Don’t be afraid to get out into the open source community,” Nutter says. “Get out into the public community, do talks, submit bugs, submit patches. It’s going to be discouraging, and there’s a lot of jerks out there that will scare you away. Don’t let them. Get into the heart of the community and don’t be afraid to help contribute or ask questions.”

For his successful coleadership of JRuby during more than a decade, and for his broader leadership in the software industry, Nutter was recently honored with a Groundbreaker Award. The award was presented at Oracle Code One in San Francisco, where we had a long chat. Read what we talked about in my article for Forbes, “A Java Developer Walks Into A Ruby Conference: Charles Nutter’s Open Source Journey.”

Doug Cutting stands head-and-shoulders above most developers I’ve met—figuratively, as well as literally. As one of the founders of the Hadoop open source project, which allows many Big Data projects to scale to handle huge problems and immense quantities of data, Cutting is revered. Plus, Doug Cutting is tall. Very tall. (Lots taller than I am.)

“Six-foot-eight, or 2 meters 3 centimeters, for the record,” Cutting volunteers when we meet.

In the software industry, Cutting looms large for two major open source successes, proof that innovation lightning sometimes strikes twice. Hadoop, managed by the Apache Software Foundation, is at its heart a framework and file system that manages distributed computing—that is, it allows many computers to work together in a cluster to solve hard problems.

Hadoop provided the initial foundation for many companies’ big data efforts. The software let them pull in data from multiple sources for analysis using clusters of dozens, or hundreds, of servers. The other project, also managed by Apache, is Lucene, a Java library that lets programmers build fast text indexing and searching into their applications.

In his day job, Cutting serves as the chief architect for Cloudera, one of the largest open source software companies. He also serves as an evangelist for the open source movement, inspiring contributions to Hadoop and Lucene and also many other projects.

Cutting was recently honored with a Groundbreaker Award, presented at Oracle Code One in San Francisco. He talked to me about collaborating on open source software, creating a fulfilling career in software, understanding how technology affects society, and the meaning of the word “Hadoop.” Read Cutting’s thoughts about everything from building a career in open source to the meaning of data science in my article for Forbes, “Hadoop Pioneer Says Developers Should Build Open Source Into Their Career Plans.”

Consider an employee who normally fills out his weekly time card on Thursday afternoon, because he doesn’t work most Fridays. Machine learning built into a payroll application could help the app learn the individual working habits of each employee. Having learned this specific pattern, the app could ask him, as he logs out of the system on Thursday, whether he meant to fill out his time card. There’s no policy there: It’s a behavior pattern that machine learning can pick up on.

In fact, modern-day AI might be able to fill in the time card automatically and present it to the employee for review and approval. That saves even more time, and potentially eliminates errors. This capability, known as “auto defaulting,” could have applications for nearly every form-based application, from accounting to inventory to sales reporting.
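A toy sketch of how such a habit might be detected from submission history; the 80% threshold and the overall approach are my own assumptions, since no specific algorithm is described here:

```python
from collections import Counter

def usual_submission_day(history):
    """Given past time-card submission weekdays (0=Mon .. 6=Sun),
    return the employee's habitual day if one clearly dominates,
    else None. The 80% dominance threshold is an invented parameter."""
    if not history:
        return None
    day, count = Counter(history).most_common(1)[0]
    return day if count / len(history) >= 0.8 else None

# An employee who files on Thursday (3) almost every week:
habit = usual_submission_day([3, 3, 3, 3, 2, 3, 3, 3, 3, 3])
```

A real system would learn per-employee patterns continuously and use them to prefill the form, prompting only for confirmation.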

Executives wrestle with how to take advantage of artificial intelligence capabilities. That’s especially true now that cloud computing resources have made the technology accessible to companies of all sizes. One of the fastest roads to AI payoff comes from using AI capabilities embedded in applications that your employees use every day—like that time card app.

Smart classification, smart recognition, and smart predictions. Those are three big buckets that encompass many cutting-edge AI and machine learning capabilities.

  • Smart classification involves studying both structured and unstructured data to take action based on what it means, such as automatically identifying unreliable suppliers, properly interpreting complex invoices, and categorizing consumers based on their current activities and history.
  • Smart recognition looks for anomalies in the data to catch both innocent errors and not-so-innocent ones. Smart recognition can help stop fraud, enforce corporate and compliance policies, and even speed financial reconciliations.
  • Smart predictions go further, such as offering proactive advice to sales reps, making recommendations in e-commerce, or providing suggestions for service reps on how to direct a customer. Pattern-matching can come into play here, such as predicting which add-on product recommendation a customer’s most likely to buy.

Learn more in my story for Forbes, “Want A Bigger Bang From AI? Embed It Into Your Apps.”

Can you name that Top 40 pop song in 10 seconds? Sure, that sounds easy. Can you name that pop song—even if it’s played slightly out of tune? Uh oh, that’s a lot harder. However, if you can guess 10 in a row, you might share in a cash prize.

That’s the point of “Out of Tune,” an online music trivia game where players mostly in their teens and 20s compete to win small cash prizes – just enough to make the game more fun. And fun is the point of “Out of Tune,” launched in August by FTW Studios, a startup based in New York. What’s different about “Out of Tune” is that it’s designed for group play in real time. The intent is that players will gather in groups and play together using their Android or Apple iOS phones.

Unlike in first-person shooter games, or other activities where a game player is interacting with the game’s internal logic, “Out of Tune” emphasizes the human-to-human aspect. Each game is broadcast live from New York — sometimes from FTW Studio’s facilities, sometimes from a live venue. Each game is hosted by a DJ, and is enjoyed through streaming video. “We’re not in the game show business or the music business,” says Avner Ronen, FTW Studio’s founder and CEO. “We’re in the shared experiences business.”

Because of all that human interaction, game players should feel like they’re part of something big, part of a group. “It’s social,” says Ronen, noting that 70% of today’s participants are female. “The audience is younger, and people play with their friends.”

How does the game work? Twice a day, at 8 p.m. and 11 p.m. Eastern time, a DJ launches the game live from New York City. The game consists of 10 pop songs played slightly out of tune—and players, using a mobile app on their phones, have 10 seconds to guess the song. Players who guess all the songs correctly share in that event’s prize money.

Learn more about FTW Studios – and how the software works – in my story in Forbes, “This Online Game Features Out-Of-Tune Pop Songs. The End Game Is About Much More.”

Every new graduate from Central New Mexico Community College leaves school with a beautiful paper diploma covered in fine calligraphy, colorful seals, and official signatures. This summer, every new graduate also left with the same information authenticated and recorded on a blockchain.

What’s the point of recording diplomas using blockchain technology? Blockchain creates a list of immutable records—grouped in blocks—that are linked cryptographically to form a tamper-evident chain. Those blocks are replicated on multiple servers across the participating organizations, so if a school went out of business, or somehow lost certain records to disaster or other mayhem, a student’s credentials are still preserved in other organizations’ ledger copies. Anyone authorized to access information on that blockchain (which might include, for example, prospective employers) could verify whether the student’s diploma and its details, such as the year, degree, and honors, match what the student claims.
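A minimal sketch of the tamper-evident chain idea follows. Real deployments like the one described here involve far more (distributed consensus, digital signatures, replication across organizations), and the record fields are invented for illustration:

```python
import hashlib
import json

def add_block(chain, record):
    """Append a record, linking it cryptographically to the previous block."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(record, sort_keys=True)  # canonical serialization
    chain.append({
        "record": record,
        "prev_hash": prev_hash,
        "hash": hashlib.sha256((prev_hash + payload).encode()).hexdigest(),
    })
    return chain

def is_valid(chain):
    """Recompute every hash; editing any record breaks all later links."""
    prev = "0" * 64
    for block in chain:
        payload = json.dumps(block["record"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if block["prev_hash"] != prev or block["hash"] != expected:
            return False
        prev = block["hash"]
    return True

chain = []
add_block(chain, {"student": "J. Doe", "degree": "AAS Nursing", "year": 2019})
add_block(chain, {"student": "A. Roe", "degree": "Cert. Welding", "year": 2019})
assert is_valid(chain)
chain[0]["record"]["year"] = 2009   # tampering with a diploma record...
assert not is_valid(chain)          # ...is immediately detectable
```

This is why a verifier can trust that a diploma’s year, degree, and honors haven’t been altered after the fact: any change invalidates the hashes downstream.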

Today, using blockchain for diplomas or certifications is uncommon. But it’s one of a growing number of blockchain use cases being tested—cases where information needs to be both shared and trusted across many parties, and preserved against loss or tampering.

Academic credentials are important to adults looking for jobs or applying to study for advanced degrees. Those records are also vital for refugees fleeing natural disasters or war-torn countries, such as those leaving Syria. “There are refugees who are medical doctors who can no longer practice medicine because they don’t have those certificates anymore,” says Feng Hou, CIO and chief digital learning officer at Central New Mexico Community College (CNM).

CNM is the largest higher-education institution in the state in terms of undergraduate enrollment, serving more than 23,000 students this fall. Nationally accredited, with eight locations in and around Albuquerque, CNM offers more than 150 associate degrees and certificates, as well as non-credit job training programs.

A benefit of blockchain is that there’s no single point of failure. “Given the decentralized nature of blockchain technology, it will prevent the single point of failure for any identity crisis, such as Syrian refugees, because on blockchain the ID is secure, shareable and verifiable anywhere in the world,” says Hou.

Read more in my story for the Wall Street Journal, “New Mexico College Deploys Blockchain for Digital Diplomas.”

Oracle Database is the world’s most popular enterprise database. This year’s addition of autonomous operating capabilities to the cloud version of Oracle Database is one of the most important advances in the database’s history. What does it mean for a database to be “autonomous”? Let’s look under the covers of Oracle Autonomous Database at just a few of the ways it earns that name.

Oracle Autonomous Database is a fully managed cloud service. Like all cloud services, the database runs on servers in cloud data centers—in this case, on hardware called Oracle Exadata Database Machine that’s specifically designed and tuned for high-performance, high-availability workloads. The tightly controlled and optimized hardware enables some of the autonomous functionality we’ll discuss shortly.

While the autonomous capability of Oracle Autonomous Database is new, it builds on scores of automation features that Oracle has been building into its Oracle database software and the Exadata database hardware for years. The goals of the autonomous functions are twofold: First, to lower operating costs by reducing costly and tedious manual administration, and second, to improve service levels through automation and fewer human errors.

My essay in Forbes, “What Makes Oracle Autonomous Database Truly ‘Autonomous,’” shows how the capabilities in Oracle Autonomous Database change the game for database administrators (DBAs). The benefit: DBAs are freed from mundane tasks and can focus on higher-value work.

Knowledge is power—and knowledge with the right context at the right moment is the most powerful of all. Emerging technologies will leverage the power of context to help people become more efficient, and one of the first to do so is a new generation of business-oriented digital assistants.

Let’s start by distinguishing a business digital assistant from consumer products such as Apple’s Siri, Amazon’s Echo, and Google’s Home. Those cloud-based technologies have proved themselves at tasks like information retrieval (“How long is my commute today?”) and personal organization (“Add diapers to my shopping list”). Those services have some limited context about you, like your address book, calendar, music library, and shopping cart. What they don’t have is deep knowledge about your job, your employer, and your customers.

In contrast, a business digital assistant needs much richer context to handle the kind of complex tasks we do at work, says Amit Zavery, executive vice president of product development at Oracle. Which sorts of business tasks? How about asking a digital assistant to summarize the recent orders from a company’s three biggest customers in Dallas; set up a conference call with everyone involved with a particular client account; create a report of all employees who haven’t completed information security training; figure out the impact of a canceled meeting on a travel plan; or pull reports on accounts receivable deviations from expected norms?

Those are usually tasks for human associates—often a tech-savvy person in supply chain, sales, finance, or human resources. That’s because so many business tasks require context about the employee making the request and about the organization itself, Zavery says. A digital assistant’s goal should be to reduce the amount of mental energy and physical steps needed to perform such tasks.

Learn more in my article for Forbes, “The One Thing Digital Assistants Need To Become Useful At Work: Context.”

At too many government agencies and companies, the unspoken security mindset is that “We’re not a prime target; our data isn’t super-sensitive.” Wrong. The reality is that every piece of personal data adds to the picture that potential criminals or state-sponsored actors are painting of individuals.

And that makes your data a target. “Just because you think your data isn’t useful, don’t assume it’s not valuable to someone, because they’re looking for columns, not rows,” says Hayri Tarhan, Oracle regional vice president for public sector security.

Here’s what Tarhan means by columns not rows: Imagine that the bad actors are storing information in a database (which they probably are). What hackers want in many data breaches is more information about people already in that database. They correlate new data with the old, using big data techniques to fill in the columns, matching up data stolen from different sources to form a more-complete picture.

That picture is potentially much more important and more lucrative than finding out about new people and creating new, sparsely populated data rows. So, every bit of data, no matter how trivial it might seem, is important when it comes to filling the empty squares.
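The correlation Tarhan describes can be sketched in a few lines. Here's a hedged illustration (all names and records invented) of how stolen datasets from separate breaches fill in each other's columns for a person already being tracked:

```python
# Sketch of the "columns, not rows" idea: attackers correlate records
# stolen from different breaches to fill in missing fields (columns)
# about people they already track. All data here is invented.

breach_a = {"pat@example.com": {"name": "Pat Doe", "dob": "1980-04-02"}}
breach_b = {"pat@example.com": {"bank": "First National", "ip": "203.0.113.9"}}

profiles = {}
for source in (breach_a, breach_b):
    for email, fields in source.items():
        # Merge this source's columns into the existing row for the person
        profiles.setdefault(email, {"email": email}).update(fields)

print(profiles["pat@example.com"])
```

Each breach adds columns, not rows: after the merge, one profile holds the name, birth date, bank, and IP address that no single source contained.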

Read more about this – and how machine learning can help – in my article in Forbes, “Data Thieves Want Your Columns—Not Your Rows.”

Blockchain and the cloud go together like organic macaroni and cheese. What’s the connection? Choosy shoppers would like to know that their organic food is tracked from farm to shelf, to make sure they’re getting what’s promised on the label. Blockchain provides an immutable ledger perfect for tracking cheese, for example, as it goes from dairy to cheesemaker to distributor to grocer.

Oracle’s new Blockchain Cloud Service provides a platform for each participant in a supply chain to register transactions. Within that blockchain, each participant—and regulators, if appropriate—can review those transactions to ensure that promises are being kept, and that data has not been tampered with. Use cases range from supply chains and financial transactions to data sharing inside a company.

Launched this month, Oracle Blockchain Cloud Service has the features that an enterprise needs to move from experimenting with blockchain to creating production applications. It addresses some of the biggest challenges facing developers and administrators, such as mastering the peer-to-peer protocols used to link blockchain servers, ensuring resiliency and high availability, and ensuring that security is solid. For example, developers previously had to code one-off integrations using complex APIs; Oracle’s Blockchain Cloud Service provides integration accelerators with sample templates and design patterns for many Oracle and third-party applications in the cloud and running on-premises in the data center.

Oracle Blockchain Cloud Service provides the kind of resilience, recoverability, security, and global reach that enterprises require before they’d trust their supply chain and customer experience to blockchain. With blockchain implemented as a managed cloud service, organizations also get a system that’s ready to be integrated with other enterprise applications, and where Oracle handles the back end to ensure availability and security.

Read more about this in my story for Forbes, “Oracle Helps You Put Blockchain Into Real-World Use With New Cloud Service.”

If you saw the 2013 Sandra Bullock-George Clooney science-fiction movie Gravity, then you know about the silent but deadly damage that even a small object can do if it hits something like the Hubble telescope, a satellite, or even the International Space Station as it hurtles through space. If you didn’t see Gravity, a non-spoiler, one-word summary would be “disaster.” Given the thousands of satellites and pieces of man-made debris circling our planet, plus new, emerging threats from potentially hostile satellites, you don’t need to be a rocket scientist to know that it’s important to keep track of what’s around you up there.

It all starts with the basic physics of motion and managing the tens of thousands of data points associated with those objects, says Paul Graziani, CEO and cofounder of Analytical Graphics. The Exton, Pennsylvania-based software company develops four-dimensional software that analyzes and visualizes objects based on their physical location and their time and relative position to each other or to other known locations. AGI has leveraged its software models to build the ComSpOC – its Commercial Space Operations Center. ComSpOC is the first and only commercial Space Situational Awareness center, and since 2014 it has helped space agencies and satellite operators keep track of space objects, including satellites and spacecraft.

ComSpOC uses data from sensors that AGI owns around the globe, plus data from other organizations, to track objects in space. These sensors include optical telescopes, radar systems, and passive RF (radio frequency) sensors. “A telescope gathers reflections of sunlight that come off of objects in space,” Graziani says. “And a radar broadcasts radio signals that reflect off of those objects and then times how long it takes for those signals to get back to the antenna.”

The combination of these measurements helps pinpoint the position of each object. The optical measurements of the telescopes provide directional accuracy, while the time measurements of the radar systems provide the distance of that object from the surface of the Earth. Passive RF sensors, meanwhile, use communications antennas that receive the broadcast information from operational satellites to measure satellite position and velocity.
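The radar timing Graziani describes reduces to simple arithmetic: the echo travels out and back, so distance is the speed of light times the round-trip time, divided by two. A small illustrative sketch (the timing number is invented):

```python
# Radar ranging: a pulse travels to the object and back, so the
# one-way distance is c * t / 2. Illustrative numbers only.

SPEED_OF_LIGHT_M_S = 299_792_458

def radar_range_km(round_trip_seconds: float) -> float:
    """Distance to the object, given the echo's round-trip time."""
    return SPEED_OF_LIGHT_M_S * round_trip_seconds / 2 / 1000

# An object roughly 500 km up returns an echo in about 3.33 ms:
print(round(radar_range_km(0.00333), 1))  # 499.2
```

The telescope supplies the direction; this time measurement supplies the distance along that direction.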

Read more in my story for Forbes, “How Satellites Avoid Attacks And Space Junk While Circling The Earth.”

Users care passionately about their software being fast and responsive. You need to give your applications both 0-60 speed and the strongest long-term endurance. Here are 14 guidelines for choosing a deployment platform to optimize performance, whether your application runs in the data center or the cloud.

Faster! Faster! Faster! That killer app won’t earn your company a fortune if the software is slow as molasses. Sure, your development team did the best it could to write server software that offers the maximum performance, but that doesn’t mean diddly if those bits end up on a pokey old computer that’s gathering cobwebs in the server closet.

Users don’t care where it runs as long as it runs fast. Your job, in IT, is to make the best choices possible to enhance application speed, including deciding if it’s best to deploy the software in-house or host it in the cloud.

When choosing an application’s deployment platform, there are 14 things you can do to maximize the opportunity for the best overall performance. But first, let’s make two assumptions:

  • These guidelines apply only to choosing the best data center or cloud-based platform, not to choosing the application’s software architecture. The job today is simply to find the best place to run the software.
  • I presume that if you are talking about a cloud deployment, you are choosing infrastructure as a service (IaaS) instead of platform as a service (PaaS). What’s the difference? In PaaS, the host provides the platform (the operating system, such as Windows or Linux, plus a runtime, such as .NET or Java); all you do is provide the application. In IaaS, you can provide, install, and configure the operating system yourself, giving you more control over the installation.

Here’s the checklist

  1. Run the latest software. Whether in your data center or in the IaaS cloud, install the latest version of your preferred operating system, the latest core libraries, and the latest application stack. (That’s one reason to go with IaaS, since you can control updates.) If you can’t control this yourself, because you’re assigned a server in the data center, pick the server that has the latest software foundation.
  2. Run the latest hardware. Assuming we’re talking about the x86 architecture, look for the latest Intel Xeon processors, whether in the data center or in the cloud. As of mid-2018, I’d want servers running the Xeon E5 v3 or later, or E7 v4 or later. If you use anything older than that, you’re not getting the most out of the applications or taking advantage of the hardware chipset. For example, some E7 v4 chips have significantly improved instructions-per-CPU-cycle processing, which is a huge benefit. Similarly, if you choose AMD or another processor, look for the latest chip architectures.
  3. If you are using virtualization, make sure the server has the best and latest hypervisor. The hypervisor is key to a virtual machine’s (VM) performance—but not all hypervisors are created equal. Many of the top hypervisors have multiple product lines as well as configuration settings that affect performance (and security). There’s no way to know which hypervisor is best for any particular application. So, assuming your organization lets you make the choice, test, test, test. However, in the not-unlikely event you are required to go with the company’s standard hypervisor, make sure it’s the latest version.
  4. Take Spectre and Meltdown into account. The patches for Spectre and Meltdown slow down servers, but the extent of the performance hit depends on the server, the server’s firmware, the hypervisor, the operating system, and your application. It would be nice to give an overall number, such as expect a 15 percent hit (a number that’s been bandied about, though some dispute its accuracy). However, there’s no way to know except by testing. Thus, it’s important to know if your server has been patched. If it hasn’t been yet, expect application performance to drop when the patch is installed. (If it’s not going to be patched, find a different host server!)
  5. Base the number of CPUs and cores and the clock speed on the application requirements. If your application and its core dependencies (such as the LAMP stack or the .NET infrastructure) are heavily threaded, the software will likely perform best on servers with multiple CPUs, each equipped with the greatest number of cores—think 24 cores. However, if the application is not particularly threaded or runs in a not-so-well-threaded environment, you’ll get the biggest bang with the absolute top clock speeds on an 8-core server.
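One input to the threading-versus-clock-speed decision in item 5 is simply how many cores your process can actually use. A minimal sketch (the 16-core threshold is an illustrative assumption, not a rule):

```python
# Check how many cores are actually available to this process.
# On Linux, sched_getaffinity reflects container/cgroup CPU limits;
# os.cpu_count() reports the whole machine and may overstate reality.

import os

total_cores = os.cpu_count() or 1
try:
    usable_cores = len(os.sched_getaffinity(0))  # Linux only
except AttributeError:
    usable_cores = total_cores  # fallback on Windows/macOS

print(f"{usable_cores} of {total_cores} cores available to this process")
if usable_cores >= 16:  # threshold chosen for illustration
    print("A heavily threaded stack can exploit this host")
else:
    print("Favor top clock speed over core count for this host")
```

Run this on the candidate server (or cloud instance shape) before committing, since virtualization and container limits often make the usable count smaller than the advertised one.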

But wait, there’s more!

Read the full list of 14 recommendations in my story for HPE Enterprise.nxt, “Checklist: Optimizing application performance at deployment.”

You wouldn’t enjoy paying a fine of 4 percent of your company’s total revenue. But that’s the potential penalty if your company is found in violation of the European Union’s new General Data Protection Regulation (GDPR), which goes into effect May 25, 2018. As you’ve probably read, organizations anywhere in the world are subject to GDPR if they have customers in the EU and are storing any of their personal data.

GDPR compliance is a complex topic. It’s too much for one article — heck, books galore are being written about it, seminars abound, and GDPR consultants are on every street corner.

One challenge is that GDPR is a regulation, not a how-to guide. It’s big on explaining penalties for failing to detect and report a data breach in a sufficiently timely manner. It’s not big on telling you how to detect that breach. Rather than tell you what to do, let’s see what could go wrong with your GDPR plans—to help you avoid that 4 percent penalty.

First, the ground rules: GDPR’s overarching goal is to protect citizens’ privacy. In particular, the regulation pertains to anything that can be used to directly or indirectly identify a person. Such data can be anything: a name, a photo, an email address, bank details, social network posts, medical information, or even a computer IP address. To that end, data breaches that may pose a risk to individuals must be disclosed to the authorities within 72 hours and to the affected individuals soon thereafter.

What does that mean? As part of the regulations, individuals must have the ability to see what data you have about them, correct that data if appropriate, or have that data deleted, again if appropriate. (If someone owes you money, they can’t ask you to delete that record.)
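The access/correct/delete rights described above map naturally onto a small request handler. Here's a hedged sketch; all names and records are invented, and a real implementation would need identity verification, audit logging, and legal review:

```python
# Hypothetical data-subject request handler: right of access and right
# to erasure, with a lawful-basis check (money owed) blocking deletion.
# Records and names are invented for illustration.

records = {
    "anna@example.eu": {"name": "Anna", "balance_owed": 120.0},
    "ben@example.eu": {"name": "Ben", "balance_owed": 0.0},
}

def export_data(email):
    """Right of access: show the subject everything stored about them."""
    return dict(records.get(email, {}))

def delete_data(email):
    """Right to erasure, unless a legitimate claim blocks it."""
    record = records.get(email)
    if record is None:
        return "no data held"
    if record["balance_owed"] > 0:
        return "retained: outstanding balance"  # lawful basis to keep it
    del records[email]
    return "deleted"

print(delete_data("anna@example.eu"))  # retained: outstanding balance
print(delete_data("ben@example.eu"))   # deleted
```

The point of the sketch is the branch: erasure is conditional, and your systems need a recorded reason every time they decline it.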

Enough preamble. Let’s get into ten common problems.

First: Your privacy and data retention policies aren’t compliant with GDPR

There’s no specific policy wording required by GDPR. However, the policies must meet the overall objectives of GDPR, as well as the requirements in any other jurisdictions in which you operate (such as the United States). What would Alan do? Look at policies from big multinationals that do business in Europe and copy what they do, working with your legal team. You’ve got to get it right.

Second: Your actual practices don’t match your privacy policy

It’s easy to create a compliant privacy policy but hard to ensure your company actually is following it. Do you claim that you don’t store IP addresses? Make sure you’re not. Do you claim that data about a European customer is never stored in a server in the United States? Make sure that’s truly the case.

For example, let’s say you store information about German customers in Frankfurt. Great. But if that data is backed up to a server in Toronto, maybe not great.

Third: Your third-party providers aren’t honoring your GDPR responsibilities

Let’s take that customer data in Frankfurt. Perhaps you have a third-party provider in San Francisco that does data analytics for you, or that runs credit reports or handles image resizing. In those processes, does your customer data ever leave the EU? Even if it stays within the EU, is it protected in ways that are compliant with GDPR and other regulations? It’s your responsibility to make sure: While you might sue a supplier for a breach, that won’t cancel out your own primary responsibility to protect your customers’ privacy.

A place to start with compliance: Do you have an accurate, up-to-date listing of all third-party providers that ever touch your data? You can’t verify compliance if you don’t know where your data is.
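That provider register can start as something very simple: a list of every third party that touches personal data, flagged wherever EU-subject data is processed outside the EU. A hedged sketch (providers invented):

```python
# Minimal third-party register: flag any provider that handles EU
# personal data from outside the EU, for transfer-safeguard review.
# Provider entries are invented for illustration.

providers = [
    {"name": "AnalyticsCo", "location": "San Francisco", "in_eu": False,
     "handles_eu_personal_data": True},
    {"name": "BackupHost", "location": "Frankfurt", "in_eu": True,
     "handles_eu_personal_data": True},
]

flagged = [p["name"] for p in providers
           if p["handles_eu_personal_data"] and not p["in_eu"]]

for name in flagged:
    print(f"Review GDPR transfer safeguards for: {name}")
```

A flag here doesn't mean the arrangement is unlawful; it means you owe it a documented safeguard (such as standard contractual clauses) and a spot on the next audit.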

But wait, there’s more

You can read the entire list of common GDPR failures in my story for HPE Enterprise.nxt, “10 ways to fail at GDPR compliance.”

The public cloud is part of your network. But it’s also not part of your network. That can make security tricky, and sometimes become a nightmare.

The cloud represents resources that your business rents: computational resources, like CPU and memory; infrastructure resources, like internet bandwidth and internal networks; storage resources; and management platforms, like the tools needed to provision and configure services.

Whether it’s Amazon Web Services, Microsoft Azure or Google Cloud Platform, it’s like an empty apartment that you rent for a year or maybe a few months. You start out with empty space, put in there whatever you want and use it however you want. Is such a short-term rental apartment your home? That’s a big question, especially when it comes to security. By the way, let’s focus on platform-as-a-service (PaaS) and infrastructure-as-a-service (IaaS), where your business has a great deal of control over how the resource is used — like an empty rental apartment.

We are not talking about software-as-a-service (SaaS), like Office 365 or Salesforce.com. That’s where you show up, pay your bill and use the resources as configured. That’s more like a hotel room: you sleep there, but you can’t change the furniture. Security is almost entirely the responsibility of the hotel; your security responsibility is to ensure that you don’t lose your key, and to refuse to open the door for strangers. The SaaS equivalent: Protect your user accounts and passwords, and ensure users only have the least necessary access privileges.

Why PaaS/IaaS are part of your network

As Peter Parker knows, Spider-Man’s great powers require great responsibility. That’s true in the enterprise data center — and it’s true in PaaS/IaaS networks. The customer is responsible for provisioning servers, storage and virtual machines. Not only that, but the customer also is responsible for creating connections between the cloud service and other resources, such as an enterprise data center — in a hybrid cloud architecture — and other cloud providers — in a multi-cloud architecture.

The cloud provider sets terms for use of the PaaS/IaaS, and allows inbound and outbound connections. There are service level guarantees for availability of the cloud, and of servers that the cloud provider owns. Otherwise, everything is on the enterprise. Think of the PaaS/IaaS cloud as being a remote data center that the enterprise rents, but where you can’t physically visit and see your rented servers and infrastructure.

Why PaaS/IaaS are not part of your network

In short, except for the few areas that the cloud provider handles — availability, cabling, power supplies, connections to carrier networks, physical security — you own it. That means installing patches and fixes. That means instrumenting servers and virtual machines. That means protecting them with software-based firewalls. That means doing backups, whether using the cloud provider’s value-added services or someone else’s. That means anti-malware.

That’s not to minimize the benefits the cloud provider offers you. Power and cooling are a big deal. So are racks and cabling. So is that physical security, and having 24×7 on-site staffing in the event of hardware failures. Also, there’s click-of-a-button ability to provision and spool up new servers to handle demand, and then shut them down again when not needed. Cloud providers can also provide firewall services, communications encryption, and of course, consulting on security.

The word elastic is often used for cloud services. That’s what makes the cloud much more agile than an on-premises data center, or renting an equipment cage in a colocation center. It’s like renting an apartment where if you need a couple of extra bedrooms for a few months, you can upsize.

For many businesses, that’s huge. Read more about how great cloud power requires great responsibility in my essay for SecurityNow, “Public Cloud, Part of the Network or Not, Remains a Security Concern.”

It’s standard practice for a company to ask its tech suppliers to fill out detailed questionnaires about their security practices. Companies use that information when choosing a supplier. Too much is at stake, in terms of company reputation and customer trust, to be anything but thorough with information security.

But how can a company’s IT security teams be most effective in that technology buying process? How do they get all the information they need, while also staying focused on what really matters and not wasting their time? Oracle Chief Security Officer Mary Ann Davidson at the recent RSA Conference offered her tips on this IT security risk assessment process. Drawing on her extensive experience as both supplier and buyer of technology and cloud services in her role at Oracle, Davidson shared advice from both points of view.

Advice on business risk assessments

It’s time to put out an RFP to engage new technology providers or to conduct an annual assessment of existing service providers. What do you ask in such a vendor security assessment questionnaire? There are many existing documents and templates, some focused on specific industries, others on regulated sectors or regulated information. Those should guide any assessment process, but aren’t the only factors, says Davidson. Consider these practical tips to get the crucial data you need, and avoid gathering a lot of information that will only distract you from issues that are important for keeping your data secure.

  1. Have a clear objective in mind. The purpose of the vendor security assessment questionnaire should be to assess the security performance of the vendor in light of the organization’s tolerance for risk on a given project.
  2. Limit the scope of an assessment to the potential security risks for services that the supplier is offering you. Those services are obviously critical, because they could affect your data, operations, and security. There is no value in focusing on a supplier’s purely internal systems if they don’t contain or connect to your data. By analogy, “you care about the security of a childcare provider’s facility,” says Davidson. “It’s not relevant to ask about the security of the facility owner’s vacation home in Lake Tahoe.”
  3. When possible, align the questions with internationally recognized, relevant, independently developed standards. It’s reasonable to expect service providers to offer open services that conform to true industry standards. Be wary of faux standards, which are the opposite of open—they could be designed to encourage tech buyers to trust what they think are specifications designed around industry consensus, but which are really pushing one tech supplier’s agenda or that of a third-party certification business.

There are a lot more tips in my story for Forbes, “IT Security Risk Assessments: Tips For Streamlining Supplier-Customer Communication.”

Chapter One: Christine Hall

Should the popular Linux operating system be referred to as “Linux” or “GNU/Linux”? It’s a thing, or at least it used to be, writes my friend Christine Hall in her aptly named article, “Is It Linux or GNU/Linux?” published in Linux Journal on May 11:

Some may remember that the Linux naming convention was a controversy that raged from the late 1990s until about the end of the first decade of the 21st century. Back then, if you called it “Linux”, the GNU/Linux crowd was sure to start a flame war with accusations that the GNU Project wasn’t being given due credit for its contribution to the OS. And if you called it “GNU/Linux”, accusations were made about political correctness, although operating systems are pretty much apolitical by nature as far as I can tell.

Christine (aka Bride of Linux) quotes a number of learned people. That includes Steven J. Vaughan-Nichols, one of the top experts in the politics of open-source software – and frequent critic of the antics of Richard M. Stallman (aka RMS), who founded the Free Software Foundation and who insists that everyone call the software GNU/Linux.

Here’s what Steven (aka SJVN), said in the article:

“Enough already”, he said. “RMS tried, and failed, to create an operating system: Hurd. He and the Free Software Foundation’s endless attempts to plaster his GNU name to the work of Linus Torvalds and the other Linux kernel developers is disingenuous and an insult to their work. RMS gets credit for EMACS, GPL, and GCC. Linux? No.”

Another humble luminary sought out by Christine: Yours truly.

“For me it’s always, always, always, always Linux,” said Alan Zeichick, an analyst at Camden Associates who frequently speaks, consults and writes about open-source projects for the enterprise. “One hundred percent. Never GNU/Linux. I follow industry norms.”

To make a long story short: In the article, the consensus was for Linux, not GNU/Linux.

Chapter Two: figosdev

But then someone going by the handle “figosdev” authored a rebuttal, “Debunking the Usual Omission of GNU,” published on Techrights. To make a long story short, he believes that the operating system should be called GNU/Linux. Here’s my favorite part of figosdev’s missive (which was written in all lower-case):

ive heard about gnu and linux about a million times in over a decade. as of today ive heard of alan zeichick once, and camden associates (what do they even do?) once. im just going to call them linux, its the more popular term.

Riiight. figosdev never heard of me, fine (founder of SD Times, but figosdev probably never heard of that either). On the other hand, at least figosdev knows my name. I have no idea who figosdev is, except to infer from the handle that he/she/it is a developer on the fig component compiler project, since he/she/it is hiding behind that handle. And that brings me to…

Chapter Three: Richi Jennings

Christine Hall’s article sparked a lively debate on Twitter, with my friend Richi Jennings (quoted in the original article) weighing in.

Let’s end the story here, at least for now. Linux forever!

Oracle CEO Mark Hurd is known as an avid tennis fan and supporter of the sport’s development, having played in college at Baylor University. At the Collision Conference last week in New Orleans, Hurd discussed the similar challenges facing tennis players and top corporate executives.

“I like this sport because tennis teaches that you’re out there by yourself,” said Hurd, who was interviewed on stage by CNBC reporter Aditi Roy. “Tennis is like being CEO: You can’t call time out, you can’t bring in a substitute,” Hurd said. “Tennis is a space where you have to go out every day, rain or shine, and you’ve got to perform. It’s just like the business world.”

Performance returned to the center of the conversation when Roy asked about Oracle’s acquisition strategy. Hurd noted that Oracle’s leadership team gives intense scrutiny to acquisitions of any size. “We don’t go out of our way to spend money — it’s our shareholders’ money,” he said. “We also think about dividends and buying stock back.”

When it comes to mergers and acquisitions, Oracle is driven by three top criteria, Hurd said. “First, the company has to fit strategically with where we are going,” he said. “Second, it has to make fiscal sense. And third, we have to be able to effectively run the acquisition.”

Hurd emphasized that he’s focused on the future, not a company’s past performance. “We are looking for companies that will be part of things 5 or 10 years from now, not 5 or 10 years ago,” he said. “We want to move forward, in platforms and applications.”

To a large extent, that future includes artificial intelligence. Hurd was quick to say, “I’m not looking for someone to say, ‘I have an AI solution in the cloud, come to me.’” Rather, Oracle wants to be able to integrate AI directly into its applications, in a way that gives customers clear business returns.

He used the example of employee recruitment. “We recruit 2,000 college students today. It used to be done manually, but now we use machine learning and algorithms to figure out where to source people.” Not only does the AI help find potential employees, but it can help evaluate whether the person would be successful at Oracle. “We could never have done that before,” Hurd added.

Read more about what Hurd said at Collision, including his advice for aspiring CEOs, in my story for Forbes, “Mark Hurd On The Perfect Sport For CEOs — And Other Leadership Insights.”

You can also watch the 20-minute entire interview here.

No doubt you’ve heard about blockchain. It’s a distributed digital ledger technology that lets participants add and view blocks of transaction records, but not delete or change them without being detected.

Most of us know blockchain as the foundation of Bitcoin and other digital currencies. But blockchain is starting to enter the business mainstream as the trusted ledger for farm-to-table vegetable tracking, real estate transfers, digital identity management, financial transactions and all manner of contracts. Blockchain can be used for public transactions as well as for private business, inside a company or within an industry group.

What makes the technology so powerful is that there’s no central repository for this ever-growing sequential chain of transaction records, clumped together into blocks. Because that repository is replicated in each participant’s blockchain node, there is no single point of failure, and no insider threat within a single organization can impact its integrity.

“Blockchain lets you conduct transactions securely without requiring an intermediary, and records are secure and immutable,” says Mark Rakhmilevich, product management director at Oracle. “It also can eliminate offline reconciliations that can take hours, days or even weeks.”

And while the chain itself should be open for validation by any participant, some chains can be implemented with some form of access control to limit viewing of specific data fields. That way, participants can be permitted to view relevant data, but not everything in the chain.

A customer, for instance, might be able to verify that a contractor has a valid business license. The customer might also see the firm’s registered address and list of complaints—but not see the names of other customers. The state licensing board, on the other hand, may be allowed to access the customer list or see which jobs are currently in progress.
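The tamper evidence described above comes from each block's hash covering the previous block's hash, so altering any record breaks every later link. A minimal sketch, not a real blockchain (no consensus protocol, no peers, no access control):

```python
# Minimal hash chain: each block's hash covers the previous hash plus
# the block's data, so any tampering invalidates the rest of the chain.

import hashlib
import json

def block_hash(prev_hash, data):
    payload = prev_hash + json.dumps(data, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(transactions):
    chain, prev = [], "0" * 64  # genesis predecessor
    for tx in transactions:
        prev = block_hash(prev, tx)
        chain.append({"data": tx, "hash": prev})
    return chain

def verify(chain):
    prev = "0" * 64
    for block in chain:
        if block_hash(prev, block["data"]) != block["hash"]:
            return False
        prev = block["hash"]
    return True

chain = build_chain([{"from": "dairy", "to": "cheesemaker"},
                     {"from": "cheesemaker", "to": "grocer"}])
print(verify(chain))            # True
chain[0]["data"]["to"] = "???"  # tamper with an early record
print(verify(chain))            # False
```

Every participant holding a replica can run the same verification, which is why tampering by any one party is detectable by the others.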

Business models and use cases

Blockchain is well-suited for managing transactions between companies or organizations that may not know each other well and where there’s no implicit or explicit trust. “Blockchain works because it’s peer-to-peer…and it provides an easy-to-track history, which can serve as an audit trail,” Rakhmilevich explains.

What’s more, blockchain smart contracts are ideal for automating manual or semi-automated processes prone to errors or fraud. “Blockchain can help when there might be challenges in proving that the data has not been tampered with or when verifying the source of a particular update or transaction is important,” Rakhmilevich says.

Blockchain has uses in many industries, including banking, securities, government, retail, healthcare, manufacturing and transportation. Take healthcare: Blockchain can provide immutable records on clinical trials. Think about all the data being collected and flowing to the pharmaceutical companies and regulators, all available instantly and from verified participants.

Read more about blockchain in my article for the Wall Street Journal, “Blockchain: It’s All About Business—and Trust.”

Blame people for the SOC scalability challenge. On the other hand, don’t blame your people. It’s not their fault.

The security operations center (SOC) team is frequently overwhelmed, particularly the Tier 1 security analysts tasked with triage. As companies grow and add more technology — including the Internet of Things (IoT) — that means more alerts.

As the enterprise adds more sophisticated security tools, such as Endpoint Detection and Response (EDR), that means more alerts. And more complex alerts. You’re not going to see a blinking red light that says: “You’re being hacked.” Or if you do see such an alert, it’s not very helpful.

The problem is people, say experts at the 2018 RSA Conference, which wrapped up last week. Your SOC team — or teams — simply can’t scale fast enough to keep up with the ever-increasing demand. Let’s talk about the five biggest problems challenging SOC scalability.

Reason #1: You can’t afford to hire enough analysts

You certainly can’t afford to hire enough Tier 2 analysts who respond to real — or almost certainly real — incidents. According to sites like Glassdoor and Indeed, be prepared to pay over $100,000 per year, per person.

Reason #2: You can’t even find enough analysts

“We’ve created a growing demand for labor, and thus, we’ve created this labor shortage,” said Malcolm Harkins, chief security and trust officer of Cylance. There are huge numbers of open positions at all levels of information security, and that includes in-enterprise SOC team members. Sure, you could pay more, or do competitive recruiting, but go back to the previous point: You can’t afford that. Perhaps a managed security service provider can afford to keep raising salaries, because an MSSP can monetize that expense. An ordinary enterprise can’t, because security is an expense.

Reason #3: You can’t train the analysts

Even with the best security tools, analysts require constant training on threats and techniques — which is expensive to offer, especially for a smaller organization. And wouldn’t you know it, as soon as you get a group of triage specialists or incident responders trained up nicely, off they go for a better job.

Read more, including two more reasons, in my essay for SecurityNow, “It’s the People: 5 Reasons Why SOC Can’t Scale.”

Got Terminator? Microsoft is putting artificial intelligence in charge of automatically responding to detected threats, with a forthcoming update to Windows Defender ATP.

Microsoft is expanding its use of artificial intelligence and big data analytics beyond the current levels of machine learning in its security platform. Today, AI is used for incident detection and investigation, filtering out false positives and making it easier for humans on the security operations center (SOC) team to determine the correct response to an incident.

Soon, customers will be able to allow the AI to respond to some incidents automatically. Redmond claims this will cut time-to-remediation down to minutes. In a blog post released April 17, Moti Gindi, general manager for Windows Cyber Defense, wrote: “Threat investigation and remediation decisions can be taken automatically by Windows Defender ATP based on extensive historical data collected, stored and analyzed in our cloud (‘time travel’).”

What type of remediation? No, robots won’t teleport from the future and shoot lasers at the cybercriminals. At least, that’s not an announced capability. Rather, Windows Defender ATP will signal the Azure Active Directory user management and Microsoft Intune mobile device management platforms to temporarily revoke access privileges to cloud storage and enterprise applications, such as Office 365.
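To make the remediation step concrete: the Microsoft Graph API exposes a real `revokeSignInSessions` action on a user, which invalidates the user's refresh tokens and forces every cloud-app session to re-authenticate. The sketch below only builds the request rather than sending it; the user ID, token, and surrounding decision logic are illustrative assumptions, not how Windows Defender ATP is actually implemented internally.

```python
# Sketch of the kind of automated response described above: the Graph API
# call that invalidates a user's refresh tokens, cutting off cloud-app
# sessions until the user signs in again. We build the request but do not
# send it; the IDs and token below are placeholders.
from urllib.request import Request

GRAPH = "https://graph.microsoft.com/v1.0"

def build_revoke_request(user_id: str, access_token: str) -> Request:
    """Build (but don't send) the POST that revokes a user's sessions."""
    return Request(
        f"{GRAPH}/users/{user_id}/revokeSignInSessions",
        headers={"Authorization": f"Bearer {access_token}"},
        method="POST",
    )

req = build_revoke_request("user-guid-here", "token-here")
print(req.method, req.full_url)
```

Because revocation is a single idempotent API call, reversing it later (re-enabling the account after the risk is evaluated) is equally scriptable, which is what makes this kind of response safe to automate.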

After the risk has been evaluated — or after the CEO has yelled at the CISO from her sales trip overseas — the access revocation can be reversed. Another significant part of the Windows Defender ATP announcements: Threat signal sharing between Microsoft’s various cloud platforms, which up until now have operated pretty much autonomously in terms of security.

In the example Microsoft offered, threats coming via a phishing email detected by Outlook 365 will be correlated with malware blocked by OneDrive for Business. In this incarnation, signal sharing will bring together Office 365, Azure, and Windows Defender ATP.

Read more, including about Microsoft’s Mac support for security, in my essay for SecurityNow, “Microsoft Security Is Channeling the Terminator.”

Ransomware rules the cybercrime world, perhaps because ransomware attacks are often successful and lucrative for criminals. Ransomware features prominently in Verizon’s fresh-off-the-press 2018 Data Breach Investigations Report (DBIR). As the report says, although ransomware is still a relatively new type of attack, it’s growing fast:

Ransomware was first mentioned in the 2013 DBIR and we referenced that these schemes could “blossom as an effective tool of choice for online criminals”. And blossom they did! Now we have seen this style of malware overtake all others to be the most prevalent variety of malicious code for this year’s dataset. Ransomware is an interesting phenomenon that, when viewed through the mind of an attacker, makes perfect sense.

The DBIR explains that ransomware can be attempted with little risk or cost to the attacker. It can be successful because the attacker doesn’t need to monetize stolen data, only ransom its return; and it can be deployed across numerous devices in an organization to inflict more damage and potentially justify bigger ransoms.
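The attacker's-eye economics the DBIR describes come down to simple arithmetic. Here is a back-of-the-envelope sketch; every number in it is an illustrative assumption, not a DBIR statistic:

```python
# Back-of-the-envelope ransomware economics from the attacker's side.
# All numbers are illustrative assumptions, not figures from the DBIR.

def expected_return(targets, infection_rate, payment_rate, ransom, campaign_cost):
    """Expected revenue minus cost for one ransomware campaign."""
    victims = targets * infection_rate   # machines actually encrypted
    payers = victims * payment_rate      # victims who choose to pay
    return payers * ransom - campaign_cost

# 10,000 phishing emails, 2% infection rate, 10% of victims pay $500,
# and the campaign costs the attacker $1,000 to run.
profit = expected_return(10_000, 0.02, 0.10, 500, 1_000)
print(f"Expected profit: ${profit:,.0f}")  # → Expected profit: $9,000
```

Even with these deliberately modest assumptions the campaign clears a profit, which is the "makes perfect sense" the report is pointing at: no fence for stolen data, no per-victim negotiation, just volume.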

Botnets Are Also Hot

Ransomware wasn’t the only prominent attack; the 2018 DBIR also talks extensively about botnet-based infections. Verizon cites more than 43,000 breaches using customer credentials stolen from botnet-infected clients. It’s a global problem, says the DBIR, and can affect organizations in two primary ways:

The first way, you never even see the bot. Instead, your users download the bot, it steals their credentials, and then uses them to log in to your systems. This attack primarily targeted banking organizations (91%), though Information (5%) and Professional Services organizations (2%) were victims as well.

The second way organizations are affected involves compromised hosts within your network acting as foot soldiers in a botnet. The data shows that most organizations clear most bots in the first month (give or take a couple of days).

However, the report says, some bots may be missed during the disinfection process. This could result in a re-infection later.

Insiders Are Still Significant Threats

Overall, says Verizon, outsiders perpetrated most breaches (73%). But don’t get too complacent about employees or contractors: Many breaches (28%) involved internal actors. Yes, those figures add up to more than 100%, because some outside attacks had inside help. Here’s who Verizon says is behind breaches:

  • 73% perpetrated by outsiders
  • 28% involved internal actors
  • 2% involved partners
  • 2% featured multiple parties
  • 50% of breaches were carried out by organized criminal groups
  • 12% of breaches involved actors identified as nation-state or state-affiliated
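If the over-100% total seems odd, the mechanics are easy to see with a toy dataset: each breach carries a *set* of actor types, and each category is counted in every breach it appears in. This uses made-up records, not Verizon's data:

```python
# Why the actor percentages sum past 100%: one breach can involve more
# than one actor type, so the categories overlap. Toy data only.

breaches = [
    {"outsider"},                # a lone external attacker
    {"outsider", "internal"},    # outside attack with inside help
    {"internal"},                # a purely internal incident
    {"outsider", "partner"},     # outsider working through a partner
]

for actor in ("outsider", "internal", "partner"):
    share = sum(actor in b for b in breaches) / len(breaches)
    print(f"{actor}: {share:.0%}")
# The three shares (75%, 50%, 25%) total 150%, because the second and
# fourth breaches are each counted in two categories.
```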

Email is still the delivery vector of choice for malware and other attacks. Many of those attacks were financially motivated, says the DBIR. Most worrying, a significant number of breaches took a long time to discover.

  • 49% of non-point-of-sale malware was installed via malicious email
  • 76% of breaches were financially motivated
  • 13% of breaches were motivated by the gain of strategic advantage (espionage)
  • 68% of breaches took months or longer to discover

Taking Months to Discover the Breach

To that previous point: Attackers can move fast, but defenders can take a while. To use a terrible analogy: If someone breaks into your car and steals your designer sunglasses, the time from their initial penetration (picking the lock or smashing the window) to compromising the asset (grabbing the glasses) might be a minute or less. The time to discovery (when you see the broken window or realize your glasses are gone) could be minutes if you parked at the mall – or days, if the car was left at the airport parking garage. The DBIR makes the same point about enterprise data breaches:

When breaches are successful, the time to compromise continues to be very short. While we cannot determine how much time is spent in intelligence gathering or other adversary preparations, the time from first action in an event chain to initial compromise of an asset is most often measured in seconds or minutes. The discovery time is likelier to be weeks or months. The discovery time is also very dependent on the type of attack, with payment card compromises often discovered based on the fraudulent use of the stolen data (typically weeks or months) as opposed to a stolen laptop which is discovered when the victim realizes they have been burglarized.

Good News, Bad News on Phishing

Let’s end on a positive note, or a sort of positive note. The 2018 DBIR notes that most people never click phishing emails: “When analyzing results from phishing simulations the data showed that in the normal (median) organization, 78% of people don’t click a single phish all year.”

The less good news: “On average 4% of people in any given phishing campaign will click it.” The DBIR notes that the more phishing emails someone has clicked, the more they are likely to click on phishing emails in the future. The report’s advice: “Part of your overall strategy to combat phishing could be that you can try and find those 4% of people ahead of time and plan for them to click.”
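Why do those 4% matter so much? Because at campaign scale, a small per-person click rate makes at least one click nearly inevitable. A quick sketch of the arithmetic, assuming clicks are independent (a simplification real campaigns won't exactly satisfy):

```python
# Probability that at least one recipient clicks a phishing email,
# given a 4% per-person click rate and assuming independent clicks.

def p_at_least_one_click(recipients, click_rate=0.04):
    """P(at least one click) = 1 - P(nobody clicks)."""
    return 1 - (1 - click_rate) ** recipients

for n in (10, 50, 100):
    print(f"{n:4d} recipients: {p_at_least_one_click(n):.1%} chance of a click")
# The chance is already about a third at 10 recipients and is
# near-certainty by 100, which is why the DBIR says to plan for the click.
```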

Good luck with that.