David McLeod, CISO, Cox Enterprises

“Training and recovery.” That’s where David McLeod, chief information security officer of Cox Enterprises, says that CISOs should spend their money in 2020.

Training often focuses on making employees less of a security risk. That includes teaching them what not to click on and how to proactively protect the information that is a part of their everyday work. McLeod sees employees as something more powerful.

“Train everyone so you have a wall of passionate people surrounding your business. I’m talking about creating a neighborhood watch,” McLeod says. “I find people who are eager to know what they can do, and they help expand our culture of proactive protection akin to a neighborhood watch. So if I’m going to drive security for the least cost and the highest effectiveness, I’m always increasing my neighborhood watch.”

Recovery isn’t far behind, though, because sooner or later, there will likely be a security incident, such as a breach, ransomware attack, or worse. “Some hacker’s going to get in. It’s all about recovery. It’s all about keeping the business going. You can do a lot of harm to a business if you have to shut down your revenue systems for three days,” McLeod says.

Read more from David McLeod and from other top experts in my story for Forbes, “Chief Information Security Officer Priorities For 2020.”

Canadian banknote

Like consumers and merchants all around the world, Canadians have embraced digital payments instead of cash and checks. The growth rate is staggering, as evidenced by statistics provided by Interac, which processes many of those payments.

Digital payments are used for payments from an individual or business to another individual or business. For example, a person might buy artwork from a gallery in Montréal, using a mobile wallet or an app that person’s bank provides. Interac provides behind-the-scenes technology that facilitates these payments with a high degree of security.

Person-to-person (P2P) payments are one of the fastest-growing segments of business for Interac. Its P2P service, called Interac e-Transfer, saw 371.4 million transactions in 2018, representing a 54% increase over 2017. The amount of money involved is significant, too: CAN$132.8 billion in 2018, a 45% increase over 2017.

Interac overall processes about 16 million transactions per day, the bulk of which are debit card transactions made at the point of sale. That growth has back-office technology implications—the rapid increase in online transactions is prompting Interac to move its core software to the cloud. A shift to cloud-based services ensures it can handle future growth and will strengthen the always-on resiliency of its platform.

Why the fast growth in Interac e-Transfer use? It starts with more consumer and business acceptance of digital payments in place of cash and checks, owing in part to their convenience, reliability, and security. With that interest, more financial institutions have signed up as partners, so they can offer customers the ability to transfer money and make digital payments right from their bank accounts.

Also, more businesses are relying on digital payments for business-to-business transfers. Approximately one in six Interac e-Transfer transactions is conducted by a business, letting those businesses eliminate their reliance on checks and settle invoices in real time.

Read more about this in my story in Forbes, “Canada Embraces Digital Payments, With Some Behind-The-Scenes Help.”

Where will you find CARE? Think of trouble spots around the world where there are humanitarian disasters tied to extreme poverty, conflict, hunger, or a lack of basic healthcare or education. CARE is on the ground in these places, addressing survival needs, running clinics, and helping individuals, families, and communities rebuild their lives.

CARE’s scope is truly global. In 2018, the organization reached 56 million needy people through 965 programs in 95 countries, in places such as Mali, Jordan, Bangladesh, Brazil, the Democratic Republic of the Congo, Yemen, India, the Dominican Republic, and Niger.

CARE didn’t start out as a huge global charity, though. Founded in 1945, CARE provided a way for Americans to send lifesaving food and supplies to survivors of World War II — “CARE packages.” Today, it responds to dozens of disasters each year, reaching nearly 12 million people through its emergency programs. The rest of CARE’s work is through longer-term engagements, such as its work in Bihar State, in northern India.

Bihar, with a population of more than 110 million people, is one of India’s poorest states. Bihar has some of the country’s highest rates of infant and maternal mortality as well as childhood malnutrition. Since 2011, CARE has been working with the Bihar state government and other nongovernmental organizations (NGOs) to address those problems and to increase immunization rates for mothers and children.

The results to date have been significant. In Bihar, the percentage of 1-year-olds with completed immunization schedules increased from 12% to 84% between 2005 and 2018; there were nearly 20,000 fewer newborn deaths in 2016 than in 2011; and the maternal mortality rate fell by nearly half, from 312 to 165 maternal deaths per 100,000 live births between 2005 and 2018. How? Some of CARE’s initiatives involved improving healthcare facilities, mentoring nurses, supporting local social workers and midwives, and tracking the care given to weak and low-weight newborns.

Read more in my story for Forbes, “CARE’s Work In Bihar Shows Progress Is Possible Against The Toughest Problems.”

Solve the puzzle: A company’s critical customer data is in a multiterabyte on-premises database, and the digital marketing application that uses that data to manage and execute campaigns runs in the cloud. How can the cloud-based marketing software quickly access and leverage that on-premises data?

It’s a puzzle that one small consumer-engagement consulting company, Embel Assist, found its clients facing. The traditional solution would be to periodically replicate the on-premises database in the cloud using extract-transform-load (ETL) software, but that can take too much time and bandwidth, especially when processing terabytes of data. What’s more, the replicated data could quickly become out of date.

Using cloud-based development and computing resources, Embel Assist found another way to crack this problem. It created an app called EALink that acts as a smart interface between an organization’s customer data sources and Oracle Eloqua, a cloud-based marketing automation platform. EALink also shows how development using Oracle Cloud Infrastructure creates new opportunities for a small and creative company to take on big enterprise data challenges.

Say the on-premises CRM system for a drugstore chain has 1 million customer records. The chain wants an e-mail campaign to reach customers who made their last purchase more than a month ago, who live within 20 miles of one set of stores, and who purchased products related to a specific condition. Instead of exporting the entire database into Eloqua, EALink runs the record-extraction query on the CRM system and sends Eloqua only the minimum information needed to execute the campaign. And, the query is run when the campaign is being executed, so the campaign information won’t be out of date.
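The pattern is easy to sketch. Here is a minimal, hypothetical illustration in Python, using SQLite and invented table and column names (the actual CRM schema and EALink internals are not described in the source): the filtering query runs where the data lives, and only the minimal result set is handed to the campaign system.

```python
import sqlite3

# Stand-in for the on-premises CRM database; all names and data are invented.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE customers (
    email TEXT, last_purchase_days INTEGER,
    miles_to_store REAL, product_category TEXT)""")
conn.executemany(
    "INSERT INTO customers VALUES (?, ?, ?, ?)",
    [("a@example.com", 45, 12.0, "allergy"),
     ("b@example.com", 10,  5.0, "allergy"),    # purchased too recently
     ("c@example.com", 60, 18.5, "allergy"),
     ("d@example.com", 90, 40.0, "allergy")])   # lives too far away

# Run the filter where the data lives, and export only the one column
# the campaign needs, rather than shipping all 1 million records.
rows = conn.execute(
    """SELECT email FROM customers
       WHERE last_purchase_days > 30
         AND miles_to_store <= 20
         AND product_category = 'allergy'""").fetchall()
print([email for (email,) in rows])  # ['a@example.com', 'c@example.com']
```

Because the query runs at campaign-execution time, the result reflects the current state of the CRM data rather than a stale replica.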

Learn more about Embel Assist in my story for Forbes, “Embel Assist Links Marketing Apps With Enterprise Data.”

When a microprocessor vulnerability rocked the tech industry last year, companies scrambled to patch nearly every server they had. In Oracle’s case, that meant patching the operating system on about 1.5 million Linux-based servers.

Oracle finished the job in just 4 hours, without taking down the applications the servers ran, by using Oracle’s own automation technology. The technology involved is at the heart of Oracle Autonomous Linux, which the company announced at Oracle OpenWorld 2019 in San Francisco last month. Oracle has been using Autonomous Linux to run its own Generation 2 Cloud infrastructure, and now it is available at no cost to Oracle Cloud customers.

The last things most CIOs, CTOs, chief information security officers, and even developers want to worry about are patching their server operating systems. Whether they have a hundred servers or hundreds of thousands, that type of maintenance can slow down a business, especially if the maintenance requires shutting down the software running on that server.

A delay is doubly worrying when the reason for the patch is to handle a software or hardware vulnerability. In those instances, delays create an opportunity for malicious operators to strike. If an organization traditionally applies updates to its servers every three months, for example, and a zero-day vulnerability comes out just after that update, the company is vulnerable for months. When updates require a lengthy process, companies are reluctant to do it more frequently.

Not so with Autonomous Linux: once Oracle issues a patch for a newly discovered vulnerability, the service applies it quickly and automatically. Combined with Oracle Cloud Infrastructure’s other cost advantages, customers can expect significant total-cost-of-ownership savings compared with other Linux versions that run either on-premises or in the cloud.

Underneath the Autonomous Linux service is Oracle Linux, which remains binary compatible with Red Hat Enterprise Linux. Therefore, software that runs on RHEL will run on Oracle Autonomous Linux in Oracle Cloud Infrastructure without change.

Learn more in my story for Forbes, “With Autonomous Linux, Oracle Keeps Server Apps Running During Patching.”

Phoenix City Hall

U.S. government agencies that need high levels of information security can now run their applications on the latest cloud technologies, thanks to a pair of new cloud infrastructure regions from Oracle. The cloud data center complexes are authorized against stringent FedRAMP and Department of Defense requirements.

The two new cloud infrastructure regions are in Ashburn, Virginia, outside Washington, D.C., and in Phoenix, Arizona. They are part of Oracle’s goal to have 36 Generation 2 Cloud data center regions, offering services such as Oracle Autonomous Database, live by the end of 2020, including three additional dedicated regions to support Department of Defense agencies and contractors.

FedRAMP, more formally known as the Federal Risk and Authorization Management Program, provides a standard approach to federal security assessments, authorizations, and monitoring of cloud services. With FedRAMP, once a cloud provider is approved to provide a set of services or applications to one branch of government, other departments can use that service without getting a new security authorization.

With FedRAMP authorization in place, a federal agency can more quickly move an application or database workload that’s running in a government-run data center into Oracle Cloud Infrastructure. Agencies can also build and launch new cloud-native applications directly on Oracle’s cloud.

The cloud also lets federal agencies tap the latest technology and analytics strategies, including applying artificial intelligence and machine learning. Those techniques often rely on GPU-based computing—graphics processing units—which are used for math-heavy tasks such as high-performance scientific computing, data analytics, and machine learning.

Learn more about FedRAMP in my article for Forbes, “With FedRAMP Clearance, Oracle Brings Its Gen 2 Cloud Infrastructure To Government.”

Charles Nutter remembers when, working as a Java architect, he attended a conference and saw the Ruby programming language for the first time. And he was blown away. “I was just stunned that I understood every piece of code, every example, without knowing the language at all. It was super easy for me to understand the code.”

As a Java developer, Nutter began looking for an existing way to run Ruby within a Java runtime environment, specifically a Java virtual machine (JVM). This would let Ruby programs run on any hardware or software platform supported by a JVM, and would facilitate writing polyglot applications that used some Java and some Ruby, with developers free to choose whichever language was best for a particular task.

Nutter found the existing Ruby-on-JVM project, JRuby. However, “it had not been moving forward very quickly. It had been kind of stalled out for some years.” So, he became involved, helping drive support for a popular web application framework, Ruby on Rails, to run within a JVM.

“We made it work,” says Nutter. “In 2005 and 2006, we got Rails to run on top of the JVM—and it was the first time any major framework from off the Java platform had ever been run on top of the JVM.”

Want to be like Nutter someday? His career advice is direct: Contribute to an open source community, even if it’s a little daunting, and even if some people in that community are, well, rude to newcomers.

“Don’t be afraid to get out into the open source community,” Nutter says. “Get out into the public community, do talks, submit bugs, submit patches. It’s going to be discouraging, and there’s a lot of jerks out there that will scare you away. Don’t let them. Get into the heart of the community and don’t be afraid to help contribute or ask questions.”

For his successful coleadership of JRuby during more than a decade, and for his broader leadership in the software industry, Nutter was recently honored with a Groundbreaker Award. The award was presented at Oracle Code One in San Francisco, where we had a long chat. Read what we talked about in my article for Forbes, “A Java Developer Walks Into A Ruby Conference: Charles Nutter’s Open Source Journey.”

Doug Cutting stands head-and-shoulders above most developers I’ve met—figuratively, as well as literally. As one of the founders of the Hadoop open source project, which allows many Big Data projects to scale to handle huge problems and immense quantities of data, Cutting is revered. Plus, Doug Cutting is tall. Very tall. (Lots taller than I am.)

“Six-foot-eight, or 2 meters 3 centimeters, for the record,” Cutting volunteers when we meet.

In the software industry, Cutting looms large for two major open source successes, proof that innovation lightning sometimes strikes twice. Hadoop, managed by the Apache Software Foundation, is at its heart a framework and file system that manages distributed computing—that is, it allows many computers to work together in a cluster to solve hard problems.

Hadoop provided the initial foundation for many companies’ big data efforts. The software let them pull in data from multiple sources for analysis using clusters of dozens, or hundreds, of servers. The other project, also managed by Apache, is Lucene, a Java library that lets programmers build fast text indexing and searching into their applications.

In his day job, Cutting serves as the chief architect for Cloudera, one of the largest open source software companies. He also serves as an evangelist for the open source movement, inspiring contributions to Hadoop and Lucene and also many other projects.

Cutting was recently honored with a Groundbreaker Award, presented at Oracle Code One in San Francisco. He talked to me about collaborating on open source software, creating a fulfilling career in software, understanding how technology affects society, and the meaning of the word “Hadoop.” Read Cutting’s thoughts about everything from building a career in open source to the meaning of data science in my article for Forbes, “Hadoop Pioneer Says Developers Should Build Open Source Into Their Career Plans.”

Can you name that Top 40 pop song in 10 seconds? Sure, that sounds easy. Can you name that pop song—even if it’s played slightly out of tune? Uh oh, that’s a lot harder. However, if you can guess 10 in a row, you might share in a cash prize.

That’s the point of “Out of Tune,” an online music trivia game where players mostly in their teens and 20s compete to win small cash prizes–just enough to make the game more fun. And fun is the point of “Out of Tune,” launched in August by FTW Studios, a startup based in New York. What’s different about “Out of Tune” is that it’s designed for group play in real time. The intent is that players will get together in groups, and play together using their Android or Apple iOS phones.

Unlike in first-person shooter games, or other activities where a game player is interacting with the game’s internal logic, “Out of Tune” emphasizes the human-to-human aspect. Each game is broadcast live from New York — sometimes from FTW Studios’ facilities, sometimes from a live venue. Each game is hosted by a DJ and is enjoyed through streaming video. “We’re not in the game show business or the music business,” says Avner Ronen, FTW Studios’ founder and CEO. “We’re in the shared experiences business.”

Because of all that human interaction, game players should feel like they’re part of something big, part of a group. “It’s social,” says Ronen, noting that 70% of the game’s participants today are female. “The audience is younger, and people play with their friends.”

How does the game work? Twice a day, at 8 p.m. and 11 p.m. Eastern time, a DJ launches the game live from New York City. The game consists of 10 pop songs played slightly out of tune—and players, using a mobile app on their phones, have 10 seconds to guess the song. Players who guess all the songs correctly share in that event’s prize money.

Learn more about FTW Studios – and how the software works – in my story in Forbes, “This Online Game Features Out-Of-Tune Pop Songs. The End Game Is About Much More.”

Every new graduate from Central New Mexico Community College leaves school with a beautiful paper diploma covered in fine calligraphy, colorful seals, and official signatures. This summer, every new graduate also left with the same information authenticated and recorded in blockchain.

What’s the point of recording diplomas using blockchain technology? Blockchain creates a list of immutable records—grouped in blocks—that are linked cryptographically to form a tamper-evident chain. Those blocks are replicated on multiple servers across the participating organizations, so if a school went out of business, or somehow lost certain records to disaster or other mayhem, a student’s credentials are still preserved in other organizations’ ledger copies. Anyone authorized to access information on that blockchain (which might include, for example, prospective employers) could verify whether the student’s diploma and its details, such as the year, degree, and honors, match what the student claims.
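The tamper-evident property described above is straightforward to demonstrate. Below is a toy sketch in Python — a real blockchain adds consensus, replication across organizations, and digital signatures, and the record fields here are invented — showing how chaining each block to the hash of its predecessor makes any later alteration detectable.

```python
import hashlib
import json

# Toy illustration only: each block stores the hash of the previous block,
# so altering any earlier record invalidates every link that follows.
def block_hash(block):
    return hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, record):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "record": record})

def chain_is_valid(chain):
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False  # tampering detected
    return True

chain = []
append_block(chain, {"student": "J. Doe", "degree": "AAS", "year": 2019})
append_block(chain, {"student": "R. Roe", "degree": "AA", "year": 2019})
print(chain_is_valid(chain))        # True
chain[0]["record"]["year"] = 2015   # tamper with an earlier diploma
print(chain_is_valid(chain))        # False: the altered block's hash no
                                    # longer matches the stored prev_hash
```

Replicating copies of such a chain across many institutions is what protects the records even if one school disappears or loses its data.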

Today, using blockchain for diplomas or certifications is uncommon. But it’s one of a growing number of blockchain use cases being tested—cases where information needs to be both shared and trusted across many parties, and preserved against loss or tampering.

Academic credentials are important to adults looking for jobs or applying to study for advanced degrees. Those records are also vital for refugees fleeing natural disasters or war-torn countries, such as those leaving Syria. “There are refugees who are medical doctors who can no longer practice medicine because they don’t have those certificates anymore,” says Feng Hou, CIO and chief digital learning officer at Central New Mexico Community College (CNM).

CNM is the largest higher-education institution in the state in terms of undergraduate enrollment, serving more than 23,000 students this fall. Nationally accredited, with eight locations in and around Albuquerque, CNM offers more than 150 associate degrees and certificates, as well as non-credit job training programs.

A benefit of blockchain is that there’s no single point of failure. “Given the decentralized nature of blockchain technology, it will prevent the single point of failure for any identity crisis, such as Syrian refugees, because on blockchain the ID is secure, shareable and verifiable anywhere in the world,” says Hou.

Read more in my story for the Wall Street Journal, “New Mexico College Deploys Blockchain for Digital Diplomas.”

Oracle Database is the world’s most popular enterprise database. This year’s addition of autonomous operating capabilities to the cloud version of Oracle Database is one of the most important advances in the database’s history. What does it mean for a database to be “autonomous”? Let’s look under the covers of Oracle Autonomous Database at just a few of the ways it earns that name.

Oracle Autonomous Database is a fully managed cloud service. Like all cloud services, the database runs on servers in cloud data centers—in this case, on hardware called Oracle Exadata Database Machine that’s specifically designed and tuned for high-performance, high-availability workloads. The tightly controlled and optimized hardware enables some of the autonomous functionality we’ll discuss shortly.

While the autonomous capability of Oracle Autonomous Database is new, it builds on scores of automation features that Oracle has been building into its Oracle database software and the Exadata database hardware for years. The goals of the autonomous functions are twofold: First, to lower operating costs by reducing costly and tedious manual administration, and second, to improve service levels through automation and fewer human errors.

My essay in Forbes, “What Makes Oracle Autonomous Database Truly ‘Autonomous,’” shows how the capabilities in Oracle Autonomous Database change the game for database administrators (DBAs). The benefit: DBAs are freed from mundane tasks and can focus on higher-value work.

In Australia, at 8 a.m. on ‘Results Day,’ thousands of South Australian year 12 students receive their ATAR (Australian Tertiary Admission Rank)—the all-important standardized score used to gain admission to universities across Australia. The frustrating challenge: many are eligible to add as many as nine school- and subject-specific bonus points to their ATAR, which can improve their chances of gaining admission to tertiary institutions like the University of Adelaide. To find out about those bonuses, or adjusted ATAR, they must talk to university staff.

Thousands of students. All receiving their ATAR at the same time. All desperate to know about their bonus points. That very moment. They’re all phoning the university wanting a 5- or 10-minute call to answer a few questions and learn about their adjusted score. This past year, 2,100 of those students skipped what in the past could be an hours-long phone queue to talk to university staff. Instead, they used Facebook Messenger to converse with a chatbot, answering questions about their bonus eligibility and learning their adjusted ATAR score–in about three minutes.

“It’s always been really difficult for us to support the adjusted ATAR calls,” says Catherine Cherry, director of prospect management at University of Adelaide. “There are only so many people we can bring in on that busy day, and only so many phone calls that the staff can take at any given time.” Without the chatbot option, even when the prospective student is able to reach university staff, the staff can’t afford to stay on the phone to answer all that student’s questions, which can create a potentially bad first experience with the university. “The staff who are working that day really feel compelled to hurry the student off the phone because we can see the queue of 15, 20 people waiting, and we can see that they’ve been waiting for a long time,” Cherry says.

Enter the chatbot: Three minutes on Facebook Messenger and students had their adjusted ATAR. Read about the technology behind this chatbot application in my story in Forbes, “University of Adelaide Builds A Chatbot To Solve One Very Hard Problem.”

“All aboooooaaaaard!” Whether you love to watch the big freight engines rumble by, or you just ride a commuter train to work, the safety rules around trains are pretty simple for most of us. Look both ways before crossing the track, and never try to beat a train, for example. If you’re a rail operator, however, safety is a much more complicated challenge—such as making sure you always have the right people on the right positions, and ensuring that the crew is properly trained, rested, and has up-to-date safety certifications.

Helping rail operators tackle that huge challenge is CrewPro, the railroad crew management software from PS Technology, a wholly owned subsidiary of the Union Pacific Railroad. The original versions of this package run on mainframes and are still used by railroads ranging from the largest Class I freight operators to local rail-based passenger transit systems in major U.S. cities.

Those railroad operators use CrewPro to handle complex staffing issues on the engines and on the ground. The demands include scheduling based on availability and seniority; tracking mandatory rest status; and managing certifications and qualifications, including pending certification expirations.

Smaller railroads, though, don’t have the sophisticated IT departments needed to stand up this fully automated crew management system. That’s why PS Technology launched a cloud version, which saw its first railroad customer go online in April. “There are more than 600 short line railroads, and that is our growth area,” says Seenu Chundru, president of PS Technology. “They don’t want to host this type of software on premises.”

Learn more about this in my story for Forbes, “Railroads Roll Ahead With Cloud-Based Crew Management.”

Knowledge is power—and knowledge with the right context at the right moment is the most powerful of all. Emerging technologies will leverage the power of context to help people become more efficient, and one of the first to do so is a new generation of business-oriented digital assistants.

Let’s start by distinguishing a business digital assistant from consumer products such as Apple’s Siri, Amazon’s Echo, and Google’s Home. Those cloud-based technologies have proved themselves at tasks like information retrieval (“How long is my commute today?”) and personal organization (“Add diapers to my shopping list”). Those services have some limited context about you, like your address book, calendar, music library, and shopping cart. What they don’t have is deep knowledge about your job, your employer, and your customers.

In contrast, a business digital assistant needs much richer context to handle the kind of complex tasks we do at work, says Amit Zavery, executive vice president of product development at Oracle. Which sorts of business tasks? How about asking a digital assistant to summarize the recent orders from a company’s three biggest customers in Dallas; set up a conference call with everyone involved with a particular client account; create a report of all employees who haven’t completed information security training; figure out the impact of a canceled meeting on a travel plan; or pull reports on accounts receivable deviations from expected norms?

Those are usually tasks for human associates—often a tech-savvy person in supply chain, sales, finance, or human resources. That’s because so many business tasks require context about the employee making the request and about the organization itself, Zavery says. A digital assistant’s goal should be to reduce the amount of mental energy and physical steps needed to perform such tasks.

Learn more in my article for Forbes, “The One Thing Digital Assistants Need To Become Useful At Work: Context.”

At too many government agencies and companies, the unspoken security mindset is, “We’re not a prime target; our data isn’t super-sensitive.” Wrong. The reality is that every piece of personal data adds to the picture that criminals or state-sponsored actors are painting of individuals.

And that makes your data a target. “Just because you think your data isn’t useful, don’t assume it’s not valuable to someone, because they’re looking for columns, not rows,” says Hayri Tarhan, Oracle regional vice president for public sector security.

Here’s what Tarhan means by columns not rows: Imagine that the bad actors are storing information in a database (which they probably are). What hackers want in many data breaches is more information about people already in that database. They correlate new data with the old, using big data techniques to fill in the columns, matching up data stolen from different sources to form a more-complete picture.

That picture is potentially much more important and more lucrative than finding out about new people and creating new, sparsely populated data rows. So, every bit of data, no matter how trivial it might seem, is important when it comes to filling the empty squares.
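A tiny, fabricated example makes the “columns, not rows” idea concrete: correlating two stolen datasets on a shared key (here, an email address) fills in new columns for people the attacker already knows about.

```python
# Toy illustration of "columns, not rows": a stolen dataset becomes more
# valuable when joined with previously stolen data about the same people.
# All records here are fabricated.
breach_2017 = {"a@example.com": {"name": "Pat"},
               "b@example.com": {"name": "Sam"}}
breach_2019 = {"a@example.com": {"employer": "Acme", "phone": "555-0100"}}

# Correlate on the shared key (email), merging each new breach's fields
# into the existing profile rather than creating a new row.
profiles = {}
for email, fields in list(breach_2017.items()) + list(breach_2019.items()):
    profiles.setdefault(email, {}).update(fields)

print(profiles["a@example.com"])
# {'name': 'Pat', 'employer': 'Acme', 'phone': '555-0100'}
```

No new person was added by the second breach, yet the attacker’s picture of one person became substantially more complete.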

Read more about this – and how machine learning can help – in my article in Forbes, “Data Thieves Want Your Columns—Not Your Rows.”

Blockchain and the cloud go together like organic macaroni and cheese. What’s the connection? Choosy shoppers would like to know that their organic food is tracked from farm to shelf, to make sure they’re getting what’s promised on the label. Blockchain provides an immutable ledger perfect for tracking cheese, for example, as it goes from dairy to cheesemaker to distributor to grocer.

Oracle’s new Blockchain Cloud Service provides a platform for each participant in a supply chain to register transactions. Within that blockchain, each participant—and regulators, if appropriate—can review those transactions to ensure that promises are being kept, and that data has not been tampered with. Use cases range from supply chains and financial transactions to data sharing inside a company.

Launched this month, Oracle Blockchain Cloud Service has the features that an enterprise needs to move from experimenting with blockchain to creating production applications. It addresses some of the biggest challenges facing developers and administrators, such as mastering the peer-to-peer protocols used to link blockchain servers, ensuring resiliency and high availability, and ensuring that security is solid. For example, developers previously had to code one-off integrations using complex APIs; Oracle’s Blockchain Cloud Service provides integration accelerators with sample templates and design patterns for many Oracle and third-party applications in the cloud and running on-premises in the data center.

Oracle Blockchain Cloud Service provides the kind of resilience, recoverability, security, and global reach that enterprises require before they’d trust their supply chain and customer experience to blockchain. With blockchain implemented as a managed cloud service, organizations also get a system that’s ready to be integrated with other enterprise applications, and where Oracle handles the back end to ensure availability and security.

Read more about this in my story for Forbes, “Oracle Helps You Put Blockchain Into Real-World Use With New Cloud Service.”

The trash truck rumbles down the street, and its cameras pour video into the city’s data lake. An AI-powered application mines that image data looking for graffiti—and advises whether to dispatch a fully equipped paint crew or a squad with just soap and brushes.

Meanwhile, cameras on other city vehicles could feed the same data lake so another application detects piles of trash that should be collected. That information is used by an application to send the right clean-up squad. Citizens, too, can get into the act, by sending cell phone pictures of graffiti or litter to the city for AI-driven processing.

Applications like these provide the vision for the Intelligent Internet of Things Integration Consortium (I3). This is a new initiative launched by the University of Southern California (USC), the City of Los Angeles, and a number of stakeholders including researchers and industry. At USC, I3 is jointly managed by three institutes: Institute for Communication Technology Management (CTM), Center for Cyber-Physical Systems and the Internet of Things (CCI), and Integrated Media Systems Center (IMSC).

“We’re trying to make the I3 Consortium a big tent,” says Jerry Power, assistant professor at the USC Marshall School of Business’s Institute for Communication Technology Management (CTM). Power serves as executive director of the consortium. “Los Angeles is a founding member, but we’re talking to other cities and vendors. We want lots of people to participate in the process, whether a startup or a super-large corporation.”

As of now, there are 24 members of the consortium, including USC’s Viterbi School of Engineering and Marshall School of Business. And companies are contributing resources. Oracle’s Startup for Higher Education program, for example, is providing $75,000 a year in cloud infrastructure services to support the I3 Consortium’s first three years of development work.

The I3 Consortium needs a lot of computing power. It lets cities move beyond data silos, where information is confined to individual departments such as transportation and sanitation, to a model where data flows among departments, can be more easily managed, and can incorporate contributions from residents or even other governmental or commercial data providers. That information is consolidated into a city’s data lake, which AI-powered applications across departments can access.

The I3 Consortium will provide a vehicle to manage the data flowing into the data lake. Cyrus Shahabi, a professor at USC’s Viterbi School of Engineering and director of its Integrated Media Systems Center (IMSC), is using Oracle Cloud credits to build computation-heavy applications that train AI-based, deep learning neural networks and use real-time I3-driven data lakes to recognize issues, such as graffiti or garbage, that drive action.

Read more about the I3 Consortium in my story for Forbes, “How AI Could Tackle City Problems Like Graffiti, Trash, And Fires.”

The public cloud is part of your network. But it’s also not part of your network. That can make security tricky, and sometimes become a nightmare.

The cloud represents resources that your business rents: computational resources, like CPU and memory; infrastructure resources, like internet bandwidth and internal networks; storage resources; and management platforms, like the tools needed to provision and configure services.

Whether it’s Amazon Web Services, Microsoft Azure or Google Cloud Platform, it’s like an empty apartment that you rent for a year or maybe a few months. You start out with empty space, put in whatever you want, and use it however you want. Is such a short-term rental apartment your home? That’s a big question, especially when it comes to security. By the way, let’s focus on platform-as-a-service (PaaS) and infrastructure-as-a-service (IaaS), where your business has a great deal of control over how the resource is used — like an empty rental apartment.

We are not talking about software-as-a-service (SaaS), like Office 365 or Salesforce.com. That’s where you show up, pay your bill and use the resources as configured. That’s more like a hotel room: you sleep there, but you can’t change the furniture. Security is almost entirely the responsibility of the hotel; your security responsibility is to ensure that you don’t lose your key, and to refuse to open the door for strangers. The SaaS equivalent: Protect your user accounts and passwords, and ensure users only have the least necessary access privileges.

Why PaaS/IaaS are part of your network

As Peter Parker knows, Spider-Man’s great power requires great responsibility. That’s true in the enterprise data center — and it’s true in PaaS/IaaS networks. The customer is responsible for provisioning servers, storage and virtual machines. Not only that, but the customer is also responsible for creating connections between the cloud service and other resources, such as an enterprise data center (in a hybrid cloud architecture) or other cloud providers (in a multi-cloud architecture).

The cloud provider sets terms for use of the PaaS/IaaS, and allows inbound and outbound connections. There are service level guarantees for availability of the cloud, and of servers that the cloud provider owns. Otherwise, everything is on the enterprise. Think of the PaaS/IaaS cloud as being a remote data center that the enterprise rents, but where you can’t physically visit and see your rented servers and infrastructure.

Why PaaS/IaaS are not part of your network

In short, except for the few areas that the cloud provider handles — availability, cabling, power supplies, connections to carrier networks, physical security — you own it. That means installing patches and fixes. That means instrumenting servers and virtual machines. That means protecting them with software-based firewalls. That means doing backups, whether using the cloud provider’s value-added services or someone else’s. That means anti-malware.

That’s not to minimize the benefits the cloud provider offers you. Power and cooling are a big deal. So are racks and cabling. So is that physical security, and having 24×7 on-site staffing in the event of hardware failures. Also, there’s the click-of-a-button ability to provision and spin up new servers to handle demand, and then shut them down again when not needed. Cloud providers can also provide firewall services, communications encryption and, of course, consulting on security.

The word elastic is often used for cloud services. That’s what makes the cloud much more agile than an on-premises data center, or renting an equipment cage in a colocation center. It’s like renting an apartment where, if you need a couple of extra bedrooms for a few months, you can upsize.

For many businesses, that’s huge. Read more about how great cloud power requires great responsibility in my essay for SecurityNow, “Public Cloud, Part of the Network or Not, Remains a Security Concern.”

No more pizza boxes: Traditional hardware firewalls can’t adequately protect a modern corporate network and its users. Why? Because while there still may be physical servers inside an on-premises data center or in a wiring closet somewhere, an increasing number of essential resources are virtualized or off-site. And off-site includes servers in infrastructure-as-a-service (IaaS) and platform-as-a-service (PaaS) clouds.

It’s the enterprise’s responsibility to protect each of those assets, as well as the communications paths to and from those assets, as well as public Internet connections. So, no, a pizza-box appliance next to the router can’t protect virtual servers, IaaS or PaaS. What’s needed are the badly named “next-generation firewalls” (NGFW) — badly named because the term is not at all descriptive, and will seem really stupid in a few years, when the software-based NGFW will magically become an OPGFW (obsolete previous-generation firewall).

Still, the industry loves the “next generation” phrase, so let’s stick with NGFW here. If you have a range of assets that must be protected, including some combination of on-premises servers, virtual servers and cloud servers, you need an NGFW to unify protection and ensure consistent coverage and policy compliance across all those assets.

Cobbling together a variety of different technologies may not suffice, and could end up with coverage gaps. Also, only an NGFW can detect attacks or threats against multiple assets; discrete protection for, say, on-premises servers and cloud servers won’t be able to correlate incidents and raise the alarm when an attack is detected.

Here’s how Gartner defines NGFW:

Next-generation firewalls (NGFWs) are deep-packet inspection firewalls that move beyond port/protocol inspection and blocking to add application-level inspection, intrusion prevention, and bringing intelligence from outside the firewall.

What this means is that an NGFW does an excellent job of detecting whether traffic is benign or malicious, and can be configured to analyze traffic and detect anomalies in a variety of situations. A true NGFW looks at northbound/southbound traffic, that is, data entering and leaving the network. But it also doesn’t trust anything: The firewall software examines eastbound/westbound traffic as well, that is, packets flowing from one asset inside the network to another.
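
The north-south/east-west distinction can be sketched in a few lines. This is an illustration only: a real NGFW classifies flows against the actual network topology and policy, not just address ranges, and every name here is invented for the example. RFC 1918 private ranges stand in for “inside the network.”

```python
import ipaddress

# Sketch: classify a flow as east-west (internal-to-internal) or
# north-south (crossing the network edge). RFC 1918 private ranges
# stand in for "inside the network" -- an assumption for illustration.

def is_internal(addr: str) -> bool:
    return ipaddress.ip_address(addr).is_private

def flow_direction(src: str, dst: str) -> str:
    if is_internal(src) and is_internal(dst):
        return "east-west"
    return "north-south"

print(flow_direction("10.0.1.5", "10.0.2.9"))       # east-west
print(flow_direction("10.0.1.5", "93.184.216.34"))  # north-south
```

A firewall that only watches the edge would never even see the first flow; that is the gap east-west inspection closes.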

After all, an intrusion might compromise one asset, and use that as an entry point to compromise other assets, install malware, exfiltrate data, or cause other mischief. Where does the cloud come in? Read my essay for SecurityNow, “Next-Generation Firewalls: Poorly Named but Essential to the Enterprise Network.”

Nine takeaways from the RSA Conference 2018 can give business leaders some perspective on how to think about the latest threats and information security trends. I attended the conference in April, along with more than 42,000 corporate security executives and practitioners, tech vendors, consultants, researchers and law enforcement experts.

In my many conversations, over way too much coffee, the nine topics below kept coming up. Consider them real-world takeaways from the field:

1. Ransomware presents a real threat to operations

The RSA Conference took place shortly after a big ransomware event shut down some of Atlanta’s online services. The general consensus from practitioners at RSA was that such an attack could happen to any municipality, large or small, and the more that government services are interconnected, the greater the likelihood that a breach in one part of an organization could spill over and affect other systems. Thus, IT must be eternally vigilant to ensure that systems are patched and anti-malware measures are up to date to prevent a breach from spreading horizontally through the organization.

2. Spearphishing is getting more sophisticated

One would think that a CFO would know better than to respond to a midnight email from the CEO saying, “Please wire a million dollars to this overseas account immediately.” One would think that employees would know not to respond to requests from their IT department for a “password audit” and supply their login credentials. Yet those types of scenarios are happening with alarming frequency—enough that when asked what they lose sleep over, many practitioners responded by saying “spearphishing” right after they said “ransomware.”

Spearphishing works because it arrives via carefully written emails. It is sometimes customized to a company or even a person’s role, and capable at times of evading spam filters and other email security software. Spearphishing tricks consumers into logging into fake banking websites, and it tricks employees into giving away money or revealing credentials.

Continuous employee training is the most common solution offered. Another option: strong monitoring that can use machine learning to learn what “normal” is and flag out-of-the-norm behaviors or data access by a person or system.

3. Cryptomining is a growing concern

Cryptomining occurs when hackers manage to install software onto enterprise computers that surreptitiously use processor and memory resources to mine cryptocurrencies. Unlike many other types of malware, cryptomining doesn’t try to disrupt operations or steal data. Instead, the malware wants to stay hidden, invisibly making money (literally) for the hacker for days, weeks, months or years. Again, effective system monitoring could help raise a flag when a company’s computing resources are being abused this way.
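
That kind of baseline monitoring can be sketched very simply. A hedged illustration: the data, threshold, and function names are all invented, and a production system would correlate far richer signals than a single CPU metric, but the idea of flagging sustained deviation from a host’s own history is the same.

```python
from statistics import mean, pstdev

# Sketch: flag a host whose recent CPU usage sits far above its own
# historical baseline -- the kind of simple monitoring signal that can
# expose hidden cryptomining. Threshold and data are illustrative.

def is_anomalous(history, recent, z_threshold=3.0):
    mu, sigma = mean(history), pstdev(history)
    if sigma == 0:
        return recent > mu
    return (recent - mu) / sigma > z_threshold

baseline = [12, 15, 11, 14, 13, 12, 16, 14]  # % CPU over normal weeks
print(is_anomalous(baseline, 14))  # False: within normal range
print(is_anomalous(baseline, 92))  # True: sustained spike worth a look
```
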

Interestingly, while many at RSA were talking about cryptomining, none of the people I talked to had experienced it first-hand. And while everyone agreed that such malware should be blocked, detected and eradicated, some treated cryptomining as a nuisance that is lower in security priority than other threats, like ransomware, spearphishing or other attacks that would steal corporate data.

What about 4-9?

Read the entire list, including thoughts about insider threats and the split between prevention and detection, in my essay for the Wall Street Journal, “9 Practical Takeaways From a Huge Data Security Conference.”

Oracle CEO Mark Hurd is known as an avid tennis fan and supporter of the sport’s development, having played in college at Baylor University. At the Collision Conference last week in New Orleans, Hurd discussed the similar challenges facing tennis players and top corporate executives.

“I like this sport because tennis teaches that you’re out there by yourself,” said Hurd, who was interviewed on stage by CNBC reporter Aditi Roy. “Tennis is like being CEO: You can’t call time out, you can’t bring in a substitute,” Hurd said. “Tennis is a space where you have to go out every day, rain or shine, and you’ve got to perform. It’s just like the business world.”

Performance returned to the center of the conversation when Roy asked about Oracle’s acquisition strategy. Hurd noted that Oracle’s leadership team gives intense scrutiny to acquisitions of any size. “We don’t go out of our way to spend money — it’s our shareholders’ money,” he said. “We also think about dividends and buying stock back.”

When it comes to mergers and acquisitions, Oracle is driven by three top criteria, Hurd said. “First, the company has to fit strategically with where we are going,” he said. “Second, it has to make fiscal sense. And third, we have to be able to effectively run the acquisition.”

Hurd emphasized that he’s focused on the future, not a company’s past performance. “We are looking for companies that will be part of things 5 or 10 years from now, not 5 or 10 years ago,” he said. “We want to move forward, in platforms and applications.”

To a large extent, that future includes artificial intelligence. Hurd was quick to say, “I’m not looking for someone to say, ‘I have an AI solution in the cloud, come to me.’” Rather, Oracle wants to be able to integrate AI directly into its applications, in a way that gives customers clear business returns.

He used the example of employee recruitment. “We recruit 2,000 college students today. It used to be done manually, but now we use machine learning and algorithms to figure out where to source people.” Not only does the AI help find potential employees, but it can help evaluate whether the person would be successful at Oracle. “We could never have done that before,” Hurd added.

Read more about what Hurd said at Collision, including his advice for aspiring CEOs, in my story for Forbes, “Mark Hurd On The Perfect Sport For CEOs — And Other Leadership Insights.”

You can also watch the entire 20-minute interview here.

Microservices are a software architecture that has become quite popular in conjunction with cloud-native applications. Microservices let companies add or update tech-powered features more easily—and quite frequently even reduce the operating expenses of a product. A microservices approach does this by making it easier to update a large, complex program without revising the entire application, thereby accelerating the process of software updates.

Think about major enterprise software such as a customer management application. Such programs are often written as a single, monolithic application. Yet some parts of that application are neatly encapsulated functionality, such as the function that talks to an order-processing database to create a new order.

In the microservices architecture, developers could write that order-processing service, including its state, as its own program—a loosely coupled service. The main customer management application would consist of many such services, interacting through their application programming interfaces (APIs). Here’s what you need to know about the business advantages of a microservices architectural approach, according to Boris Scholl, vice president of development for microservices at Oracle.
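
The order-processing example can be sketched in miniature. This is not Oracle’s architecture or any real product API; in production the service would run as its own process behind an HTTP API, while here an in-process class (with invented names like `OrderService` and `create_order`) stands in for that API boundary to show the loose coupling.

```python
from dataclasses import dataclass, field
from typing import Dict

# Sketch of a loosely coupled order-processing service. Callers depend
# only on the public methods (the "API"), never on the storage inside.

@dataclass
class OrderService:
    _orders: Dict[int, dict] = field(default_factory=dict)
    _next_id: int = 1

    def create_order(self, customer: str, item: str) -> int:
        order_id = self._next_id
        self._orders[order_id] = {"customer": customer, "item": item}
        self._next_id += 1
        return order_id

    def get_order(self, order_id: int) -> dict:
        return self._orders[order_id]

# The customer-management "application" uses only the interface, so the
# in-memory dict could be swapped for a cloud database without touching
# this calling code.
svc = OrderService()
oid = svc.create_order("Acme Corp", "license renewal")
print(svc.get_order(oid))  # {'customer': 'Acme Corp', 'item': 'license renewal'}
```
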

1. Microservices can let you add new features faster to your company’s vital applications

Microservices can reduce complexity for your main enterprise applications. By encapsulating each business scenario, such as ordering a product or shopping cart functionality, into its own service, the code base becomes smaller, easier to maintain and easier to test. When you want to add or update a feature, “you can go faster by updating the service, as opposed to having to change the functionality of a very large project,” says Scholl.

Managing and connecting these many microservices might sound complex. However, “we are lucky that the microservices technology is evolving so fast there are infrastructures and platforms that make the heavy lifting easier,” Scholl says.

2. Microservices can let you embrace new, modern technology like artificial intelligence (AI) more easily

Developers can use the programming languages, tools and frameworks best suited for each service. Those may not be the same languages, tools and platforms used for other services. For example, consider how you apply a new idea, like today’s machine learning capability, to a customer management program. If the customer management program is built as a monolith in a specific language, it will be hard to integrate the new functionality—not to mention you are bound to the language, framework and even version used in the monolithic application.

However, with a microservices approach, developers might write a machine learning-focused service in the Scala language. They might run that service in a specialized AI-based cloud service that has the hardware speed needed to process huge datasets. That Scala-based service gets easily integrated with the rest of the customer application that might be written in Java or some other language. “You get cost savings because you can use a different technology stack for each service,” says Scholl. “You can use the best technology for the service.”

3. Each part of an application written with microservices can have its own release cadence

This feature relates to speed and also control and governance, which can be important to highly regulated industries. “Perhaps some parts of your application can be updated yearly and that fits your needs,” says Scholl. “Other parts might need to be updated more often, if you are looking to be agile and react faster to the market or to take advantage of new technology.”

Let’s go back to the data analytics and machine learning example. Perhaps new machine-learning technology has become available, allowing data analytics to run in seconds instead of minutes. That opportunity can be exploited by updating the data-analytics microservices. Or say an order-processing system was moved from an on-premises database to a cloud database. In a microservices architecture, all the developers would need to do is update the service that accesses that order processing; the rest of the customer-management application would not need to be changed.

“Think about where you need to update more frequently,” says Scholl. “If you can identify those components that should be on a different release cycle, break off that functionality into a new service. Then, use an API to let that new service talk to the rest of your application.”

Read more, including about scalability and organizational agility, in my essay for the Wall Street Journal, “Tech Strategy: 5 Things CEOs Should Know About Microservices.”

No doubt you’ve heard about blockchain. It’s a distributed digital ledger technology that lets participants add and view blocks of transaction records, but not delete or change them without being detected.

Most of us know blockchain as the foundation of Bitcoin and other digital currencies. But blockchain is starting to enter the business mainstream as the trusted ledger for farm-to-table vegetable tracking, real estate transfers, digital identity management, financial transactions and all manner of contracts. Blockchain can be used for public transactions as well as for private business, inside a company or within an industry group.

What makes the technology so powerful is that there’s no central repository for this ever-growing sequential chain of transaction records, clumped together into blocks. Because the repository is replicated in each participant’s blockchain node, there is no single point of failure, and no insider threat within a single organization can compromise its integrity.
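
A toy sketch shows why a hash-linked chain of blocks makes tampering detectable. Real blockchain platforms add consensus, digital signatures, and peer-to-peer replication, none of which appears here; the function names are invented for the example.

```python
import hashlib
import json

# Minimal sketch of an append-only, hash-linked ledger. Each block
# records the hash of its predecessor, so changing any old record
# breaks every hash that follows it.

def block_hash(block: dict) -> str:
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain: list, data: dict) -> None:
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"data": data, "prev": prev}
    block["hash"] = block_hash({"data": data, "prev": prev})
    chain.append(block)

def chain_valid(chain: list) -> bool:
    for i, block in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        if block["prev"] != expected_prev:
            return False
        if block["hash"] != block_hash({"data": block["data"], "prev": block["prev"]}):
            return False
    return True

ledger = []
append_block(ledger, {"from": "A", "to": "B", "amount": 10})
append_block(ledger, {"from": "B", "to": "C", "amount": 4})
print(chain_valid(ledger))          # True
ledger[0]["data"]["amount"] = 999   # tamper with an old record...
print(chain_valid(ledger))          # False: every replica detects it
```

Because every participant holds a full copy, a tampered ledger fails this validation everywhere at once, which is the property the paragraph above describes.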

“Blockchain lets you conduct transactions securely without requiring an intermediary, and records are secure and immutable,” says Mark Rakhmilevich, product management director at Oracle. “It also can eliminate offline reconciliations that can take hours, days or even weeks.”

And while the chain itself should be open for validation by any participant, some chains can be implemented with some form of access control to limit viewing of specific data fields. That way, participants can be permitted to view relevant data, but not everything in the chain.

A customer, for instance, might be able to verify that a contractor has a valid business license. The customer might also see the firm’s registered address and list of complaints—but not see the names of other customers. The state licensing board, on the other hand, may be allowed to access the customer list or see which jobs are currently in progress.

Business models and use cases

Blockchain is well suited for managing transactions between companies or organizations that may not know each other well and where there’s no implicit or explicit trust. “Blockchain works because it’s peer-to-peer…and it provides an easy-to-track history, which can serve as an audit trail,” Rakhmilevich explains.

What’s more, blockchain smart contracts are ideal for automating manual or semi-automated processes prone to errors or fraud. “Blockchain can help when there might be challenges in proving that the data has not been tampered with or when verifying the source of a particular update or transaction is important,” Rakhmilevich says.

Blockchain has uses in many industries, including banking, securities, government, retail, healthcare, manufacturing and transportation. Take healthcare: Blockchain can provide immutable records on clinical trials. Think about all the data being collected and flowing to the pharmaceutical companies and regulators, all available instantly and from verified participants.

Read more about blockchain in my article for the Wall Street Journal, “Blockchain: It’s All About Business—and Trust.”

Get ready for insomnia. Attackers are finding new techniques, and here are five that will give you nightmares worse than after you watched the slasher film everyone warned you about when you were a kid.

At a panel at the 2018 RSA Conference in San Francisco last week, we learned that these new attack techniques aren’t merely theoretically possible. They’re here, they’re real, and they’re hurting companies today. The speakers on the panel laid out the biggest attack vectors we’re seeing — and some of them are either different than in the past, or are becoming more common.

Here’s the list:

1. Repositories and cloud storage data leakage

People have been grabbing data from unsecured cloud storage for as long as cloud storage has existed. Now that the cloud is nearly ubiquitous, so are the instances of non-encrypted, non-password-protected repositories on Amazon S3, Microsoft Azure, or Google Cloud Storage.

Ed Skoudis, the Penetration Testing Curriculum Director at the SANS Institute, a security training organization, points to three major flaws here. First, private repositories are accidentally opened to the public. Second, these public repositories are allowed to hold sensitive information, such as encryption keys, user names, and passwords. Third, source code and behind-the-scenes application data can be stored in the wrong cloud repository.

The result? Leakage, if someone happens to find it. And “Hackers are constantly searching for repositories that don’t have the appropriate security,” Skoudis said.
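
Skoudis’s first flaw, repositories accidentally opened to the public, can be illustrated with a simple access-control scan. The grant structure below mimics the general shape of AWS bucket ACL responses, but treat the field names and the checker itself as illustrative, not a real audit tool.

```python
# Sketch: flag storage buckets whose access-control list grants read
# access to everyone. Grant dicts loosely mimic AWS ACL responses;
# the field names here are assumptions for illustration.

ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"

def is_publicly_readable(grants: list) -> bool:
    for grant in grants:
        grantee = grant.get("Grantee", {})
        if grantee.get("URI") == ALL_USERS and \
                grant.get("Permission") in ("READ", "FULL_CONTROL"):
            return True
    return False

private = [{"Grantee": {"Type": "CanonicalUser", "ID": "abc123"},
            "Permission": "FULL_CONTROL"}]
leaky = private + [{"Grantee": {"Type": "Group", "URI": ALL_USERS},
                    "Permission": "READ"}]

print(is_publicly_readable(private))  # False
print(is_publicly_readable(leaky))    # True
```

The point is that “public” is a single grant away from “private,” which is exactly why attackers scan for these misconfigurations continuously.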

2. Data de-anonymization and correlation

Lots of medical and financial data is shared between businesses. Often that data is anonymized—that is, scrubbed, with all personally identifiable information (PII) removed, so it’s impossible to figure out which person a particular data record belongs to.

Well, that’s the theory, said Skoudis. In reality, if you beg, borrow or steal enough data from many sources (including breaches), you can often correlate the data and figure out which person is described by financial or health data. It’s not easy, because a lot of data and computation resources are required, but de-anonymization can be done, and used for identity theft or worse.
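
The correlation attack can be sketched as a join on quasi-identifiers such as ZIP code, birth date, and sex. All data below is invented; the point is only that two individually “anonymous” datasets can re-identify a person when combined.

```python
# Sketch of de-anonymization by correlation: an "anonymized" medical
# dataset is joined with a public record on quasi-identifiers,
# re-attaching names to sensitive records. All data is invented.

health_records = [  # no names, but quasi-identifiers remain
    {"zip": "02138", "dob": "1945-07-31", "sex": "F", "diagnosis": "hypertension"},
    {"zip": "90210", "dob": "1980-01-15", "sex": "M", "diagnosis": "asthma"},
]

voter_list = [      # public record: names plus the same quasi-identifiers
    {"name": "J. Doe", "zip": "02138", "dob": "1945-07-31", "sex": "F"},
    {"name": "R. Roe", "zip": "60601", "dob": "1972-03-02", "sex": "M"},
]

def deanonymize(records, identities):
    key = lambda r: (r["zip"], r["dob"], r["sex"])
    by_key = {key(p): p["name"] for p in identities}
    return [
        {"name": by_key[key(r)], "diagnosis": r["diagnosis"]}
        for r in records if key(r) in by_key
    ]

print(deanonymize(health_records, voter_list))
# [{'name': 'J. Doe', 'diagnosis': 'hypertension'}]
```

At real-world scale this takes much more data and computation, as Skoudis notes, but the mechanics are no more exotic than this join.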

3. Monetizing compromised systems using cryptominers

Johannes Ullrich, who runs the SANS Internet Storm Center, said that hackers, like other criminals, care about selling your stuff. Some want to steal your data, including bank accounts, and sell it to other people, say on the Dark Web. A few years ago, hackers learned how to steal your data and sell it back to you, in the form of ransomware. And now, they’re stealing your computer’s processing power.

What’s the processing power used for? “They’re using your system for crypto-coin mining,” Ullrich said. This became obvious earlier this year, with a PeopleSoft breach where hackers installed a coin miner on thousands of servers—and never touched the PeopleSoft data. Meanwhile, since no data is touched or stolen, the hack could stay undetected for months, maybe years.

Two more

Read the full story, including the two biggest sleep-inhibiting worries, in my story for SecurityNow: “5 New Network Attack Techniques That Will Keep You Awake at Night.”

Is the cloud ready for sensitive data? You bet it is. Some 90% of businesses in a new survey say that at least half of their cloud-based data is indeed sensitive, the kind that cybercriminals would love to get their hands on.

The migration to the cloud can’t come soon enough. About two-thirds of companies in the study say at least one cybersecurity incident has disrupted their operations within the past two years, and 80% say they’re concerned about the threat that cybercriminals pose to their data.

The good news is that 62% of organizations consider the security of cloud-based enterprise applications to be better than the security of their on-premises applications. Another 21% consider it as good. The caveat: Companies must be proactive about their cloud-based data and can’t naively assume that “someone else” is taking care of that security.

Those insights come from a brand-new threat report, the first ever jointly conducted by Oracle and KPMG. The “Oracle and KPMG Cloud Threat Report 2018,” to be released this month at the RSA Conference, fills a unique niche among the vast number of existing threat and security reports, including the well-respected Verizon Data Breach Investigations Report produced annually since 2008.

The difference is the Cloud Threat Report’s emphasis on hybrid cloud, and on organizations lifting and shifting workloads and data into the cloud. “In the threat landscape, you have a wide variety of reports around infrastructure, threat analytics, malware, penetrations, data breaches, and patch management,” says one of the designers of the study, Greg Jensen, senior principal director of Oracle’s Cloud Security Business. “What’s missing is pulling this all together for the journey to the cloud.”

Indeed, 87% of the 450 businesses surveyed say they have a cloud-first orientation. “That’s the kind of trust these organizations have in cloud-based technology,” Jensen says.

Here are data points that break that idea down into more detail:

  • 20% of respondents to the survey say the cloud is much more secure than their on-premises environments; 42% say the cloud is somewhat more secure; and 21% say the cloud is equally secure. Only 21% think the cloud is less secure.
  • 14% say that more than half of their data is in the cloud already, and 46% say that between a quarter and half of their data is in the cloud.

That cloud-based data is increasingly “sensitive,” the survey respondents say. That data includes information collected from customer relationship management systems, personally identifiable information (PII), payment card data, legal documents, product designs, source code, and other types of intellectual property.

Read more, including what cyberattacks say about the “pace gap,” in my essay in Forbes, “Threat Report: Companies Trust Cloud Security.”

Endpoints everywhere! That’s the future, driven by the Internet of Things. When IoT devices are deployed in their billions, network traffic patterns won’t look at all like today’s. Sure, enterprises have a few employees working at home, or use technologies like MPLS (Multi-Protocol Label Switching) or even SD-WAN (Software-Defined Wide-Area Networks) to connect branch offices. For the most part, though, internal traffic remains within the enterprise LAN, and external traffic is driven by end users accessing websites from browsers.

The IoT will change all of that, predicts IHS Markit, one of the industry’s most astute analyst firms. In particular, the IoT will accelerate the growth of colo facilities, because it will be to everyone’s benefit to place servers closer to the network edge, avoiding the last mile.

To set the stage, IHS Markit forecasts internet-connectable devices to grow from 27.5 billion in 2017 to 45.4 billion in 2021. That’s a 65% increase in four short years. How will that affect colos? “Data center growth is correlated with general data growth. The more data transmitted via connected devices, the more data centers are needed to store, transfer, and analyze this data.” The analysts say:

In the specific case of the Internet of Things, there’s a need for geographically distributed data centers that can provide low-latency connections to certain connected devices. There are applications, like autonomous vehicles or virtual reality, which are going to require local data centers to manage much of the data processing required to operate.

Therefore, most enterprises will not have the means or the business case to build new data centers everywhere. “They will need to turn to colocations to provide quickly scalable, low capital-intensive options for geographically distributed data centers.”

Another trend IHS Markit points to: more local processing, rather than relying on servers in a colo facility, at a cloud provider, or in the enterprise’s own data center. “And thanks to local analytics on devices, and the use of machine learning, a lot of data will never need to leave the device. Which is good news for the network infrastructure of the world that is not yet capable of handling a 65% increase in data traffic, given the inevitable proliferation of devices.”

Four Key Drivers of IoT This Year

The folks at IHS Markit have pointed out four key drivers of IoT growth. They paint a compelling picture, which we can summarize here:

  • Innovation and competitiveness. Many new wireless models and solutions are being released, which means lots of possibility for the future, but confusion in the short term. Companies are also seeing that the location of data is increasingly relevant to competition, and this will drive both on-premises data centers and cloud computing.
  • Business models. As 5G rolls out, it will improve the economies of scale on machine-to-machine communications. This will create new business opportunities for the industry, as well as new security products and services.
  • Standardization and security. Speaking of which, IoT must be secure from the beginning, not only for business reasons, but also for compliance reasons. Soon there will be more IoT devices out there than traditional computing devices, which changes the security equation.
  • Wireless technology innovation. IHS Markit says there are currently more than 400 IoT platform providers, and vendors are working hard to integrate the platforms so that the data can be accessed by app developers. “A key inflection point for the IoT will be the gradual shift from the current ‘Intranets of Things’ deployment model to one where data can be exposed, discovered, entitled and shared with third-party IoT application developers,” says IHS Markit.

The IoT is not new. However, “what is new is it’s now working hand in hand with other transformative technologies like artificial intelligence and the cloud,” said Jenalea Howell, research director for IoT connectivity and smart cities at IHS Markit. “This is fueling the convergence of verticals such as industrial IoT, smart cities and buildings, and the connected home, and it’s increasing competitiveness.”

 

The purchase order looks legitimate, but does it have all the proper approvals? Many lawyers reviewed this draft contract, so is this the latest version? Can we prove that this essential document hasn’t been tampered with before I sign it? Can we prove that these two versions of a document are absolutely identical?

Blockchain might be able to help solve these kinds of everyday trust issues related to documents, especially when they are PDFs—data files created using the Portable Document Format. Blockchain technology is best known for securing financial transactions, including powering new financial instruments such as Bitcoin. But blockchain’s ability to increase trust will likely find enterprise use cases in common, non-financial information exchanges, like these document workflows.

Joris Schellekens, a software engineer and PDF expert at iText Software in Ghent, Belgium, recently presented his ideas for blockchain-supported documents at Oracle Code Los Angeles. Oracle Code is a series of free events around the world created to bring developers together to share fresh thinking and collaborate on ideas like these.

PDF’s Power and Limitations

The PDF file format was created in the early 1990s by Adobe Systems. PDF was a way to share richly formatted documents whose visual layout, text, and graphics would look the same, no matter which software created them or where they were viewed or printed. The PDF specification became an international standard in 2008.

Early on, Adobe and other companies built security features into PDF files. Those included password protection, encryption, and digital signatures. In theory, the digital signatures should be able to prove who created, or at least who encrypted, a PDF document. However, depending on the hashing algorithm used, it’s not so difficult to subvert those protections to, for example, change a date/time stamp, or even the document content, says Schellekens. His company, iText Software, markets a software development kit and APIs for creating and manipulating PDFs.

“The PDF specification contains the concept of an ID tuple,” or an immutable sequence of data, says Schellekens. “This ID tuple contains timestamps for when the file was created and when it was revised. However, the PDF spec is vague about how to implement these when creating the PDF.”
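The tamper-evidence question above comes down to cryptographic hashing: any change to a file’s bytes changes its digest. Here is a minimal sketch in Python—this is not iText’s API, and the sample “PDF” bytes are invented for illustration:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """SHA-256 hex digest: a tamper-evident fingerprint of the file's bytes."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical PDF bytes -- any real file's contents would work the same way.
original = b"%PDF-1.7 ... contract body ... %%EOF"
tampered = b"%PDF-1.7 ... contract body. ... %%EOF"  # one byte changed

same = fingerprint(original) == fingerprint(original)     # identical bytes match
changed = fingerprint(original) != fingerprint(tampered)  # any edit is detected
print(same, changed)  # True True
```

The catch Schellekens points to is that a digest only proves integrity relative to the moment it was computed—which is why where and how the digest is recorded matters as much as the hash algorithm itself.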

Even in the case of an unaltered PDF, the protections apply to the entire document, not to various parts of it. Consider a document that must be signed by multiple parties. Since not all certificate authorities store their private keys with equal vigilance, you might lack confidence about who really modified the document (e.g. signed it), at which times, and in which order. Or, you might not be confident that there were no modifications before or after someone signed it.

A related challenge: Signatures on a digital document generally must be made serially, one at a time. The PDF specification doesn’t allow for a document to be signed in parallel by several people (as is common with contract reviews and signatures) and then merged together.
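One way a blockchain addresses the ordering problem is by recording each signing event in an append-only, hash-linked log, so the sequence of signatures is itself tamper-evident. Here is a toy sketch of that idea—it assumes nothing about any particular blockchain product, and the document and signer names are invented:

```python
import hashlib
import json

def sha256(s: str) -> str:
    return hashlib.sha256(s.encode()).hexdigest()

class DocumentLedger:
    """Append-only chain: each entry commits to the document hash, the signer,
    and the previous entry, fixing the order of signing events."""

    def __init__(self):
        self.entries = []

    def record(self, doc_hash: str, signer: str) -> dict:
        prev = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        body = {"doc_hash": doc_hash, "signer": signer, "prev": prev}
        body["entry_hash"] = sha256(json.dumps(body, sort_keys=True))
        self.entries.append(body)
        return body

    def verify(self) -> bool:
        prev = "0" * 64
        for e in self.entries:
            body = {k: e[k] for k in ("doc_hash", "signer", "prev")}
            if e["prev"] != prev or e["entry_hash"] != sha256(json.dumps(body, sort_keys=True)):
                return False
            prev = e["entry_hash"]
        return True

ledger = DocumentLedger()
doc = sha256("contract-v3.pdf bytes")       # fingerprint of the signed document
ledger.record(doc, "alice")
ledger.record(doc, "bob")
ok = ledger.verify()                         # untouched chain checks out

ledger.entries[0]["signer"] = "mallory"      # rewrite history...
bad = ledger.verify()                        # ...and verification fails
print(ok, bad)  # True False
```

A real blockchain adds distributed consensus on top of this structure, so no single party can quietly rewrite the chain—but the core trust property is the same hash linking shown here.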

Blockchain has the potential to solve such document problems, and several others besides. Read more in my story for Forbes, “Can Blockchain Solve Your Document And Digital Signature Headaches?”

Albert Einstein famously said, “Everything should be made as simple as possible, but not simpler.” Agile development guru Venkat Subramaniam has a knack for taking that insight and illustrating just how desperately the software development process needs the lessons of Professor Einstein.

As the keynote speaker at the Oracle Code event in Los Angeles—the first in a 14-city tour of events for developers—Subramaniam describes the art of simplicity, and why and how complexity becomes the enemy. While few would argue that complex is better, that’s what we often end up creating, because complex applications or source code may make us feel smart. But if someone says our software design or core algorithm looks simple, well, we feel bad—perhaps the problem was easy and obvious.

Subramaniam, who’s president of Agile Developer and an instructional professor at the University of Houston, urges us instead to take pride in coming up with a simple solution. “It takes a lot of courage to say, ‘we don’t need to make this complex,’” he argues. (See his full keynote, or register for an upcoming Oracle Code event.)

Simplicity Is Not Simple

Simplicity is hard to define, so let’s start by considering what simple is not, says Subramaniam. In most cases, our first attempts at solving a problem won’t be simple at all. The most intuitive solution might be overly verbose, or inefficient, or perhaps difficult to understand, even by its programmers after the fact.

Simple is not clever. Clever software, or clever solutions, may feel worthwhile, and might cause people to pat developers on the back. But ultimately, it’s hard to understand, and can be hard to change later. “Clever code is self-obfuscating,” says Subramaniam, meaning that it can be incomprehensible. “Even programmers can’t understand their clever code a week later.”

Simple is not necessarily familiar. Subramaniam insists that we are drawn to the old, comfortable ways of writing software, even when those methods are terribly inefficient. He mentions a developer who wrote code with 70 “if/then” branches in a series—because it was familiar. But it certainly wasn’t simple, and would be nearly impossible to debug or modify later. Something that we’re not familiar with may actually be simpler than what we’re comfortable with. To fight complexity, Subramaniam recommends learning new approaches and keeping up with the latest thinking and the latest paradigms.
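That 70-branch chain is a good illustration of familiar-but-not-simple. A lookup table does the same job as data rather than control flow—here is a hypothetical sketch (the regions and costs are invented, not from Subramaniam’s example):

```python
# Familiar but sprawling: one branch per case, and every new case adds a branch.
def shipping_cost_branchy(region: str) -> float:
    if region == "US":
        return 5.0
    elif region == "CA":
        return 8.0
    elif region == "EU":
        return 12.0
    else:
        raise ValueError(f"unknown region: {region}")

# Simpler: the cases live in data. Adding a region is one line, not one branch.
SHIPPING_COST = {"US": 5.0, "CA": 8.0, "EU": 12.0}

def shipping_cost(region: str) -> float:
    try:
        return SHIPPING_COST[region]
    except KeyError:
        raise ValueError(f"unknown region: {region}") from None

print(shipping_cost("EU"))  # 12.0
```

The table version may feel less familiar than a wall of branches, but it is shorter, easier to test, and trivially extensible—exactly the trade Subramaniam is describing.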

Simple is not over-engineered. Sometimes you can overthink the problem. Perhaps that means trying to develop a generalized algorithm that can be reused to solve many problems, when the situation calls for a fast, basic solution to a single problem. Subramaniam cites Occam’s Razor: When choosing between two solutions, the simplest may be the best.

Simple is not terse. Program source code should be concise, which means that it’s small but also clearly communicates the programmer’s intent. By contrast, something that’s terse may still execute correctly when compiled into software, but the human understanding may be lost. “Don’t confuse terse with concise,” warns Subramaniam. “Both are really small, but terse code is waiting to hurt you when you least expect it.”
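The terse/concise distinction is easy to see in code. Both functions below are small and compute the same thing, but only one communicates intent—the example is mine, not Subramaniam’s:

```python
# Terse: small, but the intent is buried and the even-number filter runs twice.
def f(xs):
    return sum(x for x in xs if not x % 2) / max(len([x for x in xs if not x % 2]), 1)

# Concise: still small, and the intent is obvious from the names alone.
def average_of_evens(numbers):
    evens = [n for n in numbers if n % 2 == 0]
    return sum(evens) / len(evens) if evens else 0.0

print(average_of_evens([1, 2, 3, 4]))  # 3.0
```

The second version is barely longer, yet a reader (including its author, a week later) can verify it at a glance—that is the difference between concise and merely terse.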

Read more in my essay, “Practical Advice To Whip Complexity And Develop Simpler Software.”

As the saying goes, you can’t manage what you don’t measure. In a data-driven organization, the best tools for measuring performance are business intelligence (BI) and analytics engines, and those tools require data. Data warehouses often provide that data, by rolling up and summarizing key information from a variety of sources—which explains why they continue to play such a crucial role in business.

Data warehouses, which are themselves relational databases, can be complex to set up and manage on a daily basis. They typically require significant human involvement from database administrators (DBAs). In a large enterprise, a team of DBAs ensures that the data warehouse is extracting data from those disparate data sources, as well as accommodating new and changed data sources—and that the extracted data is summarized properly and stored in a structured form that other applications, including those BI and analytics tools, can handle.
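At its core, that roll-up work is aggregation to a coarser grain. Here is a deliberately simplified Python sketch of what a warehouse’s summarization step does conceptually—the source names and figures are invented:

```python
from collections import defaultdict

# Rows "extracted" from two hypothetical source systems: (month, region, amount).
crm_orders = [("2020-01", "US", 120.0), ("2020-01", "EU", 80.0)]
web_orders = [("2020-01", "US", 40.0), ("2020-02", "EU", 60.0)]

# Roll up to the grain that BI dashboards query: revenue per month per region.
summary = defaultdict(float)
for month, region, amount in crm_orders + web_orders:
    summary[(month, region)] += amount

print(sorted(summary.items()))
# [(('2020-01', 'EU'), 80.0), (('2020-01', 'US'), 160.0), (('2020-02', 'EU'), 60.0)]
```

In a real warehouse this happens in SQL over millions of rows, on a schedule, with change tracking—which is exactly the operational burden the autonomous capabilities discussed below aim to lift from DBAs.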

On top of that, the DBAs are managing the data warehouse’s infrastructure. That includes server processor utilization, storage efficiency, data security, backups, and more.

However, the labor-intensive nature of data warehouses is about to change, with the advent of Oracle Autonomous Data Warehouse Cloud, announced in October 2017. The self-driving, self-repairing, self-tuning functionality of Oracle’s Data Warehouse Cloud is good for the organization—and good for the DBAs.

Data-driven organizations need timely, up-to-date business intelligence. This can feed instant decision-making, short-term predictions and business adjustments, and long-term strategy. If the data warehouse goes down, slows down, or lacks some information feeds, the impact can be significant. No data warehouse may mean no daily operational dashboards and reports, or inaccurate dashboards or reports.

For C-level executives, Autonomous Data Warehouse can improve the value of the data warehouse. This boosts the responsiveness of business intelligence and other important applications, by improving availability and performance.

Stop worrying about uptime. Forget about disk-drive failures. Move beyond performance tuning. DBAs, you have a business to optimize.

Read more in my article, “Autonomous Capabilities Will Make Data Warehouses — And DBAs — More Valuable.”

The “throw it over the wall” problem is familiar to anyone who’s seen designers and builders create something that can’t actually be deployed or maintained out in the real world. In the tech world, avoiding this problem is a big part of what gave rise to DevOps.

DevOps combines “development” and “IT operations.” It refers to a set of practices that help software developers and IT operations staff work better, together. DevOps emerged about a decade ago with the goal of tearing down the silos between the two groups, so that companies can get new apps and features out the door, faster and with fewer mistakes and less downtime in production.

DevOps is now widely accepted as a good idea, but that doesn’t mean it’s easy. It requires cultural shifts by two departments that not only have different working styles and toolsets, but where the teams may not even know or respect each other.

When DevOps is properly embraced and implemented, it can help get better software written more quickly. DevOps can make applications easier and less expensive to manage. It can simplify the process of updating software to respond to new requirements. Overall, a DevOps mindset can make your organization more competitive because you can respond quickly to problems, opportunities and industry pressures.

Is DevOps the right strategic fit for your organization? Here are six CEO-level insights about DevOps to help you consider that question:

  1. DevOps can and should drive business agility. DevOps often means supporting a more rapid rate of change in terms of delivering new software or updating existing applications. And it doesn’t just mean programmers knock out code faster. It means getting those new apps or features fully deployed and into customers’ hands. “A DevOps mindset represents development’s best ability to respond to business pressures by quickly bringing new features to market and we drive that rapid change by leveraging technology that lets us rewire our apps on an ongoing basis,” says Dan Koloski, vice president of product management at Oracle.

For the full story, see my essay for the Wall Street Journal, “Tech Strategy: 6 Things CEOs Should Know About DevOps.”