The purchase order looks legitimate, but does it have all the proper approvals? Many lawyers reviewed this draft contract, so is this the latest version? Can we prove that this essential document hasn’t been tampered with before I sign it? Can we prove that these two versions of a document are absolutely identical?

Blockchain might be able to help solve these kinds of everyday trust issues related to documents, especially when they are PDFs—data files created using the Portable Document Format. Blockchain technology is best known for securing financial transactions, including powering new financial instruments such as Bitcoin. But blockchain’s ability to increase trust will likely find enterprise use cases in common, non-financial information exchanges like these document workflows.

Joris Schellekens, a software engineer and PDF expert at iText Software in Ghent, Belgium, recently presented his ideas for blockchain-supported documents at Oracle Code Los Angeles. Oracle Code is a series of free events around the world created to bring developers together to share fresh thinking and collaborate on ideas like these.

PDF’s Power and Limitations

The PDF file format was created in the early 1990s by Adobe Systems. PDF was a way to share richly formatted documents whose visual layout, text, and graphics would look the same, no matter which software created them or where they were viewed or printed. The PDF specification became an international standard in 2008.

Early on, Adobe and other companies implemented security features in PDF files, including password protection, encryption, and digital signatures. In theory, digital signatures should be able to prove who created, or at least who encrypted, a PDF document. However, depending on the hashing algorithm used, it’s not so difficult to subvert those protections to, for example, change a date/time stamp or even the document content, says Schellekens. His company, iText Software, markets a software development kit and APIs for creating and manipulating PDFs.
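The tamper-detection question above comes down to cryptographic hashing: two files with the same strong digest are, for all practical purposes, byte-for-byte identical. Here’s a minimal Python sketch (not PDF-specific, and not part of iText’s API) that answers the “are these two versions identical?” question:

```python
import hashlib

def file_digest(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def same_document(path_a: str, path_b: str) -> bool:
    # Identical digests mean identical bytes; a strong hash such as
    # SHA-256 makes forging a match impractical. Older hashes such as
    # MD5 and SHA-1 no longer offer that guarantee.
    return file_digest(path_a) == file_digest(path_b)
```

Note that this only proves byte-level identity; it says nothing about who signed the file or when, which is where the weaknesses Schellekens describes come in.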

“The PDF specification contains the concept of an ID tuple,” or an immutable sequence of data, says Schellekens. “This ID tuple contains timestamps for when the file was created and when it was revised. However, the PDF spec is vague about how to implement these when creating the PDF.”

Even in the case of an unaltered PDF, the protections apply to the entire document, not to various parts of it. Consider a document that must be signed by multiple parties. Since not all certificate authorities store their private keys with equal vigilance, you might lack confidence about who really modified the document (e.g. signed it), at which times, and in which order. Or, you might not be confident that there were no modifications before or after someone signed it.

A related challenge: signatures on a digital document generally must be made serially, one at a time. The PDF specification doesn’t allow for a document to be signed in parallel by several people (as is common with contract reviews and signatures) and then merged together.

Blockchain has the potential to solve such document problems, and several others besides. Read more in my story for Forbes, “Can Blockchain Solve Your Document And Digital Signature Headaches?”

Asking “which is the best programming language” is like asking about the most important cooking tool in your kitchen. Mixer? Spatula? Microwave? Cooktop? Measuring cup? Egg timer? Lemon zester? All are critical, depending on what you’re making, and how you like to cook.

The same is true with programming languages. Some are best at coding applications that run natively on mobile devices — think Objective-C or Java. Others are good at encoding logic within a PDF file, or on a web page — think JavaScript. And still others are best at coding fast applications for virtual machines or running directly on the operating system — for many people, that’s C or C++. Want a general-purpose language? Think Python or PHP. Specialized? R and MATLAB are good for statistics and data analytics. And so on.

Last summer, IEEE Spectrum offered its take, surveying its audience and writing up the “2017 Top Programming Languages.” The top 10 languages for the typical reader:

  1. Python
  2. C
  3. Java
  4. C++
  5. C#
  6. R
  7. JavaScript
  8. PHP
  9. Go
  10. Swift

The story’s author, Stephen Cass, noted not much change in the most popular languages. “Python has continued its upward trajectory from last year and jumped two places to the No. 1 slot, though the top four—Python, C, Java, and C++—all remain very close in popularity.”

What Do The PYPL Say?

The IEEE Spectrum annual survey isn’t the only game in town. The PYPL (PopularitY of Programming Language) index uses raw data from Google Trends to see how often people search for language tutorials. The people behind PYPL say, “If you believe in collective wisdom, the PYPL Popularity of Programming Language index can help you decide which language to study, or which one to use in a new software project.”

Here’s their Top 10:

  1. Java
  2. Python
  3. JavaScript
  4. PHP
  5. C#
  6. C
  7. R
  8. Objective-C
  9. Swift
  10. MATLAB

Asking the RedMonk

Stephen O’Grady describes RedMonk’s Programming Language Rankings, as of January 2018, as being based on two key external sources:

We extract language rankings from GitHub and Stack Overflow, and combine them for a ranking that attempts to reflect both code (GitHub) and discussion (Stack Overflow) traction. The idea is not to offer a statistically valid representation of current usage, but rather to correlate language discussion and usage in an effort to extract insights into potential future adoption trends.
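That rank-combination idea can be sketched roughly as follows; the rankings below are made up for illustration and are not RedMonk’s actual data:

```python
# Hypothetical per-source rankings (1 = most popular). Real RedMonk
# rankings are derived from GitHub activity and Stack Overflow tags.
github_rank = {"JavaScript": 1, "Python": 2, "Java": 3}
stackoverflow_rank = {"JavaScript": 1, "Java": 2, "Python": 3}

# Combine by summing each language's rank across the two sources;
# the smallest combined score ranks first.
combined = sorted(
    github_rank,
    key=lambda lang: github_rank[lang] + stackoverflow_rank[lang],
)
```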

The top languages found by RedMonk look similar to PYPL and IEEE Spectrum:

  1. JavaScript
  2. Java
  3. Python
  4. PHP
  5. C#
  6. C++
  7. CSS
  8. Ruby
  9. C
  10. Swift & Objective-C (tied)

Use the Best Tool for the Job

It would be tempting to use data like this to say, “From now on, everything we’re doing will be in Java,” or “We’re going to do all web coding in JavaScript and use C++ for applications.” Don’t do that. That would be like saying, “We’re going to make everything in the microwave.” Sometimes you want the microwave, sure, but sometimes you want the crockpot, or the regular oven, or sous vide, or the propane grill in your backyard.

The goal is productivity. Use agile processes like Scrum to determine what your development teams are going to build, where those applications will run, and which features must be included. Then, let the developers choose languages that fit best – and that includes supporting experimentation. Let them use R. Let them do some coding in Python, if it improves productivity, and gets a better job done faster.


As the saying goes, you can’t manage what you don’t measure. In a data-driven organization, the best tools for measuring performance are business intelligence (BI) and analytics engines – and those engines require data. That explains why data warehouses continue to play such a crucial role in business: they often provide the source of that data, by rolling up and summarizing key information from a variety of sources.

Data warehouses, which are themselves relational databases, can be complex to set up and manage on a daily basis. They typically require significant human involvement from database administrators (DBAs). In a large enterprise, a team of DBAs ensures that the data warehouse is extracting data from disparate data sources and accommodating new and changed data sources – and that the extracted data is summarized properly and stored in a structured manner that can be handled by other applications, including those BI and analytics tools.

On top of that, the DBAs are managing the data warehouse’s infrastructure. That includes everything from server processor utilization, the efficiency of storage, security of the data, backups, and more.

However, the labor-intensive nature of data warehouses is about to change, with the advent of Oracle Autonomous Data Warehouse Cloud, announced in October 2017. The self-driving, self-repairing, self-tuning functionality of Oracle’s Data Warehouse Cloud is good for the organization—and good for the DBAs.

Data-driven organizations need timely, up-to-date business intelligence. This can feed instant decision-making, short-term predictions and business adjustments, and long-term strategy. If the data warehouse goes down, slows down, or lacks some information feeds, the impact can be significant. No data warehouse may mean no daily operational dashboards and reports, or inaccurate dashboards or reports.

For C-level executives, Autonomous Data Warehouse can improve the value of the data warehouse. This boosts the responsiveness of business intelligence and other important applications, by improving availability and performance.

Stop worrying about uptime. Forget about disk-drive failures. Move beyond performance tuning. DBAs, you have a business to optimize.

Read more in my article, “Autonomous Capabilities Will Make Data Warehouses — And DBAs — More Valuable.”

“We estimate that malicious cyber activity cost the U.S. economy between $57 billion and $109 billion in 2016.” That’s from a February 2018 report, “The Cost of Malicious Cyber Activity to the U.S. Economy,” by the Council of Economic Advisers – part of the Executive Office of the President. It’s a big deal.

The White House is concerned about a number of sources of cyber threats. Those include attacks from nation-states, corporate competitors, hacktivists, organized criminal groups, opportunists, and company insiders.

It’s not always easy to tell exactly who is behind some event, or even how to categorize those events. Still, the report says that incidents break down as roughly 25% insiders, 75% outsiders. “Overall, 18 percent of threat actors were state-affiliated groups, and 51 percent involved organized criminal groups,” it says.

It’s More Than Stolen Valuables

The report points out that the economic cost includes many factors, including the stolen property, the costs of repairs – and opportunity lost costs. For example, the report says, “Consider potential costs of a DDoS attack. A DDoS attack interferes with a firm’s online operations, causing a loss of sales during the period of disruption. Some of the firm’s customers may permanently switch to a competing firm due to their inability to access online services, imposing additional costs in the form of the firm’s lost future revenue. Furthermore, a high-visibility attack may tarnish the firm’s brand name, reducing its future revenues and business opportunities.”

However, it’s not always that cut-and-dried. Consider intellectual property (IP) theft:

The costs incurred by a firm in the wake of IP theft are somewhat different. As the result of IP theft, the firm no longer has a monopoly on its proprietary findings because the stolen IP may now potentially be held and utilized by a competing firm. If the firm discovers that its IP has been stolen (and there is no guarantee of such discovery), attempting to identify the perpetrator or obtain relief via legal process could result in sizable costs without being successful, especially if the IP was stolen by a foreign actor. Hence, expected future revenues of the firm could decline. The cost of capital is likely to increase because investors will conclude that the firm’s IP is both sought-after and not sufficiently protected.

Indeed, this last example is particularly worrisome. Why? “IP theft is the costliest type of malicious cyber activity. Moreover, security breaches that enable IP theft via cyber may go undetected for years, allowing the periodic pilfering of corporate IP.”

Affecting the Economy

Do investors worry about cyber incidents? You bet. And it hits the share price of companies. According to the White House report, “We find that the stock price reaction to the news of an adverse cyber event is significantly negative. Firms on average lost about 0.8 percent of their market value in the seven days following news of an adverse cyber event.”

How much is that? Given that the study looked at large companies, “We estimate that, on average, the firms in our sample lost $498 million per adverse cyber event. The distribution of losses is highly right-skewed. When we trim the sample of estimated losses at 1 percent on each side of the distribution, the average loss declines to $338 million per event.” That’s significant.
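Trimming extreme observations from each tail before averaging, as the report does, is easy to reproduce; the loss figures below are synthetic, not the study’s data:

```python
# Sketch of a trimmed mean: drop a fraction of observations from each
# tail before averaging, so one enormous breach doesn't dominate.
def trimmed_mean(values, trim_fraction=0.01):
    vals = sorted(values)
    k = int(len(vals) * trim_fraction)  # observations to drop per tail
    trimmed = vals[k:len(vals) - k] if k else vals
    return sum(trimmed) / len(trimmed)

losses = [10, 12, 11, 13, 9, 11, 12, 10, 950]  # synthetic losses ($M)
plain = sum(losses) / len(losses)    # inflated by the one huge outlier
robust = trimmed_mean(losses, 0.12)  # drops one value from each tail
```

As in the report, the trimmed average sits well below the plain average, because a few catastrophic breaches skew the distribution to the right.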

Small and mid-sized companies can be harder hit by incidents, because they are less resilient. “Smaller firms, and especially those with few product lines, can easily go out of business if they are attacked or breached.”

Overall, cyber incidents cost the U.S. economy between $57 billion and $109 billion in 2016 – between 0.31% and 0.58% of that year’s gross domestic product (GDP), says the report. That’s a lot, but it could be worse. Let’s hope this amount doesn’t increase – driven by, say, a full-fledged cyberwar or significant terrorist incident.

The “throw it over the wall” problem is familiar to anyone who’s seen designers and builders create something that can’t actually be deployed or maintained out in the real world. In the tech world, avoiding this problem is a big part of what gave rise to DevOps.

DevOps combines “development” and “IT operations.” It refers to a set of practices that help software developers and IT operations staff work better, together. DevOps emerged about a decade ago with the goal of tearing down the silos between the two groups, so that companies can get new apps and features out the door faster, with fewer mistakes and less downtime in production.

DevOps is now widely accepted as a good idea, but that doesn’t mean it’s easy. It requires cultural shifts by two departments that not only have different working styles and toolsets, but where the teams may not even know or respect each other.

When DevOps is properly embraced and implemented, it can help get better software written more quickly. DevOps can make applications easier and less expensive to manage. It can simplify the process of updating software to respond to new requirements. Overall, a DevOps mindset can make your organization more competitive because you can respond quickly to problems, opportunities and industry pressures.

Is DevOps the right strategic fit for your organization? Here are six CEO-level insights about DevOps to help you consider that question:

  1. DevOps can and should drive business agility. DevOps often means supporting a more rapid rate of change in terms of delivering new software or updating existing applications. And it doesn’t just mean programmers knock out code faster. It means getting those new apps or features fully deployed and into customers’ hands. “A DevOps mindset represents development’s best ability to respond to business pressures by quickly bringing new features to market and we drive that rapid change by leveraging technology that lets us rewire our apps on an ongoing basis,” says Dan Koloski, vice president of product management at Oracle.

For the full story, see my essay for the Wall Street Journal, “Tech Strategy: 6 Things CEOs Should Know About DevOps.”

Simplified Java coding. Less garbage. Faster programs. Those are among the key features in the newly released Java 10, which arrived in developers’ hands only six months after the debut of Java 9 in September.

This pace is a significant change from Java’s previous cycle of one large release every two to three years. With its faster release cadence, Java is poised to provide developers with innovations twice every year, making the language and platform more attractive and competitive. Instead of waiting for a huge omnibus release, the Java community can choose to include new features as soon as those features are ready, in the next six-month Java release train. This gives developers access to the latest APIs, functions, language additions, and JVM updates much faster than ever before.

Java 10 is the first release on the new six-month schedule. It builds incrementally on the significant new functionality that appeared in Java 9, which had a multiyear gestation period.

Java 10 delivers 12 Java Enhancement Proposals (JEPs). Here’s the complete list:

  • Local-Variable Type Inference
  • Consolidate the JDK Forest into a Single Repository
  • Garbage-Collector Interface
  • Parallel Full GC for G1
  • Application Class-Data Sharing
  • Thread-Local Handshakes
  • Remove the Native-Header Generation Tool (javah)
  • Additional Unicode Language-Tag Extensions
  • Heap Allocation on Alternative Memory Devices
  • Experimental Java-Based JIT Compiler
  • Root Certificates
  • Time-Based Release Versioning

For a deeper look at three of the most significant JEPs – Local-Variable Type Inference, Parallel Full GC for G1, and the Experimental Java-Based JIT Compiler – see my essay for Forbes, “What Java 10 And Java’s New 6-Month Release Cadence Mean For Developers.”

Blockchain is a distributed digital ledger technology in which blocks of transaction records can be added and viewed—but can’t be deleted or changed without detection. Here’s where the name comes from: a blockchain is an ever-growing sequential chain of transaction records, clumped together into blocks. There’s no central repository of the chain, which is replicated in each participant’s blockchain node, and that’s what makes the technology so powerful. Yes, blockchain was originally developed to underpin Bitcoin and is essential to the trust required for users to trade digital currencies, but that is only the beginning of its potential.

Blockchain neatly solves the problem of ensuring the validity of all kinds of digital records. What’s more, blockchain can be used for public transactions as well as for private business, inside a company or within an industry group. “Blockchain lets you conduct transactions securely without requiring an intermediary, and records are secure and immutable,” says Mark Rakhmilevich, product management director at Oracle. “It also can eliminate offline reconciliations that can take hours, days, or even weeks.”

That’s the power of blockchain: an immutable digital ledger for recording transactions. It can be used to power anonymous digital currencies—or farm-to-table vegetable tracking, business contracts, contractor licensing, real estate transfers, digital identity management, and financial transactions between companies or even within a single company.

“Blockchain doesn’t have to just be used for accounting ledgers,” says Rakhmilevich. “It can store any data, and you can use programmable smart contracts to evaluate and operate on this data. It provides nonrepudiation through digitally signed transactions, and the stored results are tamper proof. Because the ledger is replicated, there is no single source of failure, and no insider threat within a single organization can impact its integrity.”

It’s All About Distributed Ledgers

Several simple concepts underpin any blockchain system. The first is the block, which is a batch of one or more transactions, grouped together and hashed. The hashing process produces an error-checking and tamper-resistant code that will let anyone viewing the block see if it has been altered. The block also contains the hash of the previous block, which ties them together in a chain. The backward hashing makes it extremely difficult for anyone to modify a single block without detection.
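That block-plus-previous-hash structure can be sketched in a few lines of Python; the field names and JSON encoding here are illustrative, not any real blockchain’s format:

```python
import hashlib
import json

def make_block(transactions, prev_hash):
    """Hash a block's contents together with its predecessor's hash."""
    body = {"transactions": transactions, "prev_hash": prev_hash}
    encoded = json.dumps(body, sort_keys=True).encode()
    return {**body, "hash": hashlib.sha256(encoded).hexdigest()}

def chain_is_valid(chain):
    """Recompute every hash and check each block points at its predecessor."""
    for i, block in enumerate(chain):
        body = {"transactions": block["transactions"],
                "prev_hash": block["prev_hash"]}
        encoded = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(encoded).hexdigest() != block["hash"]:
            return False  # block contents were altered
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False  # the chain linkage was broken
    return True

genesis = make_block(["alice pays bob 5"], prev_hash="0" * 64)
block2 = make_block(["bob pays carol 2"], prev_hash=genesis["hash"])
chain = [genesis, block2]
```

Changing even one transaction in the first block changes its hash, which no longer matches what the next block recorded: exactly the tamper-evidence described above.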

A chain contains collections of blocks, which are stored on decentralized, distributed servers – the more servers, the better – with every server containing the same set of blocks and the latest values of information, such as account balances. Multiple transactions are handled within a single block using a structure called a Merkle tree, or hash tree, which provides fault and fraud tolerance: if a server goes down, or if a block or chain is corrupted, the missing data can be reconstructed by polling other servers’ chains.
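A minimal Merkle-root computation might look like this; duplicating an odd leaf is one common convention (used by Bitcoin), and real implementations differ in details:

```python
import hashlib

def merkle_root(transactions):
    """Fold transaction hashes pairwise up to a single root hash."""
    level = [hashlib.sha256(tx.encode()).digest() for tx in transactions]
    while len(level) > 1:
        if len(level) % 2:  # odd count: duplicate the last hash
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0].hex()
```

Any change to any transaction changes the root, so a verifier need only compare one hash to detect tampering anywhere in the block.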

And while the chain itself should be open for validation by any participant, some chains can be implemented with some form of access control to limit viewing of specific data fields. That way, participants can view relevant data, but not everything in the chain. A customer might be able to verify that a contractor has a valid business license and see the firm’s registered address and list of complaints—but not see the names of other customers. The state licensing board, on the other hand, may be allowed to access the customer list or see which jobs are currently in progress.

When originally conceived, blockchain had a narrow set of protocols. They were designed to govern the creation of blocks, the grouping of hashes into the Merkle tree, the viewing of data encapsulated into the chain, and the validation that data has not been corrupted or tampered with. Over time, creators of blockchain applications (such as the many competing digital currencies) innovated and created their own protocols—which, due to their independent evolutionary processes, weren’t necessarily interoperable. By contrast, the success of general-purpose blockchain services, which might encompass computing services from many technology, government, and business players, created the need for industry standards—such as Hyperledger, a Linux Foundation project.

Read more in my feature article in Oracle Magazine, March/April 2018, “It’s All About Trust.”

DevOps is a technology discipline well-suited to cloud-native application development. When it only takes a few mouse clicks to create or manage cloud resources, why wouldn’t developers and IT operations teams work in sync to get new apps out the door and in front of users faster? The DevOps culture and tactics have done much to streamline everything from coding to software testing to application deployment.

Yet far from every organization has embraced DevOps, and not every organization that has tried DevOps has found the experience transformative. Perhaps that’s because the idea is relatively young (the term was coined around 2009), suggests Javed Mohammed, systems community manager at Oracle, or perhaps because different organizations are at such different spots in the DevOps technology adoption cycle. That idea—about where we are in the adoption of DevOps—became a central theme of a recent podcast discussion among tech experts. Following are some highlights.

Confusion about DevOps can arise because DevOps affects dev and IT teams in many ways. “It can apply to the culture piece, to the technology piece, to the process piece—and even how different teams interact, and how all of the different processes tie together,” says Nicole Forsgren, founder and CEO of DevOps Research and Assessment LLC and co-author of Accelerate: The Science of Lean Software and DevOps.

The adoption and effectiveness of DevOps within a team depends on where each team is, and where organizations are. One team might be narrowly focused on the tech used to automate software deployment to the public, while another is looking at the culture and communication needed to release new features on a weekly or even daily basis. “Everyone is at a very, very different place,” Forsgren says.

Indeed, says Forsgren, some future-thinking organizations are starting to talk about what ‘DevOps Next’ is, extending the concept of developer-led operations beyond common best practices. At the same time, in other companies, there’s no DevOps. “DevOps isn’t even on their radar,” she sighs. Many experts, including Forsgren, see that DevOps is here, is working, and is delivering real value to software teams today—and is helping businesses create and deploy better software faster and less expensively. That’s especially true when it comes to cloud-native development, or when transitioning existing workloads from the data center into the cloud.

Read more in my essay, “DevOps: Sometimes Incredibly Transformative, Sometimes Not So Much.”

New phones are arriving nearly every day. Samsung unveiled its latest Galaxy S9 flagship. Google is selling lots of its Pixel 2 handset. Apple continues to push its iPhone X. The Vivo Apex concept phone, out of China, has a pop-up selfie camera. And Nokia has reintroduced its famous 8110 model – the slide-down keyboard model featured in the 1999 movie, “The Matrix.”

Yet there is a slowdown happening. Hard to say whether it’s merely seasonal, or an indication that despite the latest and newest features, it’s getting harder to distinguish a new phone from its predecessors.

According to the 451 report, “Consumer Smartphones: 90 Day Outlook: Smartphone Buying Slows but Apple and Samsung Demand Strong,” released February 2018: “Demand for smartphones is showing a seasonal downtick, with 12.7% of respondents from 451 Research’s Leading Indicator panel saying they plan on buying a smartphone in the next 90 days.” However, “Despite a larger than expected drop from the September survey, next 90 day smartphone demand is at its highest December level in three years.”

451 reports that over the next 90 days,

Apple (58%) leads in planned smartphone buying but is down 11 points. Samsung (15%) is up 2 points, as consumer excitement builds around next-gen Galaxy S9 and S9+ devices, scheduled to be released in March. Google (3%) is showing a slight improvement, buoyed by the October release of its Pixel 2 and 2 XL handsets. Apple’s latest releases are the most in-demand among planned iPhone buyers: iPhone X (37%; down 6 points), iPhone 8 (21%; up 5 points) and iPhone 8 Plus (18%; up 4 points).

Interestingly, Apple’s famous brand loyalty may be slipping. Says 451, “Google leads in customer satisfaction with 61% of owners saying they’re Very Satisfied. Apple is close behind, with 59% of iPhone owners saying they’re Very Satisfied. That said, it’s important to keep in mind that iPhone owners comprise 57% of smartphone owners in this survey vs. 2% who own a Google Pixel smartphone.”

Everyone Loves the Galaxy S9

Cnet was positively gushing over the new Samsung phone, writing,

A bold new camera, cutting-edge processor and a fix to a galling ergonomic pitfall — all in a body that looks nearly identical to last year’s model. That, in a nutshell, is the Samsung Galaxy S9 (with a 5.8-inch screen) and its larger step-up model, the Galaxy S9 Plus, which sports an even bigger 6.2-inch screen.

Cnet calls out two features. First, a camera upgrade that includes variable aperture designed to capture better low-light images – which is where most phones really fall down.

The other? “The second improvement is more of a fix. Samsung moved the fingerprint reader from the side of the rear camera to the center of the phone’s back, fixing what was without a doubt the Galaxy S8’s most maddening design flaw. Last year’s model made you stretch your finger awkwardly to hit the fingerprint target. No more.”

The Verge agrees with that assessment:

… the Galaxy S9 is actually a pretty simple device to explain. In essence, it’s the Galaxy S8, with a couple of tweaks (like moving the fingerprint sensor to a more sensible location), and all the specs jacked up to the absolute max for the most powerful device on the market — at least, on paper.

Pop Goes the Camera

The Vivo concept phone, the Apex, has a little pop-up front-facing camera designed for selfies. Says TechCrunch, this is part of a trend:

With shrinking bezels, gadget makers have to look for new solutions like the iPhone X notch. Others still, like Vivo and Huawei, are looking at more elegant solutions than carving out a bit of the screen.

For Huawei, this means using a false key within the keyboard to house a hidden camera. Press the key and it pops up like a trapdoor. We tried it out and though the housing is clever, the placement makes for awkward photos — just make sure you trim those nose hairs before starting your conference call.

Vivo has a similar take to Huawei, though the camera is embedded on a sliding tray that pops up out of the top of the phone.

So, there’s still room for innovation. A little room. Beyond cameras, and some minor ergonomic improvements, it’s getting harder and harder to differentiate one phone from another – and possibly, to convince buyers to shell out for upgrades. At least, that is, until 5G handsets hit the market.

Spectre and Meltdown are two separate computer security problems. They are often lumped together because they were revealed around the same time – and both exploit vulnerabilities in many modern microprocessors. The website MeltdownAttack, from the Graz University of Technology, explains both Spectre and Meltdown very succinctly – and also links to official security advisories from the industry:

Meltdown breaks the most fundamental isolation between user applications and the operating system. This attack allows a program to access the memory, and thus also the secrets, of other programs and the operating system. If your computer has a vulnerable processor and runs an unpatched operating system, it is not safe to work with sensitive information without the chance of leaking the information. This applies both to personal computers as well as cloud infrastructure. Luckily, there are software patches against Meltdown.

Spectre breaks the isolation between different applications. It allows an attacker to trick error-free programs, which follow best practices, into leaking their secrets. In fact, the safety checks of said best practices actually increase the attack surface and may make applications more susceptible to Spectre. Spectre is harder to exploit than Meltdown, but it is also harder to mitigate. However, it is possible to prevent specific known exploits based on Spectre through software patches.

For now, nearly everyone is dependent on microprocessor makers and operating system vendors to develop, test, and distribute patches to mitigate both flaws. In the future, new microprocessors should be immune to these exploits – but because of the long lead times involved in developing new processors, we are unlikely to see computers using such next-generation chips for several years.

So, expect Spectre and Meltdown to be around for many years to come. Some devices will remain unpatched — because some devices always remain unpatched. Even after new computers become available, it will take years to replace all the old machines.

Wide-Ranging Effects

Just about everything is affected by these flaws. Says the Graz University website:

Which systems are affected by Meltdown? Desktop, Laptop, and Cloud computers may be affected by Meltdown. More technically, every Intel processor which implements out-of-order execution is potentially affected, which is effectively every processor since 1995 (except Intel Itanium and Intel Atom before 2013). We successfully tested Meltdown on Intel processor generations released as early as 2011. Currently, we have only verified Meltdown on Intel processors. At the moment, it is unclear whether AMD processors are also affected by Meltdown. According to ARM, some of their processors are also affected.

 Which systems are affected by Spectre? Almost every system is affected by Spectre: Desktops, Laptops, Cloud Servers, as well as Smartphones. More specifically, all modern processors capable of keeping many instructions in flight are potentially vulnerable. In particular, we have verified Spectre on Intel, AMD, and ARM processors.

Ignore Spectre and Meltdown at your peril.

Patch. Sue. Repeat.

Many techies are involved in trying to handle the Spectre and Meltdown issues. So are attorneys. Intel alone has disclosed dozens of lawsuits in its annual report filing with the U.S. Securities and Exchange Commission:

As of February 15, 2018, 30 customer class action lawsuits and two securities class action lawsuits have been filed. The customer class action plaintiffs, who purport to represent various classes of end users of our products, generally claim to have been harmed by Intel’s actions and/or omissions in connection with the security vulnerabilities and assert a variety of common law and statutory claims seeking monetary damages and equitable relief.

Given that there are many microprocessor makers involved (it’s not only Intel, remember), expect lots more patches. And lots more lawsuits.

Companies can’t afford downtime. Employees need access to their applications and data 24/7, and so do other business applications, manufacturing and logistics management systems, and security monitoring centers. Anyone who thinks that the brute force effort of their hard-working IT administrators is enough to prevent system downtime just isn’t facing reality.

Traditional systems administrators and their admin tools can’t keep up with the complexity inherent in any modern enterprise. A recent survey of the Oracle Applications Users Group has found that despite significant progress in systems management automation, many customers still report that more than 80% of IT issues are first discovered and reported by users. The number of applications is spiraling up, while data increases at an even more rapid rate.

The boundaries between systems are growing more complex, especially with cloud-based and hybrid-cloud architectures. That reality is why Oracle, after analyzing a survey of its industry-leading customers, recently predicted that by 2020, more than 80% of application infrastructure operations will be managed autonomously.

Autonomously is an important word here. It means not only doing mundane day-to-day tasks including monitoring, tuning, troubleshooting, and applying fixes automatically, but also detecting and rapidly resolving issues. Even when it comes to the most complex problems, machines can simplify the analysis—sifting through the millions of possibilities to present simpler scenarios, to which people then can apply their expertise and judgment of what action to take.

Oracle asked about the kinds of activities that IT system administrators perform on a daily, weekly, and monthly basis—things such as password resets, system reboots, software patches, and the like.

Expect that IT teams will soon reduce by several orders of magnitude the number of situations like those that need manual intervention. If an organization typically has 20,000 human-managed interventions per year, humans will need to touch only 20. The rest will be handled through systems that can apply automation combined with machine learning, which can analyze patterns and react faster than human admins to enable preventive maintenance, performance optimization, and problem resolution.
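As a rough illustration of the pattern-analysis idea, here is a minimal Java sketch (the class name, metric, and threshold are all hypothetical) of the simplest possible anomaly gate: flag a reading that strays far from its recent average, the sort of check an autonomous system might run continuously to trigger preventive maintenance before a user files a ticket:

```java
import java.util.List;

// Hypothetical sketch: flag metric readings that deviate sharply from the
// recent average, the kind of pattern analysis an autonomous ops system
// might use to trigger preventive maintenance before users notice a problem.
public class AnomalyGate {
    // Returns true when the latest reading differs from the historical mean
    // by more than `tolerance` times the mean (a deliberately simple rule).
    public static boolean isAnomalous(List<Double> history, double latest, double tolerance) {
        double mean = history.stream().mapToDouble(Double::doubleValue).average().orElse(0.0);
        return mean > 0 && Math.abs(latest - mean) > tolerance * mean;
    }

    public static void main(String[] args) {
        List<Double> cpuLoad = List.of(0.42, 0.40, 0.45, 0.41);
        System.out.println(isAnomalous(cpuLoad, 0.95, 0.5)); // spike well above the norm: true
        System.out.println(isAnomalous(cpuLoad, 0.43, 0.5)); // within normal range: false
    }
}
```

Real autonomous-operations platforms apply far more sophisticated machine learning models; the point is only that the detection step can run around the clock without a human watching dashboards.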

Read more in my article for Forbes, “Prediction: 80% of Routine IT Operations Will Soon Be Solved Autonomously.”

On February 7, 2018, the carrier Swisscom admitted that a security lapse exposed sensitive information about 800,000 customers. The security failure was at one of Swisscom’s sales partners.

This is what can happen when a business gives its partners access to critical data. The security chain is only as good as the weakest link – and it can be difficult to ensure that partners are taking sufficient care, even if they pass an onboarding audit. Swisscom says,

In autumn of 2017, unknown parties misappropriated the access rights of a sales partner, gaining unauthorised access to customers’ name, address, telephone number and date of birth.

That’s pretty bad, but what came next was even worse, in my opinion. “Under data protection law this data is classed as ‘non-sensitive’,” said Swisscom. That’s distressing, because that’s exactly the sort of data needed for identity theft. But we digress.

Partners and Trust

Partners can be the way into an organization. Swisscom claims that new restrictions, such as preventing high-volume queries and using two-factor authentication, mean such an event can never occur again, which seems optimistic: “Swisscom also made a number of changes to better protect access to such non-sensitive personal data by third-party companies… These measures mean that there is no chance of such a breach happening again in the future.”

Let’s hope they are correct. But in the meantime, what can organizations do? First, ensure that all third parties that have access to sensitive data, such as intellectual property, financial information, and customer information, go through a rigorous security audit.

Tricia C. Bailey’s article, “Managing Third-Party Vendor Risk,” makes good recommendations for how to vet vendors – and also how to prepare at your end. For example, do you know what (and where) your sensitive data is? Do vendor contracts spell out your rights and responsibilities for security and data protection – and your vendor’s rights and responsibilities? Do you have a strong internal security policy? If your own house isn’t in order, you can’t expect a vendor to improve your security. After all, you might be the weakest link.

Unaccustomed to performing security audits on partners? Organizations like CA Veracode offer audit-as-a-service, such as with their Vendor Application Security Testing service. There are also vertical industry services: the HITRUST Alliance, for example, offers a standardized security audit process for vendors serving the U.S. healthcare industry with its Third Party Assurance Program.

Check the Back Door

Many vendors and partners require back doors into enterprise data systems. Those back doors, or remote access APIs, can be essential for vendors to perform their line-of-business functions. Take the Swisscom sales partner: It needs to be able to query Swisscom customers and add/update customer information in order to serve effectively as a sales organization.

Yet if the partner is breached, that back door can fall under the control of hackers, using the partner’s systems or credentials. In its 2017 Data Breach Investigations Report, Verizon reported that in regard to Point-of-Sale (POS) systems, “Almost 65% of breaches involved the use of stolen credentials as the hacking variety, while a little over a third employed brute force to compromise POS systems. Following the same trend as last year, 95% of breaches featuring the use of stolen credentials leveraged vendor remote access to hack into their customer’s POS environments.”

A Handshake Isn’t Good Enough

How secure is your business partner, your vendor, your contractor? If you don’t know, then you don’t know. If something goes wrong at your partners’ end, never forget that it may be your IP, your financials, and your customers’ data that is exposed. After all, whether or not you can recover damages from the partner in a lawsuit, your organization is the one that will pay the long-term price in the marketplace.

Savvy businesses have policies that prevent on-site viewing of pornography, in part to avoid creating a hostile work environment — and to avoid sexual harassment lawsuits. For security professionals, porn sites are also a dangerous source of malware.

That’s why human-resources policies should be backed up with technological measures. Those include blocking porn sites at the firewall and using on-device controls to stop browsers from accessing such sites.

Even that may not be enough, says Kaspersky Lab, in its report, “Naked online: cyberthreats facing users of adult websites and applications.” Why? Because naughty content and videos have gone mainstream, says the report:

Today, porn can be found not only on specialist websites, but also in social media networks and on social platforms like Twitter. Meanwhile, the ‘classic’ porn websites are turning into content-sharing platforms, creating loyal communities willing to share their videos with others in order to get ‘likes’ and ‘shares’.

This problem is not new, but it’s increasingly dangerous, thanks to the criminal elements on the Dark Web, which are advertising tools for weaponizing porn content. Says Kaspersky, “While observing underground and semi-underground market places on the dark web, looking for information on the types of legal and illegal goods sold there, we found that among the drugs, weapons, malware and more, credentials to porn websites were often offered for sale.”

So, what’s the danger? There are concerns about attacks on both desktop/notebook and mobile users. In the latter case, says Kaspersky,

  • In 2017, at least 1.2 million users encountered malware with adult content at least once. That is 25.4% of all users who encountered any type of Android malware.
  • Mobile malware is making extensive use of porn to attract users: Kaspersky Lab researchers identified 23 families of mobile malware that use porn content to hide their real functionality.
  • Malicious clickers, rooting malware, and banking Trojans are the types of malware that are most often found inside porn apps for Android.

That’s the type of malware that’s dangerous on a home network. It’s potentially ruinous if it provides a foothold onto an enterprise network not protected by intrusion detection/prevention systems or other anti-malware tech. The Kaspersky report goes into a lot of detail, and you should read it.

For another take on the magnitude of the problem: The Nielsen Company reported that more than 21 million Americans accessed adult websites on work computers – that is, 29% of working adults. Bosses are in on it too. In 2013, Time Magazine said that a survey of 200 U.S.-based data security analysts revealed that 40 percent had removed malware from a senior executive’s computer, phone, or tablet after the executive visited a porn website.

What Can You Do?

Getting rid of pornography isn’t easy, but it’s not rocket science either. Start with a strong policy. Work with your legal team to make sure the policy is both legal and comprehensive. Get employee feedback on the policy, to help generate buy-in from executives and the rank-and-file.

Once the policy is finalized, communicate it clearly. Train employees on what to do, what not to do… and the employment ramifications for violating the policy. Explain that this policy is not just about harassment, but also about information security.

Block, block, block. Block at the firewall, block at proxy servers, block on company-owned devices. Block on social media. Make sure that antivirus is up to date. Review log files.
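For the on-device and proxy layers of that blocking, a deny-list lookup is the core mechanism. This is a hypothetical sketch (the class and domain names are invented), not any vendor’s implementation:

```java
import java.util.Set;

// Hypothetical sketch of a device-side or proxy-side "block" layer: check
// whether a requested host, or any parent domain, appears on a deny list,
// the way an endpoint agent might enforce an acceptable-use policy.
public class HostFilter {
    public static boolean isBlocked(String host, Set<String> denyList) {
        // Walk up the domain hierarchy so "cdn.badsite.example" matches
        // a deny-list entry for "badsite.example".
        String h = host.toLowerCase();
        while (!h.isEmpty()) {
            if (denyList.contains(h)) return true;
            int dot = h.indexOf('.');
            if (dot < 0) break;
            h = h.substring(dot + 1);
        }
        return false;
    }

    public static void main(String[] args) {
        Set<String> deny = Set.of("badsite.example");
        System.out.println(isBlocked("cdn.badsite.example", deny));   // true
        System.out.println(isBlocked("intranet.corp.example", deny)); // false
    }
}
```

Commercial filters add category feeds, TLS inspection, and logging on top, but the parent-domain walk is the piece that keeps subdomains from slipping through.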

Finally, take this seriously. This isn’t a case of giggling (or eye-rolling) about boys-being-boys, or harmless diversions comparable to work-time shopping on eBay. Porn isn’t only offensive in the workplace, but it’s also a gateway to the Dark Web, criminals, and hackers. Going after porn isn’t only about being Victorian about naughty content. It’s about protecting your business from hackers.

We all have heard the usual bold predictions for technology in 2018: Lots of cloud computing, self-driving cars, digital cryptocurrencies, 200-inch flat-screen televisions, and versions of Amazon’s Alexa smart speaker everywhere on the planet. Those types of predictions, however, are low-hanging fruit. They’re not bold. One might as well predict that there will be some sunshine, some rainy days, a big cyber-bank heist, and at least one smartphone catching fire.

Let’s dig for insights beyond the blindingly obvious. I talked to several tech leaders, deep-thinking individuals in California’s Silicon Valley, asking them for their predictions, their idea of new trends, and disruptions in the tech industry. Let’s see what caught their eye.

Gary Singh, VP of marketing, OnDot Systems, believes that 2018 will be the year when mobile banking will transform into digital banking — which is more disruptive than one would expect. “The difference between digital and mobile banking is that mobile banking is informational. You get information about your accounts,” he said. Singh continues, “But in terms of digital banking, it’s really about actionable insights, about how do you basically use your funds in the most appropriate way to get the best value for your dollar or your pound in terms of how you want to use your monies. So that’s one big shift that we would see start to happen from mobile to digital.”

Tom Burns, Vice President and General Manager of Dell EMC Networking, has been following Software-Defined Wide Area Networks. SD-WAN is a technology that allows enterprise WANs to thrive over the public Internet, replacing expensive fixed-point connections provisioned by carriers using technologies like MPLS. “The traditional way of connecting branches in office buildings and providing services to those particular branches is going to change,” Burns observed. “If you look at the traditional router, a proprietary architecture, dedicated lines. SD-WAN is offering a much lower cost but same level of service opportunity for customers to have that data center interconnectivity or branch connectivity providing some of the services, maybe a full even office in the box, but security services, segmentation services, at a much lower cost basis.”

NetFoundry’s co-founder, Mike Hallett, sees a bright future for Application Specific Networks, which link applications directly to cloud or data center applications. The focus is on the application, not on the device. “For 2018, when you think of the enterprise and the way they have to be more agile, flexible and faster to move to markets, particularly going from what I would call channel marketing to, say, direct marketing, they are going to need application-specific networking technologies.” Hallett explains that Application Specific Networks offer the ability to connect from an application, a cloud, a device, or a thing to any other application, device, or thing quickly and with agility. Indeed, those connections, which are created using software, not hardware, could be created “within minutes, not within the weeks or months it might take, to bring up a very specific private network, being able to do that. So the year of 2018 will see enterprises move towards software-only networking.”

Mansour Karam, CEO and founder of Apstra, also sees software taking over the network. “I really see massive software-driven automation as a major trend. We saw technologies like intent-based networking emerge in 2017, and in 2018, they’re going to go mainstream,” he said.

There’s more

There are predictions around open networking, augmented reality, artificial intelligence – and more. See my full story in Upgrade Magazine, “From SD-WAN to automation to white-box switching: Five tech predictions for 2018.”

Tom Burns, VP and General Manager of Dell EMC Networking, doesn’t want 2018 to be like 2017. Frankly, none of us in tech want to hit the “repeat” button either. And we won’t, not with increased adoption of blockchain, machine learning/deep learning, security-as-a-service, software-defined everything, and critical enterprise traffic over the public Internet.

Of course, not all possible trends are positive ones. Everyone should prepare for more ransomware, more dangerous data breaches, newly discovered flaws in microprocessors and operating systems, lawsuits over GDPR, and political attacks on Net Neutrality. Yet, as the tech industry embraces 5G wireless and practical applications of the Internet of Things, let’s be optimistic, and hope that innovation outweighs the downsides of fast-moving technology.

Here, Dell has become a major force in networking across the globe. The company’s platform, known as Dell EMC Open Networking, includes a portfolio of data center switches and software, as well as solutions for campus and branch networks. Plus, Dell offers end-to-end services for digital transformation, training, and multivendor environment support.

Tom Burns heads up Dell’s networking business. That business became even larger in September 2016, when Dell closed its US$67 billion acquisition of EMC Corp. Before joining Dell in 2012, Burns was a senior executive at Alcatel-Lucent for many years. He and I chatted in early January at one of Dell’s offices in Santa Clara, Calif.

Q: What’s the biggest tech trend from 2017 that you see continuing into 2018?

Tom Burns (TB): The trend that I think will continue into 2018 and even beyond is around digital transformation. And I recognize that everyone may have a different definition of what that means, but what we at Dell Technologies believe it means is that the number of connected devices is exploding, whether it be cell phones or RFIDs or intelligent types of devices that are looking at our factories and so forth.

And all of this information needs to be collected and analyzed, with what some call artificial intelligence. Some of it needs to be aggregated at the edge. Some of it’s going to be brought back to the core data centers. This is what we refer to as IT transformation, to enable workforce transformation and other capabilities to deliver the applications, the information, the video, the voice communications, in real time to the users and give them the intelligence from the information that’s being gathered to make real-time decisions or whatever they need the information for.

Q: What do you see as being the tech trend from 2017 that you hope won’t continue into 2018?

TB: The trend that won’t continue into 2018 is the old buying habits around previous-generation technology. CIOs and CEOs, whether in enterprises or in service providers, are going to have to think of a new way to deliver their services and applications on a real-time basis, and the traditional architectures that have driven our data centers over the years just is not going to work anymore. It’s not scalable. It’s not flexible. It doesn’t drive out the costs that are necessary in order to enable those new applications.

So one of the things that I think is going to stop in 2018 is the old way of thinking – proprietary applications, proprietary full stacks. I think disaggregation, open, is going to be adopted much, much faster.

Q: If you could name one thing that will predict how the tech industry will do business next year, what do you think it will be?

TB: Well, I think one of the major changes, and we’ve started to see it already, and in fact, Dell Technologies announced it about a year ago, is how is our technology being consumed? We’ve been, let’s face it, box sellers or even solution providers that look at it from a CapEx standpoint. We go in, talk to our customers, we help them enable a new application as a service, and we kind of walk away. We sell them the product, and then obviously we support the product.

More and more, I think the customers and the consumers are looking for different ways to consume that technology, so we’ve started things like consumption models like pay as you grow, pay as you turn on, consumption models that allow us to basically ignite new services on demand. We have several customers that are doing this, particularly around the service provider area. So I think one way tech companies are going to change on how they deliver is this whole thing around pay as a service, consumption models and a new way to really provide the technology capabilities to our customers and then how do they enable them.

Q: If you could predict one thing that will change how enterprise customers do business next year…?

TB: One that we see as a huge, tremendous impact on how customers are going to operate is SD-WAN. The traditional way of connecting branches and office buildings and providing services to those particular branches is going to change. If you look at the traditional router, a proprietary architecture, dedicated lines, SD-WAN is offering a much lower cost but same level of service opportunity for customers to have that data center interconnectivity or branch connectivity, providing some of the services, maybe a full even office in the box, but security services, segmentation services, at a much lower cost basis. So I think that one of the major changes for enterprises next year and service providers is going to be this whole concept and idea with real technology behind it around Software-Defined WAN.

Read the full interview

There’s a lot more to my conversation with Tom Burns. Read the entire interview at Upgrade Magazine.

The pattern of cloud adoption moves something like the ketchup bottle effect: You tip the bottle and nothing comes out, so you shake the bottle and suddenly you have ketchup all over your plate.

That’s a great visual from Frank Munz, software architect and cloud evangelist at Munz & More, in Germany. Munz and a few other leaders in the Oracle community were interviewed on a podcast by Bob Rhubart, Architect Community Manager at Oracle, about the most important trends they saw in 2017. The responses covered a wide range of topics, from cloud to blockchain, from serverless to machine learning and deep learning.

During the 44-minute session, “What’s Hot? Tech Trends That Made a Real Difference in 2017,” the panel took some fascinating detours into the future of self-programming computers and the best uses of container technologies like Kubernetes. For those, you’ll need to listen to the podcast.

The panel included: Frank Munz; Lonneke Dikmans, chief product officer of eProseed, Netherlands; Lucas Jellema, CTO, AMIS Services, Netherlands; Pratik Patel, CTO, Triplingo, US; and Chris Richardson, founder and CEO, Eventuate, US. The program was recorded in San Francisco at Oracle OpenWorld and JavaOne.

The cloud’s tipping point

The ketchup quip reflects the cloud passing a tipping point of adoption in 2017. “For the first time in 2017, I worked on projects where large, multinational companies give up their own data center and move 100% to the cloud,” Munz said. These workload shifts are far from a rarity. As Dikmans said, the cloud drove the biggest change and challenge: “[The cloud] changes how we interact with customers, and with software. It’s convenient at times, and difficult at others.”

Security offered another way of looking at this tipping point. “Until recently, organizations had the impression that in the cloud, things were less secure and less well managed, in general, than they could do themselves,” said Jellema. Now, “people have come to realize that they’re not particularly good at specific IT tasks, because it’s not their core business.” They see that cloud vendors, whose core business is managing that type of IT, can often do those tasks better.

In 2017, the idea of shifting workloads en masse to the cloud and decommissioning data centers became mainstream and far less controversial.

But wait, there’s more! See about Blockchain, serverless computing, and pay-as-you-go machine learning, in my essay published in Forbes, “Tech Trends That Made A Real Difference In 2017.”

“The functional style of programming is very charming,” insists Venkat Subramaniam. “The code begins to read like the problem statement. We can relate to what the code is doing and we can quickly understand it.” Not only that, Subramaniam explains in his keynote address for Oracle Code Online, but as implemented in Java 8 and beyond, functional-style code is lazy—and that laziness makes for efficient operations because the runtime isn’t doing unnecessary work.

Subramaniam, president of Agile Developer and an instructional professor at the University of Houston, believes that laziness is the secret to success, both in life and in programming. Pretend that your boss tells you on January 10 that a certain hourlong task must be done before April 15. A go-getter might do that task by January 11.

That’s wrong, insists Subramaniam. Don’t complete that task until April 14. Why? Because the results of the boss’s task aren’t needed yet, and the requirements may change before the deadline, or the task might be canceled altogether. Or you might even leave the job on March 13. This same mindset should apply to your programming: “Efficiency often means not doing unnecessary work.”
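Java 8 streams make that laziness concrete: intermediate operations such as map and filter do no work until a terminal operation demands results, and short-circuiting operations like findFirst stop as soon as they can. A small sketch (the counter is purely instrumentation for the example):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// A minimal illustration of lazy evaluation in Java 8 streams: intermediate
// operations run only as far as the terminal operation demands. Here
// findFirst short-circuits after just three elements are processed.
public class LazyStreams {
    public static int workDone(List<Integer> values) {
        AtomicInteger evaluations = new AtomicInteger();
        values.stream()
              .map(n -> { evaluations.incrementAndGet(); return n * 2; })
              .filter(n -> n > 4)
              .findFirst();            // stops as soon as one match is found
        return evaluations.get();      // elements actually processed, not list size
    }

    public static void main(String[] args) {
        // Only 1, 2, and 3 are doubled; 3 * 2 = 6 passes the filter,
        // so the elements 4 through 8 are never touched.
        System.out.println(workDone(List.of(1, 2, 3, 4, 5, 6, 7, 8))); // prints 3
    }
}
```

An eager implementation would have doubled all eight values before filtering; the lazy pipeline did the minimum work the answer required, which is exactly Subramaniam’s point.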

Subramaniam received the JavaOne RockStar award three years in a row and was inducted into the Java Champions program in 2013 for his efforts in motivating and inspiring software developers around the world. In his Oracle Code Online keynote, he explored how functional-style programming is implemented in the latest versions of Java, and why he’s so enthusiastic about using this style for applications that process lots and lots of data—and where it’s important to create code that is easy to read, easy to modify, and easy to test.

Functional Versus Imperative Programming

The old mainstream of imperative programming, which has been a part of the Java language from day one, relies upon developers to explicitly code not only what they want the program to do, but also how to do it. Take software that has a huge amount of data to process; the programmer would normally create a loop that examines each piece of data and, if appropriate, takes specific action on that data with each iteration of the loop. It’s up to the developer to create the loop and manage it—in addition to coding the business logic to be performed on the data.

The imperative model, argues Subramaniam, results in what he calls “accidental complexity”—each line of code might perform multiple functions, which makes it hard to understand, modify, test, and verify. And, the developer must do a lot of work to set up and manage the data and iterations. “You get bogged down with the details,” he said. This not only introduces complexity, but also makes code hard to change.

By contrast, when using a functional style of programming, developers can focus almost entirely on what is to be done, while ignoring the how. The how is handled by the underlying library of functions, which are defined separately and applied to the data as required. Subramaniam says that functional-style programming provides highly expressive code, where each line of code does only one thing: “The code becomes easier to work with, and easier to write.”
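The contrast reads clearly in code. This sketch places the two styles side by side (the method and variable names are invented for illustration):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;

// A side-by-side sketch of the two styles: the imperative version manages
// the loop and a mutable accumulator by hand, while the functional version
// states only *what* should happen to the data.
public class TwoStyles {
    // Imperative: the developer owns the iteration and the mutable state.
    public static List<String> shoutLongNamesImperative(List<String> names) {
        List<String> result = new ArrayList<>();
        for (String name : names) {
            if (name.length() > 3) {
                result.add(name.toUpperCase());
            }
        }
        return result;
    }

    // Functional: each line does one thing; the library handles the "how."
    public static List<String> shoutLongNamesFunctional(List<String> names) {
        return names.stream()
                    .filter(name -> name.length() > 3)
                    .map(String::toUpperCase)
                    .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> names = List.of("Ada", "Grace", "Alan", "Joe");
        System.out.println(shoutLongNamesImperative(names)); // [GRACE, ALAN]
        System.out.println(shoutLongNamesFunctional(names)); // [GRACE, ALAN]
    }
}
```

Both methods return the same result; the difference is that the functional pipeline reads like the problem statement, with no loop bookkeeping for a reviewer to verify.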

Subramaniam adds that in functional-style programming, “The code becomes the business logic.” Read more in my essay published in Forbes, “Lazy Java Code Makes Applications Elegant, Sophisticated — And Efficient at Runtime.”

 

At least, I think it’s Swedish! Just stumbled across this. I hope they bought the foreign rights to one of my articles…


With lots of inexpensive, abundant computation resources available, nearly anything becomes possible. For example, you can process a lot of network data to identify patterns, extract intelligence, and produce insight that can be used to automate networks. The road to Intent-Based Networking Systems (IBNS) and Application-Specific Networks (ASN) is a journey. That’s the belief of Rajesh Ghai, Research Director of Telecom and Carrier IP Networks at IDC.

Ghai defines IBNS as a closed-loop continuous implementation of several steps:

  • Declaration of intent, where the network administrator defines what the network is supposed to do
  • Translation of intent into network design and configuration
  • Validation of the design, using a model that decides whether that configuration can actually be implemented
  • Propagation of that configuration into the network devices via APIs
  • Gathering and studying real-time telemetry from all the devices
  • Using machine learning to determine whether the desired state of policy has been achieved, and then repeating the loop
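The closed loop Ghai describes can be sketched in miniature. This hypothetical Java fragment (the intent, threshold, and class name are invented) checks one declared intent, a link-utilization ceiling, against telemetry and reports whether another corrective pass is needed; real IBNS products model far richer state, but the loop has the same shape:

```java
// Hypothetical sketch of one turn of the IBNS closed loop: compare observed
// telemetry against a declared intent and decide whether the desired state
// holds or a reconfigure-and-revalidate pass is needed.
public class IntentLoop {
    // Declared intent: every monitored link should stay below this utilization.
    static final double MAX_LINK_UTILIZATION = 0.80;

    // One iteration of the loop: telemetry in, "intent achieved?" out.
    public static boolean intentAchieved(double[] linkUtilization) {
        for (double u : linkUtilization) {
            if (u > MAX_LINK_UTILIZATION) return false; // drifted from desired state
        }
        return true; // desired state holds; keep gathering telemetry and repeat
    }

    public static void main(String[] args) {
        System.out.println(intentAchieved(new double[] {0.35, 0.60, 0.72})); // true
        System.out.println(intentAchieved(new double[] {0.35, 0.91}));       // false
    }
}
```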

Related to that concept, Ghai explains, is ASN. “It’s also a concept which is software control and optimization and automation. The only difference is that ASN is more applicable to distributed applications over the internet than IBNS.”

IBNS Operates Networks as One System

“Think of intent-based networking as software that sits on top of your infrastructure and focusing on the networking infrastructure, and enables you to operate your network infrastructure as one system, as opposed to box per box,” explained Mansour Karam, Founder, CEO of Apstra, which offers IBNS solutions for enterprise data centers.

“To achieve this, we have to start with intent,” he continued. “Intent is both the high-level business outcomes that are required by the business, but then also we think of intent as applying to every one of those stages. You may have some requirements in how you want to build.”

Karam added, “Validation includes tests that you would run — we call them expectations — to validate that your network indeed is behaving as you expected, as per intent. So we have to think of a sliding scale of intent and then we also have to collect all the telemetry in order to close the loop and continuously validate that the network does what you want it to do. There is the notion of state at the core of an IBNS that really boils down to managing state at scale and representing it in a way that you can reason about the state of your system, compare it with the desired state and making the right adjustments if you need to.”

The upshot of IBNS, Karam said: “If you have powerful automation you’re taking the human out of the equation, and so you get a much more agile network. You can recoup the revenues that otherwise you would have lost, because you’re unable to deliver your business services on time. You will reduce your outages massively, because 80% of outages are caused by human error. You reduce your operational expenses massively, because organizations spend $4 operating every dollar of CapEx, and 80% of it is manual operations. So if you take that out you should be able to recoup easily your entire CapEx spend on IBNS.”

ASN Gives Each Application Its Own Network

“Application-Specific Networks, like Intent-Based Networking Systems, enable digital transformation, agility, speed, and automation,” explained Galeal Zino, Founder of NetFoundry, which offers an ASN platform.

He continued, “ASN is a new term, so I’ll start with a simple analogy. ASNs are like private clubs — very, very exclusive private clubs — with exactly two members, the application and the network. ASN literally gives each application its own network, one that’s purpose-built and driven by the specific needs of that application. ASN merges the application world and the network world into software which can enable digital transformation with velocity, with scale, and with automation.”

Read more in my new article for Upgrade Magazine, “Manage smarter, more autonomous networks with Intent-Based Networking Systems and Application Specific Networking.”

When the little wireless speaker in your kitchen acts on your request to add chocolate milk to your shopping list, there’s artificial intelligence (AI) working in the cloud, to understand your speech, determine what you want to do, and carry out the instruction.

When you send a text message to your HR department explaining that you woke up with a vision-blurring migraine, an AI-powered chatbot knows how to update your status to “out of the office” and notify your manager about the sick day.

When hackers attempt to systematically break into the corporate computer network over a period of weeks, AI sees the subtle patterns in historical log data, recognizes outliers in the packet traffic, raises the alarm, and recommends appropriate countermeasures.

AI is nearly everywhere in today’s society. Sometimes it’s fairly obvious (as with a chatbot), and sometimes AI is hidden under the covers (as with network security monitors). It’s a virtuous cycle: Modern cloud computing and algorithms make AI a fast, efficient, and inexpensive approach to problem-solving. Developers discover those cloud services and algorithms and imagine new ways to incorporate the latest AI functionality into their software. Businesses see the value of those advances (even if they don’t know that AI is involved), and everyone benefits. And quickly, the next wave of emerging technology accelerates the cycle again.

AI can improve the user experience, such as when deciphering spoken or written communications, or inferring actions based on patterns of past behavior. AI techniques are excellent at pattern-matching, making it easier for machines to accurately decipher human languages using context. One characteristic of several AI algorithms is flexibility in handling imprecise data, such as human text. That’s especially true of chatbots—where humans can type messages on their phones, and AI-driven software can understand what they say, carry on a conversation, and provide desired information or take the appropriate actions.
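To make the text-in, intent-out contract concrete, here is a toy keyword matcher (all names, keywords, and intents are invented). Production chatbots rely on trained language models rather than keyword lookup, but the interface, free-form text in and a structured intent out, is similar:

```java
import java.util.Locale;
import java.util.Map;

// Toy illustration of the chatbot contract: map imprecise human text to a
// structured intent. Real systems use trained language models; only the
// shape of the interface is meant to carry over.
public class TinyIntentMatcher {
    static final Map<String, String> KEYWORD_TO_INTENT = Map.of(
        "sick", "report_absence",
        "migraine", "report_absence",
        "shopping", "update_shopping_list",
        "balance", "account_inquiry"
    );

    public static String matchIntent(String utterance) {
        String text = utterance.toLowerCase(Locale.ROOT);
        for (Map.Entry<String, String> e : KEYWORD_TO_INTENT.entrySet()) {
            if (text.contains(e.getKey())) return e.getValue();
        }
        return "unknown"; // hand off to a human agent
    }

    public static void main(String[] args) {
        System.out.println(matchIntent("I woke up with a terrible migraine")); // report_absence
        System.out.println(matchIntent("What's my checking balance?"));        // account_inquiry
    }
}
```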

If you think AI is everywhere today, expect more tomorrow. AI-enhanced software-as-a-service and platform-as-a-service products will continue to incorporate additional AI to help make cloud-delivered and on-prem services more reliable, more performant, and more secure. AI-driven chatbots will find their way into new, innovative applications, and speech-based systems will continue to get smarter. AI will handle larger and larger datasets and find its way into increasingly diverse industries.

Sometimes you’ll see the AI and know that you’re talking to a bot. Sometimes the AI will be totally hidden, as you marvel at the, well, uncanny intelligence of the software, websites, and even the Internet of Things. If you don’t believe me, ask a chatbot.

Read more in my feature article in the January/February 2018 edition of Oracle Magazine, “It’s Pervasive: AI Is Everywhere.”

Millions of developers are using Artificial Intelligence (AI) or Machine Learning (ML) in their projects, says Evans Data Corp. Evans’ latest Global Development and Demographics Study, released in January 2018, says that 29% of developers worldwide, or 6,452,000 in all, are currently using some form of AI or ML. What’s more, says the study, an additional 5.8 million expect to use AI or ML within the next six months.

ML is actually a subset of AI. To quote expertsystem.com,

In practice, artificial intelligence – also simply defined as AI – has come to represent the broad category of methodologies that teach a computer to perform tasks as an “intelligent” person would. This includes, among others, neural networks or the “networks of hardware and software that approximate the web of neurons in the human brain” (Wired); machine learning, which is a technique for teaching machines to learn; and deep learning, which helps machines learn to go deeper into data to recognize patterns, etc. Within AI, machine learning includes algorithms that are developed to tell a computer how to respond to something by example.

The same site defines ML as,

Machine learning is an application of artificial intelligence (AI) that provides systems the ability to automatically learn and improve from experience without being explicitly programmed. Machine learning focuses on the development of computer programs that can access data and use it learn for themselves.

The process of learning begins with observations or data, such as examples, direct experience, or instruction, in order to look for patterns in data and make better decisions in the future based on the examples that we provide. The primary aim is to allow the computers learn automatically without human intervention or assistance and adjust actions accordingly.
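The "learning from examples" idea in that quoted definition can be illustrated with one of the simplest ML algorithms, a nearest-neighbor classifier. The features and labels below are made up purely for the sketch:

```python
def nearest_neighbor(train, query):
    """Classify `query` with the label of the closest training example."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(train, key=lambda example: dist(example[0], query))
    return label

# Invented (features, label) pairs that the "learner" generalizes from.
examples = [
    ((1.0, 1.0), "spam"),
    ((0.9, 1.2), "spam"),
    ((5.0, 5.2), "ham"),
    ((5.1, 4.9), "ham"),
]
print(nearest_neighbor(examples, (1.1, 0.8)))  # → spam
print(nearest_neighbor(examples, (4.8, 5.0)))  # → ham
```

Nobody wrote a rule saying what "spam" looks like; the program learned its behavior from the examples it was given, which is the essence of machine learning.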

A related and popular AI-derived technology, by the way, is Deep Learning. DL uses simulated neural networks to attempt to mimic the way a human brain learns and reacts. To quote from Rahul Sharma on Techgenix,

Deep learning is a subset of machine learning. The core of deep learning is associated with neural networks, which are programmatic simulations of the kind of decision making that takes place inside the human brain. However, unlike the human brain, where any neuron can establish a connection with some other proximate neuron, neural networks have discrete connections, layers, and data propagation directions.

Just like machine learning, deep learning is also dependent on the availability of massive volumes of data for the technology to “train” itself. For instance, a deep learning system meant to identify objects from images will need to run millions of test cases to be able to build the “intelligence” that lets it fuse together several kinds of analysis together, to actually identify the object from an image.

Why So Many AI Developers? Why Now?

You can find AI, ML and DL everywhere, it seems. There are highly visible projects, like self-driving cars, or the speech recognition software inside Amazon’s Alexa smart speakers. That’s merely the tip of the iceberg. These technologies are embedded into the Internet of Things, into smart analytics and predictive analytics, into systems management, into security scanners, into Facebook, into medical devices.

A modern and highly visible application of AI/ML is the chatbot – software that can communicate with humans via textual interfaces. Some companies use chatbots on their websites or on social media channels (like Twitter) to talk to customers and provide basic customer services. Others use the tech within a company, such as in human-resources applications that let employees make requests (like scheduling vacation) by simply texting the HR chatbot.
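A rule-based sketch shows the general shape of such a chatbot backend. The intents and keyword patterns here are hypothetical stand-ins for the trained language models a production chatbot would actually use:

```python
import re

# Hypothetical intents and keyword patterns; a production chatbot would rely
# on trained language models rather than hand-written rules like these.
INTENTS = {
    "schedule_vacation": r"\b(vacation|time off|pto)\b",
    "sick_day":          r"\b(sick|migraine|ill)\b",
    "payroll_question":  r"\b(paycheck|payroll|salary)\b",
}

def classify(message):
    """Map a free-text message to the first intent whose pattern matches."""
    for intent, pattern in INTENTS.items():
        if re.search(pattern, message.lower()):
            return intent
    return "unknown"

print(classify("I woke up with a vision-blurring migraine"))  # → sick_day
print(classify("Can I take vacation next Friday?"))           # → schedule_vacation
```

Once the intent is known, the application can route the request – update an out-of-office status, open a vacation ticket – without a human in the loop.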

AI is also paying off in finance. The technology can help service providers (like banks or payment-card transaction clearinghouses) more accurately review transactions to see if they are fraudulent, and improve overall efficiency. According to John Rampton, writing for the Huffington Post, AI investment by financial tech companies was more than $23 billion in 2016. The benefits of AI, he writes, include:

  • Increasing Security
  • Reducing Processing Times
  • Reducing Duplicate Expenses and Human Error
  • Increasing Levels of Automation
  • Empowering Smaller Companies

Rampton also explains that AI can offer game-changing insights:

One of the most valuable benefits AI brings to organizations of all kinds is data. The future of Fintech is largely reliant on gathering data and staying ahead of the competition, and AI can make that happen. With AI, you can process a huge volume of data which will, in turn, offer you some game-changing insights. These insights can be used to create reports that not only increase productivity and revenue, but also help with complex decision-making processes.

What’s happening in fintech with AI is nothing short of revolutionary. That’s true of other industries as well. Instead of asking why so many developers, 29%, are focusing on AI, we should ask, “Why so few?”

A friend insists that “the Internet is down” whenever he can’t get a strong wireless connection on his smartphone. With that type of logic, enjoy this photo found on the aforementioned Internet:

“Wi-Fi” is apparently now synonymous with “Internet” or “network.” It’s clear that we have come a long way from the origins of the Wi-Fi Alliance, which originally defined the term as meaning “Wireless Fidelity.” The vendor-driven alliance was formed in 1999 to jointly promote the broad family of IEEE 802.11 wireless local-area networking standards, as well as to ensure interoperability through certifications.

But that was so last millennium! It’s all Wi-Fi, all the time. In that vein, let me propose four new acronyms:

  • Wi-Fi-Wi – Wireless networking, i.e., 802.11
  • Wi-Fi-Cu – Any conventionally cabled network
  • Wi-Fi-Fi – Networking over fiber optics (but not Fibre Channel)
  • Wi-Fi-FC – Wireless Fibre Channel, I suppose

You get the idea….

It’s all about the tradeoffs! You can have the chicken or the fish, but not both. You can have the big engine in your new car, but that means a stick shift—you can’t have the V8 and an automatic. Same for that cake you want to have and eat. Your business applications can be easy to use or secure—not both.

But some of those are false dichotomies, especially when it comes to security for data center and cloud applications. You can have it both ways. The systems can be easy to use and maintain, and they can be secure.

On the consumer side, consider two-factor authentication (2FA), whereby users receive a code number, often by text message to their phones, which they must type into a webpage to confirm their identity. There’s no doubt that 2FA makes systems more secure. The problem is that 2FA is a nuisance for the individual end user, because it slows down access to a desired resource or application. Unless you’re protecting your personal bank account, there’s little incentive for you to use 2FA. Thus, services that require 2FA frequently aren’t used, get phased out, are subverted, or are simply loathed.
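The codes behind 2FA are often generated with an HOTP-style scheme (RFC 4226): both sides derive a short number from a shared secret and a moving counter. Here's a minimal sketch using only the Python standard library; the secret is a placeholder, and real deployments provision a random secret per user:

```python
import hashlib
import hmac
import struct

def one_time_code(secret: bytes, counter: int, digits: int = 6) -> str:
    """Derive a short numeric code from a shared secret and a counter,
    in the style of HOTP (RFC 4226)."""
    msg = struct.pack(">Q", counter)                      # 8-byte counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)

secret = b"shared-secret"   # placeholder; real secrets are provisioned per user
code = one_time_code(secret, counter=1)
# Server and client derive the same code independently, so it can be checked
# without ever transmitting the secret itself.
print(len(code), one_time_code(secret, 1) == code)  # → 6 True
```

The security comes from the counter: each code is valid once, so a stolen code is worthless after it's used.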

Likewise, security measures specified by corporate policies can be seen as a nuisance or an impediment. Consider dividing an enterprise network into small “trusted” networks, such as by using virtual LANs or other forms of authenticating users, applications, or API calls. This setup can require considerable effort for internal developers to create, and even more effort to modify or update.

When IT decides to migrate an application from a data center to the cloud, the steps required to create API-level authentication across such a hybrid deployment can be substantial. The effort required to debug that security scheme can be horrific. As for audits to ensure adherence to the policy? Forget it. How about we just bypass it, or change the policy instead?

Multiply that simple scenario by 1,000 for all the interlinked applications and users at a typical midsize company. Or 10,000 or 100,000 at big ones. That’s why post-mortem examinations of so many security breaches show what appears to be an obvious lack of “basic” security. However, my guess is that in many of those incidents, the chief information security officer or IT staffers were under pressure to make systems, including applications and data sources, extremely easy for employees to access, and there was no appetite for creating, maintaining, and enforcing strong security measures.

Read more about these tradeoffs in my article on Forbes for Oracle Voice: “You Can Have Your Security Cake And Eat It, Too.”

I’m #1! Well, actually #4 and #7. During 2017, I wrote several articles for Hewlett Packard Enterprise’s online magazine, Enterprise.nxt Insights, and two of them were quite successful – ranked #4 and #7 on the site’s list of Top 10 Articles for 2017.

Article #7 was, “4 lessons for modern software developers from 1970s mainframe programming.” Based entirely on my own experiences, the article began,

Eight megabytes of memory is plenty. Or so we believed back in the late 1970s. Our mainframe programs usually ran in 8 MB virtual machines (VMs) that had to contain the program, shared libraries, and working storage. Though these days, you might liken those VMs more to containers, since the timesharing operating system didn’t occupy VM space. In fact, users couldn’t see the OS at all.

In that mainframe environment, we programmers learned how to be parsimonious with computing resources, which were expensive, limited, and not always available on demand. We learned how to minimize the costs of computation, develop headless applications, optimize code up front, and design for zero defects. If the very first compilation and execution of a program failed, I was seriously angry with myself.

Please join me on a walk down memory lane as I revisit four lessons I learned while programming mainframes and teaching mainframe programming in the era of Watergate, disco on vinyl records, and Star Wars—and which remain relevant today.

Article #4 was, “The OWASP Top 10 is killing me, and killing you!” It began,

Software developers and testers must be sick of hearing security nuts rant, “Beware SQL injection! Monitor for cross-site scripting! Watch for hijacked session credentials!” I suspect the developers tune us out. Why? Because we’ve been raving about the same defects for most of their careers. Truth is, though, the same set of major security vulnerabilities persists year after year, decade after decade.

The industry has generated newer tools, better testing suites, Agile methodologies, and other advances in writing and testing software. Despite all that, coders keep making the same dumb mistakes, peer reviews keep missing those mistakes, test tools fail to catch those mistakes, and hackers keep finding ways to exploit those mistakes.

One way to see the repeat offenders is to look at the OWASP Top 10, a sometimes controversial ranking of the 10 primary vulnerabilities, published every three or four years by the Open Web Application Security Project.

The OWASP Top 10 list is not controversial because it’s flawed. Rather, some believe that the list is too limited. By focusing only on the top 10 web code vulnerabilities, they assert, it causes neglect for the long tail. What’s more, there’s often jockeying in the OWASP community about the Top 10 ranking and whether the 11th or 12th belong in the list instead of something else. There’s merit to those arguments, but for now, the OWASP Top 10 is an excellent common ground for discussing security-aware coding and testing practices.

Click the links (or pictures) above and enjoy the articles! And kudos to my prolific friend Steven J. Vaughan-Nichols, whose articles took the #3, #2 and #1 slots. He’s good. Damn good.

Amazon says that a cloud-connected speaker/microphone was at the top of the charts: “This holiday season was better than ever for the family of Echo products. The Echo Dot was the #1 selling Amazon Device this holiday season, and the best-selling product from any manufacturer in any category across all of Amazon, with millions sold.”

The Echo products are an ever-expanding family of inexpensive consumer electronics from Amazon, which connect to a cloud-based service called Alexa. The devices are always listening for spoken commands, and will respond by conversing, playing music, turning on/off lights and other connected gadgets, making phone calls, and even showing videos.

While Amazon doesn’t release sales figures for its Echo products, it’s clear that consumers love them. In fact, Echo is about to hit the road, as BMW will integrate the Echo technology (and Alexa cloud service) into some cars beginning this year. Expect other automakers to follow.

Why the Echo – and Apple’s Siri and Google’s Home? Speech.

The traditional way of “talking” to computers has been through the keyboard, augmented with a mouse used to select commands or input areas. Computers initially responded only to typed instructions using a command-line interface (CLI); this was replaced in the era of the Apple Macintosh and the first iterations of Microsoft Windows with windows, icons, menus, and pointing devices (WIMP). Some refer to the modern interface used on standard computers as a graphical user interface (GUI); embedded devices, such as network routers, might be controlled by either a GUI or a CLI.

Smartphones, tablets, and some computers (notably those running Windows) also include touchscreens. While touchscreens have been around for decades, it’s only in the past few years that they’ve gone mainstream. Even so, the primary way to input data has been through a keyboard – even if it’s a “soft” keyboard implemented on a touchscreen, as on a smartphone.

Talk to me!

Enter speech. Sometimes it’s easier to talk, simply talk, to a device than to use a physical interface. Speech can be used for commands (“Alexa, turn up the thermostat” or “Hey Google, turn off the kitchen lights”) or for dictation.

Speech recognition is not easy for computers; in fact, it’s pretty difficult. However, improved microphones and powerful artificial-intelligence algorithms make speech recognition a lot easier. Helping the process: Cloud computing, which can throw nearly unlimited resources at speech recognition, including predictive analytics. Another helper: Constrained inputs, which means that when it comes to understanding commands, there are only so many words for the speech recognition system to decode. (Free-form dictation, like writing an essay using speech recognition, is a far harder problem.)
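Constrained input can be sketched with plain string matching: snap whatever the recognizer transcribed onto the closest entry in a small command list. The commands below are illustrative, and a real assistant would use statistical language models rather than simple edit distance:

```python
import difflib

# An illustrative command grammar; real assistants match against far larger,
# statistically weighted vocabularies.
COMMANDS = [
    "turn up the thermostat",
    "turn off the kitchen lights",
    "play some jazz",
    "add chocolate milk to my shopping list",
]

def decode(transcript):
    """Snap a noisy transcription onto the closest known command, if any."""
    matches = difflib.get_close_matches(transcript.lower(), COMMANDS, n=1, cutoff=0.6)
    return matches[0] if matches else None

print(decode("turn of the kitchen light"))  # → turn off the kitchen lights
print(decode("hello world"))                # → None
```

With only a handful of possible commands, even a noisy transcription usually lands on the right one – which is exactly why command recognition is so much easier than free-form dictation.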

It’s a big market

Speech recognition is only going to get better – and bigger. According to one report, “The speech and voice recognition market is expected to be valued at USD 6.19 billion in 2017 and is likely to reach USD 18.30 billion by 2023, at a CAGR of 19.80% between 2017 and 2023. The growing impact of artificial intelligence (AI) on the accuracy of speech and voice recognition and the increased demand for multifactor authentication are driving the market growth.” The report continues:

“The speech recognition technology is expected to hold the largest share of the market during the forecast period due to its growing use in multiple applications owing to the continuously decreasing word error rate (WER) of speech recognition algorithm with the developments in natural language processing and neural network technology. The speech recognition technology finds applications mainly across healthcare and consumer electronics sectors to produce health data records and develop intelligent virtual assistant devices, respectively.

“The market for the consumer vertical is expected to grow at the highest CAGR during the forecast period. The key factor contributing to this growth is the ability to integrate speech and voice recognition technologies into other consumer devices, such as refrigerators, ovens, mixers, and thermostats, with the growth of Internet of Things.”

Right now, many of us are talking to Alexa, talking to Siri, and talking to Google Home. Back in 2009, I owned a Ford car that had a primitive (and laughably inaccurate) infotainment system – today, a new car might do a lot better, perhaps due to embedded Alexa. Will we soon be talking to our ovens, to our laser printers and photocopiers, to our medical implants, to our assembly-line equipment, and to our network infrastructure? It wouldn’t surprise Alexa in the least.

Agility – the ability to deliver projects quickly. That applies to new projects, as well as updates to existing projects. The agile software movement began when many smart people became frustrated with the classic model of development, where first the organization went through a complex process to develop requirements (which took months or years), and wrote software to address those requirements (which took months or years, or maybe never finished). By then, not only did the organization miss out on many opportunities, but perhaps the requirements were no longer valid – if they ever were.

With agile methodologies, the goal is to build software (or accomplish some complex task or action), in small incremental iterations. Each iteration delivers some immediate value, and after each iteration, there would be an evaluation of how well those who requested the project (the stakeholders) were satisfied, and what they wanted to do next. No laborious up-front requirements. No years of investment before there was any return on that investment.

One of the best-known agile frameworks is Scrum, developed by Jeff Sutherland and Ken Schwaber in the early 1990s. In my view, Scrum is noteworthy for several innovations, including:

  • The Scrum framework is simple enough for everyone involved to understand.
  • The Scrum framework is not a product.
  • Scrum itself is not tied to a specific vendor’s project-management tools.
  • Work is performed in two-week increments, called Sprints.
  • Every day there is a brief meeting called a Daily Scrum.
  • Development is iterative, incremental, and outcomes are predictable.
  • The work must be transparent, as much as possible, to everyone involved.
  • The roles of participants in the project are defined extremely clearly.
  • The relationship between people in the various roles is also clearly defined.
  • A key participant is the Scrum Master, who helps everyone maximize the value of the team and the project.
  • There is a clear, unambiguous definition of what “Done” means for every action item.

Scrum itself is refined every year or two by Sutherland and Schwaber. The most recent version (if you can call it a version) is Scrum 2017; before that, it was revised in 2016 and 2013. While there aren’t that many significant changes from the original vision unveiled in 1995, here are three recent changes that, in my view, make Scrum better than ever – enough that it might be called Scrum 2.0. Well, maybe Scrum 1.5. You decide:

  1. The latest version acknowledges more clearly that Scrum, like other agile methodologies, is used for all sorts of projects, not merely creating or enhancing software. While the Scrum Guide is still development-focused, Scrum can be used for market research, product development, developing cloud services, and even managing schools and governments.
  2. The Daily Scrum will be more focused on exploring how well the work is driving toward the biweekly Sprint Goal. For example – what work will be done today to drive toward the goal? What impediments are likely to prevent us from meeting the goal? (Previously, the Daily Scrum was often viewed as a glorified status-report meeting.)
  3. Scrum has a set of values, and those are now spelled out: “When the values of commitment, courage, focus, openness and respect are embodied and lived by the Scrum Team, the Scrum pillars of transparency, inspection, and adaptation come to life and build trust for everyone. The Scrum Team members learn and explore those values as they work with the Scrum events, roles and artifacts. Successful use of Scrum depends on people becoming more proficient in living these five values… Scrum Team members respect each other to be capable, independent people.”

The word “agile” is thrown around too often in business and technology, covering everything from planning a business acquisition to planning a network upgrade. Scrum is one of the best-known agile methodologies, and the framework is very well suited for all sorts of projects where it’s not feasible to determine a full set of requirements up front, and there’s a need to immediately begin delivering some functionality (or accomplish parts of the tasks). That Scrum continues to evolve will help ensure its value in the coming years… and decades.

Criminals steal money from banks. Nothing new there: As Willie Sutton famously said, “I rob banks because that’s where the money is.”

Criminals steal money from other places too. While many cybercriminals target banks, the reality is that there are better places to steal money, or at least, steal information that can be used to steal money. That’s because banks are generally well-protected – and gas stations, convenience stores, smaller on-line retailers, and even payment processors are likely to have inadequate defenses — or make stupid mistakes that aren’t caught by security professionals.

Take TIO Networks, a bill-payment service purchased by PayPal for US$233 million in July 2017. TIO processed more than $7 billion in bill payments last year, serving more than 10,000 vendors and 16 million consumers.

Hackers now know critical information about all 16 million TIO customers. According to PYMNTS.com, “… the data that may have been impacted included names, addresses, bank account details, Social Security numbers and login information. How much of those details fell into the hands of cybercriminals depends on how many of TIO’s services the consumers used.”

PayPal has said,

“The ongoing investigation has uncovered evidence of unauthorized access to TIO’s network, including locations that stored personal information of some of TIO’s customers and customers of TIO billers. TIO has begun working with the companies it services to notify potentially affected individuals. We are working with a consumer credit reporting agency to provide free credit monitoring memberships. Individuals who are affected will be contacted directly and receive instructions to sign up for monitoring.”

Card Skimmers and EMV Chips

Another common place where money changes hands: The point-of-purchase device. Consider payment-card skimmers – hardware devices secretly installed into a retail location’s card reader, often at an unattended location like a gasoline pump.

The amount of fraud caused by skimmers copying information on payment cards is expected to rise from $3.1 billion in 2015 to $6.4 billion in 2018, affecting about 16 million cardholders. Those are for payment cards that don’t have the integrated EMV chip, or for transactions that don’t use the EMV system.

EMV chips, also known as chip-and-PIN or chip-and-signature, are named for the three companies behind the technology standards – Europay, MasterCard, and Visa. Chip technology, which is seen as a nuisance by consumers, has dramatically reduced the amount of fraud by generating a unique, non-repeatable transaction code for each purchase.
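The idea of a unique, non-repeatable code per transaction can be sketched with an HMAC over a per-card counter. This is a simplified illustration only; actual EMV cryptogram generation is specified by the card networks and is considerably more involved:

```python
import hashlib
import hmac

def transaction_cryptogram(card_key: bytes, counter: int, amount_cents: int) -> str:
    """Illustrative only: derive a one-time code from a card-resident key and
    a per-transaction counter. Real EMV cryptograms follow the card networks'
    specifications and are considerably more involved."""
    msg = f"{counter}:{amount_cents}".encode()
    return hmac.new(card_key, msg, hashlib.sha256).hexdigest()[:16]

key = b"card-resident-key"   # hypothetical; a real key never leaves the chip
c1 = transaction_cryptogram(key, counter=41, amount_cents=2500)
c2 = transaction_cryptogram(key, counter=42, amount_cents=2500)
# A skimmed copy of c1 is useless: the issuer expects the next counter value.
print(c1 != c2)  # → True
```

That's the key contrast with a mag stripe, which exposes the same static data on every swipe – anyone who copies it can replay it.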

The rollout of EMV, especially in the United States, is painfully slow. Many merchants still haven’t upgraded to the new card-reader devices or back-end financial services to handle those transactions. For example, very few fuel stations use chips to validate transactions, so pay-at-the-pump in the U.S. still depends almost universally on the mag-stripe reader. That presents numerous opportunities for thieves to install skimmers on the stripe reader and steal payment card information.

For an excellent, well-illustrated primer on skimmers and skimmer-related fraud at gas stations, see “As gas station skimmer card fraud increases, here’s how to cut your risk.” Theft at the point of purchase, or at payment processors, will continue as long as companies fail to execute solid security practices – and continue to accept non-EMV payment card transactions, including allowing customers to type their credit- or debit-card numbers onto websites. Those are both threats for the foreseeable future, especially since desktops, notebooks, and mobile devices don’t have built-in EMV chip readers.

Crooks are clever, and they are everywhere. They always have been. No matter how secure the banks are, money theft and fraud aren’t going away any time soon.

SysSecOps is a new phrase, still unfamiliar to many IT and security administrators – but it’s being discussed within the market, by analysts, and at technical conferences. SysSecOps, or Systems & Security Operations, describes the practice of combining security teams and IT operations teams so they can ensure the health of enterprise technology – and have the tools to respond effectively when issues arise.

SysSecOps focuses on taking down the information walls – disrupting the silos – that stand between security teams and IT administrators. IT operations personnel are there to make sure that end users can access applications and that critical infrastructure is running at all times. They want to optimize access and availability, and they need data to do that job – such as that a new employee needs to be provisioned, that a hard drive in a RAID array has failed, that a new partner needs access to a secure document repository, or that an Oracle database is ready to be moved to the cloud. It’s all about technology to drive the business.

Same Data, Different Use Cases

Endpoint and network monitoring details and analytics are clearly customized to fit the diverse needs of IT and security. However, the underlying raw data is in fact the same. The IT and security teams are simply looking at their own domain’s issues and scenarios – and acting based on those use cases.

Yet in some cases the IT and security teams have to work together. Take provisioning that brand-new business partner: It must touch all the right systems, and it must be done securely. Or if there is a problem with a remote endpoint, such as a mobile phone or a machine on the Industrial Internet of Things, IT and security might have to cooperate to identify exactly what’s going on. When IT and security share the same data sources, and have access to the same tools, the job becomes much easier – hence SysSecOps.

Imagine that an IT administrator spots that a server hard drive is nearing full capacity – and this was not anticipated. Perhaps the network has been breached, and the server is now being used to stream pirated films across the web. It happens, and finding and resolving that issue is a task for both IT and security. The data gathered by endpoint instrumentation, and surfaced through a SysSecOps-ready monitoring platform, can help both sides work together more effectively than they could with conventional, separate IT and security tools.
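The first half of that scenario – spotting unexpected disk usage – is easy to sketch with the Python standard library; the path and warning threshold here are just examples:

```python
import shutil

def capacity_alert(path="/", warn_at=0.90):
    """Return the fraction of the filesystem used, and whether it crosses
    the warning threshold."""
    usage = shutil.disk_usage(path)
    fraction = usage.used / usage.total
    return fraction, fraction >= warn_at

fraction, alarmed = capacity_alert("/")
print(f"{fraction:.0%} used, alert={alarmed}")
```

The SysSecOps point is what happens next: the same measurement that tells IT "provision more storage" might tell security "investigate why this volume is filling up."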

SysSecOps: It’s a new term, and a new idea, and it’s resonating with both IT and security teams. You can learn more in a brief nine-minute video, in which I talk with several industry experts about the subject: “Exactly what is SysSecOps?”

Ransomware is real, and it’s threatening individuals, businesses, schools, hospitals, and governments – and there’s no indication that ransomware is slowing down. In fact, it’s probably increasing. Why? Let’s be honest: Ransomware is probably the single most effective attack hackers have ever created. Anybody can build ransomware using freely available tools; any ransom received is likely paid in untraceable Bitcoin; and if something goes wrong with decrypting someone’s hard drive, the hacker isn’t affected.

A business is hit with ransomware every 40 seconds, according to some sources, and 60% of all malware was ransomware. It strikes all sectors. No industry is safe. And with the rise of RaaS (Ransomware-as-a-Service), it’s going to get worse.

Fortunately, we can fight back. Here’s a four-step battle plan.

Four steps to good fundamental hygiene

  1. Train employees on handling malicious email. There are falsified messages from business partners. There’s phishing and targeted spearphishing. Some will get past email spam/malware filters; workers need to be taught not to click links in those messages and, of course, not to give permission for plugins or apps to be installed. Even so, some malware, like ransomware, will get through, typically by exploiting obsolete software or unpatched systems, as in the Equifax breach.
  2. Patch everything. Ensure that endpoints are fully patched and fully updated with the latest, most secure OS, applications, utilities, device drivers, and code libraries. That way, if there is an attack, the endpoint is healthy and best able to fight off the infection.
  3. Remember that ransomware isn’t really a technology or security problem – it’s a business problem. And it’s about much more than the ransom demanded; that’s peanuts compared to the loss of productivity from downtime, bad public relations, angry customers if service is disrupted, and the cost of rebuilding lost data. (And that assumes valuable intellectual property, or protected financial or consumer health data, isn’t stolen.)
  4. Back up, back up, back up – and safeguard those backups. If you don’t have safe, protected backups, you cannot restore data and core infrastructure in a timely fashion. That includes making daily snapshots of virtual machines, databases, applications, source code, and configuration files.
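The backup step can be sketched as a copy-then-verify routine: a backup you haven't checksummed is a backup you can't trust. The file names below are illustrative:

```python
import hashlib
import os
import shutil
import tempfile

def sha256(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def backup_and_verify(src, dest_dir):
    """Copy `src` into `dest_dir` and confirm the copy is bit-identical."""
    dest = shutil.copy2(src, dest_dir)
    return dest, sha256(src) == sha256(dest)

# Demonstrate on a throwaway file standing in for a config file or DB dump.
with tempfile.TemporaryDirectory() as work:
    src = os.path.join(work, "config.yml")
    with open(src, "w") as f:
        f.write("retention_days: 30\n")
    snap_dir = os.path.join(work, "snapshots")
    os.mkdir(snap_dir)
    dest, ok = backup_and_verify(src, snap_dir)
    print(ok)  # → True
```

Production backup systems add rotation, offsite copies, and encryption, but the verify step is the part most often skipped – and the one that matters when ransomware strikes.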

By the way, businesses need tools to detect, identify, and prevent malware like ransomware from spreading. That requires continuous visibility into, and reporting on, what’s happening in the environment – including “zero day” attacks that haven’t been seen before. Part of that is monitoring endpoints, from smartphones to PCs to servers to the cloud, to make sure that endpoints are up to date and secure, and that no unexpected changes have been made to their underlying configuration. That way, if a machine is infected by ransomware or other malware, the breach can be discovered quickly, and the device isolated and shut down pending forensics and recovery. If an endpoint is breached, fast containment is critical.

Read more in my guest story for Chuck Leaver’s blog, “Prevent And Manage Ransomware With These 4 Steps.”

AI is an emerging technology – always has been, always will be. Back in the early 1990s, I was editor of AI Expert Magazine. Looking for something else in my archives, I found this editorial, dated February 1991.

What do you think? Is AI real yet?