Agility – the ability to deliver projects quickly. That applies to new projects, as well as updates to existing projects. The agile software movement began when many smart people became frustrated with the classic model of development, where first the organization went through a complex process to develop requirements (which took months or years), and wrote software to address those requirements (which took months or years, or maybe never finished). By then, not only did the organization miss out on many opportunities, but perhaps the requirements were no longer valid – if they ever were.

With agile methodologies, the goal is to build software (or accomplish some complex task or action) in small, incremental iterations. Each iteration delivers some immediate value, and after each iteration there is an evaluation of how satisfied those who requested the project (the stakeholders) are, and of what they want to do next. No laborious up-front requirements. No years of investment before there is any return on that investment.

One of the best-known agile frameworks is Scrum, developed by Jeff Sutherland and Ken Schwaber in the early 1990s. In my view, Scrum is noteworthy for several innovations, including:

  • The Scrum framework is simple enough for everyone involved to understand.
  • The Scrum framework is not a product.
  • Scrum itself is not tied to a specific vendor’s project-management tools.
  • Work is performed in short, fixed-length increments – typically two weeks – called Sprints.
  • Every day there is a brief meeting called a Daily Scrum.
  • Development is iterative and incremental, and outcomes are predictable.
  • The work must be transparent, as much as possible, to everyone involved.
  • The roles of participants in the project are defined extremely clearly.
  • The relationship between people in the various roles is also clearly defined.
  • A key participant is the Scrum Master, who helps everyone maximize the value of the team and the project.
  • There is a clear, unambiguous definition of what “Done” means for every action item.

Scrum itself is refined every year or two by Sutherland and Schwaber. The most recent version (if you can call it a version) is Scrum 2017; before that, it was revised in 2016 and 2013. While there aren’t that many significant changes from the original vision unveiled in 1995, here are three recent changes that, in my view, make Scrum better than ever – enough that it might be called Scrum 2.0. Well, maybe Scrum 1.5. You decide:

  1. The latest version acknowledges more clearly that Scrum, like other agile methodologies, is used for all sorts of projects, not merely creating or enhancing software. While the Scrum Guide is still development-focused, Scrum can be used for market research, product development, developing cloud services, and even managing schools and governments.
  2. The Daily Scrum will be more focused on exploring how well the work is driving toward the Sprint Goal. For example – what work will be done today to drive toward the goal? What impediments are likely to prevent us from meeting it? (Previously, the Daily Scrum was often viewed as a glorified status-report meeting.)
  3. Scrum has a set of values, and those are now spelled out: “When the values of commitment, courage, focus, openness and respect are embodied and lived by the Scrum Team, the Scrum pillars of transparency, inspection, and adaptation come to life and build trust for everyone. The Scrum Team members learn and explore those values as they work with the Scrum events, roles and artifacts. Successful use of Scrum depends on people becoming more proficient in living these five values… Scrum Team members respect each other to be capable, independent people.”

The word “agile” is thrown around too often in business and technology, covering everything from planning a business acquisition to planning a network upgrade. Scrum is one of the best-known agile methodologies, and the framework is very well suited for all sorts of projects where it’s not feasible to determine a full set of requirements up front, and there’s a need to immediately begin delivering some functionality (or accomplish parts of the tasks). That Scrum continues to evolve will help ensure its value in the coming years… and decades.

AI is an emerging technology – always has been, always will be. Back in the early 1990s, I was editor of AI Expert Magazine. Looking for something else in my archives, I found this editorial, dated February 1991.

What do you think? Is AI real yet?

The bad news: There are servers used in serverless computing. Real servers, with whirring fans and lots of blinking lights, installed in racks inside data centers inside the enterprise or up in the cloud.

The good news: You don’t need to think about those servers in order to use their functionality to write and deploy enterprise software. Your IT administrators don’t need to provision or maintain those servers, or think about their processing power, memory, storage, or underlying software infrastructure. It’s all invisible, abstracted away.

The whole point of serverless computing is that there are small blocks of code that do one thing very efficiently. Those blocks of code are designed to run in containers so that they are scalable, easy to deploy, and can run in basically any computing environment. The open Docker platform has become the de facto industry standard for containers, and as a general rule, developers are seeing the benefits of writing code that targets Docker containers, instead of, say, Windows servers or Red Hat Linux servers or SuSE Linux servers, or any other specific run-time environment. Docker can be hosted in a data center or in the cloud, and containers can be easily moved from one Docker host to another, adding to the platform’s appeal.

Currently, applications written for Docker containers still need to be managed by enterprise IT developers or administrators. That means deciding where to create the containers, ensuring that each container has sufficient resources (like memory and processing power) for the application, actually installing the application into the container, monitoring the application while it’s running, and then adding more resources if required. Helping to do that is Kubernetes, an open-source container management and orchestration system for Docker. So while containers greatly assist developers and admins in creating portable code, the containers still need to be managed.

That’s where serverless comes in. Developers write their bits of code (such as to read or write from a database, or encrypt/decrypt data, or search the Internet, or authenticate users, or to format output) to run in a Docker container. However, instead of deploying directly to Docker, or using Kubernetes to handle deployment, they write their code as a function, and then deploy that function onto a serverless platform, like the new Fn project. Other applications can call that function (perhaps using a RESTful API) to do the required operation, and the serverless platform then takes care of everything else automatically behind the scenes, running the code when needed, idling it when not needed.
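To make that concrete, here’s a minimal sketch of what such a single-purpose function might look like, written in Python. The handler signature and JSON envelope are illustrative assumptions – not the Fn project’s actual FDK – since each serverless platform supplies its own thin wrapper for invoking the function and passing in the request payload.

```python
# Hypothetical single-purpose serverless function: hash a caller-supplied string.
# The handler signature and JSON shape are illustrative, not any platform's API.
import hashlib
import json

def handler(event: dict) -> dict:
    text = event.get("text", "")
    digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
    return {"statusCode": 200, "body": json.dumps({"sha256": digest})}

if __name__ == "__main__":
    # Local smoke test; in production, the platform invokes handler() on demand
    # and idles the container when there are no requests.
    print(handler({"text": "hello, serverless"}))
```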

Read my essay, “Serverless Computing: What It Is, Why You Should Care,” to find out more.

Software developers and testers must be sick of hearing security nuts rant, “Beware SQL injection! Monitor for cross-site scripting! Watch for hijacked session credentials!” I suspect the developers tune us out. Why? Because we’ve been raving about the same defects for most of their careers. Truth is, though, the same set of major security vulnerabilities persists year after year, decade after decade.

The industry has generated newer tools, better testing suites, Agile methodologies, and other advances in writing and testing software. Despite all that, coders keep making the same dumb mistakes, peer reviews keep missing those mistakes, test tools fail to catch those mistakes, and hackers keep finding ways to exploit those mistakes.

One way to see the repeat offenders is to look at the OWASP Top 10. That’s a sometimes controversial ranking of the 10 primary vulnerabilities, published every three or four years by the Open Web Application Security Project.

The OWASP Top 10 list is not controversial because it’s flawed. Rather, some believe that the list is too limited. By focusing only on the top 10 web code vulnerabilities, they assert, it causes the long tail of lesser-known flaws to be neglected. What’s more, there’s often jockeying in the OWASP community about the Top 10 ranking, and whether the 11th or 12th item belongs in the list instead of something else. There’s merit to those arguments, but for now, the OWASP Top 10 is an excellent common ground for discussing security-aware coding and testing practices.

Note that the Top 10 list doesn’t directly represent the 10 most common attacks. Rather, it’s a ranking of risk, calculated from four factors. One is the likelihood that applications have a specific vulnerability, based on data provided by companies; that’s the only “hard” metric in the OWASP Top 10. The other three risk factors are based on professional judgment.

It boggles the mind that a majority of top 10 issues appear across the 2007, 2010, 2013, and draft 2017 OWASP lists. That doesn’t mean that these application security vulnerabilities have to remain on your organization’s list of top problems, though—you can swat those flaws.

Read more in my essay, “The OWASP Top 10 is killing me, and killing you!”

There are two popular ways of migrating enterprise assets to the cloud:

  1. Write new cloud-native applications.
  2. Lift-and-shift existing data center applications to the cloud.

Gartner’s definition: “Lift-and-shift means that workloads are migrated to cloud IaaS in as unchanged a manner as possible, and change is done only when absolutely necessary. IT operations management tools from the existing data center are deployed into the cloud environment largely unmodified.”

There’s no wrong answer, no wrong way of proceeding. Some data center applications (including servers and storage) may be easier to move than others. Some cloud-native apps may be easier to write than others. Much depends on how much interconnectivity there is between the applications and other software; that’s why, for example, public-facing websites are often easiest to move to the cloud, while tightly coupled internal software, such as inventory control or factory-floor automation, can be trickier.

That’s why in some cases, a hybrid strategy is best. Some parts of the applications are moved up to the cloud, while others remain in the data centers, with SD-WANs or other connectivity linking everything together in a secure manner.

In other words, no one size fits all. And no one timeframe fits all, especially when it comes to lifting-and-shifting.

SaaS? PaaS? It Depends.

A recent survey from the Oracle Applications User Group (OAUG) showed that 70% of respondents who have plans to adopt Oracle Cloud solutions will do so in the next three years. About 35% plan to implement Software-as-a-Service (SaaS) solutions to run with their existing Oracle on-premises installations, and 29% plan to use Platform-as-a-Service (PaaS) services to accelerate software development efforts in the next 12 months.

Joe Paiva, CIO of the U.S. Commerce Department’s International Trade Administration (ITA), is a fan of lift-and-shift. He said at a cloud conference that “Sometimes it makes sense because it gets you there. That was the key. We had to get there because we would be no worse off or no better off, and we were still spending a lot of money, but it got us to the cloud. Then we started doing rationalization of hardware and applications, and dropped our bill to Amazon by 40 percent compared to what we were spending in our government data center. We were able to rationalize the way we use the service.” Paiva estimates government agencies could save 5%-15% using lift-and-shift.

The benefits of moving existing workloads to the cloud are almost entirely financial. If you can shut down a data center and pay less to run the application in the cloud, it can be a good short-term solution with immediate ROI. Gartner cautions, however, that lift-and-shift “generally results in little created value. Plus, it can be a more expensive option and does not deliver immediate cost savings.” Much depends on how much it costs to run that application today.

A Multi-Track Process for Cloud Migration

The real benefits of new cloud development and deployment architectures take time to realize. For many organizations, there may be a multi-track process:

First track: Lift-and-shift existing workloads that are relatively easy to migrate, while simultaneously writing cloud-native applications for new projects. Those provide the biggest and fastest return on investment, while leaving data center workloads in place and untouched.

Second track: Write cloud-native applications for the remaining data-center workloads, the ones impractical to migrate in their existing form. These projects will be slower, but the payoff is the ability to turn off some or all existing data centers – and eliminate their associated expenses, such as power and cooling, bandwidth, and physical space.

Third track: At some point, revisit the lifted-and-shifted workloads to see which would significantly benefit from being rewritten as cloud-native apps. Unless there is an order of magnitude increase in efficiency, or significant added functionality, the financial returns won’t be high – or may be nonexistent. For some applications, it may never make sense to redesign and rewrite them in a cloud-native way. So, those old enterprise applications may live on for years to come.

About a decade ago, I purchased a piece of a mainframe on eBay — the name ID bar. Carved from a big block of aluminum, it says “IBM System/370 168,” and it hangs proudly over my desk.

My time on mainframes was exclusively with the IBM System/370 series. With a beautiful IBM 3278 color display terminal on my desk, and, later, a TeleVideo 925 terminal and an acoustic coupler at home, I was happier than anyone had a right to be.

We refreshed our hardware often. The latest variant I worked on was the System/370 4341, introduced in early 1979, which ran faster and cooler than the very costly 3031 mainframes we had before. I just found this in the IBM archives: “The 4341, under a 24-month contract, can be leased for $5,975 a month with two million characters of main memory and for $6,725 a month with four million characters. Monthly rental prices are $7,021 and $7,902; purchase prices are $245,000 and $275,000, respectively.” And we had three, along with tape drives, disk drives (in IBM-speak, DASD, for Direct Access Storage Devices), and high-speed line printers. Not cheap!

Our operating system on those systems was called Virtual Machine, or VM/370. It consisted of two parts: Control Program and Conversational Monitor System. CP was the timesharing operating system – in modern virtualization terms, the hypervisor running on the bare metal. CMS was the user interface that users logged into; it provided access to not only a text-based command console, but also file storage and a library of tools, such as compilers. (We often referred to the platform as CP/CMS.)

Thanks to VM/370, each user believed she had access to a 100% dedicated and isolated System/370 mainframe, with every resource available and virtualized. (For example, she appeared to have dedicated access to tape drives, but they appeared non-functional if her tapes weren’t loaded, or if she hadn’t bought access to the drives.)

My story about mainframes isn’t just reminiscing about the time of dinosaurs. When programming those computers, which I did full-time in the late 1970s and early 1980s, I learned a lot of lessons that are very applicable today. Read all about that in my article for HP Enterprise Insights, “4 lessons for modern software developers from 1970s mainframe programming.”

To get the most benefit from the new world of cloud-native server applications, forget about the old way of writing software. In the old model, architects designed software. Programmers wrote the code, and testers tested it on a test server. Once the testing was complete, the code was “thrown over the wall” to administrators, who installed the software on production servers, and who essentially owned the applications moving forward, going back to the developers only if problems occurred.

The new model, which appeared about 10 years ago, is called “DevOps,” short for developer operations. In the DevOps model, architects, developers, testers, and administrators collaborate much more closely to create and manage applications. Specifically, developers play a much broader role in the day-to-day administration of deployed applications, and use information about how the applications are running to tune and enhance them.

The involvement of developers in administration made DevOps perfect for cloud computing. Because administrators had fewer responsibilities (i.e., no hardware to worry about), it was less threatening for those developers and administrators to collaborate as equals.

Change matters

In that old model of software development and deployment, developers were always change agents. They created new stuff, or added new capabilities to existing stuff. They embraced change, including new technologies – and the faster they created change (i.e., wrote code), the more competitive their business.

By contrast, administrators are tasked with maintaining uptime while ensuring security. Change is not a virtue to those departments. While admins must accept change as they install new applications, it’s secondary to maintaining stability. Indeed, admins could push back against deploying software if they believed those apps weren’t reliable, or might affect the stability of the data center as a whole.

With DevOps, everyone can embrace change. One of the ways that works, with cloud computing, is to reduce the risk that an unstable application can damage system reliability. In the cloud, applications can be built and deployed on bare-metal servers (as in a data center), or in virtual machines or containers.

DevOps works best when software is deployed in VMs or containers, since those are isolated from other systems – thereby reducing risk. Turns out that administrators do like change, if there’s minimal risk that changes will negatively affect overall system reliability, performance, and uptime.

Benefits of DevOps

Goodbye, CapEx; hello, OpEx. Cloud computing moves enterprises from capital-expense data centers (which must be built, electrified, cooled, networked, secured, stocked with servers, and refreshed periodically) to operational-expense services (where the business pays monthly for the processors, memory, bandwidth, and storage reserved and/or consumed). When you couple those benefits with virtual machines, containers, and DevOps, you get:

  • Easier Maintenance: It can be faster to apply patches and fixes to software running in virtual machines – and use snapshots to roll back if needed.
  • Better Security: Cloud platform vendors offer some security monitoring tools, and it’s relatively easy to install top-shelf protections like next-generation firewalls – themselves offered as cloud services.
  • Improved Agility: Studies show that the process of designing, coding, testing, and deploying new applications can be 10x faster than traditional data center methods, because the cloud reduces provisioning overhead and provides robust resources on demand.
  • Lower Cost: Vendors such as Amazon, Google, Microsoft, and Oracle are aggressively lowering prices to gain market share — and in many cases, those prices are an order of magnitude below what it would cost to provision an enterprise data center.
  • Massive Scale: Need more power? Need more bandwidth? Need more storage? Push a button, and the resources are live. If those needs are short-term, you can turn the dials back down, to lower the monthly bill. You can’t do that in a data center.

Rapidly evolving

The technologies used in creating cloud-native applications are evolving rapidly. Containers, for example, are relatively new, yet are becoming incredibly popular because they require 4x-10x fewer resources than VMs – thereby slashing OpEx costs even further. Software development and management tools, like Kubernetes (for orchestration of multiple containers), Chef (which makes it easy to manage cloud infrastructure), Puppet (which automates pushing out cloud service configurations), and OpenWhisk (which strips down cloud services to “serverless” basics) push the revolution farther.

DevOps is more important than the meaningless “developer operations” moniker implies. It’s a whole new, faster way of doing computing with cloud-native applications. Because rapid change means everything in achieving business agility, everyone wins.

“One of these things is not like the others,” the television show Sesame Street taught generations of children. Easy. Let’s move to the next level: “One or more of these things may or may not be like the others, and those variances may or may not represent systems vulnerabilities, failed patches, configuration errors, compliance nightmares, or imminent hardware crashes.” That’s a lot harder than distinguishing cookies from brownies.

Looking through gigabytes of log files and transaction records to spot patterns or anomalies is hard for humans: it’s slow, tedious, error-prone, and doesn’t scale. Fortunately, it’s easy for artificial intelligence (AI) software, such as the machine learning algorithms built into Oracle Management Cloud. What’s more, those machine learning algorithms can be used to direct manual or automated remediation efforts to improve security, compliance, and performance.

Consider how large-scale systems gradually drift away from their required (or desired) configuration, a key area of concern in the large enterprise. In his Monday, October 2 Oracle OpenWorld session on managing and securing systems at scale using AI, Prakash Ramamurthy, senior vice president of systems management at Oracle, talked about how drift happens. Imagine that you’ve applied a patch, but then later you spool up a virtual server that is running an old version of a critical service or contains an obsolete library with a known vulnerability. That server is out of compliance, Ramamurthy said. Drift.

Drift is bad, said Ramamurthy, and detecting and stopping drift is a core competency of Oracle Management Cloud. It starts with monitoring cloud and on-premises servers, services, applications, and logs, using machine learning to automatically understand normal behavior and identify anomalies. No training necessary here: A variety of machine learning algorithms teach themselves how to play the “one of these things is not like the others” game with your data, your systems, and your configuration, and also to classify the systems in ways that are operationally relevant. Even if those logs contain gigabytes of information on hundreds of thousands of transactions each second.
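To illustrate the underlying idea – not Oracle Management Cloud’s actual algorithms – here is a toy sketch in Python of unsupervised anomaly detection over per-server metrics, using scikit-learn’s IsolationForest. The fleet data and feature names are invented.

```python
# Toy sketch of anomaly detection over per-server metrics, in the spirit of the
# "one of these things is not like the others" game. This is NOT Oracle
# Management Cloud's algorithm; it just illustrates unsupervised drift spotting.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical fleet: rows are servers, columns are (patch_level, cpu_util,
# error_rate). Most servers look alike; one has drifted.
fleet = np.array([
    [42, 0.31, 0.002],
    [42, 0.29, 0.001],
    [42, 0.33, 0.003],
    [42, 0.30, 0.002],
    [37, 0.78, 0.090],   # stale patch level, high error rate -- the drifter
])

model = IsolationForest(contamination=0.2, random_state=0).fit(fleet)
labels = model.predict(fleet)            # -1 = anomaly, 1 = normal
for i, label in enumerate(labels):
    if label == -1:
        print(f"server {i} looks out of compliance: {fleet[i]}")
```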

Learn more in my article for Forbes, “Catch The Drift With Machine Learning — Before The Drift Catches You.”

IT managers shouldn’t have to choose between cloud-driven innovation and data-center-style computing. Developers shouldn’t have to choose between the latest DevOps programming using containers and microservices, and traditional architectures and methodologies. CIOs shouldn’t have to choose between a fully automated and fully managed cloud and a self-managed model using their own on-staff administrators.

At an Oracle OpenWorld general session on infrastructure-as-a-service (IaaS) October 3, Don Johnson, senior vice president of product development at Oracle, lamented that CIOs are often forced to make such difficult choices. Sure, the cloud is excellent for purpose-built applications, he said, “and so what’s working for them is cloud-native, but what’s not working in the cloud are enterprise workloads. It’s an unnecessary set of bad choices.”

When it comes to moving existing business-critical applications to the cloud, Johnson explained the three difficult choices faced by many organizations:

  • First, CIOs can rewrite those applications from the ground up to run in the cloud in a platform-as-a-service (PaaS) model. That’s best in terms of achieving the greatest computational efficiency, as well as integration with other cloud services, but it can be time-consuming and costly.
  • Second, organizations can retrofit their existing applications to run in the cloud, but this can be challenging at best, or nearly impossible in some cases.
  • Or third, CIOs can “lift and shift” existing on-premises applications, including their full software stack, directly into the cloud, using the IaaS model.

Historically, those three models have required three different clouds. No longer. Only the Oracle Cloud Infrastructure, Johnson stated, “lets you run your full existing stack alongside cloud-native applications.” And this is important, he added, because migration to the cloud must be slow and deliberate. “Running in the cloud is very disruptive. It can’t happen overnight. You need to move when and how you want to move,” he said. And a deliberate move to the cloud means a combination of new cloud-native PaaS applications and legacy applications migrated to IaaS.

Read more in my story for Forbes, “Lift And Shift Workloads — And Write Cloud-Native Apps — For The Same Cloud.”

Despite Elon Musk’s warnings this summer, there’s not a whole lot of reason to lose any sleep worrying about Skynet and the Terminator. Artificial intelligence (AI) is far from becoming a maleficent, all-knowing force. The only “apocalypse” on the horizon right now is an over-reliance by humans on machine learning and expert systems, as demonstrated by the deaths of Tesla owners who took their hands off the wheel.

The technologies that currently pass for “artificial intelligence” — such as expert systems and machine learning — are excellent for creating useful software. AI software is truly valuable help in contexts that involve pattern recognition, automated decision-making, and human-to-machine conversations. Both types of AI have been around for decades, and both are only as good as the source information they are based on. For that reason, it’s unlikely that AI will replace human judgment on important tasks requiring decisions more complex than “yes or no” any time soon.

Expert systems, also known as rule-based or knowledge-based systems, are computers programmed with explicit rules written down by human experts. The computers can then run those same rules, but much faster and 24×7, to come up with the same conclusions as the human experts. Imagine asking an oncologist how she diagnoses cancer and then programming medical software to follow those same steps. For a particular diagnosis, an oncologist can study which of those rules was activated to validate that the expert system is working correctly.

However, it takes a lot of time and specialized knowledge to create and maintain those rules, and extremely complex rule systems can be difficult to validate. Needless to say, expert systems can’t function beyond their rules.

By contrast, machine learning allows computers to come to a decision—but without being explicitly programmed. Instead, they are shown hundreds or thousands of sample data sets and told how they should be categorized, such as “cancer | no cancer,” or “stage 1 | stage 2 | stage 3 cancer.”
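To make the contrast concrete, here is a toy sketch in Python: a hand-written rule standing in for an expert system, and a tiny decision tree trained on labeled samples standing in for machine learning. The features, thresholds, and data are invented for illustration and have no medical validity.

```python
# Toy contrast between the two approaches described above. The features and
# thresholds are invented for illustration -- not real diagnostic criteria.
from sklearn.tree import DecisionTreeClassifier

# Expert system: an explicit rule written down by a human expert.
def rule_based_diagnosis(tumor_size_mm: float, marker_level: float) -> str:
    if tumor_size_mm > 20 and marker_level > 4.0:
        return "cancer"
    return "no cancer"

# Machine learning: the same decision learned from labeled examples instead
# of being explicitly programmed.
samples = [[5, 1.0], [8, 2.1], [25, 5.2], [30, 6.8], [12, 1.5], [28, 4.9]]
labels  = ["no cancer", "no cancer", "cancer", "cancer", "no cancer", "cancer"]
model = DecisionTreeClassifier(max_depth=2).fit(samples, labels)

print(rule_based_diagnosis(22, 5.0))      # the rule fires -> "cancer"
print(model.predict([[22, 5.0]])[0])      # learned from data -> likely "cancer"
```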

Read more about this, including my thoughts on machine learning, pattern recognition, expert systems, and comparisons to human intelligence, in my story for Ars Technica, “Never mind the Elon—the forecast isn’t that spooky for AI in business.”

HP-35 slide rule calculator

At the current rate of rainfall, when will your local reservoir overflow its banks? If you shoot a rocket at an angle of 60 degrees into a headwind, how far will it fly with 40 pounds of propellant and a 5-pound payload? Assuming a 100-month loan for $75,000 at 5.11 percent, what will the payoff balance be after four years? If a lab culture is doubling every 14 hours, how many viruses will there be in a week?

Those sorts of questions aren’t asked by mathematicians, who are the people who derive equations to solve problems in a general way. Rather, they are asked by working engineers, technicians, military ballistics officers, and financiers, all of whom need an actual number: Given this set of inputs, tell me the answer.
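Today, of course, such an answer is a few lines of code. Here’s a quick Python sketch of the loan question above, assuming standard amortization with monthly compounding of the 5.11 percent annual rate.

```python
# Quick sketch of the loan question: remaining balance on a $75,000, 100-month
# loan at 5.11% annual interest after 48 monthly payments.
# Assumes standard amortization with monthly compounding.
principal, annual_rate, n_months, paid = 75_000.0, 0.0511, 100, 48
i = annual_rate / 12                                   # monthly interest rate
payment = principal * i / (1 - (1 + i) ** -n_months)   # level amortized payment
balance = principal * (1 + i) ** paid - payment * (((1 + i) ** paid - 1) / i)
print(f"monthly payment: ${payment:,.2f}, balance after 4 years: ${balance:,.2f}")
```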

Before the modern era (say, the 1970s), these problems could be hard to solve. They required a lot of pencils and paper, a book of tables, or a slide rule. Mathematicians never carried slide rules, but astronauts did, as their backup computers.

However, slide rules had limitations. They were good to about three digits of accuracy, no more, in the hands of a skilled operator. Three digits was fine for real-world engineering, but not enough for finance. With slide rules, you had to keep track of the decimal point yourself: The slide rule might tell you the answer is 641, but you had to know if that was 64.1 or 0.641 or 641.0. And if you were chaining calculations (needed in all but the simplest problems), accuracy dropped with each successive operation.

Everything the slide rule could do, a so-called slide-rule calculator could do better—and more accurately. Slide rules are really good at a few things. Multiplication and division? Easy. Exponents, like 6¹³? Easy. Doing trig, like sines, cosines, and tangents? Easy. Logarithms? Easy.

Hewlett-Packard unleashed a monster when it created the HP-9100A desktop calculator, released in 1968 at a price of about $5,000. The HP-9100A did everything a slide rule could do, and more—such as trig, polar/rectangular conversions, and exponents and roots. However, it was big and it was expensive—about $35,900 in 2017 dollars, or the price of a nice car! HP had a market for the HP-9100A, since it already sold test equipment into many labs. However, something better was needed, something affordable, something that could become a mass-market item. And that became the pocket slide-rule calculator revolution, starting off with the amazing HP-35.

If you look at the HP-35 today, it seems laughably simplistic. The calculator app in your smartphone is much more powerful. However, back in 1972, and at a price of only $395 ($2,350 in 2017 dollars), the HP-35 changed the world. Companies like General Electric ordered tens of thousands of units. It was crazy, especially for a device that had a few minor math bugs in its first shipping batch (HP gave everyone a free replacement).

Read more about early slide-rule calculators — and the more advanced card-programmable models like the HP-65 and HP-67 — in my story, “The early history of HP calculators.”

HP-65 and HP-67 card-programmable calculators

It’s almost painful to see an issue of SD Times without my name printed in the masthead. From Editor-in-Chief to Editorial Director to Founding Editor to… nothing. However, it’s all good!

My company, BZ Media, is selling our flagship print publication, SD Times, to a startup, D2 Emerge LLC. The deal will formally close in a few weeks. If you’ve been following SD Times, you’ll recognize the two principals of the startup, David Lyman and David Rubinstein. (Thus, the “D2” part of the name.)

BZ Media co-founder Ted Bahr and I wish David, and David, and SD Times, and its staff, readers, and advertisers, nothing but success. (I retired from BZ Media mid-2013, becoming a silent partner with no involvement in day-to-day operations.)

D2 Emerge is ready to roll. Here’s what David Rubinstein wrote in the July 2017 issue (download it here):

The Times, it is a-changin’

There’s a saying that goes ‘when one chapter closes, another one begins.’

This issue of SD Times marks the close of the BZ Media chapter of this publication’s history and opens the chapter on D2 Emerge LLC, a new-age publishing and marketing company founded by two long-time members of the SD Times team: the publisher, David Lyman, and the editor-in-chief … me!

We will work hard to maintain the quality of SD Times and build on the solid foundation that has been built over the past 17 years. Wherever we go, we hear from readers who tell us they look forward to each issue, and they say they’re learning about things they didn’t know they needed to know. And we’re proud of that.

The accolades are certainly nice — and always welcome. Yet, there is nothing more important to us than the stories we tell. Whether putting a spotlight on new trends in the industry and analyzing what they mean, profiling the amazing, brilliant people behind the innovation in our industry, or helping software providers tell their unique stories to the industry, our mission is to inform, enlighten and even entertain.

But, as much as things will stay the same, there will be some changes. We will look to introduce you to different voices and perspectives from the industry, inviting subject matter experts to share their knowledge and vision of changes in our industry. The exchange of ideas and free flow of information are the bedrock of our publishing philosophy.

We will somewhat broaden the scope of our coverage to include topics that might once have been thought of as ancillary to software development but are now important areas for you to follow as silos explode and walls come tumbling down in IT shops around the world.

We will work to improve our already excellent digital offerings by bettering the user experience and the way in which we deliver content to you. So, whether you’re reading SD Times on a desktop at work, or on a tablet at a coffee shop, or even on your cellphone at the beach, we want you to have the same wonderful experience.

For our advertisers, we will help guide you toward the best way to reach our readers, whether through white papers, webinars, or strategic ad placement across our platforms. And we will look to add to an already robust list of services we can provide to help you tailor your messages in a way that best suits our readers.

BZ Media was a traditional publishing company, with a print-first attitude (only because there weren’t any viable digital platforms back in 2000). D2 Emerge offers an opportunity to strike the right balance between a digital-first posture and all that is good about print publishing.

I would be remiss if I didn’t acknowledge BZ Media founders Ted Bahr and Alan Zeichick, who took a cynical, grizzled daily newspaperman and turned him into a cynical, grizzled technology editor. But as I often say, covering this space is never dull. Years ago, I covered sports for a few newspapers, and after a while, I saw that I had basically seen every outcome there was: A walk-off home run, a last-second touchdown, a five-goal hockey game. The only thing that seemed to change were the players. Sure, once in a while a once-in-a-lifetime player comes along, and we all enjoy his feats. But mostly sports do not change.

Technology, on the other hand, changes at breakneck speed. As we worked to acquire SD Times, I had a chance to look back at the first issues we published, and realized just how far we’ve come. Who could have known in 2000, when we were writing about messaging middleware and Enterprise JavaBeans that one day we’d be writing about microservices architectures and augmented reality?

Back then, we covered companies such as Sun Microsystems, Metrowerks, IONA, Rational Software, BEA Systems, Allaire Corp, Bluestone Software and many more that were either acquired or couldn’t keep up with changes in the industry.

The big news at the JavaOne conference in 2000 was extreme clustering of multiple JVMs on a single server, while elsewhere, the creation of an XML Signature specification looked to unify authentication, and Corel Corp. was looking for cash to stay alive after a proposed merger with Borland Corp. (then Inprise) fell apart.

So now, we’re excited to begin the next chapter in the storied (pardon the pun) history of SD Times, and we’re glad you’re coming along with us as OUR story unfolds.

CNN didn’t get the memo. After all the progress that’s been made to eliminate the requirement for Adobe’s Flash player by so many streaming-media websites, CNNgo still requires the problematic plug-in, as you can see from the screen that greeted me just a few minutes ago.


Have you not heard of HTML5, oh, CNN programmers? Perhaps the techies at CNN should read “Why Adobe Flash is a Security Risk and Why Media Companies Still Use it.” After that, “Gone in a Flash: Top 10 Vulnerabilities Used by Exploit Kits.”

Yes, Adobe keeps patching Flash to make it less insecure. Lots and lots of patches, says the story “Patch Tuesday: Adobe Flash Player receives updates for 13 security issues,” published in January. That comes on the heels of 17 security flaws patched in December 2016.

And yes, there were more critical patches issued on June 13, 2017. Flash. Just say no. Goodbye, CNNgo, until you stop requiring that prospective customers utilize such a buggy, flawed media player.

And no, I didn’t enable the use of Flash. Guess I’ll never see what CNN wanted to show me. No great loss.

The WannaCry (WannaCrypt) malware attack spread through unpatched old software. Old software is the bane of the tech industry. Software vendors hate old software for many reasons. One, of course, is that old software has vulnerabilities that must be patched. Another is that the support costs for older software keep going and growing. Plus, newer software has new features that can generate business – while customers running old software aren’t generating much revenue.

Enterprises, too, hate old software. They don’t like the support costs, either, or the security vulnerabilities. However, there are huge costs in licensing and installing new software – which might require training users and IT staff, buying new hardware, updating templates, adjusting integrations, and so on. Plus, old software has been tested and certified, and better the risk you know than the risk you don’t know. So, they keep using old software.

Think about a family that’s torn between keeping a paid-for 13-year-old car, like my 2004 BMW, and leasing a newer, safer, more reliable model. The decision about whether or not to upgrade is complicated. There’s no good answer, and in case of doubt, the easiest decision is to simply wait until next year’s budget.

However: What about a family that decides to go car-shopping after paying for a scary breakdown or an unexpectedly large repair bill? Similarly, companies are inspired to upgrade critical software after suffering a data breach or learning about irreparable vulnerabilities in the old code.

The call to action?

WannaCry might be that call to action for some organizations. Take Windows, for example – but let me be quick to stress that this issue isn’t entirely about Microsoft products. Smartphones running old versions of Android or Apple’s iOS, or old Mac laptops that can’t be moved to the latest edition of OS X, are just as vulnerable.

Okay, back to Windows and WannaCry. In its critical March 14, 2017, security update, Microsoft accurately identified a flaw in its Server Message Block (SMB) code that could be exploited; the flaw had been disclosed in documents stolen by hackers from U.S. security agencies. Given the massive severity of that flaw, Microsoft offered patches for old software including Windows Server 2008 and Windows Vista.

It’s important to note that customers who applied those patches were not affected by WannaCry. Microsoft fixed it. Many customers didn’t install the fix because they didn’t know about it, they couldn’t find the IT staff resources, or simply thought this vulnerability was no big deal. Well, some made the wrong bet, and paid for it.

Patches keep coming; they aren’t enough

This week, Microsoft blogged,

On May 12, 2017, the WannaCrypt ransomware served as an all too real example of the danger of cyber attacks to individuals and businesses globally.

In reviewing the updates for this month, some vulnerabilities were identified that pose elevated risk of cyber attacks by government organizations, sometimes referred to as nation-state actors or other copycat organizations. To address this risk, today we are providing additional security updates along with our regular Update Tuesday service. These security updates are being made available to all customers, including those using older versions of Windows. Due to the elevated risk for destructive cyber attacks at this time, we made the decision to take this action because applying these updates provides further protection against potential attacks with characteristics similar to WannaCrypt.

The new patches go back even farther than those issued in March, covering Windows XP and Windows Server 2003. While Microsoft is to be complimented for releasing those patches, customers should not be complacent. It is dangerous for consumers or companies to keep running Windows XP, or heaven forbid, Windows 95. It’s equally dangerous to run Windows Server 2003 at all; anything left on that platform should be migrated. The same is true of smartphones running old versions of Android or iOS, laptops or notebooks running old versions of the Macintosh OS, or even old versions of Linux. In some cases, those systems may seem super-reliable – but they are not secure, and can’t be secured.

Unfortunately, upgrades to the latest operating system may require hardware updates (such as more memory) – or a complete replacement. That’s often the case with phones and notebooks, and even servers might require a forklift upgrade. That’s the price of security, however. Forget about the new features of new software; forget about the improved reliability or higher performance that comes along with new hardware. Old software simply can’t be secured. It must go. As my friend Jason Perlow wrote in mid-May, “If you’re still using Windows XP, you’re a menace to society.” He’s right. Get it done.

“Someone is waiting just for you / Spinnin’ wheel, spinnin’ true.”

Those lyrics to a 1969 song by Blood, Sweat & Tears could also describe 2017 enterprise apps that time-out or fail because of dropped or poor connectivity. Wheels spin. Data is lost. Applications crash. Users are frustrated. Devices are thrown. Screens are smashed.

It doesn’t have to be that way. Always-on applications can continue to function even when the user loses an Internet or Wi-Fi connection. With proper design and testing, you won’t have to handle as many smartphone accidental-damage insurance claims.

Let’s start with the fundamentals. Many business applications are friendly front ends to remote services. The software may run on phones, tablets, or laptops, and the services may be in the cloud or in the on-premises data center.

When connectivity is strong, with sufficient bandwidth and low latency, the front-end software works fine. The user experience is excellent. Data sent to the back end is received and confirmed, and data served to the user front end is transmitted without delay. Joy!

When connectivity is non-existent or fails intermittently, when bandwidth is limited, and when there’s too much latency — which you can read as “Did the Internet connection go down again?!” — users immediately feel frustration. That’s bad news for the user experience, and also extremely bad in terms of saving and processing transactions. A user who taps a drop-down menu or presses “Enter” and sees nothing happen might progress to multiple mouse clicks, a force-reset of the application, or a reboot of the device, any of which could result in data loss. Submitted forms and uploads could be lost in a time-out. Sessions could halt. In some cases, the app could freeze (with or without a spinning indicator) or crash outright. Disaster!
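One common defensive pattern – sketched below – is to queue outbound writes locally and retry them with exponential backoff, so a dropped connection delays data instead of losing it. This is a minimal Python sketch of the idea; the endpoint URL and payload shape are hypothetical, and it uses the third-party requests library.

```python
# Minimal sketch: queue outbound writes locally and retry with exponential
# backoff, so a dropped connection loses no data. URL and payload are invented.
import queue
import time
import requests

outbox = queue.Queue()  # holds dict records awaiting delivery

def submit(record: dict) -> None:
    """Called by the UI; never blocks on the network."""
    outbox.put(record)

def flush_outbox(url: str = "https://example.com/api/records") -> None:
    """Background worker: send queued records, backing off when offline."""
    delay = 1.0
    while not outbox.empty():
        record = outbox.get()
        try:
            requests.post(url, json=record, timeout=5).raise_for_status()
            delay = 1.0                       # success: reset the backoff
        except requests.RequestException:
            outbox.put(record)                # keep the data; try again later
            time.sleep(delay)
            delay = min(delay * 2, 60)        # exponential backoff, capped
```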

What can you do about it? Easy: Read my article for HP Enterprise Insights, “How to design software that doesn’t crash when the Internet connection fails.”

 

No, no, no, no, no!

The email client updates in the 10.12.4 update to macOS Sierra are everything that’s wrong with operating systems today. And so is the planned inclusion of an innovative, fun-sounding 3D painter as part of next week’s Windows 10 Creators Update.

Repeat after me: Applications do not belong in operating systems. Diagnostics, yes. Shared libraries, yes. Essential device drivers, yes. Hardware abstraction layers, yes. File systems, yes. Program loaders and tools, yes. A network stack, yes. A graphical user interface, yes. A scripting/job control language, yes. A basic web browser, yes.

Applications? No, no, no!

Why not?

Applications bloat up the operating system release. What if you don’t need a 3D paint program? What if you don’t want to use the built-in mail client? The binaries are there anyway, taking up storage. Whenever the operating system is updated, those binaries are updated too, eating up bandwidth and CPU time.

If you do want those applications, bug fixes are tied to OS updates. The Sierra 10.12.4 update fixes a bug in Mail. Why must that be tied to an OS update? The update supports more digital camera RAW formats. Why are they tied to the operating system, and not released as they become available? The 10.12.4 update also fixes a Siri issue regarding cricket scores in the IPL. Why, for heaven’s sake, is that functionality tied to an operating system update?? That’s simply insane.

An operating system is easier for the developer to test and verify if it’s smaller. The more things in your OS update release train, the more things can go wrong, whether in the installation process or in the code itself. A smaller OS means less regression testing and fewer bugs.

An operating system is easier for the client to test and verify if it’s smaller. Take your corporate clients — if they are evaluating macOS Sierra 10.12.4 or the Windows 10 Creators Update prior to roll-out, the less stuff there is, the easier the validation process.

Performance and memory utilization are better if it’s smaller. The microkernel concept says that the OS should be as small as possible – if something doesn’t have to be in the OS, leave it out. Well, that’s not the case any more, at least in terms of the software release trains.

This isn’t new

No, Alan isn’t off his rocker, at least not more than usual. Operating system releases, especially those for consumers, have been bloated up with applications and junk for decades. I know that. Nothing will change.

Yes, it would be better if productivity applications and games were distributed and installed separately. Maybe as free downloads, as optional components on the release CD/DVD, or even as a separate SKU. Remember Microsoft Plus and Windows Ultimate Extras? Yeah, those were mainly games and garbage. Never mind.

Still, seeing the macOS Sierra Update release notes today inspired this missive. I hope you enjoyed it. </rant>

Can’t we fix injection already? It’s been nearly four years since the most recent iteration of the OWASP Top 10 came out — that’s June 12, 2013. The OWASP Top 10 are the most critical web application security flaws, as determined by a large group of experts. The list doesn’t change much, or change often, because the fundamentals of web application security are consistent.

The 2013 OWASP Top 10 were:

  1. Injection
  2. Broken Authentication and Session Management
  3. Cross-Site Scripting (XSS)
  4. Insecure Direct Object References
  5. Security Misconfiguration
  6. Sensitive Data Exposure
  7. Missing Function Level Access Control
  8. Cross-Site Request Forgery (CSRF)
  9. Using Components with Known Vulnerabilities
  10. Unvalidated Redirects and Forwards

The preceding list came out on April 19, 2010:

  1. Injection
  2. Cross-Site Scripting (XSS)
  3. Broken Authentication and Session Management
  4. Insecure Direct Object References
  5. Cross-Site Request Forgery (CSRF)
  6. Security Misconfiguration
  7. Insecure Cryptographic Storage
  8. Failure to Restrict URL Access
  9. Insufficient Transport Layer Protection
  10. Unvalidated Redirects and Forwards

Looks pretty familiar. If you go back further to the inaugural Open Web Application Security Project 2004 and then the 2007 lists, the pattern of flaws stays the same. That’s because programmers, testers, and code-design tools keep making the same mistakes, over and over again.

Take #1, Injection (often written as SQL Injection, but it’s broader than simply SQL). It’s described as:

Injection flaws occur when an application sends untrusted data to an interpreter. Injection flaws are very prevalent, particularly in legacy code. They are often found in SQL, LDAP, Xpath, or NoSQL queries; OS commands; XML parsers, SMTP Headers, program arguments, etc. Injection flaws are easy to discover when examining code, but frequently hard to discover via testing. Scanners and fuzzers can help attackers find injection flaws.

The technical impact?

Injection can result in data loss or corruption, lack of accountability, or denial of access. Injection can sometimes lead to complete host takeover.

And the business impact?

Consider the business value of the affected data and the platform running the interpreter. All data could be stolen, modified, or deleted. Could your reputation be harmed?

Eliminating the vulnerability to injection attacks is not rocket science. OWASP summarizes three approaches:

Preventing injection requires keeping untrusted data separate from commands and queries.

The preferred option is to use a safe API which avoids the use of the interpreter entirely or provides a parameterized interface. Be careful with APIs, such as stored procedures, that are parameterized, but can still introduce injection under the hood.

If a parameterized API is not available, you should carefully escape special characters using the specific escape syntax for that interpreter. OWASP’s ESAPI provides many of these escaping routines.

Positive or “white list” input validation is also recommended, but is not a complete defense as many applications require special characters in their input. If special characters are required, only approaches 1. and 2. above will make their use safe. OWASP’s ESAPI has an extensible library of white list input validation routines.
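To see what the preferred, parameterized approach looks like in practice, here is a minimal Python illustration using the standard library’s sqlite3 module; the table, column, and sample data are invented.

```python
# Minimal illustration of the "parameterized interface" approach above, using
# Python's sqlite3 for brevity. Table and column names are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"   # a classic injection attempt

# VULNERABLE: untrusted data concatenated directly into the query text.
# rows = conn.execute("SELECT role FROM users WHERE name = '" + user_input + "'")

# SAFE: the driver binds the value as data, never as SQL syntax.
rows = conn.execute("SELECT role FROM users WHERE name = ?", (user_input,))
print(rows.fetchall())   # [] -- the injection string matches no real user
```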

Not rocket science, not brain surgery — and the same is true of the other vulnerabilities. There’s no excuse for still getting these wrong, folks. Cut down on these top 10, and our web applications will be much safer, and our organizational risk much reduced.

Do you know how often your web developers make the OWASP Top 10 mistakes? The answer should be “never.” They’ve had plenty of time to figure this out.

Modern medical devices increasingly leverage microprocessors and embedded software, as well as sophisticated communications connections, for life-saving functionality. Insulin pumps, for example, rely on a battery, pump mechanism, microprocessor, sensors, and embedded software. Pacemakers and cardiac monitors also contain batteries, sensors, and software. Many devices also have WiFi- or Bluetooth-based communications capabilities. Even hospital rooms with intravenous drug delivery systems are controlled by embedded microprocessors and software, which are frequently connected to the institution’s network. But these innovations also mean that a software defect can cause a critical failure or security vulnerability.

In 2007, former vice president Dick Cheney famously had the wireless capabilities of his pacemaker disabled. Why? He was concerned “about reports that attackers could hack the devices and kill their owners.” Since then, the vulnerabilities caused by the larger attack surface area on modern medical devices have gone from hypothetical to demonstrable, in part due to the complexity of the software, and in part due to the failure to properly harden the code.

In October 2011, The Register reported that “a security researcher has devised an attack that hijacks nearby insulin pumps, enabling him to surreptitiously deliver fatal doses to diabetic patients who rely on them.” The attack worked because the pump contained a short-range radio that allows patients and doctors to adjust its functions. The researcher showed that, by using a special antenna and custom-written software, he could locate and seize control of any such device within 300 feet.

A report published by Independent Security Evaluators (ISE) shows the danger. The report examined 12 hospitals, and the organization concluded “that remote adversaries can easily deploy attacks that manipulate records or devices in order to fully compromise patient health” (p. 25). Later in the report, the researchers show how they demonstrated the ability to manipulate the flow of medicine or blood samples within the hospital, resulting in the delivery of improper medication types and dosages (p. 37) – and to do all this from the hospital lobby. They were also able to hack into and remotely control patient monitors and breathing tubes – and trigger alarms that might cause doctors or nurses to administer unneeded medications.

Read more in my blog post for Parasoft, “What’s the Cure for Software Defects and Vulnerabilities in Medical Devices?”

The best way to have a butt-kicking cloud-native application is to write one from scratch. Leverage the languages, APIs, and architecture of the chosen cloud platform before exploiting its databases, analytics engines, and storage. As I wrote for Ars Technica, this will allow you to take advantage of the wealth of resources offered by companies like Microsoft, with their Azure PaaS (Platform-as-a-Service) offering or by Google Cloud Platform’s Google App Engine PaaS service.

Sometimes, however, that’s not the job. Sometimes, you have to take a native application running on a server in your local data center or colocation facility and make it run in the cloud. That means virtual machines.

Before we get into the details, let’s define “native application.” For the purposes of this exercise, it’s an application written in a high-level programming language, like C/C++, C#, or Java. It’s an application running directly on a machine talking to an operating system, like Linux or Windows, that you want to run on a cloud platform like Windows Azure, Amazon Web Services (AWS), or Google Cloud Platform (GCP).

What we are not talking about is an application that has already been virtualized, such as already running within VMware’s ESXi or Microsoft’s Hyper-V virtual machine. Sure, moving an ESXi or Hyper-V application running on-premises into the cloud is an important migration that may improve performance and add elasticity while switching capital expenses to operational expenses. Important, yes, but not a challenge. All the virtual machine giants and cloud hosts have copious documentation to help you make the switch… which amounts to basically copying the virtual machine file onto a cloud server and turning it on.

Many possible scenarios exist for moving a native datacenter application into the cloud. They boil down to two main types of migrations, and there’s no clear reason to choose one over the other:

The first is to create a virtual server within your chosen cloud provider, perhaps running Windows Server or running a flavor of Linux. Once that virtual server has been created, you migrate the application from your on-prem server to the new virtual server—exactly as you would if you were moving from one of your servers to a new server. The benefits: the application migration is straightforward, and you have 100-percent control of the server, the application, and security. The downside: the application doesn’t take advantage of cloud APIs or other special servers. It’s simply a migration that gets a server out of your data center. When you do this, you are leveraging a type of cloud called Infrastructure-as-a-Service (IaaS). You are essentially treating the cloud like a colocation facility.

The second is to see if your application code can be ported to run within the native execution engine provided by the cloud service. This is called Platform-as-a-Service (PaaS). The benefits are that you can leverage a wealth of APIs and other services offered by the cloud provider. The downsides are that you have to ensure that your code can work on the service (which may require recoding or even redesign) in order to use those APIs or even to run at all. You also don’t have full control over the execution environment, which means that security is managed by the cloud provider, not by you.

And of course, there’s the third option mentioned at the beginning: Writing an entirely new application native for the cloud provider’s PaaS. That’s still the best option, if you can do it. But our task today is to focus on migrating an existing application.

Let’s look into this more closely, via my recent article for Ars Technica, “Great app migration takes enterprise ‘on-prem’ applications to the Cloud.”

Las Vegas, January 2017 — “Alexa, secure the enterprise against ransomware.” Artificial intelligence is making tremendous headway, as seen at this year’s huge Consumer Electronics Show (CES). We’re seeing advances that leverage AI in everything from speech recognition to the Internet of Things (IoT) to robotics to home entertainment.

Not sure what type of music to play? Don’t worry, the AI engine in your cloud-based music service knows your taste better than you do. Want to read a book whilst driving to the office? Self-driving cars are here today in limited applications, and we’ll see a lot more of them in 2017.

Want to make brushing your teeth more fun, all while promoting good dental health? The Ara is the “1st toothbrush with Artificial Intelligence,” claims Kolibree, a French company that introduced the product at CES 2017.

Gadgets dominate CES. While crowds are lining up to see the AI-powered televisions, cookers and robots, the real power of AI is hidden behind the scenes, outside the consumer context. Unknown to happy shoppers exploring AI-based barbecues, artificial intelligence is keeping our networks safe, detecting ransomware, improving the efficiency of advertising and marketing, streamlining business processes, diagnosing telecommunication faults in undersea cables, detecting fraud in banking and stock-market transactions, and even helping doctors track the spread of infectious diseases.

Medical applications capture the popular imagination because they’re so fast and effective. The IBM Watson AI-enabled supercomputer, for example, can read 200 million pages of text in three seconds — and understand what it reads. An oncology application running on Watson analyzes a patient’s medical records, and then combines attributes from the patient’s file with clinical expertise, external research, and data. Based on that information, Watson for Oncology identifies potential treatment plans for a patient. This means doctors can consider the treatment options provided by Watson when making decisions for individual patients. Watson even offers supporting evidence in the form of administration information, as well as warnings and toxicities for each drug.

Doctor AI Can Cure Cybersecurity Ills

Moving beyond medicine, AI is proving essential for protecting computer networks — and their users — against intrusion. Traditional non-AI-based anti-virus and anti-malware products can’t protect against advanced threats, and that’s where companies like Cylance come in. They use neural networks and other machine-learning techniques to study millions of malicious files, from executables to documents to PDFs to images. Using pattern recognition, Cylance has developed a machine-learning platform that can identify suspicious files appearing on websites or as email attachments, even if it has never seen that particular type of malware before. Nothing but AI can get the job done, not in an era when over a million new pieces of malware, ranging from phishing to ransomware, appear every single day.

Menlo Security is another network-protection company that leverages artificial intelligence. The Menlo Security Isolation Platform uses AI to prevent Internet-based malware from ever reaching an endpoint, such as a desktop or mobile device, because email and websites are accessed inside the cloud — not on the client’s computer. Only safe, malware-free rendering information is sent to the user’s endpoint, eliminating the possibility of malware reaching the user’s device. An artificial intelligence engine constantly scans the Internet session to provide protection against spear-phishing and other email attacks.

What if a machine does become compromised? It’s unlikely, but it can happen — and the price of a single breach can be incredible, especially if a hacker can take full control of the compromised device and use it to attack other assets within the enterprise, such as servers, routers or executives’ computers. If a breach does occur, that’s when the AI technology of Javelin Networks leaps into action, detecting that the attack is in progress, alerting security teams, and isolating the device from the network — while simultaneously tricking the attackers into believing they’ve succeeded, keeping them “on the line” while real-time forensics gather the information needed to identify the attackers and shut them down for good.

Socializing Artificial Intelligence

There’s a lot more to enterprise-scale AI than medicine and computer security, of course. QSocialNow, an incredibly innovative company in Argentina, uses AI-based Big Data and Predictive Analytics to watch an organization’s social media accounts — and empower it not only to analyze trends, but to respond in mere seconds to an unexpected event, such as a rise in customer complaints, the emergence of a social protest, or even a physical disaster like an earthquake or tornado. Yes, humans can watch Twitter, Facebook and other networks, but they can’t act as fast as AI — or spot the subtle trends that only advanced machine learning can observe.

Robots can be powerful helpers for humanity, and AI-based toothbrushes can help us and our kids keep our teeth healthy. While the jury may be out on the implications of self-driving cars on our city streets, there’s no doubt that AI is keeping us — and our businesses — safe and secure. Let’s celebrate the consumer devices unveiled at CES, and the artificial intelligence working behind the scenes, far from the Las Vegas Strip, for our own benefit.

According to a recent study, 46% of the top one million websites are considered risky. Why? Because the homepage or background ad sites run software with known vulnerabilities, because the site has been categorized as known-bad for phishing or malware, or because the site had a security incident in the past year.

According to Menlo Security, in its “State of the Web 2016” report introduced mid-December 2016, “… nearly half (46%) of the top million websites are risky.” Indeed, Menlo says, “Primarily due to outdated software, cyber hackers now have their veritable pick of half the web to exploit. And exploitation is becoming more widespread and effective for three reasons: 1. Risky sites have never been easier to exploit; 2. Traditional security products fail to provide adequate protection; 3. Phishing attacks can now utilize legitimate sites.”

This has been a significant issue for years. However, the issue came to the forefront earlier this year when several well-known media sites were essentially hijacked by malicious ads. The New York Times, the BBC, MSN and AOL were hit by tainted advertising that installed ransomware, reports Ars Technica. From their March 15, 2016, article, “Big-name sites hit by rash of malicious ads spreading crypto ransomware”:

The new campaign started last week when ‘Angler,’ a toolkit that sells exploits for Adobe Flash, Microsoft Silverlight, and other widely used Internet software, started pushing laced banner ads through a compromised ad network.

The Guardian, reporting at around the same time, described the results of this attack:

When the infected adverts hit users, they redirect the page to servers hosting the malware, which includes the widely-used (amongst cybercriminals) Angler exploit kit. That kit then attempts to find any back door it can into the target’s computer, where it will install cryptolocker-style software, which encrypts the user’s hard drive and demands payment in bitcoin for the keys to unlock it.

If big-money trusted media sites can be hit, so can nearly any corporate site, e-commerce portal, or any website that uses third-party tools – or where there might be unpatched servers and software. That means just about anyone. After all, not all organizations are diligent about monitoring for Common Vulnerabilities and Exposures (CVEs) on their on-premises servers. When companies run their websites on multi-tenant hosting facilities, they don’t even have direct access to the operating system, but rely upon the hosting company to install patches and fixes to Windows Server, Linux, Joomla, WordPress and so on.

A single unpatched operating system, web server platform, database or extension can introduce a vulnerability that can be scanned for. Once found, that CVE can be exploited by a talented hacker — or by a disgruntled teenager with a readily available web exploit kit.

What can you do about it? Well, you can read my complete story on this subject, “Malware explosion: The web is risky,” published on ITProPortal.

For programmers, a language style guide is essential for helping learn a language’s standards. A style guide also can resolve potential ambiguities in syntax and usage. Interestingly, though, the official Code Conventions for the Java Programming Language guide has not been updated since April 20, 1999 – long before Oracle bought Sun Microsystems. In fact, the page is listed as for “Archival Purposes Only.”

What’s up with that? I wrote to Andrew Binstock (@PlatypusGuy), the editor-in-chief of Oracle Java Magazine. In the November/December 2016 issue of the magazine, Andrew explained that according to the Java team, the Code Conventions guide was meant as an internal coding guide – not as an attempt to standardize the language.

Instead of Code Conventions, Mr. B recommends the Google Java Style Guide as a “full set of well-reasoned Java coding guidelines.” So there you have it: If you want good Java guidelines, look to Google — not to Oracle. Here’s the letter and the response.

Medical devices are incredibly vulnerable to hacking attacks. In some cases it’s because of software defects that allow for exploits, like buffer overflows, SQL injection or insecure direct object references. In other cases, you can blame misconfigurations, lack of encryption (or weak encryption), non-secure data/control networks, unfettered wireless access, and worse.

Why would hackers go after medical devices? Lots of reasons. To name but one: It’s a potential terrorist threat against real human beings. Remember that Dick Cheney famously disabled the wireless capabilities of his implanted heart monitor for fear of an assassination attempt.

Certainly healthcare organizations are being targeted for everything from theft of medical records to ransomware. To quote the report “Hacking Healthcare IT in 2016,” from the Institute for Critical Infrastructure Technology (ICIT):

The Healthcare sector manages very sensitive and diverse data, which ranges from personal identifiable information (PII) to financial information. Data is increasingly stored digitally as electronic Protected Health Information (ePHI). Systems belonging to the Healthcare sector and the Federal Government have recently been targeted because they contain vast amounts of PII and financial data. Both sectors collect, store, and protect data concerning United States citizens and government employees. The government systems are considered more difficult to attack because the United States Government has been investing in cybersecurity for a (slightly) longer period. Healthcare systems attract more attackers because they contain a wider variety of information. An electronic health record (EHR) contains a patient’s personal identifiable information, their private health information, and their financial information.

EHR adoption has increased over the past few years under the Health Information Technology for Economic and Clinical Health (HITECH) Act. Stan Wisseman [from Hewlett-Packard] comments, “EHRs enable greater access to patient records and facilitate sharing of information among providers, payers and patients themselves. However, with extensive access, more centralized data storage, and confidential information sent over networks, there is an increased risk of privacy breach through data leakage, theft, loss, or cyber-attack. A cautious approach to IT integration is warranted to ensure that patients’ sensitive information is protected.”

Let’s talk devices. Those could be everything from emergency-room monitors to pacemakers to insulin pumps to X-ray machines whose radiation settings might be changed or overridden by malware. The ICIT report says,

Mobile devices introduce new threat vectors to the organization. Employees and patients expand the attack surface by connecting smartphones, tablets, and computers to the network. Healthcare organizations can address the pervasiveness of mobile devices through an Acceptable Use policy and a Bring-Your-Own-Device policy. Acceptable Use policies govern what data can be accessed on what devices. BYOD policies benefit healthcare organizations by decreasing the cost of infrastructure and by increasing employee productivity. Mobile devices can be corrupted, lost, or stolen. The BYOD policy should address how the information security team will mitigate the risk of compromised devices. One solution is to install software to remotely wipe devices upon command or if they do not reconnect to the network after a fixed period. Another solution is to have mobile devices connect from a secured virtual private network to a virtual environment. The virtual machine should have data loss prevention software that restricts whether data can be accessed or transferred out of the environment.

The Internet of Things – and the increased prevalence of medical devices connected to hospital or home networks – increases the risk. What can you do about it? The ICIT report says,

The best mitigation strategy to ensure trust in a network connected to the internet of things, and to mitigate future cyber events in general, begins with knowing what devices are connected to the network, why those devices are connected to the network, and how those devices are individually configured. Otherwise, attackers can conduct old and innovative attacks without the organization’s knowledge by compromising that one insecure system.

Given how common these devices are, keeping IT in the loop may seem impossible — but we must rise to the challenge, ICIT says:

If a cyber network is a castle, then every insecure device with a connection to the internet is a secret passage that the adversary can exploit to infiltrate the network. Security systems are reactive. They have to know about something before they can recognize it. Modern systems already have difficulty preventing intrusion by slight variations of known malware. Most commercial security solutions such as firewalls, IDS/IPS, and behavioral analytic systems function by monitoring where the attacker could attack the network and protecting those weakened points. The tools cannot protect systems that IT and the information security team are not aware exist.

The home environment – or any use outside the hospital setting – is another huge concern, says the report:

Remote monitoring devices could enable attackers to track the activity and health information of individuals over time. This possibility could impose a chilling effect on some patients. While the effect may lessen over time as remote monitoring technologies become normal, it could alter patient behavior enough to cause alarm and panic.

Pain medicine pumps and other devices that distribute controlled substances are likely high value targets to some attackers. If compromise of a system is as simple as downloading free malware to a USB and plugging the USB into the pump, then average drug addicts can exploit homecare and other vulnerable patients by fooling the monitors. One of the simpler mitigation strategies would be to combine remote monitoring technologies with sensors that aggregate activity data to match a profile of expected user activity.

A major responsibility falls onto the device makers – and the programmers who create the embedded software. For the most part, they are simply not up to the challenge of designing secure devices, and may not have the policies, practices and tools in place to get cybersecurity right. Regrettably, the ICIT report doesn’t go into much detail about the embedded software, but it does state,

Unlike cell phones and other trendy technologies, embedded devices require years of research and development; sadly, cybersecurity is a new concept to many healthcare manufacturers and it may be years before the next generation of embedded devices incorporates security into its architecture. In other sectors, if a vulnerability is discovered, then developers rush to create and issue a patch. In the healthcare and embedded device environment, this approach is infeasible. Developers must anticipate what the cyber landscape will look like years in advance if they hope to preempt attacks on their devices. This model is unattainable.

In November 2015, Bloomberg Businessweek published a chilling story, “It’s Way Too Easy to Hack the Hospital.” The authors, Monte Reel and Jordan Robertson, wrote about one hacker, Billy Rios:

Shortly after flying home from the Mayo gig, Rios ordered his first device—a Hospira Symbiq infusion pump. He wasn’t targeting that particular manufacturer or model to investigate; he simply happened to find one posted on EBay for about $100. It was an odd feeling, putting it in his online shopping cart. Was buying one of these without some sort of license even legal? he wondered. Is it OK to crack this open?

Infusion pumps can be found in almost every hospital room, usually affixed to a metal stand next to the patient’s bed, automatically delivering intravenous drips, injectable drugs, or other fluids into a patient’s bloodstream. Hospira, a company that was bought by Pfizer this year, is a leading manufacturer of the devices, with several different models on the market. On the company’s website, an article explains that “smart pumps” are designed to improve patient safety by automating intravenous drug delivery, which it says accounts for 56 percent of all medication errors.

Rios connected his pump to a computer network, just as a hospital would, and discovered it was possible to remotely take over the machine and “press” the buttons on the device’s touchscreen, as if someone were standing right in front of it. He found that he could set the machine to dump an entire vial of medication into a patient. A doctor or nurse standing in front of the machine might be able to spot such a manipulation and stop the infusion before the entire vial empties, but a hospital staff member keeping an eye on the pump from a centralized monitoring station wouldn’t notice a thing, he says.

 The 97-page ICIT report makes some recommendations, which I heartily agree with.

  • With each item connected to the internet of things there is a universe of vulnerabilities. Empirical evidence of aggressive penetration testing before and after a medical device is released to the public must be a manufacturer requirement.
  • Ongoing training must be paramount in any responsible healthcare organization. Adversarial initiatives typically start with targeting staff via spear phishing and watering hole attacks. The act of an ill-prepared executive clicking on a malicious link can trigger a hurricane of immediate and long-term negative impact on the organization and innocent individuals whose records were exfiltrated or manipulated by bad actors.
  • A cybersecurity-centric culture must demand safer devices from manufacturers, privacy adherence by the healthcare sector as a whole and legislation that expedites the path to a more secure and technologically scalable future by policy makers.

This whole thing is scary. The healthcare industry needs to step up its game on cybersecurity.

Be paranoid! When you visit a website for the first time, it can learn a lot about you. If you have cookies on your computer from one of the site’s partners, it can see what else you have been doing. And it can place cookies onto your computer so it can track your future activities.

Many (or most?) browsers have some variation of “private” browsing mode. In that mode, websites shouldn’t be able to read cookies stored on your computer, and they shouldn’t be able to place permanent cookies onto your computer. (They think they can place cookies, but those cookies are deleted at the end of the session.)
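For the curious, here’s roughly what the server side of that tracking can look like, sketched with the standard Java Servlet API. This is just an illustration; the cookie name and the one-year lifetime are arbitrary choices. But it shows why a persistent cookie survives across visits, and why private mode, by discarding it at the end of the session, makes you look like a first-time visitor every time:

import java.io.IOException;
import java.util.UUID;
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class TrackingServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        String visitorId = null;

        // Look for a cookie we set on an earlier visit.
        if (req.getCookies() != null) {
            for (Cookie c : req.getCookies()) {
                if ("visitor_id".equals(c.getName())) {
                    visitorId = c.getValue();
                }
            }
        }

        // First visit (or private browsing): mint a new ID and ask the browser
        // to keep it for a year. Private mode throws it away at session end.
        if (visitorId == null) {
            visitorId = UUID.randomUUID().toString();
            Cookie c = new Cookie("visitor_id", visitorId);
            c.setMaxAge(60 * 60 * 24 * 365); // lifetime in seconds
            resp.addCookie(c);
        }

        resp.getWriter().println("Hello, visitor " + visitorId);
    }
}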

Those settings aren’t good enough, because they are all-or-nothing, and they offer a poor balance between ease of use and security/privacy. The industry can and must do better. See why in my essay on NetworkWorld, “We need a better Private Browsing Mode.”

 

Are you a coder? Architect? Database guru? Network engineer? Mobile developer? User-experience expert? If you have hands-on tech skills, get those hands dirty at a Hackathon.

Full disclosure: Years ago, I thought Hackathons were, well, silly. If you’ve got the skills and extra energy, put them to work coding your own mobile apps. Do a startup! Make some dough! Contribute to an open-source project! Do something productive instead of taking part in coding contests!

Since then, I’ve seen the light, because it’s clear that Hackathons are a win-win-win.

  • They are a win for techies, because they get to hone their abilities, meet people, and learn stuff.
  • They are a win for Hackathon sponsors, because they often give the latest tools, platforms and APIs a real workout.
  • They are a win for the industry, because they help advance the creation and popularization of emerging standards.

One upcoming Hackathon that I’d like to call attention to: The MEF LSO Hackathon will be at the upcoming MEF16 Global Networking Conference, in Baltimore, Nov. 7-10. The work will support Third Network service projects that are built upon key OpenLSO scenarios and OpenCS use cases for constructing Layer 2 and Layer 3 services. You can read about a previous MEF LSO Hackathon here.

Build your skills! Advance the industry! Meet interesting people! Sign up for a Hackathon!

What’s it going to mean for Java? When Oracle purchased Sun Microsystems, that was one of the biggest questions on the minds of many software developers, and indeed, the entire industry. In an April 2009 blog post, “Oracle, Sun, Winners, Losers,” written when the deal was announced (it closed in January 2010), I predicted,

Winner: Java. Java is very important to Sun. Expect a lot of investment — in the areas that are important to Oracle.

Loser: The Java Community Process. Oracle is not known for openness. Oracle is not known for embracing competitors, or for collaborating with them to create markets. Instead, Oracle is known to play hardball to dominate its markets.

Looks like I called that one correctly. While Oracle continues to invest in Java, it’s not big on true engagement with the community (aka, the Java Community Process). In a story in SD Times, “Java EE awaits its future,” published July 20, 2016, Alex Handy writes about what to expect at the forthcoming JavaOne conference, including about Java EE:

When Oracle purchased Sun Microsystems in 2010, the immediate worry in the marketplace was that the company would become a bad actor around Java. Six years later, it would seem that these fears have come true—at least in part. The biggest new platform for Java, Android, remains embroiled in ugly litigation between Google and Oracle.

Despite outward appearances of a danger for mainstream Java, however, it’s undeniable that the OpenJDK has continued along apace, almost at the same rate of change it experienced at Sun. When Sun open-sourced the OpenJDK under the GPL before it was acquired by Oracle, it was, in a sense, ensuring that no single entity could control Java entirely, as with Linux.

Java EE, however, has lagged behind in its attention from Oracle. Java EE 7 arrived two years ago, and it’s already out of step with the new APIs introduced in OpenJDK 8. The executive committee at the Java Community Process is ready to move the enterprise platform along its road map. Yet something has stopped Java EE dead in its tracks at Oracle. JSR 366 laid out the foundations for this next revision of the platform in the fall of 2015. One would never know that, however, by looking at the Expert Committee mailing lists at the JCP: Those have been completely silent since 2014.

Alex continues,

One person who’s worried that JavaOne won’t reveal any amazing new developments in Java EE is Reza Rahman. He’s a former Java EE evangelist at Oracle, and is now one of the founders of the Java EE Guardians, a group dedicated to goading Oracle into action, or going around them entirely.

“Our principal goal is to move Java EE forward using community involvement. Our biggest concern now is if Oracle is even committed to delivering Java EE. There are various ways of solving it, but the best is for Oracle to commit to and deliver Java EE 8,” said Rahman.

His concerns come from the fact that the Java EE 8 specification has been, essentially, stalled by lack of action on Oracle’s part. The specification leads for the project are stuck in a sort of limbo, with their last chunk of work completed in December, followed by no indication of movement inside Oracle.

Alex quotes an executive at Red Hat, Craig Muzilla, who seems justifiably pessimistic:

The only thing standing in the way of evolving Java EE right now, said Muzilla, is Oracle. “Basically, what Oracle does is they hold the keys to the [Test Compatibility Kit] for certifying in EE, but in terms of creating other ways of using Java, other runtime environments, they don’t have anything other than their name on the language,” he said.

Java is still going strong. Oracle’s commitment to the community and the process – not so much. This is one “told you so” that I’m not proud of, not one bit.

The newest issue of the second-best software development publication is out – and it’s a doozy. You’ll definitely want to read the July/August 2016 issue of Java Magazine.

(The #1 publication in this space is my own Software Development Times. Yeah, SD Times rules.)

Here is how Andrew Binstock, editor-in-chief of Java Magazine, describes the latest issue:

…in which we look at enterprise Java – not so much at Java EE as a platform, but at individual services that can be useful as part of a larger solution. For example, we examine JSON-P, the two core Java libraries for parsing JSON data; JavaMail, the standalone library for sending and receiving email messages; and JASPIC, which is a custom way to handle security, often used with containers. For Java EE fans, one of the leaders of the JSF team discusses in considerable detail the changes being delivered in the upcoming JSF 2.3 release.

We also show off JShell from Java 9, which is an interactive shell (or REPL) useful for testing Java code snippets. It will surely become one of the most used features of the new language release, especially for testing code interactively without having to set up and run an entire project.

And we continue our series on JVM languages with JRuby, the JVM implementation of the Ruby scripting language. The article’s author, Charlie Nutter, who implemented most of the language, discusses not only the benefits of JRuby but how it became one of the fastest implementations of Ruby.

For new to intermediate programmers, we deliver more of our in-depth tutorials. Michael Kölling concludes his two-part series on generics by explaining the use of and logic behind wildcards in generics. And a book excerpt on NIO.2 illustrates advanced uses of files, paths, and directories, including an example that demonstrates how to monitor a directory for changes to its files.

In addition, we have our usual code quiz with its customary detailed solutions, a book review of a new text on writing maintainable code, an editorial about some of the challenges of writing code using only small classes, and the overview of a Java Enhancement Proposal (JEP) for Java linker. A linker in Java? Have a look.

The story I particularly recommend is “Using the Java APIs for JSON processing.” David Delabassée covers the Java API for JavaScript Object Notation Processing (JSR-353) and its two parts, one of which is a high-level object model API, and the other a lower-level streaming API.
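To give a flavor of the two halves of the API (this sketch is mine, not from the article), here are the object model API and the streaming API side by side; the sample payload and class name are just placeholders:

import java.io.StringReader;
import javax.json.Json;
import javax.json.JsonObject;
import javax.json.stream.JsonParser;

public class JsonpDemo {
    public static void main(String[] args) {
        String payload = "{\"name\":\"Duke\",\"age\":21}";

        // Object model API: read the whole document into an in-memory JsonObject tree.
        JsonObject obj = Json.createReader(new StringReader(payload)).readObject();
        System.out.println(obj.getString("name") + " is " + obj.getInt("age"));

        // Streaming API: pull parse events one at a time; handy for very large documents.
        try (JsonParser parser = Json.createParser(new StringReader(payload))) {
            while (parser.hasNext()) {
                JsonParser.Event event = parser.next();
                if (event == JsonParser.Event.KEY_NAME) {
                    System.out.println("key: " + parser.getString());
                }
            }
        }
    }
}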

It’s a solid issue. Read it – and subscribe, it’s free!

When it comes to cars, safety means more than strong brakes, good tires, a safety cage, and lots of airbags. It also means software that won’t betray you; software that doesn’t pose a risk to life and property; software that’s working for you, not for a hacker.

Please join me for this upcoming webinar, where I am presenting along with Arthur Hicken, the Code Curmudgeon and technology evangelist for Parasoft. It’s on Thursday, August 18. Arthur and I have been plotting and scheming, and there will be some excellent information presented. Don’t miss it! Click here to register.

Driving Risks out of Embedded Automotive Software

Automobiles are becoming the ultimate mobile computer. Popular models have as many as 100 Electronic Control Units (ECUs), while high-end models push 200 ECUs. Those processors run hundreds of millions of lines of code written by the OEMs’ teams and external contractors—often for black-box assemblies. Modern cars also have increasingly sophisticated high-bandwidth internal networks and unprecedented external connectivity. Considering that no code is 100% error-free, these factors point to an unprecedented need to manage the risks of failure—including protecting life and property, avoiding costly recalls, and reducing the risk of ruinous lawsuits.

This one-hour practical webinar will review the business risks of defective embedded software in today’s connected cars. Led by Arthur Hicken, Parasoft’s automotive technology expert and evangelist, and Alan Zeichick, an independent technology analyst and founding editor of Software Development Times, the webinar will also cover five practical techniques for driving the risks out of embedded automotive software, including:

• Policy enforcement
• Reducing defects during coding
• Effective techniques for acceptance testing
• Using metrics analytics to measure risk
• Converting SDLC analytics into specific tasks to focus on the riskiest software

You can apply the proven techniques you’ll learn to code written and tested by your teams, as well as code supplied by your vendors and contractors.

Scammers give local businesses a faux award and then try to make money by selling certificates, trophies, and so on.

Going through my spam filter today, I received FIVE of this exact same message praising SD Times for winning the “2016 Best of Huntington” award. The emails came from five different email addresses and domains, but the links all went to the same domain. (SD Times is published by BZ Media; I’m the “Z” of BZ Media.)

The messages read:

Sd Times has been selected for the 2016 Best of Huntington Awards for Media & Entertainment.

For details and more information please view our website: [link redacted]

If you click the link (which is not included above), you are given the choice to buy lots of things, including a plaque for $149.99 or a crystal award for $199.99. Such a deal: You can buy both for $229.99, a $349.98 value!! This is probably a lucrative scam, since the cost of sending emails is approximately $0; even a very low response rate could yield a lot of profits.

The site’s FAQ says,

Do I have to pay for an award to be a winner?

No, you do not have to pay for an award to be a winner. Award winners are not chosen based on purchases, however it is your option, to have us send you one of the 2016 Awards that have been designed for display at your place of business.

Shouldn’t my award be free?

No, most business organizations charge their members annual dues and with that money sponsor an annual award program. The Best of Huntington Award Program does not charge membership dues and as an award recipient, there is no membership requirement. We simply ask each award recipient to pay for the cost of their awards.

There is also a link to a free press release. Aren’t you excited on our behalf?

Press Release

FOR IMMEDIATE RELEASE

Sd Times Receives 2016 Best of Huntington Award

Huntington Award Program Honors the Achievement

HUNTINGTON July 2, 2016 — Sd Times has been selected for the 2016 Best of Huntington Award in the Media & Entertainment category by the Huntington Award Program.

Each year, the Huntington Award Program identifies companies that we believe have achieved exceptional marketing success in their local community and business category. These are local companies that enhance the positive image of small business through service to their customers and our community. These exceptional companies help make the Huntington area a great place to live, work and play.

Various sources of information were gathered and analyzed to choose the winners in each category. The 2016 Huntington Award Program focuses on quality, not quantity. Winners are determined based on the information gathered both internally by the Huntington Award Program and data provided by third parties.

About Huntington Award Program

The Huntington Award Program is an annual awards program honoring the achievements and accomplishments of local businesses throughout the Huntington area. Recognition is given to those companies that have shown the ability to use their best practices and implemented programs to generate competitive advantages and long-term value.

The Huntington Award Program was established to recognize the best of local businesses in our community. Our organization works exclusively with local business owners, trade groups, professional associations and other business advertising and marketing groups. Our mission is to recognize the small business community’s contributions to the U.S. economy.

SOURCE: Huntington Award Program

I wrote five contributions for an ebook from AMD Developer Central — and forgot entirely about it! The book, called “Surviving and Thriving in a Multi-Core World: Taking Advantage of Threads and Cores on AMD64,” popped up in this morning’s Google Alerts report. I have no idea why!

Here are the pieces that I wrote for the book, published in 2006. Darn, they still read well! Other contributors include my friends Anderson Bailey, Alexa Weber Morales and Larry O’Brien.

  • Driving in the Fast Lane: Multi-Core Computing for Programmers, Part 1 (page 5)
  • Driving in the Fast Lane: Multi-Core Computing for Programmers, Part 2 (page 8)
  • Coarse-Grained Vs. Fine-Grained Threading for Native Applications, Part 1 (p. 37)
  • Coarse-Grained Vs. Fine-Grained Threading for Native Applications, Part 2 (p. 40)
  • Device Driver & BIOS Development for AMD Systems (p. 87)

I am still obsessed with questionable automotive analogies. The first article begins with:

The main road near my house, called Skyline Drive, drives me nuts. For several miles, it’s a quasi-limited access highway. But for some inexplicable reason, it keeps alternating between one and two lanes in each direction. In the two-lane part, traffic moves along swiftly, even during rush hour. In the one-lane part, the traffic merges back together, and everything crawls to a standstill. When the next two-lane part appears, things speed up again.

Two lanes are better than one — and not just because they can accommodate twice as many cars. What makes the two-lane section better is that people can overtake. In the one-lane portion (which has a double-yellow line, so there’s no passing), traffic is limited to the slowest truck’s speed, or to little-old-man-peering-over-the-steering-wheel-of-his-Dodge-Dart speed. Wake me when we get there. But in the two-lane section, the traffic can sort itself out. Trucks move to the right, cars pass on the left. Police and other priority traffic weave in and out, using both lanes depending on which has more capacity at any particular moment. Delivery services with a convoy of trucks will exploit both lanes to improve throughput. The entire system becomes more efficient, and net flow of cars through those two-lane sections is considerably higher.

Okay, you’ve figured out that this is all about dual-core and multi-core computing, where cars are analogous to application threads, and the lanes are analogous to processor cores.

I’ll have to admit that my analogy is somewhat simplistic, and purists will say that it’s flawed, because an operating system has more flexibility to schedule tasks in a single-core environment under a preemptive multiprocessing environment. But that flexibility comes at a cost. Yes, if I were really modeling a microprocessor using Skyline Drive, cars would be able to pass each other in the single-lane section, but only if the car in front were to pull over and stop.

Okay, enough about cars. Let’s talk about dual-core and multi-core systems, why businesses are interested in buying them, and what implications all that should have for software developers like us.
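The book’s examples target native AMD64 development, but the lane analogy translates to any language with threads. Here’s a minimal, hypothetical Java sketch of the idea: split independent work into tasks and let a fixed-size thread pool spread them across however many cores the machine reports; it’s the software equivalent of opening up the second lane:

import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class MultiCoreDemo {
    public static void main(String[] args) throws Exception {
        int cores = Runtime.getRuntime().availableProcessors(); // number of "lanes"
        ExecutorService pool = Executors.newFixedThreadPool(cores);

        // Each task is a "car": an independent chunk of work that can use its own lane.
        List<Callable<Long>> tasks = IntStream.range(0, 16)
                .mapToObj(i -> (Callable<Long>) () -> {
                    long sum = 0;
                    for (long n = 0; n < 50_000_000L; n++) sum += n % (i + 1);
                    return sum;
                })
                .collect(Collectors.toList());

        long start = System.nanoTime();
        List<Future<Long>> results = pool.invokeAll(tasks); // runs in parallel across cores
        for (Future<Long> f : results) f.get();             // wait for every task to finish
        pool.shutdown();

        System.out.printf("Ran %d tasks on %d cores in %.1f ms%n",
                tasks.size(), cores, (System.nanoTime() - start) / 1e6);
    }
}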

Download and enjoy the book – it’s not gated and entirely free.