The best way to have a butt-kicking cloud-native application is to write one from scratch. Leverage the languages, APIs, and architecture of the chosen cloud platform, and exploit its databases, analytics engines, and storage. As I wrote for Ars Technica, this will allow you to take advantage of the wealth of resources offered by companies like Microsoft, with its Azure PaaS (Platform-as-a-Service) offering, or Google, with the Google App Engine PaaS service on Google Cloud Platform.

Sometimes, however, that’s not the job. Sometimes, you have to take a native application running on a server in your local data center or colocation facility and make it run in the cloud. That means virtual machines.

Before we get into the details, let’s define “native application.” For the purposes of this exercise, it’s an application written in a high-level programming language, like C/C++, C#, or Java. It’s an application running directly on a machine, talking to an operating system like Linux or Windows, that you want to run on a cloud platform like Microsoft Azure, Amazon Web Services (AWS), or Google Cloud Platform (GCP).

What we are not talking about is an application that has already been virtualized, such as one already running within a VMware ESXi or Microsoft Hyper-V virtual machine. Sure, moving an ESXi or Hyper-V application running on-premises into the cloud is an important migration that may improve performance and add elasticity while switching capital expenses to operational expenses. Important, yes, but not a challenge. All the virtual machine giants and cloud hosts have copious documentation to help you make the switch… which amounts to basically copying the virtual machine file onto a cloud server and turning it on.

Many possible scenarios exist for moving a native datacenter application into the cloud. They boil down to two main types of migrations, each with its own tradeoffs:

The first is to create a virtual server within your chosen cloud provider, perhaps running Windows Server or a flavor of Linux. Once that virtual server has been created, you migrate the application from your on-prem server to the new virtual server—exactly as you would if you were moving from one of your servers to a new server. The benefits: the application migration is straightforward, and you have 100-percent control of the server, the application, and security. The downside: the application doesn’t take advantage of cloud APIs or other cloud-native services. It’s simply a migration that gets a server out of your data center. When you do this, you are leveraging a type of cloud called Infrastructure-as-a-Service (IaaS). You are essentially treating the cloud like a colocation facility.
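
If you take the IaaS route, provisioning the virtual server itself is just a few console clicks or API calls. Here’s a minimal sketch using boto3, AWS’s Python SDK; the image ID, key pair, and security group are placeholder values, and Azure and GCP offer equivalent SDKs. The application migration onto the new instance then proceeds as it would onto any new physical server.

```python
# Minimal IaaS provisioning sketch with boto3 (AWS's Python SDK).
# The AMI ID, key pair, and security group below are placeholders, not real values.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",            # hypothetical Linux image
    InstanceType="t3.medium",
    MinCount=1,
    MaxCount=1,
    KeyName="my-keypair",                       # hypothetical SSH key pair
    SecurityGroupIds=["sg-0123456789abcdef0"],  # hypothetical security group
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "migrated-app-server"}],
    }],
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched {instance_id}; now migrate the application onto it like any new server.")
```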

The second is to see if your application code can be ported to run within the native execution engine provided by the cloud service. This is called Platform-as-a-Service (PaaS). The benefits are that you can leverage a wealth of APIs and other services offered by the cloud provider. The downsides are that you have to ensure that your code can work on the service (which may require recoding or even redesign) in order to use those APIs or even to run at all. You also don’t have full control over the execution environment, which means that security is managed by the cloud provider, not by you.

And of course, there’s the third option mentioned at the beginning: writing an entirely new application, native to the cloud provider’s PaaS. That’s still the best option, if you can do it. But our task today is to focus on migrating an existing application.

Let’s look into this more closely, via my recent article for Ars Technica, “Great app migration takes enterprise ‘on-prem’ applications to the cloud.”

When an employee account is compromised by malware, the malware establishes a foothold on the user’s computer – and immediately tries to gain access to additional resources. It turns out that with the right data gathering tools, and with the right Big Data analytics and machine-learning methodologies, the anomalous network traffic caused by this activity can be detected – and thwarted.

That’s the role played by Blindspotter, a new anti-malware system that seems like a specialized version of a network intrusion detection/prevention system (IDPS). Blindspotter can help against many types of malware attacks. Those include one of the most insidious and successful hack vectors today: spear phishing. That’s when a high-level target in your company is singled out for attack by malicious emails or by compromised websites. All the victim has to do is open an email, or click on a link, and wham – malware is quietly installed and operating. (High-level targets include top executives, financial staff and IT administrators.)

My colleague Wayne Rash recently wrote about this network monitoring solution and its creator, Balabit, for eWeek in “Blindspotter Uses Machine Learning to Find Suspicious Network Activity”:

The idea behind Balabit’s Blindspotter and Shell Control Box is that if you gather enough data and subject it to analysis comparing activity that’s expected with actual activity on an active network, it’s possible to tell if someone is using a person’s credentials who shouldn’t be or whether a privileged user is abusing their access rights.

 The Balabit Shell Control Box is an appliance that monitors all network activity and records the activity of users, including all privileged users, right down to every keystroke and mouse movement. Because privileged users such as network administrators are a key target for breaches it can pay special attention to them.

The Blindspotter software sifts through the data collected by the Shell Control Box and looks for anything out of the ordinary. In addition to spotting things like a user coming into the network from a strange IP address or at an unusual time of day—something that other security software can do—Blindspotter is able to analyze what’s happening with each user, but is able to spot what is not happening, in other words deviations from normal behavior.
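
To make the idea concrete, here’s a deliberately tiny sketch of behavioral baselining. It is not Balabit’s code or API, just an illustration of the principle Blindspotter applies at far greater depth (keystroke and mouse-movement data from the Shell Control Box, among other signals): learn what “normal” looks like for each user, then score deviations from it.

```python
# Illustrative sketch only -- not Balabit's product or API. It shows the general idea of
# baselining per-user behavior and flagging deviations from it.
from collections import defaultdict

class BehaviorBaseline:
    def __init__(self):
        self.seen_ips = defaultdict(set)     # user -> source IPs seen during training
        self.seen_hours = defaultdict(set)   # user -> hours of day seen during training

    def train(self, events):
        """events: iterable of (user, source_ip, hour_of_day) tuples from historical logs."""
        for user, ip, hour in events:
            self.seen_ips[user].add(ip)
            self.seen_hours[user].add(hour)

    def score(self, user, ip, hour):
        """Crude anomaly score: 0 = looks normal, higher = more unusual."""
        score = 0
        if ip not in self.seen_ips[user]:
            score += 1   # never-before-seen source address for this user
        if hour not in self.seen_hours[user]:
            score += 1   # activity at an hour this user has never been active
        return score

baseline = BehaviorBaseline()
baseline.train([("alice", "10.0.0.5", 9), ("alice", "10.0.0.5", 14)])
print(baseline.score("alice", "203.0.113.7", 3))   # 2 -> worth an analyst's attention
```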

Read the whole story here. Thank you, Wayne, for telling us about Blindspotter.

bloombergMedical devices are incredibly vulnerable to hacking attacks. In some cases it’s because of software defects that allow for exploits, like buffer overflows, SQL injection or insecure direct object references. In other cases, you can blame misconfigurations, lack of encryption (or weak encryption), non-secure data/control networks, unfettered wireless access, and worse.

Why would hackers go after medical devices? Lots of reasons. To name but one: It’s a potential terrorist threat against real human beings. Remember that Dick Cheney famously had the wireless capabilities of his implanted defibrillator disabled for fear of an assassination attempt.

Certainly healthcare organizations are being targeted for everything from theft of medical records to ransomware. To quote the report “Hacking Healthcare IT in 2016,” from the Institute for Critical Infrastructure Technology (ICIT):

The Healthcare sector manages very sensitive and diverse data, which ranges from personal identifiable information (PII) to financial information. Data is increasingly stored digitally as electronic Protected Health Information (ePHI). Systems belonging to the Healthcare sector and the Federal Government have recently been targeted because they contain vast amounts of PII and financial data. Both sectors collect, store, and protect data concerning United States citizens and government employees. The government systems are considered more difficult to attack because the United States Government has been investing in cybersecurity for a (slightly) longer period. Healthcare systems attract more attackers because they contain a wider variety of information. An electronic health record (EHR) contains a patient’s personal identifiable information, their private health information, and their financial information.

EHR adoption has increased over the past few years under the Health Information Technology for Economic and Clinical Health (HITECH) Act. Stan Wisseman [from Hewlett-Packard] comments, “EHRs enable greater access to patient records and facilitate sharing of information among providers, payers and patients themselves. However, with extensive access, more centralized data storage, and confidential information sent over networks, there is an increased risk of privacy breach through data leakage, theft, loss, or cyber-attack. A cautious approach to IT integration is warranted to ensure that patients’ sensitive information is protected.”

Let’s talk devices. Those could be everything from emergency-room monitors to pacemakers to insulin pumps to X-ray machines whose radiation settings might be changed or overridden by malware. The ICIT report says,

Mobile devices introduce new threat vectors to the organization. Employees and patients expand the attack surface by connecting smartphones, tablets, and computers to the network. Healthcare organizations can address the pervasiveness of mobile devices through an Acceptable Use policy and a Bring-Your-Own-Device policy. Acceptable Use policies govern what data can be accessed on what devices. BYOD policies benefit healthcare organizations by decreasing the cost of infrastructure and by increasing employee productivity. Mobile devices can be corrupted, lost, or stolen. The BYOD policy should address how the information security team will mitigate the risk of compromised devices. One solution is to install software to remotely wipe devices upon command or if they do not reconnect to the network after a fixed period. Another solution is to have mobile devices connect from a secured virtual private network to a virtual environment. The virtual machine should have data loss prevention software that restricts whether data can be accessed or transferred out of the environment.

The Internet of Things – and the increased prevalence of medical devices connected to hospital or home networks – increases the risk. What can you do about it? The ICIT report says,

The best mitigation strategy to ensure trust in a network connected to the internet of things, and to mitigate future cyber events in general, begins with knowing what devices are connected to the network, why those devices are connected to the network, and how those devices are individually configured. Otherwise, attackers can conduct old and innovative attacks without the organization’s knowledge by compromising that one insecure system.
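
As a starting point for that inventory, here’s a minimal sketch (not a real asset-discovery tool) that sweeps a hypothetical /24 subnet and records which hosts answer on a few common TCP ports. Production tools add passive monitoring and device fingerprinting, but even a crude list beats not knowing what’s on the network.

```python
# Minimal sketch, not a real asset-discovery tool: sweep a hypothetical /24 subnet and
# note which hosts answer on a few common TCP ports. Slow and crude, but it illustrates
# the first step the report describes: knowing what is actually on the network.
import socket

def open_ports(host, ports=(22, 80, 443), timeout=0.3):
    """Return the list of ports on which host accepts a TCP connection."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                found.append(port)
    return found

inventory = {}
for last_octet in range(1, 255):
    host = f"192.168.1.{last_octet}"          # hypothetical subnet
    ports = open_ports(host)
    if ports:
        inventory[host] = ports

for host, ports in inventory.items():
    print(f"{host}: listening on {ports} -- is this device known, owned, and patched?")
```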

Given how common these devices are, keeping IT in the loop may seem impossible — but we must rise to the challenge, ICIT says:

If a cyber network is a castle, then every insecure device with a connection to the internet is a secret passage that the adversary can exploit to infiltrate the network. Security systems are reactive. They have to know about something before they can recognize it. Modern systems already have difficulty preventing intrusion by slight variations of known malware. Most commercial security solutions such as firewalls, IDS/IPS, and behavioral analytic systems function by monitoring where the attacker could attack the network and protecting those weakened points. The tools cannot protect systems that IT and the information security team are not aware exist.

The home environment – or any use outside the hospital setting – is another huge concern, says the report:

Remote monitoring devices could enable attackers to track the activity and health information of individuals over time. This possibility could impose a chilling effect on some patients. While the effect may lessen over time as remote monitoring technologies become normal, it could alter patient behavior enough to cause alarm and panic.

Pain medicine pumps and other devices that distribute controlled substances are likely high value targets to some attackers. If compromise of a system is as simple as downloading free malware to a USB and plugging the USB into the pump, then average drug addicts can exploit homecare and other vulnerable patients by fooling the monitors. One of the simpler mitigation strategies would be to combine remote monitoring technologies with sensors that aggregate activity data to match a profile of expected user activity.

A major responsibility falls onto the device makers – and the programmers who create the embedded software. For the most part, they are simply not up to the challenge of designing secure devices, and may not have the policies, practices and tools in place to get cybersecurity right. Regrettably, the ICIT report doesn’t go into much detail about the embedded software, but does state,

Unlike cell phones and other trendy technologies, embedded devices require years of research and development; sadly, cybersecurity is a new concept to many healthcare manufacturers and it may be years before the next generation of embedded devices incorporates security into its architecture. In other sectors, if a vulnerability is discovered, then developers rush to create and issue a patch. In the healthcare and embedded device environment, this approach is infeasible. Developers must anticipate what the cyber landscape will look like years in advance if they hope to preempt attacks on their devices. This model is unattainable.

In November 2015, Bloomberg Businessweek published a chilling story, “It’s Way Too Easy to Hack the Hospital.” The authors, Monte Reel and Jordan Robertson, wrote about one hacker, Billy Rios:

Shortly after flying home from the Mayo gig, Rios ordered his first device—a Hospira Symbiq infusion pump. He wasn’t targeting that particular manufacturer or model to investigate; he simply happened to find one posted on EBay for about $100. It was an odd feeling, putting it in his online shopping cart. Was buying one of these without some sort of license even legal? he wondered. Is it OK to crack this open?

Infusion pumps can be found in almost every hospital room, usually affixed to a metal stand next to the patient’s bed, automatically delivering intravenous drips, injectable drugs, or other fluids into a patient’s bloodstream. Hospira, a company that was bought by Pfizer this year, is a leading manufacturer of the devices, with several different models on the market. On the company’s website, an article explains that “smart pumps” are designed to improve patient safety by automating intravenous drug delivery, which it says accounts for 56 percent of all medication errors.

Rios connected his pump to a computer network, just as a hospital would, and discovered it was possible to remotely take over the machine and “press” the buttons on the device’s touchscreen, as if someone were standing right in front of it. He found that he could set the machine to dump an entire vial of medication into a patient. A doctor or nurse standing in front of the machine might be able to spot such a manipulation and stop the infusion before the entire vial empties, but a hospital staff member keeping an eye on the pump from a centralized monitoring station wouldn’t notice a thing, he says.

 The 97-page ICIT report makes some recommendations, which I heartily agree with.

  • With each item connected to the internet of things there is a universe of vulnerabilities. Empirical evidence of aggressive penetration testing before and after a medical device is released to the public must be a manufacturer requirement.
  • Ongoing training must be paramount in any responsible healthcare organization. Adversarial initiatives typically start with targeting staff via spear phishing and watering hole attacks. The act of an ill-prepared executive clicking on a malicious link can trigger a hurricane of immediate and long term negative impact on the organization and innocent individuals whose records were exfiltrated or manipulated by bad actors.
  • A cybersecurity-centric culture must demand safer devices from manufacturers, privacy adherence by the healthcare sector as a whole and legislation that expedites the path to a more secure and technologically scalable future by policy makers.

This whole thing is scary. The healthcare industry needs to step up its game on cybersecurity.

Web filtering. The phrase connotes keeping employees from spending too much time monitoring Beanie Baby auctions on eBay, and stopping school children from encountering (accidentally or deliberately) naughty images on the internet. If only it were that simple. Nowadays, web filtering goes far beyond monitoring staff productivity and maintaining the innocence of childhood. For nearly every organization today, web filtering should be considered an absolute necessity. Small business, K-12 school district, Fortune 500, non-profit or government… it doesn’t matter. The unfiltered internet is not your friend, and legally, it’s a liability: a lawsuit waiting to happen.

Web filtering means blocking internet applications – including browsers – from contacting or retrieving content from websites that violate an Acceptable Use Policy (AUP). The policy might set rules blocking some specific websites (like a competitor’s website). It might block some types of content (like pornography), or detected malware, or even access to external email systems via browser or dedicated clients. In some cases, the AUP might include what we might call government-mandated restrictions (like certain websites in hostile countries, or specific news sources).
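
Conceptually, the enforcement step is simple, whichever layer it happens at. Here’s a minimal sketch, with made-up category names and domains, of checking an outbound request against an AUP blocklist; real deployments rely on vendor-maintained category databases and enforce the policy at the proxy, firewall, or DNS layer.

```python
# Minimal sketch, not a production web filter: map AUP categories to blocked domains and
# check each outbound request against the policy. Categories and domains are made up.
BLOCKLIST = {
    "competitor": {"competitor.example"},
    "adult":      {"nsfw.example"},
    "webmail":    {"mail.example", "webmail.example"},
    "malware":    {"known-bad.example"},
}

ALLOWED_CATEGORIES = set()   # per this hypothetical AUP, none of the above are permitted

def check_request(hostname):
    """Return (allowed, reason) for an outbound request to hostname."""
    for category, domains in BLOCKLIST.items():
        if hostname in domains and category not in ALLOWED_CATEGORIES:
            return False, f"blocked by AUP category '{category}'"
    return True, "allowed"

print(check_request("mail.example"))       # (False, "blocked by AUP category 'webmail'")
print(check_request("supplier.example"))   # (True, 'allowed')
```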

Unacceptable use in the AUP

The specifics of the AUP might be up to the organization to define entirely on its own; that would be the case for a small business, perhaps. Government organizations, such as schools or military contractors, might have specific AUP requirements placed on them by funders or government regulators, thereby becoming a compliance/governance issue as well. And of course, legal counsel should be sought when creating policies that balance an employee’s ability to access content of his/her choice, against the company’s obligations to protect the employee (or the company) from unwanted content.

It sounds easy – the organization sets an AUP, consulting legal, IT and the executive suite. The IT department implements the AUP through web filtering, perhaps with software installed and configured on devices; perhaps through firewall settings at the network level; and perhaps through filters managed by the internet service provider. It’s not simple, however. The internet is constantly changing; employees are adept at finding ways around web filters; and besides, it’s tricky to translate policies written in English (as in the legal policy document) into technological actions. We’ll get into that a bit more shortly. First, let’s look more closely at why organizations need those Acceptable Use Policies, and what should be in them.

  • Improving employee productivity. This is the low-hanging fruit. You may not want employees spending too much time on Facebook on their company computers. (Of course, if they are permitted to bring mobile devices into the office, they can still access social media via cellular.) That’s a policy consideration, though the jury is out on whether a blanket block is the best way to improve productivity.
  • Preserving bandwidth. For technical reasons, you may not want employees streaming Netflix movies or Hulu-hosted classic TV shows across the business network. Seinfeld is fun, but not on company bandwidth. As with social media, this is truly up to the organization to decide.
  • Blocking email access. Many organizations do not want their employees accessing external email services from the business computers. That’s not only for productivity purposes, but also makes it difficult to engage in unapproved communications – such as emailing confidential documents to yourself. Merely configuring your corporate email server to block the exfiltration of intellectual property is not enough if users can access personal gmail.com or hushmail.com accounts. Blocking external email requires filtering multiple protocols as well as specific email hosts, and may be required to protect not only your IP, but also customers’ data, in addition to complying with regulations from organizations like the U.S. Securities and Exchange Commission.
  • Blocking access to pornography and NSFW content. It’s not that you are being a stick-in-the-mud prude, or protecting children. The initials NSFW (not safe for work) are often used as a joke, but in reality, some content can be construed as contributing to a hostile work environment. Just as you must maintain a physically safe work environment – no blocked fire exits, for example – so too must you maintain a safe internet environment. If users can be unwillingly subjected to offensive content by other employees, there may be significant legal, financial and even public-relations consequences if it’s seen as harassment.
  • Blocking access to malware. A senior manager receives a spear-phishing email that looks legit. He clicks the link and, wham, ransomware is on his computer. Or spyware, like a keylogger. Or perhaps a back door that gives hackers further access. You can train employees over and over, and they will still click on unsafe links in emails or on web pages. Anti-malware software on the computer can help, but web filtering is part of a layered approach to anti-malware protection. This applies to trackers as well: As part of the AUP, the web filters may be configured to block ad networks, behavior trackers and other web services that attempt to glean information about your company and its workers.
  • Blocking access to specific internet applications. Whether you consider it Shadow IT or simply an individual’s personal preference, it’s up to the AUP to decide which online services should be accessible, either through an installed application or via a web interface. Think about online storage repositories such as Microsoft OneDrive, Google Drive, Dropbox or Box: Personal accounts can be high-bandwidth conduits for exfiltration of vast quantities of valuable IP. Web filtering can help manage the situation.
  • Compliance with government regulations. Whether it’s a military base commander making a ruling, or a government restricting access to news sites out of favor with the current regime, those are rules that often must be followed without question. It’s not my purpose here to discuss whether this is “censorship,” though in some cases it certainly is. However, the laws of the United States do not apply outside the United States, and blocking some internet sites or types of web content may be part of the requirements for doing business in some countries or with some governments. What’s important here is to ensure that you have effective controls and technology in place to implement the AUP – but don’t go broadly beyond it.
  • Compliance with industry requirements. Let’s use the example of the requirements that schools or public libraries must protect students (and the general public) from content deemed to be unacceptable in that environment. After all, just because a patron is an adult doesn’t mean he/she is allowed to watch pornography on one of the library’s publicly accessible computers, or even on his/her computer on the library’s Wi-Fi network.

What about children?

A key ingredient in creating an AUP for schools and libraries in the United States is the Children’s Internet Protection Act (CIPA). In order to receive government subsidies or discounts, schools and libraries must comply with these regulations. (Other countries may have an equivalent to these policies.)

Learn more about how the CIPA should drive the AUP for any organization where minors can be found, and how best to implement an AUP for secure protection. That’s all covered in my article for Upgrade Magazine, “Web filtering for business: Keep your secrets safe, and keep your employees happy.”

Thank you, NetGear, for taking care of your valued customers. On July 1, the company announced that it would be shutting down the proprietary back-end cloud services required for its VueZone cameras to work – turning them into expensive camera-shaped paperweights. See “Throwing our IoT investment in the trash thanks to NetGear.”

The next day, I was contacted by the company’s global communications manager. He defended the policy, arguing that NetGear was not only giving 18 months’ notice of the shutdown, but they are “doing our best to help VueZone customers migrate to the Arlo platform by offering significant discounts, exclusive to our VueZone customers.” See “A response from NetGear regarding the VueZone IoT trashcan story.”

And now, the company has done a 180° turn. NetGear will not turn off the service, at least not at this time. Well done. Here’s the email that came a few minutes ago. The good news for VueZone customers is that they can keep using their cameras. On the other hand, let’s not party too heartily. The danger posed by proprietary cloud services driving IoT devices remains. When the vendor decides to turn the service off, all you have is recycle-ware and, potentially, one heck of a migration issue.

Subject: VueZone Services to Continue Beyond January 1, 2018

Dear valued VueZone customer,

On July 1, 2016, NETGEAR announced the planned discontinuation of services for the VueZone video monitoring product line, which was scheduled to begin as of January 1, 2018.

Since the announcement, we have received overwhelming feedback from our VueZone customers expressing a desire for continued services and support for the VueZone camera system. We have heard your passionate response and have decided to extend service for the VueZone product line. Although NETGEAR no longer manufactures or sells VueZone hardware, NETGEAR will continue to support existing VueZone customers beyond January 1, 2018.

We truly appreciate the loyalty of our customers and we will continue our commitment of delivering the highest quality and most innovative solutions for consumers and businesses. Thank you for choosing us.

Best regards,

The NETGEAR VueZone Team

July 19, 2016

There are standards for everything, it seems. And those of us who work on Internet things are often amused (or bemused) by what comes out of the Internet Engineering Task Force (IETF). An oldie but a goodie is a document from 1999, RFC-2549, “IP over Avian Carriers with Quality of Service.”

An RFC, or Request for Comments, is what the IETF calls a standards document. (And yes, I’m browsing my favorite IETF pages during a break from doing “real” work. It’s that kind of day.)

RFC-2549 updates RFC-1149, “A Standard for the Transmission of IP Datagrams on Avian Carriers.” That older standard did not address Quality of Service. I’ll leave it for you to enjoy both those documents, but let me share this part of RFC-2549:

Overview and Rational

The following quality of service levels are available: Concorde, First, Business, and Coach. Concorde class offers expedited data delivery. One major benefit to using Avian Carriers is that this is the only networking technology that earns frequent flyer miles, plus the Concorde and First classes of service earn 50% bonus miles per packet. Ostriches are an alternate carrier that have much greater bulk transfer capability but provide slower delivery, and require the use of bridges between domains.

The service level is indicated on a per-carrier basis by bar-code markings on the wing. One implementation strategy is for a bar-code reader to scan each carrier as it enters the router and then enqueue it in the proper queue, gated to prevent exit until the proper time. The carriers may sleep while enqueued.

Most years, the IETF publishes so-called April Fools’ RFCs. The best list of them I’ve seen is on Wikipedia. If you’re looking to take a work break, give ’em a read. Many of them are quite clever! However, I still like RFC-2549 the best.

A prized part of my library is “The Complete April Fools’ Day RFCs,” compiled by Thomas Limoncelli and Peter Salus. Sadly, this collection stops at 2007. Still, it’s a great coffee table book to leave lying around for when people like Bob Metcalfe, Tim Berners-Lee or Al Gore come by to visit.

Thank you, NetGear, for the response to my July 11 opinion essay for NetworkWorld, “Throwing our IoT investment in the trash thanks to NetGear.” In that story, I used the example of our soon-to-be-obsolete VueZone home video monitoring system: At the end of 2017, NetGear is turning off the back-end servers that make VueZone work – and so all the hardware will become fancy camera-shaped paperweights.

The broader message of the story is that every IoT device tied into a proprietary back-end service will be turned into recycle-ware if (or when) the service provider chooses to turn that service off. My friend Jason Perlow picked up this theme in his story published on July 12 on ZDNet, “All your IoT devices are doomed,” and included a nice link to my NetworkWorld story. As Jason wrote,

First, it was Aether’s smart speaker, the Cone. Then, it was the Revolv smart hub. Now, it appears NetGear’s connected home wireless security cameras, VueZone, is next on the list.

I’m sure I’ve left out more than a few others that have slipped under the radar. It seems like every month an Internet of Things (IoT) device becomes abandonware after its cloud service is discontinued.

Many of these devices once disconnected from the cloud become useless. They can’t be remotely managed, and some of them stop functioning as standalone (or were never capable of it in the first place). Are these products going end-of-life too soon? What are we to do about this endless pile of e-waste that seems to be the inevitable casualty of the connected-device age?

I would like to publicly acknowledge NetGear for sending a quick response to my story. Apparently — and contrary to what I wrote — the company did offer a migration path for existing VueZone customers. I can’t find the message anywhere, but can’t ignore the possibility that it was sucked into the spamverse.

Here is the full response from Nathan Papadopulos, Global Communications & Strategic Marketing for NetGear:

Hello Alan,

I am writing in response to your recent article about disposing of IoT products. As you may know, the VueZone product line came to Netgear as part of our acquisition of Avaak, Inc. back in 2012, and is the predecessor of the current Arlo security system. Although we wanted to avoid interruptions of the VueZone services as much as possible, we are now faced with the need to discontinue support for the camera line. VueZone was built on technologies which are now outdated and a platform which is not scalable. Netgear has since shifted our resources to building better, more robust products which are the Arlo system of security cameras. Netgear is doing our best to help VueZone customers migrate to the Arlo platform by offering significant discounts, exclusive to our VueZone customers.

1. On July 1, 2016, Netgear officially announced the discontinuation of VueZone services to VueZone customers. Netgear has sent out an email notification to the entire VueZone customer base with the content in the “Official End-of-Services Announcement.” Netgear is providing the VueZone customers with an 18-month notice, which means that the actual effective date of this discontinuation of services will be on January 1, 2018.

2. Between July 2 and July 6, 26,000+ customers who currently have an active VueZone base station have received an email with an offer to purchase an Arlo 4-camera kit. There will be two options for them to choose from:

a. Standard Arlo 4-camera kit for $299.99

b. Refurbished Arlo 4-camera kit for $149.99

Both refurbished and new Arlo systems come with the NETGEAR limited 1-year hardware warranty. The promotion will run until the end of July 31, 2016.

It appears NetGear is trying to do the right thing, though they lose points for offering the discounted migration path for less than one month. Still, the fact remains that obsolescence of service-dependent IoT devices is a big problem. Some costly devices will cease functioning if the service goes down; others will lose significant functionality.

And thank you, Jason, for the new word: Abandonware.

Excellent story about SharePoint in ComputerWorld this week. It gives encouragement to those who prefer to run SharePoint in their own data centers (on-premises), rather than in the cloud. In “The Future of SharePoint,” Brian Alderman writes,

In case you missed it, on May 4 Microsoft made it loud and clear it has resuscitated SharePoint On-Premises and there will be future versions, even beyond SharePoint Server 2016. However, by making you aware of the scenarios most appropriate for On-Premises and the scenarios where you can benefit from SharePoint Online, Microsoft is going to remain adamant about allowing you to create the perfect SharePoint hybrid deployment.

The future of SharePoint begins with SharePoint Online, meaning changes, features and functionality will first be deployed to SharePoint Online, and then rolled out to your SharePoint Server On-Premises deployment. This approach isn’t much of a surprise, being that SharePoint Server 2016 On-Premises was “engineered” from SharePoint Online.

Brian was writing about a post on the Microsoft SharePoint blog, one I had overlooked (else I’d have written about it back in May). In the post, “SharePoint Server 2016—your foundation for the future,” the SharePoint Team says,

We remain committed to our on-premises customers and recognize the need to modernize experiences, patterns and practices in SharePoint Server. While our innovation will be delivered to Office 365 first, we will provide many of the new experiences and frameworks to SharePoint Server 2016 customers with Software Assurance through Feature Packs. This means you won’t have to wait for the next version of SharePoint Server to take advantage of our cloud-born innovation in your datacenter.

The first Feature Pack will be delivered through our public update channel starting in calendar year 2017, and customers will have control over which features are enabled in their on-premises farms. We will provide more detail about our plans for Feature Packs in coming months.

In addition, we will deliver a set of capabilities for SharePoint Server 2016 that address the unique needs of on-premises customers.

Now, make no mistake: The emphasis at Microsoft is squarely on Office 365 and SharePoint Online. Or, as the company puts it, SharePoint Server is “powering your journey to the mobile-first, cloud-first world.” However, it is clear that SharePoint On-Premises will continue for some period of time. Later in the blog post, in the FAQ, this is stated quite definitively:

Is SharePoint Server 2016 the last server release?

No, we remain committed to our customer’s on-premises and do not consider SharePoint Server 2016 to be the last on-premises server release.

The best place to learn about SharePoint 2016 is at BZ Media’s SPTechCon, returning to San Francisco from Dec. 5-8. (I am the Z of BZ Media.) SPTechCon, the SharePoint Technology Conference, offers more than 80 technical classes and tutorials — presented by the most knowledgeable instructors working in SharePoint today — to help you improve your skills and broaden your knowledge of Microsoft’s collaboration and productivity software.

SPTechCon will feature the first conference sessions on SharePoint 2016. Be there! Learn more at http://www.sptechcon.com.

Cloud services crash. Of course, non-cloud services crash too — a server in your data center can go down. At least there you can do something about it, and if it’s a critical system you can plan for redundancy and failover.

Not so much with cloud services, as this morning’s failure of Google Calendar clearly shows. The photo shows Google’s status dashboard as of 6:53am on Thursday, June 30.

I wrote about crashes at Amazon Web Services and Apple’s MobileMe back in 2008 in “When the cloud was good, it was very good. But when it was bad it was rotten.”

More recently, in 2011, I covered another AWS failure in “Skynet didn’t take down Amazon Web Services.”

Overall, cloud services are quite reliable. But they are not perfect, and it’s a mistake to think that just because they are offered by huge corporations, they will be error-free and offer 100% uptime. Be sure to work that into your plans, especially if you and your employees rely upon public cloud services to get your job done, or if your customers interact with you through cloud services.
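
One way to work that into your plans at the application level is to assume any cloud call can fail: retry with backoff, then degrade gracefully instead of falling over. A minimal sketch, with a hypothetical fetch_events() standing in for whatever cloud API you depend on:

```python
# Minimal sketch of planning for cloud-service outages: retry with exponential backoff,
# then fall back to a degraded local path instead of failing outright. fetch_events() is
# a hypothetical stand-in for any cloud API call (calendar, storage, queue, and so on).
import random
import time

def fetch_events():
    """Hypothetical cloud call; raises when the service is down."""
    raise ConnectionError("service unavailable")

def fetch_with_fallback(max_retries=4):
    for attempt in range(max_retries):
        try:
            return fetch_events()
        except ConnectionError:
            time.sleep((2 ** attempt) + random.uniform(0, 1))   # backoff with jitter
    # Retries exhausted: degrade gracefully -- serve cached data, queue the work, alert.
    print("Cloud service still down; falling back to last known-good local cache.")
    return []

events = fetch_with_fallback()
```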

I can hear the protesters. “What do we want? Faster automated emails! When do we want them? In under 20 nanoseconds!”

Some things have to be snappy. A Web page must load fast, or your customers will click away. Moving the mouse has to move the cursor without pauses or hesitations. Streaming video should buffer rarely and unobtrusively; it’s almost always better to temporarily degrade the video quality than to pause the playback. And of course, for a touch interface to work well, it must be snappy, which Apple has learned with iOS, and which Google learned with Project Butter.

The same is true with automated emails. They should be generated and transmitted immediately — that is, in under a minute.

I recently went to book a night’s stay at a Days Inn, a part of the Wyndham Hotel Group, and so I had to log into my Wyndham account. Bad news: I couldn’t remember the password. So, I used the password retrieval system, giving my account number and info. The website said to check my e-mail for the reset link. Kudos: That’s a lot better than saying “We’ll mail you your password,” and then sending it in plain text!!

So, I flipped over to my e-mail client. Checked for new mail. Nothing. Checked again. Nothing. Checked again. Nothing. Checked the spam folder. Nothing. Checked for new mail. Nothing. Checked again. Nothing.

I submitted the request for the password reset at 9:15 a.m. The link appeared in my inbox at 10:08 a.m. By that time, I had already booked the stay with Best Western. Sorry, Days Inn! You snooze, you lose.

What happened? The e-mail header didn’t show a transit delay, so we can’t blame the Internet. Rather, it took nearly an hour for the email to be handed off by the originating server. This is terrible customer service, plain and simple.

It’s not merely Wyndham. When I purchase something from Amazon, the confirmation e-mail generally arrives in less than 30 seconds. When I purchase from Barnes & Noble, a confirmation e-mail can take an hour. The worst is Apple: Confirmations of purchases from the iTunes Store can take three days to appear. Three days!

It’s time to examine your policies for generating automated e-mails. You do have policies, right? I would suggest a delay of no more than one minute between when the user performs an action that generates an e-mail and when the message is delivered to the SMTP server.

Set the policy. Automated emails should go out in seconds — certainly in under one minute. Design for that and test for that. More importantly, audit the policy on a regular basis, and monitor actual performance. If password resets or order confirmations are taking 53 minutes to hit the Internet, you have a problem.
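
Auditing the policy can be as simple as timestamping the triggering action and the SMTP handoff, then flagging any gap over a minute. Here’s a minimal sketch; the relay host and addresses are placeholders, not any particular company’s mail pipeline.

```python
# Minimal sketch of auditing automated-email latency: record when the triggering action
# happened and when the message was handed to the SMTP server, and flag any gap over the
# one-minute policy. The relay host and addresses are hypothetical placeholders.
import smtplib
import time
from email.message import EmailMessage

POLICY_SECONDS = 60

def send_transactional(triggered_at, to_addr, subject, body):
    msg = EmailMessage()
    msg["From"] = "noreply@example.com"
    msg["To"] = to_addr
    msg["Subject"] = subject
    msg.set_content(body)

    with smtplib.SMTP("smtp.example.com") as smtp:   # hypothetical relay
        smtp.send_message(msg)

    latency = time.time() - triggered_at
    if latency > POLICY_SECONDS:
        print(f"POLICY VIOLATION: {latency:.0f}s from user action to SMTP handoff")
    return latency

# Usage: send_transactional(time.time(), "user@example.com", "Password reset", "...")
```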

WiFi is the present and future of local area networking. Forget about families getting rid of the home phone. The real cable-cutters are dropping Cat-5 Ethernet in favor of IEEE 802.11 Wireless Local Area Networks, generally known as WiFi. Let’s celebrate World WiFi Day!

There are no Cat-5 cables connected in my house and home office. Not one. And no Ethernet jacks either. (By contrast, when we moved into our house in the Bay Area in the early 1990s, I wired nearly every room with Ethernet jacks.) There’s a box of Ethernet cables, but I haven’t touched them in years. Instead, it’s all WiFi. (Technically, WiFi refers to industry products that are compatible with the IEEE 802.11 specification, but for our casual purposes here, it’s all the same thing.)

My 21” iMac (circa 2011) has an Ethernet port. I’ve never used it. My MacBook Air (also circa 2011) doesn’t have an Ethernet port at all; I used to carry a USB-to-Ethernet dongle, but it disappeared a long time ago. It’s not missed. My tablets (iOS, Android and Kindle) are WiFi-only for connectivity. Life is good.

The first-ever World WiFi Day is today — June 20, 2016. It was declared by the Wireless Broadband Alliance to

be a global platform to recognize and celebrate the significant role Wi-Fi is playing in getting cities and communities around the world connected. It will champion exciting and innovative solutions to help bridge the digital divide, with Connected City initiatives and new service launches at its core.

Sadly, the World WiFi Day initiative is not about the wire-free convenience of Alan’s home office and personal life. Rather, it’s about bringing Internet connectivity to third-world, rural, poor, or connectivity-disadvantaged areas. According to the organization, here are eight completed projects:

  • KT – KT Giga Island – connecting islands to the mainland through advanced networks
  • MallorcaWiFi – City of Palma – Wi-Fi on the beach
  • VENIAM – Connected Port @ Leixões Porto, Portugal
  • ISOCEL – Isospot – Building a Wi-Fi hotspot network in Benin
  • VENIAM – Smart City @ Porto, Portugal
  • Benu Networks – Carrier Wi-Fi Business Case
  • MCI – Free Wi-Fi for Arbaeen
  • Fon – After the wave: Japan and Fon’s disaster support procedure

It’s a worthy cause. Happy World WiFi Day, everyone!

Fire up the WABAC Machine, Mr. Peabody: In June 2008, I wrote a piece for MIT Technology Review explaining “How Facebook Works.”

The story started with this:

Facebook is a wonderful example of the network effect, in which the value of a network to a user is exponentially proportional to the number of other users that network has.

Facebook’s power derives from what Jeff Rothschild, its vice president of technology, calls the “social graph”–the sum of the wildly various connections between the site’s users and their friends; between people and events; between events and photos; between photos and people; and between a huge number of discrete objects linked by metadata describing them and their connections.

Facebook maintains data centers in Santa Clara, CA; San Francisco; and Northern Virginia. The centers are built on the backs of three tiers of x86 servers loaded up with open-source software, some that Facebook has created itself.

Let’s look at the main facility, in Santa Clara, and then show how it interacts with its siblings.
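
As a thumbnail of what a “social graph” looks like as a data structure (purely illustrative, not Facebook’s implementation), think of typed nodes joined by labeled edges:

```python
# Purely illustrative -- not Facebook's implementation. A "social graph" is just typed
# nodes (users, events, photos) joined by labeled edges, plus metadata describing both.
social_graph = {
    ("user", "alice"):    [("friend_of", ("user", "bob")), ("attended", ("event", "reunion"))],
    ("user", "bob"):      [("friend_of", ("user", "alice")), ("tagged_in", ("photo", "img_042"))],
    ("event", "reunion"): [("has_photo", ("photo", "img_042"))],
    ("photo", "img_042"): [("taken_at", ("event", "reunion"))],
}

def neighbors(node):
    """Return the labeled edges leaving a node -- the raw material of the social graph."""
    return social_graph.get(node, [])

for label, target in neighbors(("user", "alice")):
    print(f"alice --{label}--> {target}")
```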

Read the whole story here… and check out Facebook’s current Open Source project pages too.

Forget vendor lock-in: Carrier operations support systems (OSS) and business support systems (BSS) are going open source. And so are many of the other parts of the software stack that drive the end-to-end services within and between carrier networks.

That’s the message from TM Forum Live, one of the most important conferences for the telecommunications carrier industry.

Held in Nice, France, from May 9-12, 2016, TM Forum Live is produced by TM Forum, a key organization in the carrier universe.

TM Forum works closely with other industry groups, like the MEF, OpenDaylight and OPNFV. I am impressed by how so many open-source projects, standards-defining bodies and vendor consortia are collaborating at a very detailed level to improve interoperability at many, many levels. The key to making that work: Open source.

You can read more about open source and collaboration between these organizations in my NetworkWorld column, “Open source networking: The time is now.”

While I’m talking about TM Forum Live, let me give a public shout-out to:

Pipeline Magazine – this is the best publication, bar none, for the OSS, BSS, digital transformation and telecommunications service provider space. At TM Forum Live, I attended their annual Innovation Awards, which is the best-prepared, best-vetted awards program I’ve ever seen.

Netcracker Technology — arguably the top vendor in providing software tools for telecommunications and cable companies. They are leading the charge for the agile reinvention of a traditionally slow-moving industry. I’d like to thank them for hosting a delicious press-and-analyst dinner at the historic Hotel Negresco – wow.

Looking forward to next year’s TM Forum Live, May 15-18, 2017.

Have you done your backups lately? If not… now is the time, thanks to ransomware. Ransomware is a huge problem that’s causing real harm to businesses and individuals. Technology service providers are gearing up to fight these cyberattacks – and that’s coming none too soon.

In March 2016, Methodist Hospital reported that it was operating in an internal state of emergency after a ransomware attack encrypted files on its file servers. The data on those servers was inaccessible to the Kentucky-based hospital’s doctors and administrators unless the hackers received about $1,600 in Bitcoins.

A month earlier, a hospital in Los Angeles paid about $17,000 in ransom money to recover its data after a similar hack attack. According to the CEO of Hollywood Presbyterian Medical Center, Allen Stefanek, “The quickest and most efficient way to restore our systems and administrative functions was to pay the ransom and obtain the decryption key.”

As far as we know, no lives have been lost due to ransomware. Even so, the attacks keep coming – and consumers and businesses are often left with no choice but to pay the ransom, usually in untraceable Bitcoins.

The culprit in many of the attacks — but not all of them — is a sophisticated trojan called Locky. First appearing in early 2016, Locky is described by Avast as using top-class features, “such as a domain generation algorithm, custom encrypted communication, TOR/BitCoin payment, strong RSA-2048+AES-128 file encryption and can encrypt over 160 different file types, including virtual disks, source codes and databases.” Multiple versions of Locky are on the Internet today, which makes fighting it particularly frustrating. Another virulent ransomware trojan is CryptoLocker, which works in a similar way.

Ransomware is a type of cyberattack where bad actors gain access to a system, such as a consumer’s desktop or a corporate server. The attack vector might be a piece of malware attached to an email, a corrupted website that runs a script to install the malware, or a document containing a malicious macro that downloads it. In most ransomware attacks, the malware encrypts the user’s data and then demands an untraceable ransom in order to either decrypt the data or provide the user with a key to decrypt it. Because the data is encrypted, even removing the malware from the computer will not restore system functionality; typically, the victim has to restore the entire system from a backup or pay the ransom and hope for the best.

As cyberattacks go, ransomware has proven to be extremely effective at both frustrating users and obtaining ransom money for the attackers. Beyond the ransom demands, of course, there are other concerns. Once the malware has access to the user or server data… what’s to prevent it from scanning for passwords, bank account information, or other types of sensitive intellectual property? Or deleting files so that they can’t be retrieved? Nothing. Nothing at all. And even if you pay the ransom, there’s no guarantee that you’ll get your files back. The only true solution to ransomware is prevention.

Read about how to prevent ransomware in my essay for Upgrade Magazine, “What we can do about ransomware – today and tomorrow.” And do your backups!
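
Since backups are the real defense, here’s a minimal sketch of the habit, assuming hypothetical source and destination paths: take timestamped archives and keep them somewhere a compromised machine can’t silently overwrite, such as an offline disk or versioned object storage.

```python
# Minimal sketch, not a backup product: write a timestamped archive of a directory to a
# destination the original machine cannot silently overwrite (offline disk, versioned
# object storage, a pull-based backup server). Paths here are hypothetical placeholders.
import shutil
import time
from pathlib import Path

SOURCE = Path.home() / "documents"    # hypothetical data worth protecting
DEST = Path("/mnt/backup")            # hypothetical backup volume

def backup():
    stamp = time.strftime("%Y%m%d-%H%M%S")
    archive = shutil.make_archive(str(DEST / f"documents-{stamp}"), "gztar", root_dir=str(SOURCE))
    print(f"Wrote {archive}; keep several generations and test your restores regularly.")

if __name__ == "__main__":
    backup()
```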

Get used to new names. Instead of Dell the computer company, think Dell Technologies. Instead of EMC, think Dell EMC. So far, it seems that VMware won’t be renamed Dell VMware, but one never can tell. (They’ve come a long way since PC’s Limited.)

What’s in a name? Not much. What’s in an acquisition of this magnitude (US$67 billion)? In this case, lots of synergies.

Within the Dell corporate structure, EMC will find a stable, predictable management team. Michael Dell is a thoughtful leader, and is unlikely to do anything stupid with EMC’s technology, products, branding and customer base. The industry shouldn’t expect bet-the-business moonshots. Satisfied customers should expect more-of-the-same, but with Dell’s deep pockets to fuel innovation.

Dell’s private ownership is another asset. Without the distraction of stock prices and quarterly reporting, managers don’t have to worry about beating the Street. They can focus on beating competitors.

EMC and Dell partnered to develop technology and products starting in 2001. While the partnership dissolved in 2011, the synergies remained… and now will be locked in, obviously, by the acquisition. That means new products for physical data centers, the cloud, and hybrid environments, all boosted by Dell. Similarly, there are tons of professional services opportunities, and the Dell relationship will only expand them.

Nearly everyone will be a winner…. Everyone, that is, except for Dell/EMC’s biggest competitors, like HPE and IBM. They must be quaking in their boots.

The Panama Papers should be a wake-up call to every CEO, COO, CTO and CIO in every company.

Yes, it’s good that alleged malfeasance by governments and big institutions came to light. However, it’s also clear that many companies simply take for granted that their confidential information will remain confidential. This includes data that’s shared within the company, as well as information that’s shared with trusted external partners, such as law firms, financial advisors and consultants. We’re talking everything from instant messages to emails, from documents to databases, from passwords to billing records.

Clients of Mossack Fonseca, the hacked Panamanian law firm, erroneously thought their documents were well protected. How well protected are the documents and IP held by your company’s law firms and other partners? It’s a good question, and shadow IT makes the problem worse. Much worse.

Read why in my column in NetworkWorld: Fight corporate data loss with secure, easy-to-use collaboration tools.

Barcelona, Mobile World Congress 2016—IoT success isn’t about device features, like long-life batteries, factory-floor sensors and snazzy designer wristbands. The real power, the real value, of the Internet of Things is in the data being transmitted from devices to remote servers, and from those remote servers back to the devices.

“Is it secret? Is it safe?” Gandalf asks Frodo in the “Lord of the Rings” movies about the seductive One Ring to Rule Them All. He knows that the One Ring is the ultimate IoT wearable: Sure, the wearer is uniquely invisible, but he’s also vulnerable because the ring’s communications can be tracked and hijacked by the malicious Nazgûl and their nation-state sponsor of terrorism.

Wearables, sensors, batteries, cool apps, great wristbands. Sure, those are necessary for IoT success, but the real trick is to provision reliable, secure and private communications that Black Riders and hordes of nasty Orcs can’t intercept. Read all about it in my NetworkWorld column, “We need secure network infrastructure – not shiny rings – to keep data safe.”

Software-defined networks and Network Functions Virtualization will redefine enterprise computing and change the dynamics of the cloud. Data thefts and professional hacks will grow, and development teams will shift their focus from adding new features to hardening against attacks. Those are two of my predictions for 2015.

Big Security: As 2014 came to a close, huge credit-card breaches from retailers like Target faded into the background. Why? The Sony Pictures hack, and the release of an incredible amount of corporate data, made us ask a bigger question: “What is all that information doing on the network anyway?” Attackers took off with Sony Pictures’ spreadsheets about executive salaries, confidential e-mails about actors and actresses, and much, much more.

What information could determined, professional hackers make off with from your own company? If it’s on the network, if it’s on a server, then it could be stolen. And if hackers can gain access to your cloud systems (perhaps through social engineering, perhaps by exploiting bugs), then it’s game over. From pre-released movies and music albums by artists like Madonna, to sensitive healthcare data and credit-card numbers, if it’s on a network, it’s fair game.

No matter where you turn, vulnerabilities are everywhere. Apple patched a hole in its Network Time Protocol implementation. Who’d have thought attackers would use NTP? GitHub has new security flaws. ICANN has scary security flaws. Microsoft released flawed updates. Inexpensive Android phones and tablets are found to have backdoor malware baked right into the devices. I believe that 2015 will demonstrate that attackers can go anywhere and steal anything.

That’s why I think that savvy development organizations will focus on reviewing their new code and existing applications, prioritizing security over adding new functionality. It’s not fun, but it’s 100% necessary.

Big Cloud: Software-defined networking and Network Functions Virtualization are reinventing the network. The fuzzy line between intranet and Internet is getting fuzzier. Cloud Ethernet is linking the data center directly to the cloud. The network edge and core are indistinguishable. SDN and NFV are pushing functions like caching, encryption, load balancing and firewalls into the cloud, improving efficiency and enhancing the user experience.

In the next year, mainstream enterprise developers will begin writing (and rewriting) back-end applications to specifically target and leverage SDN/NFV-based networks. The question of whether the application is going to run on-premises or in the cloud will cease to be relevant. In addition, as cloud providers become more standards-based and interoperable, enterprises will gain more confidence in that model of computing. Get used to cloud APIs; they are the future.

Looking to boost your job skills? Learn about SDN and NFV. Want to bolster your development team’s efforts? Study your corporate networking infrastructure, and tailor your efforts to match the long-term IT plans. And put security first—both in your development environments and in your deployed applications.

Big Goodbye: The tech media world is constantly changing, and not always for the better. The biggest change is the sunsetting of Dr. Dobb’s Journal, a website for serious programmers, and an enthusiastic bridge between the worlds of computer science and enterprise computing. After 38 years in print and online, DDJ is winding down: the website will continue, but no new articles or content will be commissioned or published.

DDJ was the greatest programming magazine ever. There’s a lot that can be said about its sad demise, and I will refer you to two people who are quite eloquent on the subject: Andrew Binstock, the editor of DDJ, and Larry O’Brien, SD Times columnist and former editor of Software Development Magazine, which was folded into DDJ a long time ago.

Speaking as a long-time reader—and as one of the founding judges of DDJ’s Jolt Awards—I can assure you that Dr. Dobb’s will be missed.

For development teams, cloud computing is enthralling. Where’s the best place for distributed developers, telecommuters and contractors to reach the code repository? In the cloud. Where do you want the high-performance build servers? At a cloud host, where you can commandeer CPU resources as needed. Storing artifacts? Use cheap cloud storage. Hosting the test harness? The cloud has tremendous resources. Load testing? The cloud scales. Management of beta sites? Cloud. Distribution of finished builds? Cloud. Access to libraries and other tools? Other than the primary IDE itself, cloud. (I’m not a fan of working in a browser, sorry.)

Sure, a one-person dev team can store an entire software development environment on a huge workstation or a convenient laptop. Sure, a corporation or government that has exceptional concerns or extraordinary requirements may choose to host its own servers and tools. In most cases, however, there are undeniable benefits to cloud-oriented development, and if developers aren’t there today, they will be soon. My expectation is that new projects and teams will launch on the cloud. Existing projects and teams will remain on their current dev platforms (and on-prem) until there’s a good reason to make the switch.

The economics are unassailable, the convenience is unparalleled, and both performance and scalability can’t be matched by in-house code repositories. Security in the cloud may also outmatch most organizations’ internal software development servers too.

We have read horror stories about the theft of millions of credit cards and other personal data, medical data, business documents, government diplomatic files, e-mails and so-on. It’s all terrible and unlikely to stop, as the recent hacking of Sony Pictures demonstrates.

What we haven’t heard about, through all these hacks, is the broad theft of source code, and certainly not thefts from hosted development environments. Such hacks would be bad, not only because proprietary source code contains trade secrets, but also because the source can be reverse-engineered to reveal attack vulnerabilities. (Open-source projects also can be reverse-engineered, of course, but that is expected and in fact encouraged.)

Even worse than the reverse-engineering of stolen source code would be unauthorized and undetected modifications to a codebase. Can you imagine if hackers could infiltrate an e-commerce system’s hosted code and inject a back door or keylogger? You get the idea.

I am not implying that cloud-based software development systems are more secure than on-premises systems. I am also not implying the inverse. My instinct is to suggest that hosted cloud dev systems are as safe, or safer, than internal data center systems. However, there’s truly no way to know.

A recent report from the analyst firm Technology Business Research takes this stance, arguing that security for cloud-based services will end up being better than security at local servers and data centers. While not speaking specifically to software development, the report concluded, “Security remains the driving force behind cloud vendor adoption, while the emerging trends of hybrid IT and analytics, and the associated security complications they bring to the table, foreshadow steady and growing demand for cloud professional services over the next few years.”

Let me close by drawing your attention to a competition geared toward startups innovating in the cloud. The Clouded Leopard’s Den is for young companies looking for Series A or Series B/C funding, and it offers tools and resources to help them grow, attract publicity, and possibly even find new funding. If you work at a cloud startup, check it out!


Cloud-based storage is amazing. Simply amazing. That’s especially true when you are talking about data from end users who access your applications via the public Internet.

If you store data in your local data center, you have the best control over it. You can place it close to your application servers. You can amortize it as a long-term asset. You can see it, touch it and secure it—or at least, have full control over security.

There are downsides, of course, to maintaining your own on-site data storage. You have to back it up. You have to plan for disasters. You have to anticipate future capacity requirements through budgeting and advance purchases. You have to pay for the data center itself, including real estate, electricity, heating, cooling, racks and other infrastructure. Operationally you have to pipe that data to and from your remote end users through your own connections to the Internet or to cloud application servers.

By contrast, cloud storage is very appealing. You pay only for what you use. You can hold service providers to service-level guarantees. You can pay the cloud provider to replicate the storage in various locations, so customers and end-users are closer to their data. You can pay for security, for backups, for disaster recovery provisions. And if you find that performance isn’t sufficient, you can migrate to another provider or order up a faster pipe. That’s a lot easier, cheaper and faster than ripping-and-replacing outdated storage racks in your own data center.

Gotta say, if I were setting up a new application for use by off-site users (whether customers or employees), I’d lean toward cloud storage. In most cases, the costs are comparable, and the operational convenience can’t be beat.

Plus, if you are at a startup, a monthly storage bill is easier to work with than a large initial outlay for on-site storage infrastructure.

Case closed? No, not exactly. On-site still has some tricks up its sleeve. If your application servers are on-site, local storage is faster to access. If your users are within your own building or campus, you can keep everything within your local area network.

There also may be legal advantages to maintaining and using on-site storage. For compliance purposes, you know exactly where the data is at all times. You can set up your own intrusion detection systems and access logs, rather than relying upon the access controls offered by the cloud provider. (If your firm isn’t good at security, of course, you may want to trust the cloud provider over your own IT department.)

On that subject: Lawsuits. In her story, “Eek! Lawyers are Coming After Your Fitbit!,” Sharon Fisher writes about insurance attorneys issuing subpoenas against a client’s FitBit data to show that she wasn’t truly as injured as she claimed. The issue here isn’t only about wearables or healthcare. It’s also about access. “Will legal firms be able to subpoena your cloud provider if that’s where your fitness data is stored? How much are they going to fight to protect you?” Fisher asks.

Say a hostile attorney wants to subpoena some of your data. If the storage is in your own data center, the subpoena comes to your company, where your own legal staff can advise whether to respond by complying or fighting the subpoena.

Yet if the data is stored in the cloud, attorneys or government officials could come after you, or they could try to get access by serving a subpoena on the cloud service provider. Of course, encryption might prevent the cloud provider from complying. Still, this is a new concern, especially given the broad subpoena powers granted to prosecutors, litigating attorneys and government agencies.
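To make that encryption point concrete, here is a minimal sketch of client-side encryption in Java: the key stays in your own key store, so the provider only ever holds ciphertext it cannot hand over in readable form. The class name and key handling are illustrative assumptions, not any particular vendor’s API.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;

public class ClientSideEncryption {
    // Encrypt a document locally before handing it to any cloud storage API.
    // The key never leaves your control; the provider sees only opaque bytes.
    public static byte[] encrypt(byte[] plaintext, SecretKey key) throws Exception {
        byte[] iv = new byte[12];                      // 96-bit nonce, recommended for GCM
        new SecureRandom().nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ciphertext = cipher.doFinal(plaintext);
        // Prepend the IV so the blob is self-contained for later decryption.
        byte[] out = new byte[iv.length + ciphertext.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ciphertext, 0, out, iv.length, ciphertext.length);
        return out;
    }

    public static void main(String[] args) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256);
        SecretKey key = kg.generateKey();              // in practice, load from your own key store or HSM
        byte[] blob = encrypt("sensitive customer record".getBytes(StandardCharsets.UTF_8), key);
        System.out.println("Uploading " + blob.length + " opaque bytes to the cloud provider...");
    }
}
```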

It’s something to talk to your corporate counsel about. Bring your legal eagles into the conversation.

Washington, D.C. — “It’s not time to regulate and control and tax the Internet.” Those are words of wisdom about Net Neutrality from Dr. Robert Metcalfe, inventor of Ethernet, speaking here at MEF GEN14, the annual conference of the Metro Ethernet Forum.

Bob Metcalfe is a legend. Not only for his role in inventing Ethernet and founding 3Com, but also now for his role as a professor of innovation at the University of Texas at Austin. (Disclosure: Bob is also a personal friend and former colleague.)

At MEF GEN14, Bob gave a keynote, chaired a panel on innovation, and was behind the microphone on several other occasions. I’m going to share some of his comments and observations.

  • Why didn’t WiFi appear earlier? According to Bob, radio links were part of the original work on Ethernet, but the radios themselves were too slow, too large, and required too much electricity. “It was Moore’s Law,” he explained, saying that chips and circuits needed to evolve in order to make radio-based Ethernet viable.
  • Interoperability is key for innovation. Bob believes that in order to have strong competitive markets, you need to have frameworks for compatibility, such as standards organizations and common protocols. This helps startups and established players compete by creating faster, better and cheaper implementations, and also creating new differentiated value-added features on top of those standards. “The context must be interoperability,” he insisted.
  • Implicit with interoperability is that innovation must respect backward compatibility. Whether in consumer or enterprise computing, customers and markets do not like to throw away their prior investments. “I have learned about efficacy of FOCACA: Freedom of Choice Among Competing Alternatives. That’s the lesson,” Bob said, citing Ethernet protocols but also pointing at all layers of the protocol stack.
  • There is a new Internet coming: the Gigabit Internet. “We started with the Kilobit Internet, where the killer apps were remote login and tty,” Bob explained. Technology and carriers then moved to today’s ubiquitous Megabit Internet, “where we got the World Wide Web and social media.” The next step is the Gigabit Internet. “What will the killer app be for the Gigabit Internet? Nobody knows.”
  • With the Internet of Things, is Moore’s Law going to continue? Bob sees the IoT being constrained by hardware, especially microprocessors. He pointed out that as semiconductor feature sizes have gone down to the 14nm scale, the costs of building fabrication plants have grown to billions of dollars. While chip features shrink, the industry has also moved to consolidation, larger wafers, 3D packaging, and much lower power consumption—all of which are needed to make cheap chips for IoT devices. There is a lot of innovation in the semiconductor market, Bob said, “but with devices counted in the trillions, the bottleneck is how long it takes to design and build the chips!”
  • With Net Neutrality, the U.S. Federal Communications Commission should keep out. “The FCC is being asked to invade this party,” Bob said. “The FCC used to run the Internet. Do you remember that everyone had to use acoustic couplers because it was too dangerous to connect customer equipment to the phone network directly?” He insists that big players—he named Google—are playing with fire by lobbying for Net Neutrality. “Inviting the government to come in and regulate the Internet. Where could it go? Not in the way of innovation!” he insisted.

Malicious agents can crash a website by launching a DDoS—a Distributed Denial of Service attack—against a server. So can sloppy programmers.

Take, for example, the National Weather Service’s website, operated by the United States National Oceanic and Atmospheric Administration, or NOAA. On August 29, the service went down, hard, as a single rogue Android app overwhelmed NOAA’s servers.

As far as anyone knows, there was nothing deliberately malicious about the Android app, and of course there is nothing specific to Android in this situation. However, the app in question was making service requests of the NOAA server’s public APIs every few milliseconds. With hundreds, thousands or tens of thousands of instances of that app running simultaneously, the NOAA system collapsed.

There is plenty of blame to go around. Let’s start with the app developer.

Certainly the app developer was sloppy, sloppy, sloppy. I can imagine that the app worked great in testing, when only one or two instances of the app were running at any one time on a simulator or on actual devices. Scale it up—boom! This is a case where manual code reviews may have found the problem. Maybe not.

Alternatively, the app developer could have checked to see if the public APIs it required (such as NOAA’s weather API) could handle the anticipated load. However, if the coders didn’t write the software correctly, load testing may not have sufficed. For example, say that the design of the app was to pull data every 10 seconds. If the programmers accidentally set up the data retrieval to pull the data every 10 milliseconds, the load would be 1,000x greater than anticipated. Every 10 seconds, no problem. Every 10 milliseconds, big problem. Boom!
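Nobody outside the project knows what the offending code actually looked like, but as a purely hypothetical illustration in Java, a single wrong time-unit constant in a polling scheduler is all it takes to turn a 10-second design into a 10-millisecond hammer, and it still looks fine on one test device:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ForecastPoller {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        Runnable fetchForecast = () -> {
            // call the public weather API here
        };

        // Design intent: one request every 10 seconds per device.
        scheduler.scheduleAtFixedRate(fetchForecast, 0, 10, TimeUnit.SECONDS);

        // The hypothetical bug: same "10", wrong unit. That's 1,000x the intended load,
        // multiplied by every installed copy of the app.
        // scheduler.scheduleAtFixedRate(fetchForecast, 0, 10, TimeUnit.MILLISECONDS);
    }
}
```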

This is a nasty bug, to be sure. Compilers, libraries and test systems would all verify that the software ran correctly, because it did run correctly. In the scenario I’ve painted, it simply wasn’t coded to meet the design. The bug might have been spotted if someone noticed a very high number of external API calls, or again, perhaps during a manual code review. Otherwise, it’s not hard to see how it would slip through the cracks.

Let’s talk about NOAA now. In 2004, the weather service beefed up its Internet capacity in anticipation of Hurricane Charley, contracting with Akamai to host some of its busiest Web pages, using distributed edge caching to reduce the load. This worked well, and Akamai continued to work with NOAA. It’s unclear whether Akamai also fronted public API calls; my guess is that those were passed straight through to the National Weather Service servers.

NOAA’s biggest problem is that it has little control over external applications that use its public APIs. Even so, Akamai was still in the circuit and, fortunately, was able to help with the response to the Aug. 29 accidental DDoS situation. At that time, the National Weather Service put out a bulletin on its NIDS messaging service that said:

TO – ALL CUSTOMERS SUBJECT – POINT FORECAST ISSUES. WE ARE PROVIDING NOTICE TO ALL THAT NIDS HAS IDENTIFIED AN ABUSING ANDROID APP THAT IS IMPACTING FORECAST.WEATHER.GOV. WE HAVE FORCED ALL SITES TO ZONES WHILE WE WORK WITH THE DEVELOPER. AKAMAI IS BEING ENGAGED TO BLOCK THE APPLICATION. WE CONTINUE TO WORK ON THIS ISSUE AND APPRECIATE YOUR PATIENCE AS WE WORK TO RESOLVE THIS ISSUE.

Kudos to NOAA for responding quickly and transparently to this issue. Still, this appalling situation—that a single misbehaving app could cripple such a vital service—is unacceptable. Imagine if this had been a malicious attack, rather than an accidental coding error, and if the attacker were able to modify the attack in real time to get around Akamai’s attempts to block the traffic.

What could NOAA have done differently? For best results, DDoS attacks must be blocked within the network before they reach (and overwhelm) the server. Therefore, DDoS detection and blocking systems should already have been in place.

For example, such systems can detect potential attacks by flagging abnormally high volumes of requests from a specific app, raise alarms, and drop those requests (which is fast and takes few resources) instead of servicing them (which is slow and takes more resources). Perfect? No. DDoS scenarios are nasty and messy. No matter how you slice it, though, a single misbehaving app should never be able to crash your server.
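One common building block for that kind of defense (and I am not suggesting this is what NOAA or Akamai actually deployed) is a per-client token bucket sitting in front of the API: checking it is cheap, and over-limit requests get dropped rather than serviced. A minimal Java sketch, with made-up capacity numbers:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Minimal per-client token bucket: refills at a fixed rate, drops traffic when empty. */
public class RateLimiter {
    private static final double CAPACITY = 20.0;        // burst allowance per client (illustrative)
    private static final double REFILL_PER_SEC = 2.0;   // sustained requests/second per client (illustrative)

    private static final class Bucket {
        double tokens = CAPACITY;
        long lastRefillNanos = System.nanoTime();
    }

    private final Map<String, Bucket> buckets = new ConcurrentHashMap<>();

    /** Returns true if the request should be serviced, false if it should be dropped (e.g., HTTP 429). */
    public boolean allow(String clientKey) {             // clientKey: API key, app ID, or source IP
        Bucket b = buckets.computeIfAbsent(clientKey, k -> new Bucket());
        synchronized (b) {
            long now = System.nanoTime();
            double elapsedSec = (now - b.lastRefillNanos) / 1_000_000_000.0;
            b.tokens = Math.min(CAPACITY, b.tokens + elapsedSec * REFILL_PER_SEC);
            b.lastRefillNanos = now;
            if (b.tokens >= 1.0) {
                b.tokens -= 1.0;
                return true;                              // service the request
            }
            return false;                                 // drop it; the caller can also raise an alarm
        }
    }
}
```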

HTML browser virtualization, not APIs, may be the best way to mobilize existing enterprise applications like SAP ERP, Oracle E-Business Suite or Microsoft Dynamics.

At least, that’s the perspective of Capriza, a company offering a SaaS-based mobility platform that uses a cloud-based secure virtualized browser to screen-scrape data and context from the enterprise application’s Web interface. That data is then sent to a mobile device (like a phone or tablet), where it’s rendered and presented through Capriza’s app.

The process is bidirectional: New transactional data can be entered into the phone’s Capriza app, which transmits it to the cloud-based platform. The Capriza cloud, in turn, opens up a secure virtual browser session with the enterprise software and performs the transaction.

The Capriza platform, which I saw demonstrated last week, is designed for employees to access enterprise applications from their Android or Apple phones, or from tablets.

The platform isn’t cheap – it’s licensed on a per-seat, per-enterprise-application basis, and you can expect a five-digit or six-digit annual cost, at the least. However, Capriza is solving a pesky problem.

Think about the mainstream way to deploy a mobile application that accesses big enterprise back-end platforms. Of course, if the enterprise software vendor offers a mobile app, and if that app meets your needs, that’s the way to go. What if the vendor doesn’t have a mobile app – or if the software is homegrown? The traditional approach would be to open up some APIs allowing custom mobile apps to access the back-end systems.

That approach is fraught with peril. It takes a long time. It’s expensive. It could destabilize the platform. It’s hard to ensure security, and often it’s a challenge to synchronize API access policies with client/server or browser-based access policies and ACLs. Even if you can license the APIs from an enterprise software vendor, how comfortable are you exposing them over the public Internet — or even through a VPN?

That’s why I like the Capriza approach of using a virtual browser to access the existing Web-based interface. In theory (and probably in practice), the enterprise software doesn’t have to be touched at all. Since the Capriza SaaS platform has each mobile user log into the enterprise software using the user’s existing Web interface credentials, there should be no security policies and ACLs to replicate or synchronize.

In fact, you can think of Capriza as an intentional man-in-the-middle for mobile users, translating mobile transactions to and from Web transactions on the fly, in real time.
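Capriza’s platform is proprietary, so the code below is not its implementation; it’s a rough Java sketch of the general pattern using Selenium WebDriver, with a hypothetical URL, field names and workflow, showing how a server-side “virtual browser” can log in with the user’s existing credentials and drive the existing Web UI on behalf of a single mobile request:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeOptions;

public class VirtualBrowserBridge {
    /** Relays one mobile "approve purchase order" tap into the enterprise app's existing Web UI. */
    public static String approvePurchaseOrder(String user, String password, String poNumber) {
        ChromeOptions options = new ChromeOptions();
        options.addArguments("--headless=new");               // the "virtual browser" runs server-side
        WebDriver browser = new ChromeDriver(options);
        try {
            // Log in with the user's existing credentials, so no new ACLs need to be replicated.
            browser.get("https://erp.example.internal/login"); // hypothetical URL
            browser.findElement(By.name("username")).sendKeys(user);
            browser.findElement(By.name("password")).sendKeys(password);
            browser.findElement(By.id("loginButton")).click();

            // Drive the same pages a desktop user would click through.
            browser.get("https://erp.example.internal/po/" + poNumber);
            browser.findElement(By.id("approveButton")).click();

            // Scrape the confirmation text and send it back to the mobile app.
            return browser.findElement(By.id("statusMessage")).getText();
        } finally {
            browser.quit();
        }
    }
}
```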

As the company explains it, “Capriza helps companies leverage their multi-million dollar investments in existing enterprise software and leapfrog into the modern mobile era. Rather than recreate the wheel trying to make each enterprise application run on a mobile device, Capriza breaks complex, über business processes into mini ones. Its approach bypasses the myriad of tools, SDKs, coding, integration and APIs required in traditional mobile app development approaches, avoiding the perpetual cost and time requirements, risk and questionable ROI.”

It certainly looks like Capriza wins this week’s game of Buzzword Bingo. Despite the marketing jargon, however, the technology is sound, and Capriza has real customers—and has recently landed a US$27 million investment. That means we’re going to see a lot more of this solution.

Can Capriza do it all? Well, no. It works best on plain vanilla Web sites; no Flash, no Java, no embedded apps. While it’s somewhat resilient, changes to an internal Web site can break the screen-scraping technology. And while the design process for new mobile integrations doesn’t require a real programmer, the designer must be very proficient with the enterprise application, and model all the pathways through the software. This can be tricky to design and test.

Plus, of course, you have to be comfortable letting a third-party SaaS platform act as the man-in-the-middle to your business’s most sensitive applications.

Bottom line: If you are mobilizing enterprise software — either commercial or home-grown — that allows browser access, Capriza offers a solution worth considering.

Neil Sedaka insists that breakin’ up is hard to do. Will that apply to the planned split of Hewlett-Packard into two companies? Let’s be clear: This split is a wonderful idea, and it’s long overdue.

Once upon a time, HP was in three businesses: Electronics test equipment (like gas spectrometers); expensive, high-margin data center products and services (like minicomputers and consulting); and cheap, low-margin commodity tech products (like laptops, small business routers and ink-jet printers).

HP spun off the legacy test-equipment business in 1999 (forming Agilent Technologies) and that was a win-win for both Agilent and for the somewhat-more-focused remainder of HP. Now it’s time to do it again.

There are precious few synergies between the enterprise side of HP and the commodity side. The enterprise side has everything that a big business would want, from high-end hyperscale servers to Big Data, Software Defined Networks, massive storage arrays, e-commerce security, and oh, lots of consulting services.

Over the past few years, HP has been on an acquisitions binge to support its enterprise portfolio, helping make it more competitive against arch-rival IBM. The company has snapped up ArcSight and Fortify Software (software security); Electronic Data Systems (IT services and consulting); 3PAR (storage); Vertica Systems (database analytics); Shunra (network virtualization); Eucalyptus (private and hybrid cloud); Stratavia/ExtraQuest (data center automation); and of course, the absurdly overpriced Autonomy (data management).

Those high-touch, high-cost, high-margin enterprise products and services have little synergy with, say, the HP Deskjet 1010 Color Printer, available for US$29.99 at Staples. Sure, there’s money in printers, toner and ink, monitors, laptops and so on. But that’s a very different market, with a race-to-the-bottom drive for market share, horrible margins, crazy supply chain and little to differentiate one Windows-based product from another.

Analysts and investors have been calling for the breakup of HP for years; the company refused, saying that the unified company benefited from economies of scale. It’s good that CEO Meg Whitman has acknowledged what everyone knew: HP is sick, and this breakup into Hewlett-Packard Enterprise and HP Inc. is absolutely necessary.

Is breaking up hard to do? For most companies it’s a challenge at the best of times, but this one should be relatively painless. First of all, HP has split up before, so at least there’s some practice. Second, these businesses are so different that it should be obvious where most of HP’s employees, products, customer relationships, partner relationships and intellectual property will end up.

That’s not to say it’s going to be easy. However, it’s at least feasible.

Both organizations will be attractive takeover targets, that’s for sure. I give it a 50/50 chance that within five years, IBM or Oracle will make a play for Hewlett-Packard Enterprise, or it will combine with a mid-tier player like VMware or EMC.

The high-volume, low-margin HP Inc. will have trouble surviving on its own, because that is an area where scale helps drive down costs and helps manage the supply chain and retail channels. I could see HP Inc. being acquired by Dell or Lenovo, or even by a deep-pocket Internet retailer like Amazon.com.

This breakup is necessary and may be the salvation of Hewlett-Packard’s enterprise business. It may also be the beginning of the end for the most storied company in Silicon Valley.

You’ve gotta read “Data Divination: Big Data Strategies,” Pam Baker’s new book about Big Data.

Actually, let me qualify my recommendation. If you are a techie looking for suggestions on how to configure your Hadoop installation or optimize the storage throughput in your NAS array, this isn’t the book for you. Rather, this is the book for your business-side manager or partner, who is looking to understand not only what Big Data is, but how to actually apply data analysis to business problems.

One of the challenges with Big Data is simply understanding it. The phrase is extremely broad and quite nebulous. Yet behind the overhyping of Big Data, there are genuine use cases that demonstrate that looking at your business’ data in a new way can transform your business. It is real, and it is true.

Baker is the editor of the “Fierce Big Data” website. She deconstructs the concept by dispensing with the jargon and with the, well, overly smug Big Data worship that one finds in a lot of the literature and in vendor marketing. With a breezy style that reflects her background as a technology journalist, Baker uses clear examples and lots of interviews to make her points.

What will you learn? To start with, “Data Divination” teaches you how to ask good questions. After all, if you don’t ask, you won’t learn anything from all that data and all those reports. Whether it’s predictive analytics or trend spotting or real-time analysis, she helps you understand which data is valuable and which isn’t. That’s why this book is best for the executive and business-side managers, who are the ultimate beneficiaries of your enterprise’s Big Data investments.

This book goes beyond other books on the subject, which could generally be summarized either as too fluffy and cheerleading, or as myopically focused on implementation details of specific Big Data architectures. For example, there is a lengthy chapter on the privacy implications of data gathering and data analysis, the sort of chapter that a journalist would write, but an engineer wouldn’t even think about.

Once you’ve finished with the basics, Baker jumps into several fascinating use cases: in healthcare, in the security industry, in government and law enforcement, in small business, in agriculture, in transportation, in energy, in retail, in manufacturing, and so on. Those are the most interesting parts of the book, and each use case has takeaways that could apply to any industry. Baker is to be commended for digging into the noteworthy challenges that Big Data attempts to help businesses overcome.

It’s a good book. Read it. And tell your business partner, CIO or even CEO to read it too.

Cloud-based development tools are great. Until they don’t work.

I don’t know if you were affected by Microsoft’s Azure service outage on Thursday, August 14, 2014. As of my deadline, services had been offline for nearly six hours. On its status page, Microsoft was reporting:

Visual Studio Online – Multi-Region – Full Service Interruption

Starting 22:45 13 Aug, 2014 UTC, Visual Studio Online customers may have experienced issues with latency and extended Execution times. The initial incident mitigated at approximately 14:00 UTC. During investigation at 13:52 14 Aug, 2014 UTC, engineering teams began receiving alerts for a separate issue where customers were unable to log in to their Visual Studio Online services. From 13:52 to 19:45 on 14 Aug, 2014 UTC, customers were unable to access their Visual Studio Online resources. Engineering teams have validated their mitigation efforts for both issues and have confirmed that full service has been restored to our Visual Studio Online users. These incidents are now mitigated.

My goal here isn’t to throw Microsoft under the bus. Azure has been quite stable, and other cloud providers, including Amazon, Apple and Google, have seen similar problems. Actually, Amazon in particular has seen a lot of uptime and stability problems with AWS over the past couple of years, though its dashboard on Thursday afternoon showed full service availability.

Let’s think about the broader issue. What’s your contingency plan if your cloud-based services go down, whether it’s one of those players, or a service like GitHub, Salesforce.com, SourceForge, or you-name-it? Do you have backups, in case code or artifacts are lost or corrupted? (Do you have any way to know if data is lost or corrupted?)

This is a worry.
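One cheap piece of a contingency plan is an automated mirror of your hosted repositories on hardware you control. As a sketch of what that might look like, here is a small job using the JGit library; the paths, remote name and credentials are made up, and a nightly cron job running the plain git command line would do the same work:

```java
import org.eclipse.jgit.api.Git;
import org.eclipse.jgit.transport.UsernamePasswordCredentialsProvider;
import java.io.File;

public class RepoMirror {
    public static void main(String[] args) throws Exception {
        // Open a local clone that tracks the hosted service (path is hypothetical).
        try (Git git = Git.open(new File("/backups/our-product.git"))) {
            // Pull the latest history from the hosted provider...
            git.fetch().setRemote("origin").call();
            // ...and push everything to a secondary remote that you control.
            git.push()
               .setRemote("onprem-backup")               // hypothetical remote name
               .setPushAll()                             // all branches
               .setPushTags()
               .setCredentialsProvider(
                   new UsernamePasswordCredentialsProvider("backup-bot", "placeholder-secret"))
               .call();
        }
        System.out.println("Mirror refreshed.");
    }
}
```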

In the case of the August 14 outage, the system wasn’t down for long — but long enough to kill a day’s productivity for many workers. Microsoft’s Visual Studio Online blog has a little bit of insight into the problem, but not much. Posted at 16:56 UTC, Microsoft said:

The actual root cause is still under investigation, but initial investigation is indicating a contention in our core database seems to be causing blocking and performance issues in the services. Our DevOps teams have identified a couple of mitigation steps and currently going thru validations. We will provide an update as soon as we have a mitigation in place. We apologize for the inconvenience and appreciate your patience while working on resolving this issue

This time you can blame Microsoft for any loss of productivity. Next time the service goes down, if you haven’t made contingency plans, the blame is yours.

Where do your employees go to find shared data? If it’s external data, probably an external search engine, like Google (which apparently holds 67.6% of the U.S. market) or Bing (18.7%) or one of the niche players.

What about internal corporate data? If your organization uses a platform like Microsoft’s SharePoint, that platform includes a pretty robust search engine. You can use SharePoint to find documents stored inside the SharePoint database, external documents linked to it, and conversations and informal data hosted by SharePoint. SharePoint’s search contains some elements of FAST, a search product Microsoft acquired in 2008, and some elements of Bing. It’s quite good.

What if you are not a SharePoint shop, or if you are in a shop that hasn’t rolled SharePoint out to every portion of the organization?  You probably don’t have any good way for employees to find structured and unstructured documents, as well as data. You’ve got information in Dropbox. In Box.com. In Lotus Notes, maybe. In private Facebook groups. In Yammer (another Microsoft acquisition, by the way). In Ribose, a neat startup. Any number of places that might be on enterprise servers or cloud services, and I’m not even talking about the myriad code repositories that you may have, from ClearCase to Perforce to Subversion to GitHub.

All of those sources are good. There are reasons to use each of them for document sharing, collaboration and source-code development. That’s the problem. As the classic potato chip advertisements say, you can’t eat just one.

Even in a small company, the number of legitimate sharing platforms can proliferate like weeds. As organizations grow, the potential places to stash information can grow exponentially, especially if there is a culture that allows for end users or line-of-business departments to roll out ad hoc solutions. Add mobile, and the problem explodes.

This is a governance problem: How do you ensure that data is accounted for, check that external sharing solutions are secure, or even detect if information has been stolen or tampered with?

This is a productivity problem: How much time is wasted by employees looking for information?

This is a business problem: How much money is wasted, or how much work must be duplicated or redone because data can’t be found?

This is a Big Data problem: How can you analyze it if you can’t find it?

The answer has to be a smarter intranet portal. In a recent essay by the Nielsen Norman Group, usability experts Patty Caya and Kara Pernice write that “Intranet portals are the hub of the enterprise universe.”

The trick is to discover it, index it, and make it available to authorized users—without stifling productivity. That includes data from applications that your developers are creating and maintaining.
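To make the “discover it, index it” step concrete, here is a toy sketch using Apache Lucene as the index, which is my own assumption, one of many possible engines. In real life the hard part is the connectors that crawl SharePoint, Dropbox, Yammer, code repositories and the rest, plus honoring access controls; those are only stubbed out here:

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.FSDirectory;
import java.nio.file.Paths;

public class IntranetIndexer {
    public static void main(String[] args) throws Exception {
        try (IndexWriter writer = new IndexWriter(
                FSDirectory.open(Paths.get("/data/intranet-index")),      // hypothetical path
                new IndexWriterConfig(new StandardAnalyzer()))) {

            // Each source (SharePoint, Dropbox, Yammer, a Git server...) needs its own
            // connector; here we index a single document that a connector has fetched.
            Document doc = new Document();
            doc.add(new StringField("source", "dropbox", Field.Store.YES));     // where it lives
            doc.add(new StringField("acl", "engineering", Field.Store.YES));    // who may see it
            doc.add(new TextField("title", "Q3 capacity plan", Field.Store.YES));
            doc.add(new TextField("body", "full text extracted from the file...", Field.Store.NO));
            writer.addDocument(doc);
        }
    }
}
```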

There are lots of reasons to use Git as your source-code management system. Whether used as a primary system, or in conjunction with an existing legacy repository, I’m going to argue that if you’re not using Git now, you should be at least testing it out.

Basics of Git: It is open source, and runs on Linux, Unix and Windows servers. It is stable. It is solid. It is fast. It is supported by just about every major tool vendor. Developers love Git. Managers love Git.

Not long ago, much of the world standardized on Concurrent Versions System (CVS) as its version control system. Then Subversion (SVN) came along, and the world standardized on that. Yes, yes, I know there are dozens of other version control systems, ranging from Microsoft’s Visual SourceSafe and Team Foundation Server to IBM Rational’s ClearCase. Those have always been niche products. Some are very successful niche products, but the industry standards have been CVS and SVN for years.

Along came Git, designed by Linus Torvalds in 2005, now headed up by Junio Hamano. For a brief history of Git, read “The Legacy of Linus Torvalds: Linux, Git, and One Giant Flamethrower,” by Robert McMillan, published in Wired in November 2012. For the official history, see the Git website.

What’s so wonderful about Git? I’ll answer in two ways: industry support and impressive functionality.

For industry support, let me refer you to two new articles by SD Times’ Lisa Morgan. Those stories inspired this column. The first is “How to get Git into the enterprise,” and the other is “Git smart about tools: A Buyers Guide.” You’ll see that nearly every major industry player supports Git—even competing SCM systems have worked to ensure interoperability. That’s a heck of an endorsement, and shows the stability and maturity of the platform.

Don’t take my word for it on the impressive functionality. Instead, let me quote from other bloggers.

Tobias Günther: “Work Offline: What if you want to work while you’re on the move? With a centralized VCS like Subversion or CVS, you’re stranded if you’re not connected to the central repository. With Git, almost everything is possible simply on your local machine: make a commit, browse your project’s complete history, merge or create branches… Git lets you decide where and when you want to work.”

Stephen Ball: “Resolving conflicts is way easier (than SVN): In Git, if I have a private branch from a branch that has been updated with new (conflicting) commits, I can rebase its commits one at a time against the public destination branch. I can resolve conflicts as they arise between my code and the current codebase. This makes dealing with conflicts easy because I get the context of the conflict (my commit message) and only see one conflict at a time.

“In SVN if I merge a branch against another and there are a lot of conflicts, there’s nothing I can do but resolve them all at the same time. What a mess.”

Scott Chacon: “There are tons of fantastic and powerful features in Git that help with debugging, complex diffing and merging, and more. There is also a great developer community to tap into and become a part of and a number of really good free resources online to help you learn and use Git…

“I want to share with you the concept that you can think about version control not as a necessary inconvenience that you need to put up with in order to collaborate, but rather as a powerful framework for managing your work separately in contexts, for being able to switch and merge between those contexts quickly and easily, for being able to make decisions late and craft your work without having to pre-plan everything all the time. Git makes all of these things easy and prioritizes them and should change the way you think about how to approach a problem in any of your projects and version control itself.”

Nicola Paolucci:
“If you don’t like speed, being productive and more reliable coding practices, then you shouldn’t use Git.”

Peter Cho: “Most developers would be delighted if they can change their workflow to use Git. Switching over early would be more ideal unless, of course, your SCM relies on a large network of dependent applications. If it’s not viable to change SCM systems, I would highly recommend using it on future projects.

“Git is infamous for having a large suite of tools that even seasoned users need months to master. However, getting into the fundamentals of Git is simple if you’re trying to switch over from SVN or CVS. So give a try sometime.”

Thomas Koch: “Somebody probably already recommended you to switch to Git, because it’s the best VCS. I’d like to go a step further now and talk about the risk you’re taking if you won’t switch soon. By still using SVN (if you’re using CVS you’re doomed anyway), you communicate the following: We’re ignorant about the fact that the rest of the (free) world switched to Git. We don’t invest time to train our developers in new technologies. We don’t care to provide the best development infrastructure. We’re not used to collaborate with external contributors. We’re not aware how much Subversion sucks and that Subversion does not support any decent development process. Yes, our development process most certainly sucks too.”

Günther also wrote, “Go With the Flow: Only dead fish swim with the stream. And sometimes, clever developers do, too. Git is used by more and more well-known companies and Open Source projects: Ruby On Rails, jQuery, Perl, Debian, the Linux Kernel, and many more. A large community often is an advantage by itself because an ecosystem evolves around the system. Lots of tutorials, tools (do I have to mention Tower?) and services make Git even more attractive.”

I’m sure there are arguments against Git. Nearly all the ones I’ve heard have come to me via competing source-code management vendors, not from developers who have actually tried Git for at least one pilot. If you aren’t using Git, check it out. It’s the present and future of version control systems.

South San Francisco, California — Writing software would be oh, so much simpler if we didn’t have all those darned choices. HTML5 or native apps? Windows Server in the data center or Windows Azure in the cloud? Which Linux distro? Java or C#? Continuous Integration? Continuous Delivery? Git or Subversion or both? NoSQL? Which APIs? Node.js? Follow-the-sun?

In a panel discussion on real-world software delivery bottlenecks, “complexity” was suggested as a main challenge. The panel, held here at the SDLC Acceleration Summit, pointed out that the complexity of constantly evaluating new technologies, techniques and choices can bring uncertainty and doubt and consume valuable mental bandwidth—and those might sometimes negate the benefits of staying on the cutting edge. (Pictured: My friend Arthur Hicken, aka “The Code Curmudgeon,” chief evangelist at Parasoft, which sponsored the event.)

I was the moderator. Sitting on the panel were David Intersimone from Embarcadero Technologies; Paul Dhaliwal from 383 Media; Andrew Binstock, editor of Dr. Dobb’s Journal; and Norman Buck from SQS.

Choices are not simple. Merely keeping up with the latest technologies can consume tons of time: not only reading resources like SD Times, but also following your favorite Twitter feeds, reading sites like Stack Overflow, meeting thought leaders at conferences, and, of course, hearing vendor pitches.

While complexity can be overwhelming, the truth is that we can’t opt out. We must keep up with the latest platforms and changes. We must have a mobile strategy. Yes, you can choose to ignore, say, the recent advances in cloud computing, Web APIs and service virtualization, but if you do so, you’re potentially missing out on huge benefits. Yes, technologies like Software Defined Networking (SDN) and OpenFlow may not seem applicable to you today, but odds are that they will be soon. Ignore them now and play catch-up later.

Complexity is not new. If you were writing FORTRAN code back in the 1970s, you had choices of libraries. Developing client/server software for NetWare or AIX? Building with Oracle? We have always had complexity and choices in platforms, tools, methodologies, databases and libraries. We always had to ensure that our code ran (and ran properly) on a variety of different targets, including a wide range of browsers, Java runtimes, rendering engines and more.

Yet today the number of combinations and permutations seems to be significantly greater than at any time in the past. Clouds, virtual machines, mobile devices, APIs, tools. Perhaps we need a new abstraction layer. In any case, though, complexity is a root cause of our challenges with software delivery. We must deal with it.

Microsoft’s woes are too big to ignore.

Problem area number one: The high-profile Surface tablet/notebook device is flopping. While the 64-bit Intel-based Surface Pro hasn’t sold well, the 32-bit ARM-based Surface RT tanked. Big time. Microsoft just slashed its price — maybe that will help. Too little too late?

To quote from Nathan Ingraham’s recent story in The Verge:

Microsoft just announced earnings for its fiscal Q4 2013, and while the company posted strong results it also revealed some details on how the Surface RT project is costing the business money. Microsoft’s results showed a $900 million loss due to Surface RT “inventory adjustments,” a charge that comes just a few days after the company officially cut Surface RT prices significantly. This $900 million loss comes out of the company’s total Windows revenue, though its worth noting that Windows revenue still increased year-over-year. Unfortunately, Microsoft still doesn’t give specific Windows 8 sales or revenue numbers, but it probably performed well this quarter to make up for the big Surface RT loss.

At the end of the day, though, it looks like Microsoft just made too many Surface RT tablets — we heard late last year that Microsoft was building three to five million Surface RT tablets in the fourth quarter, and we also heard that Microsoft had only sold about one million of those tablets in March. We’ll be listening to Microsoft’s earnings call this afternoon to see if they further address Surface RT sales or future plans.

Microsoft has spent heavily, and invested a lot of its prestige, in the Surface. It needs to fix Windows 8 and make this platform work.

Problem area number two: A dysfunctional structure. A recent story in the New York Times reminded me of this 2011 cartoon describing six tech companies’ org charts. Look at Microsoft. Yup.

Steve Ballmer, who has been CEO since 2000, is finally trying to do something about the battling business units. The new structure, announced on July 11, is called “One Microsoft,” and in a public memo by Ballmer, the goal is described as:

Going forward, our strategy will focus on creating a family of devices and services for individuals and businesses that empower people around the globe at home, at work and on the go, for the activities they value most. 

Editing and restructuring the info in that memo somewhat, here’s what the six key non-administrative groups will look like:

Operating Systems Engineering Group will span all OS work for console, to mobile device, to PC, to back-end systems. The core cloud services for the operating system will be in this group.

Devices and Studios Engineering Group will have all hardware development and supply chain from the smallest to the largest devices, and studios experiences including all games, music, video and other entertainment.

Applications and Services Engineering Group will have broad applications and services core technologies in productivity, communication, search and other information categories.

Cloud and Enterprise Engineering Group will lead development of back-end technologies like datacenter, database and specific technologies for enterprise IT scenarios and development tools, plus datacenter development, construction and operation.

Advanced Strategy and Research Group will be focused on the intersection of technology and policy, and will drive the cross-company looks at key new technology trends.

Business Development and Evangelism Group will focus on key partnerships especially with innovation partners (OEMs, silicon vendors, key developers, Yahoo, Nokia, etc.) and broad work on evangelism and developer outreach. 

If implemented as described, this new organization should certainly eliminate waste, including redundant research and product developments. It might improve compatibility between different platforms and cut down on mixed messages.

However, it may also constrain the freedom to innovate, and it may promote the unhealthy “Windows everywhere” philosophy that has hamstrung Microsoft for years. It’s bad to spend time creating multiple operating systems, multiple APIs, multiple dev tool chains and multiple support channels. It’s equally bad to make one operating system, API set, dev tool chain and support channel fit all platforms and markets.

Another concern is the movement of developer outreach into a separate group that’s organizationally distinct from the product groups. Will that distance Microsoft’s product developers from customers and ISVs? Maybe. Will the most lucrative products get better developer support? Maybe.

Microsoft has excelled in developer support, and I’d hate to see that suffer as part of the new strategy. 

Read Steve Ballmer’s memo. What do you think?

Z Trek Copyright (c) Alan Zeichick