Cybercriminals want your credentials and your employees’ credentials. When hackers succeed in stealing that information, it can be bad for individuals – and even worse for corporations and other organizations. It’s a scourge, and it isn’t going away.

Credentials come in two types. There are personal credentials, such as the logins and passwords for email accounts, bank and retirement accounts, credit cards, airline loyalty programs, online shopping and social media. When hackers manage to obtain those credentials, such as through phishing, they can steal money, order goods and services, and engage in identity theft. This can be extremely costly and inconvenient for victims, but the damage is generally contained to that one unfortunate individual.

Corporate digital credentials, on the other hand, are the keys to an organization’s network. Consider a manager, executive or information-technology worker within a typical medium-size or larger-size business. Somewhere in the organization is a database that describes that employee – and describes which digital assets that employee is authorized to use. If cybercriminals manage to steal the employee’s corporate digital credentials, the criminals can then access those same assets, without setting off any alarm bells. Why? Because they have valid credentials.
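
To make that concrete, here’s a tiny sketch – every name, password and policy in it is invented – of why a simple authorization check can’t tell a legitimate employee from a criminal holding that employee’s stolen credentials:

```python
# Minimal sketch of a credential/authorization check. Everything here is
# hypothetical; the point is that whoever presents valid credentials gets
# whatever access the real employee was granted -- no alarms required.
AUTHORIZED_ASSETS = {
    "jsmith": {"file-server-engineering", "email-archive", "pricing-db"},
}
PASSWORDS = {"jsmith": "correct horse battery staple"}  # never store passwords like this

def login(username: str, password: str) -> set:
    """Return the assets this session may access, or an empty set on failure."""
    if PASSWORDS.get(username) == password:
        return AUTHORIZED_ASSETS.get(username, set())
    return set()

# The check cannot tell these two sessions apart:
employee_session = login("jsmith", "correct horse battery staple")
attacker_session = login("jsmith", "correct horse battery staple")  # phished credentials
print(employee_session == attacker_session)  # True
```

That’s why catching credential theft requires looking at behavior and context, not just whether the login succeeded.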

What might those assets be? Depending on the employee:

  • It might include file servers that contain intellectual property, such as pricing sheets, product blueprints, or patent applications.
  • It might include email archives that describe business plans. Or accounting servers that contain important financial information that could help competitors or allow for “insider trading.”
  • It might be human resources data that can help the hackers attack other individuals. Or engage in identity theft or even blackmail.

What if the stolen credentials are for individuals in the IT or information security department? The hackers can learn a great deal about the company’s technology infrastructure, perhaps including passwords to make changes to configurations, open up backdoors, or even disable security systems.

Read my whole story about this —including what to do about it — in Telecom Times, “The CyberSecurity Scourge of Credentials Theft.”

You keep reading the same three names over and over again: Amazon Web Services. Google Cloud Platform. Microsoft Azure. For the past several years, that’s been the top tier, with a wide gap between them and everyone else. Well, there’s a fourth player, the IBM cloud, built on its SoftLayer acquisition. But still, it’s AWS in the lead when it comes to Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS), with many estimates showing about a 37-40% market share in early 2017. In second place, Azure, at around 28-31%. In third place, Google, at around 16-18%. In fourth place, IBM SoftLayer, at 3-5%.

Add that all up, and you get the big four (let’s count IBM) at between 84% and 94%. That doesn’t leave much room for everyone else, including companies like Rackspace, and all the cloud initiatives launched by major computer companies like Alibaba, Dell, HP Enterprise, Oracle, and all the telcos around the world.

Of course, IaaS and PaaS can’t account for all the cloud activity. In the Software-as-a-Service realm, companies like Salesforce.com and Oracle operate their own clouds, which are huge. And then there are the private clouds, operated by the likes of Apple and Facebook, which are immense, with data centers all around the world.

Still, it’s clear that when it comes to the public cloud, there are very few choices. That covers the clouds telcos want to monetize, and the clouds enterprises need for hybrid deployments or full migrations. You can go with the big winner, which is Amazon. You can look to Azure (which is appealing, of course, to Microsoft shops) or Google. And then you can look at everyone else, including IBM SoftLayer, Rackspace, and, well, everyone else.

Amazon Web Services Inside?

Remember when computer makers were touting “Intel Inside”? In today’s world, many SaaS providers are basing their platforms on Amazon, Azure or Google. And many IaaS and PaaS players are doing the same — except that in many cases, they’re not advertising it. Unlike the smaller PC companies, which wanted to hitch their wagons to Intel’s huge advertising budget, cloud software companies want to build out their own brands. In the international space, they also don’t want to be seen as fronting for U.S.-based technology providers; they would rather position themselves as a local option.

Speaking of international, the dominance of the IaaS/PaaS market by three U.S. companies can create a bit of a conundrum for global tech providers. Many governments and global businesses are leery of letting their data touch U.S. servers, and in some cases, even if the Amazon/Azure/Google data center is based in Europe or Asia, there are legal minefields regarding U.S. courts and surveillance. Not only that, but across the globe, privacy laws are increasingly strict about where consumer information may be stored.

What does this add up to? Probably not much in the long run. There’s no reason to expect that the lineup of Amazon, Azure and Google will change much over the next year or two, or that they will lose market share to smaller players. In fact, to the contrary: The big players are getting bigger at the expense of the niche offerings. According to a recent report from Synergy Research Group:

New Q4 data from Synergy Research Group shows that Amazon Web Services (AWS) is maintaining its dominant share of the burgeoning public cloud services market at over 40%, while the three main chasing cloud providers – Microsoft, Google and IBM – are gaining ground but at the expense of smaller players in the market. In aggregate the three have increased their worldwide market share by almost five percentage points over the last year and together now account for 23% of the total public IaaS and PaaS market, helped by particularly strong growth at Microsoft and Google.

The bigger are getting bigger. The smaller are getting smaller. That’s the cloud market story, in a nutshell.

Cloud-based firewalls come in two delicious flavors: vanilla and strawberry. Both flavors are software that checks incoming and outgoing packets to filter against access policies and block malicious traffic. Yet they are also quite different. Think of them as two essential network security tools: Both are designed to protect you, your network, and your real and virtual assets, but in different contexts.

Disclosure: I made up the terms “vanilla firewall” and “strawberry firewall” for this discussion. Hopefully they help us differentiate between the two models as we dig deeper.

Let’s start with a quick overview:

  • Vanilla firewalls are usually stand-alone products or services designed to protect an enterprise network and its users — like an on-premises firewall appliance, except that it’s in the cloud. Service providers call this a software-as-a-service (SaaS) firewall, security as a service (SECaaS), or even firewall as a service (FaaS).
  • Strawberry firewalls are cloud-based services that are designed to run in a virtual data center using your own servers in a platform-as-a-service (PaaS) or infrastructure-as-a-service (IaaS) model. In these cases, the firewall application runs on the virtual servers and protects traffic going to, from, and between applications in the cloud. The industry sometimes calls these next-generation firewalls, though the term is inconsistently applied and sometimes refers to any advanced firewall system running on-prem or in the cloud.

So why do we need these new firewalls? Why not stick a 1U firewall appliance into a rack, connect it up to the router, and call it good? Easy: Because the definition of the network perimeter has changed. Firewalls used to be like guards at the entrance to a secured facility. Only authorized people could enter that facility, and packages were searched as they entered and left the building. Moreover, your users worked inside the facility, and the data center and its servers were also inside. Thus, securing the perimeter was fairly easy. Everything inside was secure, everything outside was not secure, and the only way in and out was through the guard station.
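
Whatever the flavor, though, the core mechanics are the same: compare each packet against an ordered access policy and allow or deny it. Here’s a minimal, purely illustrative sketch of that logic in Python — the rule format, addresses and ports are all invented for this example, not any vendor’s API:

```python
# Toy packet filter: first matching rule wins, default deny.
from dataclasses import dataclass
from ipaddress import ip_address, ip_network

@dataclass
class Packet:
    src: str
    dst: str
    dst_port: int
    protocol: str  # "tcp" or "udp"

RULES = [
    {"action": "allow", "src": "10.0.0.0/8", "dst_port": 443, "protocol": "tcp"},
    {"action": "allow", "src": "0.0.0.0/0",  "dst_port": 80,  "protocol": "tcp"},
    {"action": "deny",  "src": "0.0.0.0/0",  "dst_port": None, "protocol": None},
]

def evaluate(packet: Packet) -> str:
    """Return 'allow' or 'deny' for a packet, checking rules in order."""
    for rule in RULES:
        if ip_address(packet.src) not in ip_network(rule["src"]):
            continue
        if rule["dst_port"] is not None and rule["dst_port"] != packet.dst_port:
            continue
        if rule["protocol"] is not None and rule["protocol"] != packet.protocol:
            continue
        return rule["action"]
    return "deny"  # nothing matched: default deny

print(evaluate(Packet("10.1.2.3", "172.16.0.5", 443, "tcp")))      # allow
print(evaluate(Packet("198.51.100.7", "172.16.0.5", 22, "tcp")))   # deny
```

The difference between the two flavors isn’t this logic; it’s where the logic runs and whose traffic it sits in front of.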

Intrigued? Hungry? Both? Please read the rest of my story, called “Understanding cloud-based firewalls,” published on Enterprise.nxt.

Want to open up your eyes, expand your horizons, and learn from really smart people? Attend a conference or trade show. Get out there. Meet people. Have conversations. Network. Be inspired by keynotes. Take notes in classes that are delivering great material, and walk out of boring sessions and find something better.

I wrote an article about the upcoming 2017 conferences and trade shows about cloud computing and enterprise infrastructure. Think big and think outside the cubicle: Don’t go to only the events that are about the exact thing you do, and don’t attend only the sessions about the exact thing you do.

The list is organized alphabetically in “must attend,” “worth attending,” and “worthy mentions” sections. Those are my subjective labels (though based on experience, having attended many of these conferences over the past decades), so read the descriptions carefully and make your own decisions. If you don’t use Amazon Web Services, then AWS re:Invent simply isn’t right for you. However, if you use or might use the company’s cloud services, then, yes, it’s a must-attend.

And oh, a word about the differences between conferences and trade shows (also known as expos). These can be subtle, and reasonable people might disagree in some edge cases. However, a conference’s main purpose is education: The focus is on speakers, panels, classes, and other sessions. While there might be an exhibit floor for vendors, it’s probably small and not very useful. In contrast, a trade show is designed to expose you to the greatest number of exhibitors, including vendors and trade associations. The biggest value is in walking the floor; while the trade show may offer classes, they are secondary and often (but not always) vendor fluff sessions “awarded” to big advertisers in return for their gold sponsorships.

So if you want to learn from classes, panels, and workshops, you probably want a conference. If you want to talk to vendors, kick the tires on products, and decide which solutions to buy or recommend, you want a trade show or an expo.

And now, on with the list: the most important events in cloud computing and enterprise infrastructure, compiled at the very beginning of 2017. Note that events can change their dates or cities without notice, or even be cancelled, so keep an eye on the websites. You can read the list here.

According to a recent study, 46% of the top one million websites are considered risky. Why? Because the homepage or its background ad sites run software with known vulnerabilities, the site has been categorized as known-bad for phishing or malware, or the site had a security incident in the past year.

According to Menlo Security, in its “State of the Web 2016” report introduced mid-December 2016, “… nearly half (46%) of the top million websites are risky.” Indeed, Menlo says, “Primarily due to outdated software, cyber hackers now have their veritable pick of half the web to exploit. And exploitation is becoming more widespread and effective for three reasons: 1. Risky sites have never been easier to exploit; 2. Traditional security products fail to provide adequate protection; 3. Phishing attacks can now utilize legitimate sites.”

This has been a significant issue for years. However, the issue came to the forefront earlier this year when several well-known media sites were essentially hijacked by malicious ads. The New York Times, the BBC, MSN and AOL were hit by tainted advertising that installed ransomware, reports Ars Technica. From their March 15, 2016, article, “Big-name sites hit by rash of malicious ads spreading crypto ransomware”:

The new campaign started last week when ‘Angler,’ a toolkit that sells exploits for Adobe Flash, Microsoft Silverlight, and other widely used Internet software, started pushing laced banner ads through a compromised ad network.

The Guardian, reporting at around the same time, described the results of this attack:

When the infected adverts hit users, they redirect the page to servers hosting the malware, which includes the widely-used (amongst cybercriminals) Angler exploit kit. That kit then attempts to find any back door it can into the target’s computer, where it will install cryptolocker-style software, which encrypts the user’s hard drive and demands payment in bitcoin for the keys to unlock it.

If big-money trusted media sites can be hit, so can nearly any corporate site, e-commerce portal, or other website that uses third-party tools – or that might be running unpatched servers and software. That means just about anyone. After all, not all organizations are diligent about monitoring for common vulnerabilities and exposures (CVEs) on their on-premises servers. And when companies run their websites on multi-tenant hosting facilities, they don’t even have direct access to the operating system; they rely on the hosting company to install patches and fixes to Windows Server, Linux, Joomla, WordPress and so on.

A single unpatched operating system, web server platform, database or extension can introduce a vulnerability that can be scanned for. Once found, that CVE can be exploited by a talented hacker — or by a disgruntled teenager with a readily available web exploit kit.
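
To show how low the bar is, here’s a hedged sketch of a version check: it grabs a site’s advertised Server header and compares it against a made-up list of versions with known CVEs. A real scanner would fingerprint far more than one header and would pull its vulnerability list from a feed such as NVD:

```python
# Hypothetical sketch: flag a site whose advertised server software appears
# on a list of versions with known CVEs. The version strings below are
# invented for illustration, not real advisories.
import urllib.request

KNOWN_VULNERABLE = {
    "Apache/2.2.15",   # hypothetical entry
    "nginx/1.4.0",     # hypothetical entry
}

def check_site(url: str) -> None:
    req = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(req, timeout=10) as resp:
        server = resp.headers.get("Server", "unknown")
    if server in KNOWN_VULNERABLE:
        print(f"{url}: '{server}' matches a known-vulnerable version")
    else:
        print(f"{url}: server reports '{server}' (no match in our list)")

check_site("https://example.com")
```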

What can you do about it? Well, you can read my complete story on this subject, “Malware explosion: The web is risky,” published on ITProPortal.

Once upon a time, goes the story, there was honor between thieves and victims. They held a member of your family for ransom; you paid the ransom; they left you alone. The local mob boss demanded protection money; if you didn’t pay, your business burned down, but if you did pay and didn’t rat him out to the police, he and his gang honestly tried to protect you. And hackers may have been operating outside legal boundaries, but for the most part, they were explorers and do-gooders intending to shine a bright light on the darkness of the Internet – not vandals, miscreants, hooligans and ne’er-do-wells.

That’s not true anymore, perhaps. As I write in IT Pro Portal, “Faceless and faithless: A true depiction of today’s cyber-criminals?”

Not that long ago, hackers emerged as modern-day Robin Hoods, digital heroes who relentlessly uncovered weaknesses in applications and networks to reveal the dangers of using technology carelessly. They are curious, provocative; love to know how things work and how they can be improved.

Today, however, there is blight on their good name. Hackers have been maligned by those who do not have our best interests at heart, but are instead motivated by money – attackers who steal our assets and hold organisations such as banks and hospitals to ransom.

(My apologies for the British spelling – it’s a British website and they’re funny that way.)

It’s hard to lose faith in hackers, but perhaps we need to. Sure, not all are cybercriminals, but with the rise of ransomware, hostile action by state actors, and some pretty nasty attacks like the new single-pixel malvertising exploit written up yesterday by Dan Goodin in Ars Technica (which came out after I wrote this story), it’s hard to trust that most hackers secretly have our best interests at heart.

This reminds me of Ranscam. In a blog post, “When Paying Out Doesn’t Pay Off,” Talos reports that:

Ranscam is one of these new ransomware variants. It lacks complexity and also tries to use various scare tactics to entice the user to paying, one such method used by Ranscam is to inform the user they will delete their files during every unverified payment click, which turns out to be a lie. There is no longer honor amongst thieves. Similar to threats like AnonPop, Ranscam simply delete victims’ files, and provides yet another example of why threat actors cannot always be trusted to recover a victim’s files, even if the victim complies with the ransomware author’s demands. With some organizations likely choosing to pay the ransomware author following an infection,  Ranscam further justifies the importance of ensuring that you have a sound, offline backup strategy in place rather than a sound ransom payout strategy. Not only does having a good backup strategy in place help ensure that systems can be restored, it also ensures that attackers are no longer able to collect revenue that they can then reinvest into the future development of their criminal enterprise.

Scary — and it shows that often, crime does pay. No honor among thieves indeed.

When an employee account is compromised by malware, the malware establishes a foothold on the user’s computer – and immediately tries to gain access to additional resources. It turns out that with the right data gathering tools, and with the right Big Data analytics and machine-learning methodologies, the anomalous network traffic caused by this activity can be detected – and thwarted.

That’s the role played by Blindspotter, a new anti-malware system that seems like a specialized version of a network intrusion detection/prevention system (IDPS). Blindspotter can help against many types of malware attacks. Those include one of the most insidious and successful hack vectors today: spear phishing. That’s when a high-level target in your company is singled out for attack by malicious emails or by compromised websites. All the victim has to do is open an email, or click on a link, and wham – malware is quietly installed and operating. (High-level targets include top executives, financial staff and IT administrators.)

My colleague Wayne Rash recently wrote about this network monitoring solution and its creator, Balabit, for eWeek in “Blindspotter Uses Machine Learning to Find Suspicious Network Activity”:

The idea behind Balabit’s Blindspotter and Shell Control Box is that if you gather enough data and subject it to analysis comparing activity that’s expected with actual activity on an active network, it’s possible to tell if someone is using a person’s credentials who shouldn’t be or whether a privileged user is abusing their access rights.

 The Balabit Shell Control Box is an appliance that monitors all network activity and records the activity of users, including all privileged users, right down to every keystroke and mouse movement. Because privileged users such as network administrators are a key target for breaches it can pay special attention to them.

The Blindspotter software sifts through the data collected by the Shell Control Box and looks for anything out of the ordinary. In addition to spotting things like a user coming into the network from a strange IP address or at an unusual time of day—something that other security software can do—Blindspotter is able to analyze what’s happening with each user, but is able to spot what is not happening, in other words deviations from normal behavior.
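
That general idea — baseline each user’s normal behavior, then flag deviations — can be sketched in a few lines of Python. To be clear, this is my own toy illustration, not Balabit’s algorithm; the users, hours and addresses are all made up:

```python
# Illustrative only: baseline each user's typical login hours and source
# subnets from history, then flag sessions that deviate from both.
from ipaddress import ip_address, ip_network

# Hypothetical historical data: user -> list of (hour_of_day, source_ip)
history = {
    "alice": [(9, "10.1.0.5"), (10, "10.1.0.5"), (14, "10.1.0.7")],
    "bob":   [(8, "10.2.0.9"), (17, "10.2.0.9")],
}

baselines = {}
for user, sessions in history.items():
    hours = {h for h, _ in sessions}
    nets = {ip_network(ip + "/24", strict=False) for _, ip in sessions}
    baselines[user] = (hours, nets)

def is_suspicious(user: str, hour: int, src_ip: str) -> bool:
    """Flag a login outside the user's usual hours AND usual subnets."""
    hours, nets = baselines.get(user, (set(), set()))
    odd_hour = hour not in hours
    odd_net = not any(ip_address(src_ip) in net for net in nets)
    return odd_hour and odd_net

print(is_suspicious("alice", 3, "203.0.113.44"))  # True: 3 a.m. from an unknown network
print(is_suspicious("alice", 9, "10.1.0.22"))     # False: normal hour, usual subnet
```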

Read the whole story here. Thank you, Wayne, for telling us about Blindspotter.

Medical devices are incredibly vulnerable to hacking attacks. In some cases it’s because of software defects that allow for exploits, like buffer overflows, SQL injection or insecure direct object references. In other cases, you can blame misconfigurations, lack of encryption (or weak encryption), non-secure data/control networks, unfettered wireless access, and worse.

Why would hackers go after medical devices? Lots of reasons. To name but one: It’s a potential terrorist threat against real human beings. Remember that Dick Cheney famously disabled the wireless capabilities of his implanted heart monitor for fear of an assassination attack.

Certainly healthcare organizations are being targeted for everything from theft of medical records to ransomware. To quote the report “Hacking Healthcare IT in 2016,” from the Institute for Critical Infrastructure Technology (ICIT):

The Healthcare sector manages very sensitive and diverse data, which ranges from personal identifiable information (PII) to financial information. Data is increasingly stored digitally as electronic Protected Health Information (ePHI). Systems belonging to the Healthcare sector and the Federal Government have recently been targeted because they contain vast amounts of PII and financial data. Both sectors collect, store, and protect data concerning United States citizens and government employees. The government systems are considered more difficult to attack because the United States Government has been investing in cybersecurity for a (slightly) longer period. Healthcare systems attract more attackers because they contain a wider variety of information. An electronic health record (EHR) contains a patient’s personal identifiable information, their private health information, and their financial information.

EHR adoption has increased over the past few years under the Health Information Technology and Economics Clinical Health (HITECH) Act. Stan Wisseman [from Hewlett-Packard] comments, “EHRs enable greater access to patient records and facilitate sharing of information among providers, payers and patients themselves. However, with extensive access, more centralized data storage, and confidential information sent over networks, there is an increased risk of privacy breach through data leakage, theft, loss, or cyber-attack. A cautious approach to IT integration is warranted to ensure that patients’ sensitive information is protected.”

Let’s talk devices. Those could be everything from emergency-room monitors to pacemakers to insulin pumps to X-ray machines whose radiation settings might be changed or overridden by malware. The ICIT report says,

Mobile devices introduce new threat vectors to the organization. Employees and patients expand the attack surface by connecting smartphones, tablets, and computers to the network. Healthcare organizations can address the pervasiveness of mobile devices through an Acceptable Use policy and a Bring-Your-Own-Device policy. Acceptable Use policies govern what data can be accessed on what devices. BYOD policies benefit healthcare organizations by decreasing the cost of infrastructure and by increasing employee productivity. Mobile devices can be corrupted, lost, or stolen. The BYOD policy should address how the information security team will mitigate the risk of compromised devices. One solution is to install software to remotely wipe devices upon command or if they do not reconnect to the network after a fixed period. Another solution is to have mobile devices connect from a secured virtual private network to a virtual environment. The virtual machine should have data loss prevention software that restricts whether data can be accessed or transferred out of the environment.

The Internet of Things – and the increased prevalence of medical devices connected to hospital or home networks – increases the risk. What can you do about it? The ICIT report says,

The best mitigation strategy to ensure trust in a network connected to the internet of things, and to mitigate future cyber events in general, begins with knowing what devices are connected to the network, why those devices are connected to the network, and how those devices are individually configured. Otherwise, attackers can conduct old and innovative attacks without the organization’s knowledge by compromising that one insecure system.

Given how common these devices are, keeping IT in the loop may seem impossible — but we must rise to the challenge, ICIT says:

If a cyber network is a castle, then every insecure device with a connection to the internet is a secret passage that the adversary can exploit to infiltrate the network. Security systems are reactive. They have to know about something before they can recognize it. Modern systems already have difficulty preventing intrusion by slight variations of known malware. Most commercial security solutions such as firewalls, IDS/ IPS, and behavioral analytic systems function by monitoring where the attacker could attack the network and protecting those weakened points. The tools cannot protect systems that IT and the information security team are not aware exist.
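
The unglamorous first step, then, is an inventory. Here’s a minimal, hypothetical sketch of the idea: compare what’s actually observed on the network against what’s registered, and flag the strangers. The MAC addresses and device names are invented, and a real deployment would pull its observations from DHCP leases, ARP tables or a NAC system:

```python
# Minimal device-inventory check: anything seen on the network that isn't
# in the registered inventory gets flagged for investigation.
REGISTERED = {
    "00:1a:2b:3c:4d:5e": "infusion-pump-ward3",   # hypothetical entries
    "00:1a:2b:3c:4d:5f": "xray-station-2",
}

# In practice, populate this from DHCP leases, ARP tables, or a NAC system.
observed = ["00:1a:2b:3c:4d:5e", "aa:bb:cc:dd:ee:ff"]

for mac in observed:
    if mac not in REGISTERED:
        print(f"Unknown device on network: {mac} -- investigate before trusting it")
    else:
        print(f"{mac} is registered as {REGISTERED[mac]}")
```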

The home environment – or any use outside the hospital setting – is another huge concern, says the report:

Remote monitoring devices could enable attackers to track the activity and health information of individuals over time. This possibility could impose a chilling effect on some patients. While the effect may lessen over time as remote monitoring technologies become normal, it could alter patient behavior enough to cause alarm and panic.

Pain medicine pumps and other devices that distribute controlled substances are likely high value targets to some attackers. If compromise of a system is as simple as downloading free malware to a USB and plugging the USB into the pump, then average drug addicts can exploit homecare and other vulnerable patients by fooling the monitors. One of the simpler mitigation strategies would be to combine remote monitoring technologies with sensors that aggregate activity data to match a profile of expected user activity.

A major responsibility falls on the device makers – and the programmers who create the embedded software. For the most part, they are simply not up to the challenge of designing secure devices, and may not have the policies, practices and tools in place to get cybersecurity right. Regrettably, the ICIT report doesn’t go into much detail about the embedded software, but it does state,

Unlike cell phones and other trendy technologies, embedded devices require years of research and development; sadly, cybersecurity is a new concept to many healthcare manufacturers and it may be years before the next generation of embedded devices incorporates security into its architecture. In other sectors, if a vulnerability is discovered, then developers rush to create and issue a patch. In the healthcare and embedded device environment, this approach is infeasible. Developers must anticipate what the cyber landscape will look like years in advance if they hope to preempt attacks on their devices. This model is unattainable.

In November 2015, Bloomberg Businessweek published a chilling story, “It’s Way too Easy to Hack the Hospital.” The authors, Monte Reel and Jordan Robertson, wrote about one hacker, Billy Rios:

Shortly after flying home from the Mayo gig, Rios ordered his first device—a Hospira Symbiq infusion pump. He wasn’t targeting that particular manufacturer or model to investigate; he simply happened to find one posted on EBay for about $100. It was an odd feeling, putting it in his online shopping cart. Was buying one of these without some sort of license even legal? he wondered. Is it OK to crack this open?

Infusion pumps can be found in almost every hospital room, usually affixed to a metal stand next to the patient’s bed, automatically delivering intravenous drips, injectable drugs, or other fluids into a patient’s bloodstream. Hospira, a company that was bought by Pfizer this year, is a leading manufacturer of the devices, with several different models on the market. On the company’s website, an article explains that “smart pumps” are designed to improve patient safety by automating intravenous drug delivery, which it says accounts for 56 percent of all medication errors.

Rios connected his pump to a computer network, just as a hospital would, and discovered it was possible to remotely take over the machine and “press” the buttons on the device’s touchscreen, as if someone were standing right in front of it. He found that he could set the machine to dump an entire vial of medication into a patient. A doctor or nurse standing in front of the machine might be able to spot such a manipulation and stop the infusion before the entire vial empties, but a hospital staff member keeping an eye on the pump from a centralized monitoring station wouldn’t notice a thing, he says.

 The 97-page ICIT report makes some recommendations, which I heartily agree with.

  • With each item connected to the internet of things there is a universe of vulnerabilities. Empirical evidence of aggressive penetration testing before and after a medical device is released to the public must be a manufacturer requirement.
  • Ongoing training must be paramount in any responsible healthcare organization. Adversarial initiatives typically start with targeting staff via spear phishing and watering hole attacks. The act of an ill-prepared executive clicking on a malicious link can trigger a hurricane of immediate and long-term negative impact on the organization and innocent individuals whose records were exfiltrated or manipulated by bad actors.
  • A cybersecurity-centric culture must demand safer devices from manufacturers, privacy adherence by the healthcare sector as a whole and legislation that expedites the path to a more secure and technologically scalable future by policy makers.

This whole thing is scary. The healthcare industry needs to step up its game on cybersecurity.

Are you a coder? Architect? Database guru? Network engineer? Mobile developer? User-experience expert? If you have hands-on tech skills, get those hands dirty at a Hackathon.

Full disclosure: Years ago, I thought Hackathons were, well, silly. If you’ve got the skills and extra energy, put them to work coding your own mobile apps. Do a startup! Make some dough! Contribute to an open-source project! Do something productive instead of taking part in coding contests!

Since then, I’ve seen the light, because it’s clear that Hackathons are a win-win-win.

  • They are a win for techies, because they get to hone their abilities, meet people, and learn stuff.
  • They are a win for Hackathon sponsors, because they often give the latest tools, platforms and APIs a real workout.
  • They are a win for the industry, because they help advance the creation and popularization of emerging standards.

One upcoming Hackathon that I’d like to call attention to: The MEF LSO Hackathon will be at the upcoming MEF16 Global Networking Conference, in Baltimore, Nov. 7-10. The work will support Third Network service projects that are built upon key OpenLSO scenarios and OpenCS use cases for constructing Layer 2 and Layer 3 services. You can read about a previous MEF LSO Hackathon here.

Build your skills! Advance the industry! Meet interesting people! Sign up for a Hackathon!

Thank you, NetGear, for taking care of your valued customers. On July 1, the company announced that it would be shutting down the proprietary back-end cloud services required for its VueZone cameras to work – turning them into expensive camera-shaped paperweights. See “Throwing our IoT investment in the trash thanks to NetGear.”

The next day, I was contacted by the company’s global communications manager. He defended the policy, arguing that NetGear was not only giving 18 months’ notice of the shutdown, but they are “doing our best to help VueZone customers migrate to the Arlo platform by offering significant discounts, exclusive to our VueZone customers.” See “A response from NetGear regarding the VueZone IoT trashcan story.”

And now, the company has done a 180° turn. NetGear will not turn off the service, at least not at this time. Well done. Here’s the email that came a few minutes ago. The good news is that VueZone customers can keep using their cameras. On the other hand, let’s not party too heartily. The danger posed by proprietary cloud services driving IoT devices remains: when the vendor decides to turn the service off, all you have is recycle-ware and, potentially, one heck of a migration issue.

Subject: VueZone Services to Continue Beyond January 1, 2018

Dear valued VueZone customer,

On July 1, 2016, NETGEAR announced the planned discontinuation of services for the VueZone video monitoring product line, which was scheduled to begin as of January 1, 2018.

Since the announcement, we have received overwhelming feedback from our VueZone customers expressing a desire for continued services and support for the VueZone camera system. We have heard your passionate response and have decided to extend service for the VueZone product line. Although NETGEAR no longer manufactures or sells VueZone hardware, NETGEAR will continue to support existing VueZone customers beyond January 1, 2018.

We truly appreciate the loyalty of our customers and we will continue our commitment of delivering the highest quality and most innovative solutions for consumers and businesses. Thank you for choosing us.

Best regards,

The NETGEAR VueZone Team

July 19, 2016

In the “you learn something every day” department: Discovered today that there’s a plaque at Stanford honoring the birth of the Internet. The plaque was dedicated on July 28, 2005, and is in the Gates Computer Science Building.

You can read all about the plaque, and see it more clearly, on J. Noel Chiappa’s website. His name is on the plaque.

Here’s what the plaque says. Must check it out during my next trip to Palo Alto.


BIRTH OF THE INTERNET

THE ARCHITECTURE OF THE INTERNET AND THE DESIGN OF THE CORE NETWORKING PROTOCOL TCP (WHICH LATER BECAME TCP/IP) WERE CONCEIVED BY VINTON G. CERF AND ROBERT E. KAHN DURING 1973 WHILE CERF WAS AT STANFORD’S DIGITAL SYSTEMS LABORATORY AND KAHN WAS AT ARPA (LATER DARPA). IN THE SUMMER OF 1976, CERF LEFT STANFORD TO MANAGE THE PROGRAM WITH KAHN AT ARPA.

THEIR WORK BECAME KNOWN IN SEPTEMBER 1973 AT A NETWORKING CONFERENCE IN ENGLAND. CERF AND KAHN’S SEMINAL PAPER WAS PUBLISHED IN MAY 1974.

CERF, YOGEN K. DALAL, AND CARL SUNSHINE WROTE THE FIRST FULL TCP SPECIFICATION IN DECEMBER 1974. WITH THE SUPPORT OF DARPA, EARLY IMPLEMENTATIONS OF TCP (AND IP LATER) WERE TESTED BY BOLT BERANEK AND NEWMAN (BBN), STANFORD, AND UNIVERSITY COLLEGE LONDON DURING 1975.

BBN BUILT THE FIRST INTERNET GATEWAY, NOW KNOWN AS A ROUTER, TO LINK NETWORKS TOGETHER. IN SUBSEQUENT YEARS, RESEARCHERS AT MIT AND USC-ISI, AMONG MANY OTHERS, PLAYED KEY ROLES IN THE DEVELOPMENT OF THE SET OF INTERNET PROTOCOLS.

KEY STANFORD RESEARCH ASSOCIATES AND FOREIGN VISITORS

  • VINTON CERF
  • DAG BELSNES
  • RONALD CRANE
  • BOB METCALFE
  • YOGEN DALAL
  • JUDITH ESTRIN
  • RICHARD KARP
  • GERARD LE LANN
  • JAMES MATHIS
  • DARRYL RUBIN
  • JOHN SHOCH
  • CARL SUNSHINE
  • KUNINOBU TANNO

DARPA

  • ROBERT KAHN

COLLABORATING GROUPS

BOLT BERANEK AND NEWMAN

  • WILLIAM PLUMMER
  • GINNY STRAZISAR
  • RAY TOMLINSON

MIT

  • NOEL CHIAPPA
  • DAVID CLARK
  • STEPHEN KENT
  • DAVID P. REED

NDRE

  • YNGVAR LUNDH
  • PAAL SPILLING

UNIVERSITY COLLEGE LONDON

  • FRANK DEIGNAN
  • MARTINE GALLAND
  • PETER HIGGINSON
  • ANDREW HINCHLEY
  • PETER KIRSTEIN
  • ADRIAN STOKES

USC-ISI

  • ROBERT BRADEN
  • DANNY COHEN
  • DANIEL LYNCH
  • JON POSTEL

ULTIMATELY, THOUSANDS IF NOT TENS TO HUNDREDS OF THOUSANDS HAVE CONTRIBUTED THEIR EXPERTISE TO THE EVOLUTION OF THE INTERNET.

DEDICATED JULY 28, 2005

There are standards for everything, it seems. And those of us who work on Internet things are often amused (or bemused) by what comes out of the Internet Engineering Task Force (IETF). An oldie but a goodie is a document from 1999, RFC-2549, “IP over Avian Carriers with Quality of Service.”

An RFC, or Request for Comment, is what the IETF calls a standards document. (And yes, I’m browsing my favorite IETF pages during a break from doing “real” work. It’s that kind of day.)

RFC-2549 updates RFC-1149, “A Standard for the Transmission of IP Datagrams on Avian Carriers.” That older standard did not address Quality of Service. I’ll leave it for you to enjoy both those documents, but let me share this part of RFC-2549:

Overview and Rational

The following quality of service levels are available: Concorde, First, Business, and Coach. Concorde class offers expedited data delivery. One major benefit to using Avian Carriers is that this is the only networking technology that earns frequent flyer miles, plus the Concorde and First classes of service earn 50% bonus miles per packet. Ostriches are an alternate carrier that have much greater bulk transfer capability but provide slower delivery, and require the use of bridges between domains.

The service level is indicated on a per-carrier basis by bar-code markings on the wing. One implementation strategy is for a bar-code reader to scan each carrier as it enters the router and then enqueue it in the proper queue, gated to prevent exit until the proper time. The carriers may sleep while enqueued.

Most years, the IETF publishes so-called April Fool’s RFCs. The best list of them I’ve seen is on Wikipedia. If you’re looking to take a work break, give ’em a read. Many of them are quite clever! However, I still like RFC-2549 the best.

A prized part of my library is “The Complete April Fools’ Day RFCs,” compiled by Thomas Limoncelli and Peter Salus. Sadly, this collection stops at 2007. Still, it’s a great coffee table book to leave lying around for when people like Bob Metcalfe, Tim Berners-Lee or Al Gore come by to visit.

The MEF recently conducted its second LSO Hackathon at a Rome event called Euro16. You can read my story about it here in DiarioTi: LSO Hackathons Bring Together Open Standards, Open Source.

Alas, my coding skills are too rusty for a Hackathon, unless the objective is to develop a fully buzzword compliant implementation of “Hello World.” Fortunately, there are others with better skills, as well as a broader understanding of today’s toughest problems.

Many Hackathons are thinly veiled marketing exercises by companies, designed to find ways to get programmers hooked on their tools, platforms, APIs, etc. Not all! One of the most interesting Hackathons is from the MEF, an industry group that drives communications interoperability. As a standards defining organization (SDO), the MEF wants to help carriers and equipment vendors design products/services ready for the next generation of connectivity. That means building on a foundation of SDN (software defined networks), NFV (network functions virtualization), LSO (lifecycle service orchestration) and CE2.0 (Carrier Ethernet 2.0).

To make all this happen:

  • What the MEF does: Create open standards for architectures and specifications.
  • What vendors, carriers and open source projects do: Write software to those specifications.
  • What the Hackathon does: Give everyone a chance to work together, make sure the code is truly interoperable, and find areas where the written specs might be ambiguous.

Thus, the MEF LSO Hackathons. They bring together a wide swath of the industry to move beyond the standards documents and actually write and test code that implements those specs.

As mentioned above, the MEF just completed its second Hackathon at Euro16. The first LSO Hackathon was at last year’s MEF GEN15 annual conference in Dallas. Here’s my story about it in Telecom Ramblings: The MEF LSO Hackathon: Building Community, Swatting Bugs, Writing Code.

The third LSO Hackathon will be at this year’s MEF annual conference, MEF16, in Baltimore, Nov. 7-10. I will be there as an observer – alas, without the up-to-date, practical skills to be a coding participant.

WiFi is the present and future of local area networking. Forget about families getting rid of the home phone. The real cable-cutters are dropping the Cat-5 Ethernet in favor of IEEE 802.11 Wireless Local Area Networks, generally known as WiFi. Let’s celebrate World WiFi Day!

There are no Cat-5 cables connected in my house and home office. Not one. And no Ethernet jacks either. (By contrast, when we moved into our house in the Bay Area in the early 1990s, I wired nearly every room with Ethernet jacks.) There’s a box of Ethernet cables, but I haven’t touched them in years. Instead, it’s all WiFi. (Technically, WiFi refers to industry products that are compatible with the IEEE 802.11 specification, but for our casual purposes here, it’s all the same thing.)

My 21” iMac (circa 2011) has an Ethernet port. I’ve never used it. My MacBook Air (also circa 2011) doesn’t have an Ethernet port at all; I used to carry a USB-to-Ethernet dongle, but it disappeared a long time ago. It’s not missed. My tablets (iOS, Android and Kindle) are WiFi-only for connectivity. Life is good.

The first-ever World WiFi Day is today — June 20, 2016. It was declared by the Wireless Broadband Alliance to

be a global platform to recognize and celebrate the significant role Wi-Fi is playing in getting cities and communities around the world connected. It will champion exciting and innovative solutions to help bridge the digital divide, with Connected City initiatives and new service launches at its core.

Sadly, the World WiFi Day initiative is not about the wire-free convenience of Alan’s home office and personal life. Rather, it’s about bringing Internet connectivity to third-world, rural, poor, or connectivity-disadvantaged areas. According to the organization, here are eight completed projects:

  • KT – KT Giga Island – connecting islands to the mainland through advanced networks
  • MallorcaWiFi – City of Palma – Wi-Fi on the beach
  • VENIAM – Connected Port @ Leixões Porto, Portugal
  • ISOCEL – Isospot – Building a Wi-Fi hotspot network in Benin
  • VENIAM – Smart City @ Porto, Portugal
  • Benu Networks – Carrier Wi-Fi Business Case
  • MCI – Free Wi-Fi for Arbaeen
  • Fon – After the wave: Japan and Fon’s disaster support procedure

It’s a worthy cause. Happy World WiFi Day, everyone!

Fire up the WABAC Machine, Mr. Peabody: In June 2008, I wrote a piece for MIT Technology Review explaining “How Facebook Works.”

The story started with this:

Facebook is a wonderful example of the network effect, in which the value of a network to a user is exponentially proportional to the number of other users that network has.

Facebook’s power derives from what Jeff Rothschild, its vice president of technology, calls the “social graph”–the sum of the wildly various connections between the site’s users and their friends; between people and events; between events and photos; between photos and people; and between a huge number of discrete objects linked by metadata describing them and their connections.

Facebook maintains data centers in Santa Clara, CA; San Francisco; and Northern Virginia. The centers are built on the backs of three tiers of x86 servers loaded up with open-source software, some that Facebook has created itself.

Let’s look at the main facility, in Santa Clara, and then show how it interacts with its siblings.

Read the whole story here… and check out Facebook’s current Open Source project pages too.

Security standards for cellular communications are pretty much invisible. Created by groups like the 3GPP, they play out behind the scenes, embedded into broader cellular protocols like 3G, 4G, LTE and the oft-discussed forthcoming 5G. Due to the nature of the security and other cellular specs, they evolve very slowly and deliberately; it’s a snail-like pace compared to, say, WiFi or Bluetooth.

Why the glacial pace? One reason is that cellular standards of all sorts must be carefully designed and tested in order to work in a transparent global marketplace. There are also a huge number of participants in the value chain, from handset makers to handset firmware makers to radio manufacturers to tower equipment to carriers… the list goes on and on.

Another reason why cellular software, including security protocols and algorithms, evolves slowly is that it’s all bound up in large platform versions. The current cellular security system is unlikely to change significantly before the roll-out of 5G… and even then, older devices will continue to use the security protocols embedded in their platform, unless a bug forces a software patch. Those security protocols cover everything from authentication of the cellular device to the tower, to the authentication of the tower to the device, to encryption of voice and data traffic.

We can only hope that end users will move swiftly to 5G. Why? Because 4G and older platforms aren’t especially secure. Sure, they are good enough today, but that’s only “good enough.” The downside is that everything is pretty fuzzy when it comes to what 5G will actually offer… or even how many 5G standards there will be.

Read more in my story in Pipeline Magazine, “Wireless Security Standards.”

Forget vendor lock-in: Carrier operations support systems (OSS) and business support systems (BSS) are going open source. And so are many of the other parts of the software stack that drive the end-to-end services within and between carrier networks.

That’s the message from TM Forum Live, one of the most important conferences for the telecommunications carrier industry.

Held in Nice, France, from May 9-12, 2016, TM Forum Live is produced by TM Forum, a key organization in the carrier universe.

TM Forum works closely with other industry groups, like the MEF, OpenDaylight and OPNFV. I am impressed by how many open-source projects, standards-defining bodies and vendor consortia are collaborating at a very detailed level to improve interoperability at many, many levels. The key to making that work: open source.

You can read more about open source and collaboration between these organizations in my NetworkWorld column, “Open source networking: The time is now.”

While I’m talking about TM Forum Live, let me give a public shout-out to:

Pipeline Magazine – this is the best publication, bar none, for the OSS, BSS, digital transformation and telecommunications service provider space. At TM Forum Live, I attended their annual Innovation Awards, which is the best-prepared, best-vetted awards program I’ve ever seen.

Netcracker Technology — arguably the top vendor in providing software tools for telecommunications and cable companies. They are leading the charge for the agile reinvention of a traditionally slow-moving industry. I’d like to thank them for hosting a delicious press-and-analyst dinner at the historic Hotel Negresco – wow.

Looking forward to next year’s TM Forum Live, May 15-18, 2017.

Get used to new names. Instead of Dell the computer company, think Dell Technologies. Instead of EMC, think Dell EMC. So far, it seems that VMware won’t be renamed Dell VMware, but one never can tell. (They’ve come a long way since PC’s Limited.)

What’s in a name? Not much. What’s in an acquisition of this magnitude (US$67 billion)? In this case, lots of synergies.

Within the Dell corporate structure, EMC will find a stable, predictable management team. Michael Dell is a thoughtful leader, and is unlikely to do anything stupid with EMC’s technology, products, branding and customer base. The industry shouldn’t expect bet-the-business moonshots. Satisfied customers should expect more-of-the-same, but with Dell’s deep pockets to fuel innovation.

Dell’s private ownership is another asset. Without the distraction of stock prices and quarterly reporting, managers don’t have to worry about beating the Street. They can focus on beating competitors.

EMC and Dell have partnered to develop technology and products since 2001. While the partnership dissolved in 2011, the synergies remained… and now will be locked in, obviously, by the acquisition. That means new products for physical data centers, the cloud, and hybrid environments. Those will be boosted by Dell. Similarly, there are tons of professional services. The Dell relationship will only expand those opportunities.

Nearly everyone will be a winner…. Everyone, that is, except for Dell/EMC’s biggest competitors, like HPE and IBM. They must be quaking in their boots.

The Panama Papers should be a wake-up call to every CEO, COO, CTO and CIO in every company.

Yes, it’s good that alleged malfeasance by governments and big institutions came to light. However, it’s also clear that many companies simply take for granted that their confidential information will remain confidential. This includes data that’s shared within the company, as well as information that’s shared with trusted external partners, such as law firms, financial advisors and consultants. We’re talking everything from instant messages to emails, from documents to databases, from passwords to billing records.

Clients of Mossack Fonseca, the hacked Panamanian law firm, erroneously thought its documents were well protected. How well protected are your documents and IP held by your company’s law firms and other partners? It’s a good question, and shadow IT makes the problem worse. Much worse.

Read why in my column in NetworkWorld: Fight corporate data loss with secure, easy-to-use collaboration tools.

Barcelona, Mobile World Congress 2016—IoT success isn’t about device features, like long-life batteries, factory-floor sensors and snazzy designer wristbands. The real power, the real value, of the Internet of Things is in the data being transmitted from devices to remote servers, and from those remote servers back to the devices.

“Is it secret? Is it safe?” Gandalf asks Frodo in the “Lord of the Rings” movies about the seductive One Ring to Rule Them All. He knows that the One Ring is the ultimate IoT wearable: Sure, the wearer is uniquely invisible, but he’s also vulnerable because the ring’s communications can be tracked and hijacked by the malicious Nazgûl and their nation/state sponsor of terrorism.

Wearables, sensors, batteries, cool apps, great wristbands. Sure, those are necessary for IoT success, but the real trick is to provision reliable, secure and private communications that Black Riders and hordes of nasty Orcs can’t intercept. Read all about it in my NetworkWorld column, “We need secure network infrastructure – not shiny rings – to keep data safe.”

A hackathon – like the debut LSO Hackathon held in November 2015 at the MEF’s GEN15 conference – is where magic happens. It’s where theory turns into practice, and the state of the art advances. Dozens of techies sitting in a room, hunched over laptops, scribbling on whiteboards, drinking excessive quantities of coffee and Diet Coke. A hubbub of conversation. Focus. Laughter. A sense of challenge.

More than 50 network and software experts joined the first-ever LSO Hackathon, representing a diverse group of 20 companies. They were asked to focus on two Reference Points of the MEF’s Lifecycle Service Orchestration (LSO) Reference Architecture. As explained by the MEF’s Director of Certification and Strategic Programs, one of the architects of the LSO Hackathon series, these included:

  • LSO Adagio, which defines the element management reference point needed to manage network resources, including element view management functions
  • LSO Presto, which defines the network management reference point needed to manage the network infrastructure, including network view management functions

Read more about the LSO Hackathon in my story in Telecom Ramblings, “Building Community, Swatting Bugs, Writing Code.”

Software-defined networks and Network Functions Virtualization will redefine enterprise computing and change the dynamics of the cloud. Data thefts and professional hacks will grow, and development teams will shift their focus from adding new features to hardening against attacks. Those are two of my predictions for 2015.

Big Security: As 2014 came to a close, huge credit-card breaches from retailers like Target faded into the background. Why? The Sony Pictures hack, and the release of an incredible amount of corporate data, made us ask a bigger question: “What is all that information doing on the network anyway?” Attackers took off with Sony Pictures’ spreadsheets about executive salaries, confidential e-mails about actors and actresses, and much, much more.

What information could determined, professional hackers make off with from your own company? If it’s on the network, if it’s on a server, then it could be stolen. And if hackers can gain access to your cloud systems (perhaps through social engineering, perhaps by exploiting bugs), then it’s game over. From pre-released movies and music albums by artists like Madonna, to sensitive healthcare data and credit-card numbers, if it’s on a network, it’s fair game.

No matter where you turn, vulnerabilities are everywhere. Apple patched a hole in its Network Time Protocol implementation. Who’d have thought attackers would use NTP? GitHub has new security flaws. ICANN has scary security flaws. Microsoft released flawed updates. Inexpensive Android phones and tablets are found to have backdoor malware baked right into the devices. I believe that 2015 will demonstrate that attackers can go anywhere and steal anything.

That’s why I think that savvy development organizations will focus on reviewing their new code and existing applications, prioritizing security over adding new functionality. It’s not fun, but it’s 100% necessary.

Big Cloud: Software-defined networking and Network Functions Virtualization are reinventing the network. The fuzzy line between intranet and Internet is getting fuzzier. Cloud Ethernet is linking the data center directly to the cloud. The network edge and core are indistinguishable. SDN and NFV are pushing functions like caching, encryption, load balancing and firewalls into the cloud, improving efficiency and enhancing the user experience.

In the next year, mainstream enterprise developers will begin writing (and rewriting) back-end applications to specifically target and leverage SDN/NFV-based networks. The question of whether the application is going to run on-premises or in the cloud will cease to be relevant. In addition, as cloud providers become more standards-based and interoperable, enterprises will gain more confidence in that model of computing. Get used to cloud APIs; they are the future.
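
For a taste of what “getting used to cloud APIs” means in practice, here’s a small example using AWS’s boto3 Python SDK to list compute instances. It assumes boto3 is installed and credentials are already configured; the region is just an example:

```python
# List EC2 instances and their states via a cloud API.
# Assumes: pip install boto3, and AWS credentials configured locally.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # example region
response = ec2.describe_instances()

for reservation in response["Reservations"]:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"], instance["State"]["Name"])
```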

Looking to boost your job skills? Learn about SDN and NFV. Want to bolster your development team’s efforts? Study your corporate networking infrastructure, and tailor your efforts to matching the long-term IT plans. And put security first—both of your development environments and your deployed applications.

Big Goodbye: The tech media world is constantly changing, and not always for the better. The biggest change is the sunsetting of Dr. Dobb’s Journal, a website for serious programmers and an enthusiastic bridge between the worlds of computer science and enterprise computing. After 38 years in print and online, the website will remain up, but no new articles or content will be commissioned or published.

DDJ was the greatest programming magazine ever. There’s a lot that can be said about its sad demise, and I will refer you to two people who are quite eloquent on the subject: Andrew Binstock, the editor of DDJ, and Larry O’Brien, SD Times columnist and former editor of Software Development Magazine, which was folded into DDJ a long time ago.

Speaking as a long-time reader—and as one of the founding judges of DDJ’s Jolt Awards—I can assure you that Dr. Dobb’s will be missed.

For development teams, cloud computing is enthralling. Where’s the best place for distributed developers, telecommuters and contractors to reach the code repository? In the cloud. Where do you want the high-performance build servers? At a cloud host, where you can commandeer CPU resources as needed. Storing artifacts? Use cheap cloud storage. Hosting the test harness? The cloud has tremendous resources. Load testing? The cloud scales. Management of beta sites? Cloud. Distribution of finished builds? Cloud. Access to libraries and other tools? Other than the primary IDE itself, cloud. (I’m not a fan of working in a browser, sorry.)

Sure, a one-person dev team can store an entire software development environment on a huge workstation or a convenient laptop. Sure, a corporation or government that has exceptional concerns or extraordinary requirements may choose to host its own servers and tools. In most cases, however, there are undeniable benefits to cloud-oriented development, and if developers aren’t there today, they will be soon. My expectation is that new projects and teams will launch in the cloud. Existing projects and teams will remain on their current dev platforms (and on-prem) until there’s a good reason to make the switch.

The economics are unassailable, the convenience is unparalleled, and both performance and scalability can’t be matched by in-house code repositories. Security in the cloud may also outmatch most organizations’ internal software development servers too.

We have read horror stories about the theft of millions of credit cards and other personal data, medical data, business documents, government diplomatic files, e-mails and so on. It’s all terrible and unlikely to stop, as the recent hacking of Sony Pictures demonstrates.

What we haven’t heard about, through all these hacks, is the broad theft of source code, and certainly not thefts from hosted development environments. Such hacks would be bad, not only because proprietary source code contains trade secrets, but also because the source can be reverse-engineered to reveal attack vulnerabilities. (Open-source projects also can be reverse-engineered, of course, but that is expected and in fact encouraged.)

Even worse than reverse-engineering of stolen source code would be unauthorized and undetected modifications to a codebase. Can you imagine if hackers could infiltrate an e-commerce system’s hosted code and inject a back door or keylogger? You get the idea.

I am not implying that cloud-based software development systems are more secure than on-premises systems. I am also not implying the inverse. My instinct is that hosted cloud dev systems are as safe as, or safer than, internal data center systems. However, there’s truly no way to know.

A recent report from the analyst firm Technology Business Research takes this stance, arguing that security for cloud-based services will end up being better than security at local servers and data centers. While not speaking specifically to software development, the report concluded, “Security remains the driving force behind cloud vendor adoption, while the emerging trends of hybrid IT and analytics, and the associated security complications they bring to the table, foreshadow steady and growing demand for cloud professional services over the next few years.”

Let me close by drawing your attention to a competition geared toward startups innovating in the cloud. The Clouded Leopard’s Den is for young companies looking for Series A or B/C funding, and it offers tools and resources to help them grow, attract publicity, and possibly even find new funding. If you work at a cloud startup, check it out!

Cloud-based storage is amazing. Simply amazing. That’s especially true when you are talking about data from end users who access your applications via the public Internet.

If you store data in your local data center, you have the best control over it. You can place it close to your application servers. You can amortize it as a long-term asset. You can see it, touch it and secure it—or at least, have full control over security.

There are downsides, of course, to maintaining your own on-site data storage. You have to back it up. You have to plan for disasters. You have to anticipate future capacity requirements through budgeting and advance purchases. You have to pay for the data center itself, including real estate, electricity, heating, cooling, racks and other infrastructure. Operationally you have to pipe that data to and from your remote end users through your own connections to the Internet or to cloud application servers.

By contrast, cloud storage is very appealing. You pay only for what you use. You can hold service providers to service-level guarantees. You can pay the cloud provider to replicate the storage in various locations, so customers and end-users are closer to their data. You can pay for security, for backups, for disaster recovery provisions. And if you find that performance isn’t sufficient, you can migrate to another provider or order up a faster pipe. That’s a lot easier, cheaper and faster than ripping-and-replacing outdated storage racks in your own data center.
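For a sense of how little ceremony that involves, here’s a minimal sketch using AWS S3 via boto3; the bucket name and object key are hypothetical placeholders, and credentials are assumed to be configured already.

```python
# A minimal sketch of the pay-as-you-go storage model described above (AWS S3 via boto3).
import boto3

s3 = boto3.client("s3")
BUCKET = "example-app-user-data"  # hypothetical bucket name

# Store an end user's uploaded file; you pay only for the bytes actually stored.
with open("report.pdf", "rb") as f:
    s3.put_object(Bucket=BUCKET, Key="users/42/report.pdf", Body=f)

# Hand the remote user a short-lived URL so they fetch the data straight from the
# provider, not through your own data center's Internet connection.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": BUCKET, "Key": "users/42/report.pdf"},
    ExpiresIn=3600,  # one hour
)
print(url)
```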

Gotta say, if I were setting up a new application for use by off-site users (whether customers or employees), I’d lean toward cloud storage. In most cases, the costs are comparable, and the operational convenience can’t be beat.

Plus, if you are at a startup, a monthly storage bill is easier to work with than a large initial outlay for on-site storage infrastructure.

Case closed? No, not exactly. On-site still has some tricks up its sleeve. If your application servers are on-site, local storage is faster to access. If your users are within your own building or campus, you can keep everything within your local area network.

There also may be legal advantages to maintaining and using on-site storage. For compliance purposes, you know exactly where the data is at all times. You can set up your own intrusion detection systems and access logs, rather than relying upon the access controls offered by the cloud provider. (If your firm isn’t good at security, of course, you may want to trust the cloud provider over your own IT department.)

On that subject: lawsuits. In her story, “Eek! Lawyers are Coming After Your Fitbit!,” Sharon Fisher writes about insurance attorneys subpoenaing a client’s Fitbit data to show that she wasn’t truly as injured as she claimed. The issue here isn’t only about wearables or healthcare. It’s also about access. “Will legal firms be able to subpoena your cloud provider if that’s where your fitness data is stored? How much are they going to fight to protect you?” Fisher asks.

Say a hostile attorney wants to subpoena some of your data. If the storage is in your own data center, the subpoena comes to your company, where your own legal staff can advise whether to respond by complying or fighting the subpoena.

Yet: If the data is stored in the cloud, attorneys or government officials could come after you, or they could try to get access by serving a subpoena on the cloud service provider. Of course, encryption might prevent the cloud provider from complying. Still, this is a new concern, especially given the broad subpoena powers granted to prosecutors, litigating attorneys and government agencies.
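To make the encryption point concrete, here’s a minimal sketch, using the Python cryptography package, of encrypting data on your side before it ever reaches a cloud provider; key management is deliberately left out of scope.

```python
# A sketch of client-side encryption: if only you hold the key, the cloud provider
# cannot hand over readable data in response to a subpoena. Uses the third-party
# "cryptography" package; key storage and rotation are out of scope here.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # keep this in your own key management system
cipher = Fernet(key)

plaintext = b"sensitive fitness or business records"
ciphertext = cipher.encrypt(plaintext)

# Only the ciphertext ever leaves your premises; upload it with whatever storage
# API you use. Decryption happens back on your side.
assert cipher.decrypt(ciphertext) == plaintext
```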

It’s something to talk to your corporate counsel about. Bring your legal eagles into the conversation.

Washington, D.C. — “It’s not time to regulate and control and tax the Internet.” Those are words of wisdom about Net Neutrality from Dr. Robert Metcalfe, inventor of Ethernet, speaking here at MEF GEN14, the annual conference of the Metro Ethernet Forum.

Bob Metcalfe is a legend. Not only for his role in inventing Ethernet and founding 3Com, but also now for his role as a professor of innovation at the University of Texas at Austin. (Disclosure: Bob is also a personal friend and former colleague.)

At MEF GEN14, Bob gave a keynote, chaired a panel on innovation, and was behind the microphone on several other occasions. I’m going to share some of his comments and observations.

  • Why didn’t WiFi appear earlier? According to Bob, radio links were part of the original work on Ethernet, but the radios themselves were too slow, too large, and required too much electricity. “It was Moore’s Law,” he explained, saying that chips and circuits needed to evolve in order to make radio-based Ethernet viable.
  • Interoperability is key for innovation. Bob believes that in order to have strong competitive markets, you need to have frameworks for compatibility, such as standards organizations and common protocols. This helps startups and established players compete by creating faster, better and cheaper implementations, and also creating new differentiated value-added features on top of those standards. “The context must be interoperability,” he insisted.
  • Implicit with interoperability is that innovation must respect backward compatibility. Whether in consumer or enterprise computing, customers and markets do not like to throw away their prior investments. “I have learned about efficacy of FOCACA: Freedom of Choice Among Competing Alternatives. That’s the lesson,” Bob said, citing Ethernet protocols but also pointing at all layers of the protocol stack.
  • There is a new Internet coming: the Gigabit Internet. “We started with the Kilobit Internet, where the killer apps were remote login and tty,” Bob explained. Technology and carriers then moved to today’s ubiquitous Megabit Internet, “where we got the World Wide Web and social media.” The next step is the Gigabit Internet. “What will the killer app be for the Gigabit Internet? Nobody knows.”
  • With the Internet of Things, is Moore’s Law going to continue? Bob sees the IoT being constrained by hardware, especially microprocessors. He pointed out that as semiconductor feature sizes have gone down to 14nm scale, the costs of building fabrication factories have grown to billions of dollars. While chip features shrink, the industry has also moved to consolidation, larger wafers, 3D packaging, and much lower power consumption—all of which are needed to make cheap chips for IoT devices. There is a lot of innovation in the semiconductor market, Bob said, “but with devices counted in the trillions, the bottleneck is how long it takes to design and build the chips!”
  • With Net Neutrality, the U.S. Federal Communications Commission should keep out. “The FCC is being asked to invade this party,” Bob said. “The FCC used to run the Internet. Do you remember that everyone had to use acoustic couplers because it was too dangerous to connect customer equipment to the phone network directly?” He insists that big players—he named Google—are playing with fire by lobbying for Net Neutrality. “Inviting the government to come in and regulate the Internet. Where could it go? Not in the way of innovation!” he insisted.

HTML browser virtualization, not APIs, may be the best way to mobilize existing enterprise applications like SAP ERP, Oracle E-Business Suite or Microsoft Dynamics.

At least, that’s the perspective of Capriza, a company offering a SaaS-based mobility platform that uses a cloud-based secure virtualized browser to screen-scrape data and context from the enterprise application’s Web interface. That data is then sent to a mobile device (like a phone or tablet), where it’s rendered and presented through Capriza’s app.

The process is bidirectional: New transactional data can be entered into the phone’s Capriza app, which transmits it to the cloud-based platform. The Capriza cloud, in turn, opens up a secure virtual browser session with the enterprise software and performs the transaction.

The Capriza platform, which I saw demonstrated last week, is designed for employees to access enterprise applications from their Android or Apple phones, or from tablets.

The platform isn’t cheap – it’s licensed on a per-seat, per-enterprise-application basis, and you can expect a five-digit or six-digit annual cost, at the least. However, Capriza is solving a pesky problem.

Think about the mainstream way to deploy a mobile application that accesses big enterprise back-end platforms. Of course, if the enterprise software vendor offers a mobile app, and if that app meets your needs, that’s the way to go. What if the enterprise software vendor doesn’t have a mobile app – or if the software is homegrown? The traditional approach would be to open up some APIs allowing custom mobile apps to access the back-end systems.

That approach is fraught with peril. It takes a long time. It’s expensive. It could destabilize the platform. It’s hard to ensure security, and often it’s a challenge to synchronize API access policies with client/server or browser-based access policies and ACLs. Even if you can license the APIs from an enterprise software vendor, how comfortable are you exposing them over the public Internet — or even through a VPN?

That’s why I like the Capriza approach of using a virtual browser to access the existing Web-based interface. In theory (and probably in practice), the enterprise software doesn’t have to be touched at all. Since the Capriza SaaS platform has each mobile user log into the enterprise software using the user’s existing Web interface credentials, there should be no security policies and ACLs to replicate or synchronize.

In fact, you can think of Capriza as an intentional man-in-the-middle for mobile users, translating mobile transactions to and from Web transactions on the fly, in real time.
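This is not Capriza’s code, and I have no visibility into their internals; the sketch below only illustrates the general pattern of logging into an existing web UI with the user’s own credentials, scraping a page, and handing structured data to a mobile client. Every URL, form field and CSS selector here is a hypothetical stand-in.

```python
# A bare-bones illustration of the screen-scraping pattern: reuse the existing
# web UI and credentials, extract data, and return JSON a mobile app can render.
import json
import requests
from bs4 import BeautifulSoup

BASE = "https://erp.example.internal"   # hypothetical enterprise web app

session = requests.Session()
# Reuse the employee's existing web credentials -- no separate API ACLs to sync.
session.post(f"{BASE}/login", data={"user": "jdoe", "password": "********"})

# Fetch the page a desktop browser would see and scrape the relevant table.
html = session.get(f"{BASE}/orders/open").text
soup = BeautifulSoup(html, "html.parser")

orders = []
for row in soup.select("table.open-orders tr")[1:]:   # skip the header row
    cells = [td.get_text(strip=True) for td in row.select("td")]
    if len(cells) >= 3:
        orders.append({"id": cells[0], "customer": cells[1], "total": cells[2]})

# This JSON is what would be pushed down to the phone for native-style rendering.
print(json.dumps(orders, indent=2))
```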

As the company explains it, “Capriza helps companies leverage their multi-million dollar investments in existing enterprise software and leapfrog into the modern mobile era. Rather than recreate the wheel trying to make each enterprise application run on a mobile device, Capriza breaks complex, über business processes into mini ones. Its approach bypasses the myriad of tools, SDKs, coding, integration and APIs required in traditional mobile app development approaches, avoiding the perpetual cost and time requirements, risk and questionable ROI.”

It certainly looks like Capriza wins this week’s game of Buzzword Bingo. Despite the marketing jargon, however, the technology is sound, and Capriza has real customers—and has recently landed a US$27 million investment. That means we’re going to see a lot more of this solution.

Can Capriza do it all? Well, no. It works best on plain vanilla Web sites; no Flash, no Java, no embedded apps. While it’s somewhat resilient, changes to an internal Web site can break the screen-scraping technology. And while the design process for new mobile integrations doesn’t require a real programmer, the designer must be very proficient with the enterprise application, and model all the pathways through the software. This can be tricky to design and test.

Plus, of course, you have to be comfortable letting a third-party SaaS platform act as the man-in-the-middle to your business’s most sensitive applications.

Bottom line: If you are mobilizing enterprise software — either commercial or home-grown — that allows browser access, Capriza offers a solution worth considering.

Neil Sedaka insists that breakin’ up is hard to do. Will that apply to the planned split of Hewlett-Packard into two companies? Let’s be clear: This split is a wonderful idea, and it’s long overdue.

Once upon a time, HP was in three businesses: Electronics test equipment (like gas spectrometers); expensive, high-margin data center products and services (like minicomputers and consulting); and cheap, low-margin commodity tech products (like laptops, small business routers and ink-jet printers).

HP spun off the legacy test-equipment business in 1999 (forming Agilent Technologies) and that was a win-win for both Agilent and for the somewhat-more-focused remainder of HP. Now it’s time to do it again.

There are precious few synergies between the enterprise side of HP and the commodity side. The enterprise side has everything that a big business would want, from high-end hyperscale servers to Big Data, Software Defined Networks, massive storage arrays, e-commerce security, and oh, lots of consulting services.

Over the past few years, HP has been on an acquisitions binge to support its enterprise portfolio, helping make it more competitive against arch-rival IBM. The company has snapped up ArcSight and Fortify Software (software security); Electronic Data Systems (IT services and consulting); 3PAR (storage); Vertica Systems (database analytics); Shunra (network virtualization); Eucalyptus (private and hybrid cloud); Stratavia/ExtraQuest (data center automation); and of course, the absurdly overpriced Autonomy (data management).

Those high-touch, high-cost, high-margin enterprise products and services have little synergy with, say, the HP Deskjet 1010 Color Printer, available for US$29.99 at Staples. Sure, there’s money in printers, toner and ink, monitors, laptops and so on. But that’s a very different market, with a race-to-the-bottom drive for market share, horrible margins, crazy supply chain and little to differentiate one Windows-based product from another.

Analysts and investors have been calling for the breakup of HP for years; the company refused, saying that the unified company benefited from economies of scale. It’s good that CEO Meg Whitman has acknowledged what everyone knew: HP is sick, and this breakup into Hewlett-Packard Enterprise and HP Inc. is absolutely necessary.

Is breaking up hard to do? For most companies it’s a challenge at the best of times, but this one should be relatively painless. First of all, HP has split up before, so at least there’s some practice. Second, these businesses are so different that it should be obvious where most of HP’s employees, products, customer relationships, partner relationships and intellectual property will end up.

That’s not to say it’s going to be easy. However, it’s at least feasible.

Both organizations will be attractive takeover targets, that’s for sure. I give it a 50/50 chance that within five years, IBM or Oracle will make a play for Hewlett-Packard Enterprise, or it will combine with a mid-tier player like VMware or EMC.

The high-volume, low-margin HP Inc. will have trouble surviving on its own, because that is an area where scale helps drive down costs and helps manage the supply chain and retail channels. I could see HP Inc. being acquired by Dell or Lenovo, or even by a deep-pocket Internet retailer like Amazon.com.

This breakup is necessary and may be the salvation of Hewlett-Packard’s enterprise business. It may also be the beginning of the end for the most storied company in Silicon Valley.

“My name is Patricia from the Bank of America fraud prevention department. This important message is for Mr. Alan Zeichick. We are calling to verify some potentially suspicious activity on your account. It is very important that we speak with you.”

Tuesday’s voicemail from my bank was short and simple. Nobody had pilfered a credit-card receipt or hacked into my account, the representative told me during our conversation. Rather, BofA had been notified by Visa (the credit card clearinghouse) that a retailer had been hacked, and many credit card numbers were stolen. Including mine. As of right now, my card was frozen; the bank will issue me a new card with a new number.

Who was the merchant? According to the BofA representative, Visa didn’t divulge that information due to an ongoing investigation. Nor did the representative know how many credit card numbers were stolen; all she knew was that BofA was given a list of its customers who were affected.

These stories are coming far too often. Millions of cards were stolen in 2014 from diverse merchants like P.F. Chang’s China Bistro (a restaurant chain), Michaels Stores (art supplies), Sally Beauty (cosmetics), and Shaw’s (grocery stores). And those are only a few of the major vendors. Who knows how many smaller card thefts are either never reported, or aren’t deemed sufficiently juicy by the news media?

Some of you might be thinking, “We don’t take credit card numbers on our websites, so there’s no potential risk exposure.” Wrong. I am frequently astonished by the number of companies that maintain lots of customer data, and have that data pilfered. The Payment Card Industry (PCI) standards say that you should never store customer payment information. We’ve all seen that those standards are not followed, sometimes through willful neglect, and sometimes through flawed architecture, bad coding or lousy testing.

Let’s be clear: Encrypting browser communications does not protect your customers’ personal or financial information. If you are storing that information anywhere—in your data center, in the cloud—it is at real risk. The threats are active. Are your countermeasures active?
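One way to take that advice seriously is to never let the full card number touch your database at all. Here’s a hedged sketch of that idea; a real system would lean on its payment processor’s tokenization service, but the shape is the same.

```python
# A sketch of the "don't store what you can't afford to lose" principle:
# keep only a random reference token and the last four digits, never the full
# card number (PAN).
import secrets

def record_payment(card_number: str) -> dict:
    """Return the only card-related fields that should ever hit your database."""
    token = secrets.token_urlsafe(16)   # opaque reference, useless if stolen
    last4 = card_number[-4:]            # enough for receipts and support calls
    # The full PAN is used transiently for the charge and then discarded --
    # it is never written to disk, logs, or an employee's laptop.
    return {"card_token": token, "card_last4": last4}

print(record_payment("4111111111111111"))
```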

What is even more astonishing is that many of these thefts are of personal information stored on employees’ laptops. You may recall a high-profile case in 2013, where nearly 840,000 Horizon Blue Cross Blue Shield customers had their information compromised when two laptops were stolen from the New Jersey-based health insurance company.

To quote from the Star-Ledger’s story:

The stolen laptops were password-protected but had unencrypted data, Horizon said in a statement today. A subsequent investigation determined the computers may have contained files with personal information, including names, addresses, dates of birth, and, in some instances, Social Security numbers and limited clinical information, the insurer said.

How is that possible? No possible scenario should allow customer information to be downloaded onto a desktop or laptop or tablet or phone. Ever. Encrypted or not, the data should never leave the server.

Please tell me you aren’t storing credit card info in files that can be stolen. Please tell me your company has actively sought to ensure that customer information can never, ever, ever be downloaded from servers.

Data theft is a nuisance, for cardholders like me, and for businesses like yours. Do you protect your customers’ information?

Cloud-based development tools are great. Until they don’t work.

I don’t know if you were affected by Microsoft’s Azure service outage on Thursday, August 14, 2014. As of my deadline, services had been offline for nearly six hours. On its status page, Microsoft was reporting:

Visual Studio Online – Multi-Region – Full Service Interruption

Starting 22:45 13 Aug, 2014 UTC, Visual Studio Online customers may have experienced issues with latency and extended Execution times. The initial incident mitigated at approximately 14:00 UTC. During investigation at 13:52 14 Aug, 2014 UTC, engineering teams began receiving alerts for a separate issue where customers were unable to log in to their Visual Studio Online services. From 13:52 to 19:45 on 14 Aug, 2014 UTC, customers were unable to access their Visual Studio Online resources. Engineering teams have validated their mitigation efforts for both issues and have confirmed that full service has been restored to our Visual Studio Online users. These incidents are now mitigated.

My goal here isn’t to throw Microsoft under the bus. Azure has been quite stable, and other cloud providers, including Amazon, Apple and Google, have seen similar problems. Actually, Amazon in particular has seen a lot of uptime and stability problems with AWS over the past couple of years, though its dashboard on Thursday afternoon showed full service availability.

Let’s think about the broader issue. What’s your contingency plan if your cloud-based services go down, whether it’s one of those players, or a service like GitHub, Salesforce.com, SourceForge, or you-name-it? Do you have backups, in case code or artifacts are lost or corrupted? (Do you have any way to know if data is lost or corrupted?)

This is a worry.
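As one modest example of a contingency plan, here’s a sketch of keeping a local mirror of a hosted Git repository, so a provider outage or data loss doesn’t take your code history with it; the repository URL and backup path are placeholders.

```python
# A simple contingency sketch: maintain a local bare mirror of a hosted repo.
# Run it from cron or another scheduler; the URL and path are hypothetical.
import subprocess
from pathlib import Path

REMOTE = "https://github.com/example-org/example-repo.git"  # hypothetical repo
MIRROR = Path("/backups/example-repo.git")

if MIRROR.exists():
    # Refresh all branches and tags in the existing bare mirror.
    subprocess.run(
        ["git", "--git-dir", str(MIRROR), "remote", "update", "--prune"],
        check=True,
    )
else:
    # First run: create the bare mirror clone.
    subprocess.run(["git", "clone", "--mirror", REMOTE, str(MIRROR)], check=True)
```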

In the case of the August 14 outage, the system wasn’t down for long — but long enough to kill a day’s productivity for many workers. Microsoft’s Visual Studio Online blog has a little bit of insight into the problem, but not much. Posted at 16:56 UTC, Microsoft said:

The actual root cause is still under investigation, but initial investigation is indicating a contention in our core database seems to be causing blocking and performance issues in the services. Our DevOps teams have identified a couple of mitigation steps and currently going thru validations. We will provide an update as soon as we have a mitigation in place. We apologize for the inconvenience and appreciate your patience while working on resolving this issue

This time you can blame Microsoft for any loss of productivity. Next time the service goes down, if you haven’t made contingency plans, the blame is yours.

We drove slightly more than 2,500 miles (4,000 kilometers), my wife and I, during a weeklong holiday. We explored different states in the western United States: Arizona (where we live), Colorado, New Mexico and Wyoming. The Rocky Mountains are incredible. Most of our vacation was at altitudes above 6,000 feet (1,800 meters). Many of the mountain peaks were above 14,000 feet (4,200 meters), and one road went above 11,000 feet (3,300 meters). Exciting!

The adventure involved bringing only smartphones, one running Android, one running iOS. We used mobile apps for navigation, for communication, for photography, for reading, for social media, for finding hotels and restaurants, just about everything.

We learned that apps only seem to run well when there is copious bandwidth, either WiFi at a hotel or a fast cellular data link. If a smartphone registered 4G or LTE, all was good. If the phone indicated that the connection was EDGE, GPRS or 3G, all bets were off. It’s not that data loaded slowly. That would be expected. It’s that the apps would crash, or time out, or posting data would fail, or nothing would happen at all. Many modern apps expect or demand lots and lots of bandwidth.

I’m not talking here about apps running completely offline. That’s an entirely different conversation. I’m talking about apps not gracefully handling situations where the bandwidth is narrower than a drinking straw.

Many developers test their mobile apps using simulators. That, or on devices that have very high-bandwidth connections, such as an office WiFi network or the type of high-speed network you’ll find in Silicon Valley, New York City, or other major tech hubs around the world. Having lots of mobile bandwidth is undoubtedly a blessing for developers, but for many consumers, that’s simply not the case.

Lots of customers live in areas with poor bandwidth, or find themselves traveling in places where connectivity is slow or intermittent. Given the use cases for mobile devices—that is, they are frequently used when not at home or in an office—optimizing apps for bad bandwidth should be mandatory. Hey, this isn’t about streaming 1080p movies. This is about being able to use a search engine, or call up a map, or be able to find a hotel room.

Will people use your apps in poor-bandwidth or intermittent-bandwidth situations? If so, here are some steps you can take to improve the user experience:

  1. Make sure that part of your testing involves low-bandwidth and intermittent-bandwidth scenarios. Find beta testers who live with poor bandwidth or who travel to such locations.
  2. Have your app test for throughput, and not only at application launch. Merely detecting whether the connection is WiFi or cellular is insufficient. If throughput is low, consider degrading the experience, such as by using lower-end graphics, in order to keep data moving. (A rough sketch of this check appears after the list.)
  3. Cache, cache, cache.
  4. Don’t insist on reloading data each and every time the user launches the app or switches to it. My pet peeves include news and other apps that freeze the UI while loading the latest headlines or content each time the app is brought to the foreground.
  5. If you detect that the device is in a low-bandwidth environment, pause background data syncing, or at least ask the user if he/she would like to do so.
  6. If you are sending audio or video, compress the heck out of it. That may involve choosing different algorithms for different bandwidth situations, with low-bandwidth scenarios using narrower and lossier codecs.
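To illustrate tip #2, here’s a rough sketch of probing real throughput with a small test download and picking an experience tier accordingly. The logic would live in your Android or iOS code; Python is used here only to keep the sketch short, and the probe URL and thresholds are illustrative assumptions.

```python
# Measure real throughput with a small probe download, then pick an experience
# tier instead of trusting the "WiFi vs. cellular" flag.
import time
import requests

PROBE_URL = "https://example.com/probe-64kb.bin"   # hypothetical ~64 KB test file

def measure_kbps(timeout: float = 5.0) -> float:
    """Return approximate downstream throughput in kilobits per second (0 on failure)."""
    try:
        start = time.monotonic()
        data = requests.get(PROBE_URL, timeout=timeout).content
        elapsed = time.monotonic() - start
        return (len(data) * 8 / 1000) / elapsed
    except requests.RequestException:
        return 0.0

kbps = measure_kbps()
if kbps == 0.0:
    mode = "offline: serve cached content only"
elif kbps < 200:
    mode = "low bandwidth: text-only UI, pause background sync"
elif kbps < 2000:
    mode = "medium bandwidth: compressed images, lazy loading"
else:
    mode = "full experience"
print(f"~{kbps:.0f} kbps -> {mode}")
```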