
Where’s the best Java coding style guide? Not at Oracle

For programmers, a language style guide is essential for learning a language’s standards. A style guide can also resolve potential ambiguities in syntax and usage. Interestingly, though, the official Code Conventions for the Java Programming Language guide has not been updated since April 20, 1999 – long before Oracle bought Sun Microsystems. In fact, the page is listed as for “Archival Purposes Only.”

What’s up with that? I wrote to Andrew Binstock (@PlatypusGuy), the editor-in-chief of Oracle Java Magazine. In the November/December 2016 issue of the magazine, Andrew explained that according to the Java team, the Code Conventions guide was meant as an internal coding guide – not as an attempt to standardize the language.

Instead of Code Conventions, Mr. B recommends the Google Java Style Guide as a “full set of well-reasoned Java coding guidelines.” So there you have it: If you want good Java guidelines, look to Google — not to Oracle. Here’s the letter and the response.
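
As a taste of what a style guide settles, here’s a tiny class formatted the way I read the Google guide’s main rules (two-space indents, K&R-style braces, lowerCamelCase names). Treat it as an illustrative sketch, not an official sample:

    // Formatted per the Google Java Style Guide (as I read it):
    // two-space indents, K&R-style braces, lowerCamelCase names.
    public class Greeter {
      private final String name;

      public Greeter(String name) {
        this.name = name;
      }

      public String greeting() {
        return name.isEmpty() ? "Hello!" : "Hello, " + name + "!";
      }
    }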


Medical devices – the wild west for cybersecurity vulnerabilities and savvy hackers

Medical devices are incredibly vulnerable to hacking attacks. In some cases it’s because of software defects that allow for exploits, like buffer overflows, SQL injection or insecure direct object references. In other cases, you can blame misconfigurations, lack of encryption (or weak encryption), non-secure data/control networks, unfettered wireless access, and worse.
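
To make one of those defect classes concrete, here’s a minimal Java sketch contrasting a SQL query built by string concatenation (injectable) with a parameterized PreparedStatement; the table and column names are invented for illustration:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class PatientLookup {
        // Vulnerable: attacker-controlled input is spliced into the SQL text
        static ResultSet unsafe(Connection conn, String patientId) throws Exception {
            String sql = "SELECT * FROM patients WHERE id = '" + patientId + "'";
            return conn.createStatement().executeQuery(sql);
        }

        // Safer: the driver binds the value, so it cannot change the query structure
        static ResultSet safe(Connection conn, String patientId) throws Exception {
            PreparedStatement ps = conn.prepareStatement("SELECT * FROM patients WHERE id = ?");
            ps.setString(1, patientId);
            return ps.executeQuery();
        }
    }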

Why would hackers go after medical devices? Lots of reasons. To name but one: It’s a potential terrorist threat against real human beings. Remember that Dick Cheney famously disabled the wireless capabilities of his implanted heart monitor for fear of an assassination attempt.

Certainly healthcare organizations are being targeted for everything from theft of medical records to ransomware. To quote the report “Hacking Healthcare IT in 2016,” from the Institute for Critical Infrastructure Technology (ICIT):

The Healthcare sector manages very sensitive and diverse data, which ranges from personal identifiable information (PII) to financial information. Data is increasingly stored digitally as electronic Protected Health Information (ePHI). Systems belonging to the Healthcare sector and the Federal Government have recently been targeted because they contain vast amounts of PII and financial data. Both sectors collect, store, and protect data concerning United States citizens and government employees. The government systems are considered more difficult to attack because the United States Government has been investing in cybersecurity for a (slightly) longer period. Healthcare systems attract more attackers because they contain a wider variety of information. An electronic health record (EHR) contains a patient’s personal identifiable information, their private health information, and their financial information.

EHR adoption has increased over the past few years under the Health Information Technology and Economics Clinical Health (HITECH) Act. Stan Wisseman [from Hewlett-Packard] comments, “EHRs enable greater access to patient records and facilitate sharing of information among providers, payers and patients themselves. However, with extensive access, more centralized data storage, and confidential information sent over networks, there is an increased risk of privacy breach through data leakage, theft, loss, or cyber-attack. A cautious approach to IT integration is warranted to ensure that patients’ sensitive information is protected.”

Let’s talk devices. Those could be everything from emergency-room monitors to pacemakers to insulin pumps to X-ray machines whose radiation settings might be changed or overridden by malware. The ICIT report says,

Mobile devices introduce new threat vectors to the organization. Employees and patients expand the attack surface by connecting smartphones, tablets, and computers to the network. Healthcare organizations can address the pervasiveness of mobile devices through an Acceptable Use policy and a Bring-Your-Own-Device policy. Acceptable Use policies govern what data can be accessed on what devices. BYOD policies benefit healthcare organizations by decreasing the cost of infrastructure and by increasing employee productivity. Mobile devices can be corrupted, lost, or stolen. The BYOD policy should address how the information security team will mitigate the risk of compromised devices. One solution is to install software to remotely wipe devices upon command or if they do not reconnect to the network after a fixed period. Another solution is to have mobile devices connect from a secured virtual private network to a virtual environment. The virtual machine should have data loss prevention software that restricts whether data can be accessed or transferred out of the environment.

The Internet of Things – and the increased prevalence of medical devices connected to hospital or home networks – increases the risk. What can you do about it? The ICIT report says,

The best mitigation strategy to ensure trust in a network connected to the internet of things, and to mitigate future cyber events in general, begins with knowing what devices are connected to the network, why those devices are connected to the network, and how those devices are individually configured. Otherwise, attackers can conduct old and innovative attacks without the organization’s knowledge by compromising that one insecure system.

Given how common these devices are, keeping IT in the loop may seem impossible — but we must rise to the challenge, ICIT says:

If a cyber network is a castle, then every insecure device with a connection to the internet is a secret passage that the adversary can exploit to infiltrate the network. Security systems are reactive. They have to know about something before they can recognize it. Modern systems already have difficulty preventing intrusion by slight variations of known malware. Most commercial security solutions such as firewalls, IDS/IPS, and behavioral analytic systems function by monitoring where the attacker could attack the network and protecting those weakened points. The tools cannot protect systems that IT and the information security team are not aware exist.

The home environment – or any use outside the hospital setting – is another huge concern, says the report:

Remote monitoring devices could enable attackers to track the activity and health information of individuals over time. This possibility could impose a chilling effect on some patients. While the effect may lessen over time as remote monitoring technologies become normal, it could alter patient behavior enough to cause alarm and panic.

Pain medicine pumps and other devices that distribute controlled substances are likely high value targets to some attackers. If compromise of a system is as simple as downloading free malware to a USB and plugging the USB into the pump, then average drug addicts can exploit homecare and other vulnerable patients by fooling the monitors. One of the simpler mitigation strategies would be to combine remote monitoring technologies with sensors that aggregate activity data to match a profile of expected user activity.

A major responsibility falls onto the device makers – and the programmers who create the embedded software. For the most part, they are simply not up to the challenge of designing secure devices, and may not have the policies, practices and tools in place to get cybersecurity right. Regrettably, the ICIT report doesn’t go into much detail about the embedded software, but does state,

Unlike cell phones and other trendy technologies, embedded devices require years of research and development; sadly, cybersecurity is a new concept to many healthcare manufacturers and it may be years before the next generation of embedded devices incorporates security into its architecture. In other sectors, if a vulnerability is discovered, then developers rush to create and issue a patch. In the healthcare and embedded device environment, this approach is infeasible. Developers must anticipate what the cyber landscape will look like years in advance if they hope to preempt attacks on their devices. This model is unattainable.

In November 2015, Bloomberg Businessweek published a chilling story, “It’s Way Too Easy to Hack the Hospital.” The authors, Monte Reel and Jordan Robertson, wrote about one hacker, Billy Rios:

Shortly after flying home from the Mayo gig, Rios ordered his first device—a Hospira Symbiq infusion pump. He wasn’t targeting that particular manufacturer or model to investigate; he simply happened to find one posted on EBay for about $100. It was an odd feeling, putting it in his online shopping cart. Was buying one of these without some sort of license even legal? he wondered. Is it OK to crack this open?

Infusion pumps can be found in almost every hospital room, usually affixed to a metal stand next to the patient’s bed, automatically delivering intravenous drips, injectable drugs, or other fluids into a patient’s bloodstream. Hospira, a company that was bought by Pfizer this year, is a leading manufacturer of the devices, with several different models on the market. On the company’s website, an article explains that “smart pumps” are designed to improve patient safety by automating intravenous drug delivery, which it says accounts for 56 percent of all medication errors.

Rios connected his pump to a computer network, just as a hospital would, and discovered it was possible to remotely take over the machine and “press” the buttons on the device’s touchscreen, as if someone were standing right in front of it. He found that he could set the machine to dump an entire vial of medication into a patient. A doctor or nurse standing in front of the machine might be able to spot such a manipulation and stop the infusion before the entire vial empties, but a hospital staff member keeping an eye on the pump from a centralized monitoring station wouldn’t notice a thing, he says.

The 97-page ICIT report makes some recommendations, which I heartily agree with.

  • With each item connected to the internet of things there is a universe of vulnerabilities. Empirical evidence of aggressive penetration testing before and after a medical device is released to the public must be a manufacturer requirement.
  • Ongoing training must be paramount in any responsible healthcare organization. Adversarial initiatives typically start with targeting staff via spear phishing and watering hole attacks. The act of an ill-prepared executive clicking on a malicious link can trigger a hurricane of immediate and long-term negative impact on the organization and innocent individuals whose records were exfiltrated or manipulated by bad actors.
  • A cybersecurity-centric culture must demand safer devices from manufacturers, privacy adherence by the healthcare sector as a whole and legislation that expedites the path to a more secure and technologically scalable future by policy makers.

This whole thing is scary. The healthcare industry needs to step up its game on cybersecurity.


Hackathons are great for learning — and great for the industry too

Are you a coder? Architect? Database guru? Network engineer? Mobile developer? User-experience expert? If you have hands-on tech skills, get those hands dirty at a Hackathon.

Full disclosure: Years ago, I thought Hackathons were, well, silly. If you’ve got the skills and extra energy, put them to work coding your own mobile apps. Do a startup! Make some dough! Contribute to an open-source project! Do something productive instead of taking part in coding contests!

Since then, I’ve seen the light, because it’s clear that Hackathons are a win-win-win.

  • They are a win for techies, because they get to hone their abilities, meet people, and learn stuff.
  • They are a win for Hackathon sponsors, because they often give the latest tools, platforms and APIs a real workout.
  • They are a win for the industry, because they help advance the creation and popularization of emerging standards.

One upcoming Hackathon that I’d like to call attention to: The MEF LSO Hackathon will be at the upcoming MEF16 Global Networking Conference, in Baltimore, Nov. 7-10. The work will support Third Network service projects that are built upon key OpenLSO scenarios and OpenCS use cases for constructing Layer 2 and Layer 3 services. You can read about a previous MEF LSO Hackathon here.

Build your skills! Advance the industry! Meet interesting people! Sign up for a Hackathon!


With Big Data, Facebook knows you by the company you keep

As Aesop wrote in his short fable, “The Donkey and His Purchaser,” you can quite accurately judge people by the company they keep.

I am “very liberal,” believes Facebook. If you know me, you are probably not surprised by that. However, I was: I usually think of myself as a small-l libertarian who caucuses with the Democrats on social issues. But Facebook, by looking at what I write, who I follow, and which pages I like, probably has a more accurate assessment.

The spark for this particular revelation is “Liberal, Moderate or Conservative? See How Facebook Labels You.” The article, by Jeremy Merrill in today’s New York Times, explains how to see how Facebook categorizes you (presumably this is most relevant for U.S. residents):

Try this (it works best on your desktop computer):

Go to facebook.com/ads/preferences on your browser. (You may have to log in to Facebook first.)

That will bring you to a page featuring your ad preferences. Under the “Interests” header, click the “Lifestyle and Culture” tab.

Then look for a box titled “US Politics.” In parentheses, it will describe how Facebook has categorized you, such as liberal, moderate or conservative.

(If the “US Politics” box does not show up, click the “See more” button under the grid of boxes.)

Part of the power of Big Data is that it can draw correlations based on vague inferences. So, yes, if you like Donald Trump’s page, but don’t like Hillary Clinton’s, you are probably conservative. What if you don’t follow either candidate? Jeremy writes,

Even if you do not like any candidates’ pages, if most of the people who like the same pages that you do — such as Ben and Jerry’s ice cream — identify as liberal, then Facebook might classify you as one, too.

This is about more than Facebook or political preferences. It’s how Big Data works in lots of instances where there is not only information about a particular person’s preference and actions, but a web of connections to other people and their preferences and actions. It’s certainly true about any social network where it’s easy to determine who you follow, and who follows you.

If most of your friends are Jewish, or Atheist, or Catholic, or Hindu, perhaps you are too, or have interests similar to theirs. If most of your friends are African-American or Italian-American, or simply Italian, perhaps you are too, or have interests similar to theirs. If many of your friends are seriously into car racing, book clubs, gardening, Game of Thrones, cruise ship vacations, or Elvis Presley, perhaps you are too.
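
For the curious, here’s a toy Java sketch of that kind of inference: a simple majority vote over the known labels of a person’s connections. It’s deliberately naive, and certainly not how Facebook actually computes this:

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class LabelGuesser {
        // Guess a user's label from the most common label among their friends
        static String guess(List<String> friends, Map<String, String> knownLabels) {
            Map<String, Integer> counts = new HashMap<>();
            for (String friend : friends) {
                String label = knownLabels.get(friend);
                if (label != null) {
                    counts.merge(label, 1, Integer::sum);
                }
            }
            return counts.entrySet().stream()
                .max(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey)
                .orElse("unknown");
        }
    }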

Here is that Aesop fable, by the way:

The Donkey and his Purchaser

A man who wanted to buy a donkey went to market, and, coming across a likely-looking beast, arranged with the owner that he should be allowed to take him home on trial to see what he was like.

When he reached home, he put him into his stable along with the other donkeys. The newcomer took a look round, and immediately went and chose a place next to the laziest and greediest beast in the stable. When the master saw this he put a halter on him at once, and led him off and handed him over to his owner again.

The latter was a good deal surprised to see him back so soon, and said, “Why, do you mean to say you have tested him already?”

“I don’t want to put him through any more tests,” replied the other. “I could see what sort of beast he is from the companion he chose for himself.”

Moral: “A man is known by the company he keeps.”


Securely disposing of computers with spinning or solid state drives

Can someone steal the data off your old computer? The short answer is yes. A determined criminal can grab the bits, including documents, images, spreadsheets, and even passwords.

If you donate, sell or recycle a computer, whoever gets hold of it can recover the information on its hard drive or solid-state storage (SSD). The platform doesn’t matter: Whether it’s Windows or Linux or Mac OS, you can’t 100% eliminate sensitive data by, say, deleting user accounts or erasing files!

You can make the job harder by using the computer’s disk utilities to format the hard drive. Be aware, however, that formatting will thwart a casual thief, but not a determined hacker.

The only truly safe way to destroy the data is to physically destroy the storage media. For years, businesses have physically removed and destroyed the hard drives in desktops, servers and laptops. It used to be easy to remove the hard drive: take out a couple of screws, pop open a cover, unplug a cable, and lift the drive right out.

Once the hard drive is identified and removed, you can smash it with a hammer, drill holes in it, even take it apart (which is fun, albeit time-consuming). Some businesses will put the hard drive into an industrial shredder, which is a scaled-up version of an office paper shredder. Some also use magnetism to attempt to destroy the data. Not sure how effective that is, however, and magnets won’t work at all on SSDs.

It’s much harder to remove the storage from today’s ultra-thin, tightly sealed notebooks, such as a Microsoft Surface or Apple MacBook Air, or even from tablets. What if you want to destroy the storage in order to prevent hackers from gaining access? It’s a real challenge.

If you have access to an industrial shredder, an option is to shred the entire computer. It seems wasteful, and I can imagine that it’s not good to shred lithium-ion batteries – many of which are not easily removable, again, as in the Microsoft Surface or Apple MacBook Air. You don’t want those chemicals lying around. Still, that works, and works well.

Note that an industrial shredder is kinda big and expensive – you can see some from SSL World. However, if you live in any sort of medium-sized or larger urban area, you can probably find a shredding service that will destroy the computer right in front of you. I’ve found one such service here in Phoenix, Assured Document Destruction Inc., that claims to be compliant with industry regulations for privacy, such as HIPAA and Sarbanes-Oxley.

Don’t want to shred the whole computer? Let’s say the computer uses a standard hard drive, usually in a 3.5-inch form factor (desktops and servers) or 2.5-inch form factor (notebooks). If you have a set of small screwdrivers, you should be able to dismantle the computer, remove the storage device, and kill it – such as by smashing it with a maul, drilling holes in it, or taking it completely apart. Note that driving over it in your car, while satisfying, may not cause significant damage.

What about solid state storage? The same actually applies with SSDs, but it’s a bit trickier. Sometimes the drive still looks like a standard 2.5-inch hard drive. But sometimes the “solid state drive” is merely a few exposed chips on the motherboard or a smaller circuit board. You’ve got to smash that sucker. Remove it from the computer. Hulk Smash! Break up the circuit board, pulverize the chips. Only then will it be dead dead dead. (Though one could argue that government agencies like the NSA could still put Humpty Dumpty back together again.)

In short: Even if the computer itself seems totally worthless, its storage can be removed, connected to a working computer, and accessed by a skilled techie. If you want to ensure that your data remains private, you must destroy it.


Oracle’s reputation as community steward of Java EE is mixed

What’s it going to mean for Java? When Oracle purchased Sun Microsystems, that was one of the biggest questions on the minds of many software developers, and indeed, the entire industry. In an April 2009 blog post, “Oracle, Sun, Winners, Losers,” written when the deal was announced (it closed in January 2010), I predicted,

Winner: Java. Java is very important to Sun. Expect a lot of investment — in the areas that are important to Oracle.

Loser: The Java Community Process. Oracle is not known for openness. Oracle is not known for embracing competitors, or for collaborating with them to create markets. Instead, Oracle is known to play hardball to dominate its markets.

Looks like I called that one correctly. While Oracle continues to invest in Java, it’s not big on true engagement with the community (aka, the Java Community Process). In a story in SD Times, “Java EE awaits its future,” published July 20, 2016, Alex Handy writes about what to expect at the forthcoming JavaOne conference, including about Java EE:

When Oracle purchased Sun Microsystems in 2010, the immediate worry in the marketplace was that the company would become a bad actor around Java. Six years later, it would seem that these fears have come true—at least in part. The biggest new platform for Java, Android, remains embroiled in ugly litigation between Google and Oracle.

Despite outward appearances of a danger for mainstream Java, however, it’s undeniable that the OpenJDK has continued along apace, almost at the same rate of change it experienced at Sun. When Sun open-sourced the OpenJDK under the GPL before it was acquired by Oracle, it was, in a sense, ensuring that no single entity could control Java entirely, as with Linux.

Java EE, however, has lagged behind in its attention from Oracle. Java EE 7 arrived two years ago, and it’s already out of step with the new APIs introduced in OpenJDK 8. The executive committee at the Java Community Process is ready to move the enterprise platform along its road map. Yet something has stopped Java EE dead in its tracks at Oracle. JSR 366 laid out the foundations for this next revision of the platform in the fall of 2015. One would never know that, however, by looking at the Expert Committee mailing lists at the JCP: Those have been completely silent since 2014.

Alex continues,

One person who’s worried that JavaOne won’t reveal any amazing new developments in Java EE is Reza Rahman. He’s a former Java EE evangelist at Oracle, and is now one of the founders of the Java EE Guardians, a group dedicated to goading Oracle into action, or going around them entirely.

“Our principal goal is to move Java EE forward using community involvement. Our biggest concern now is if Oracle is even committed to delivering Java EE. There are various ways of solving it, but the best is for Oracle to commit to and deliver Java EE 8,” said Rahman.

His concerns come from the fact that the Java EE 8 specification has been, essentially, stalled by lack of action on Oracle’s part. The specification leads for the project are stuck in a sort of limbo, with their last chunk of work completed in December, followed by no indication of movement inside Oracle.

Alex quotes an executive at Red Hat, Craig Muzilla, who seems justifiably pessimistic:

The only thing standing in the way of evolving Java EE right now, said Muzilla, is Oracle. “Basically, what Oracle does is they hold the keys to the [Test Compatibility Kit] for certifying in EE, but in terms of creating other ways of using Java, other runtime environments, they don’t have anything other than their name on the language,” he said.

Java is still going strong. Oracle’s commitment to the community and the process – not so much. This is one “told you so” that I’m not proud of, not one bit.


Pick up… or click on… the latest issue of Java Magazine

The newest issue of the second-best software development publication is out – and it’s a doozy. You’ll definitely want to read the July/August 2016 issue of Java Magazine.

(The #1 publication in this space is my own Software Development Times. Yeah, SD Times rules.)

Here is how Andrew Binstock, editor-in-chief of Java Magazine, describes the latest issue:

…in which we look at enterprise Java – not so much at Java EE as a platform, but at individual services that can be useful as part of a larger solution. For example, we examine JSON-P, the two core Java libraries for parsing JSON data; JavaMail, the standalone library for sending and receiving email messages; and JASPIC, which is a custom way to handle security, often used with containers. For Java EE fans, one of the leaders of the JSF team discusses in considerable detail the changes being delivered in the upcoming JSF 2.3 release.

We also show off JShell from Java 9, which is an interactive shell (or REPL) useful for testing Java code snippets. It will surely become one of the most used features of the new language release, especially for testing code interactively without having to set up and run an entire project.

And we continue our series on JVM languages with JRuby, the JVM implementation of the Ruby scripting language. The article’s author, Charlie Nutter, who implemented most of the language, discusses not only the benefits of JRuby but how it became one of the fastest implementations of Ruby.

For new to intermediate programmers, we deliver more of our in-depth tutorials. Michael Kölling concludes his two-part series on generics by explaining the use of and logic behind wildcards in generics. And a book excerpt on NIO.2 illustrates advanced uses of files, paths, and directories, including an example that demonstrates how to monitor a directory for changes to its files.

In addition, we have our usual code quiz with its customary detailed solutions, a book review of a new text on writing maintainable code, an editorial about some of the challenges of writing code using only small classes, and the overview of a Java Enhancement Proposal (JEP) for Java linker. A linker in Java? Have a look.

The story I particularly recommend is “Using the Java APIs for JSON processing.” David Delabassée covers the Java API for JavaScript Object Notation Processing (JSR 353), which has two parts: a high-level object model API and a lower-level streaming API.
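
Here’s a minimal sketch of those two styles, assuming the javax.json (JSR 353) API is on the classpath; the JSON payload is invented:

    import java.io.StringReader;
    import javax.json.Json;
    import javax.json.JsonObject;
    import javax.json.stream.JsonParser;

    public class JsonPDemo {
        public static void main(String[] args) {
            String json = "{\"name\":\"Ada\",\"age\":36}";

            // Object model API: parse the whole document into memory
            JsonObject obj = Json.createReader(new StringReader(json)).readObject();
            System.out.println(obj.getString("name"));

            // Streaming API: walk parse events one at a time, with a smaller memory footprint
            JsonParser parser = Json.createParser(new StringReader(json));
            while (parser.hasNext()) {
                if (parser.next() == JsonParser.Event.KEY_NAME) {
                    System.out.println("key: " + parser.getString());
                }
            }
        }
    }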

It’s a solid issue. Read it – and subscribe, it’s free!


5 things you should know about email unsubscribe links

Here’s a popular article that I wrote on email security for Sophos’ “Naked Security” blog.

“5 things you should know about email unsubscribe links before you click” starts with:

We all get emails we don’t want, and cleaning them up can be as easy as clicking ‘unsubscribe’ at the bottom of the email. However, some of those handy little links can cause more trouble than they solve. You may end up giving the sender a lot of information about you, or even an opportunity to infect you with malware.

Read the whole article here.


Popular news websites can be malware delivery systems

News websites are an irresistible target for hackers because they are so popular. Why? Because they are trusted brands, and because — by their very nature — they contain many external links and use lots of outside content providers and analytics/tracking services. It doesn’t take much to corrupt one of those websites, or one of the myriad partner sites they rely upon, like ad networks, content feeds or behavioral trackers.

Malware injected on any well-trafficked news website could potentially infect tremendous numbers of people with ransomware, keyloggers, zombie code, or worse. Alarmist? Perhaps, but with good reason. News websites, which include both traditional media (like the Chicago Tribune and the BBC) and new-media platforms (such as BuzzFeed or Business Insider), attract enormous numbers of visitors, especially when there is a breaking news story of intense interest, like a natural disaster, political event or celebrity shenanigans.

Publishing companies are not technology companies. They are content providers who do their honest best to offer a secure experience, but can’t be responsible for external links. In fact, many say so right in their terms of use statements or privacy policies. What they can be responsible for are the third-party networks that provide content or services to their platforms, but in reality, the search for profits and/or a competitive advantage outweighs any other considerations. And of course, their platforms can be hacked as well.

According to a BBC story, news sites in Russia, including the Moscow Echo Radio Station, opposition newspaper New Times, and the Kommersant business newspaper, were hacked back in March 2012. In November 2014, the Syrian Electronic Army claimed to have hacked news sites, including Canada’s CBC News.

Also in November 2014, one of the U.K.’s most popular sites, The Telegraph, tweeted, “A part of our website run by a third-party was compromised earlier today. We’ve removed the component. No Telegraph user data was affected.”

A year earlier, in January 2013, the New York Times self-reported, “Hackers in China Attacked The Times for Last 4 Months.” The story said that, “The attackers first installed malware — malicious software — that enabled them to gain entry to any computer on The Times’s network. The malware was identified by computer security experts as a specific strain associated with computer attacks originating in China.”

Regional news outlets can also be targets. On September 18, 2015, CBS Local in San Francisco reported that hackers “took control of the five news websites of Palo Alto-based Embarcadero Media Group on Thursday night.” The websites of Palo Alto Weekly, The Almanac, Mountain View Voice and Pleasanton Weekly were all reportedly attacked at about 10:30 p.m. Thursday.

I talked recently with Jason Steer of Menlo Security, a security company based in Menlo Park, Calif. He put it very clearly:

You are taking active code from a source you didn’t request, and you are running it inside your PC and your network, without any inspection whatsoever. Because of the high volumes of users, it only takes a small number of successes to make the hacking worthwhile. Antivirus can’t really help here, either consumer or enterprise. Antivirus may not detect ransomware being installed from a site you visit, or malicious activity from a bad advertisement or bad JavaScript.

Jason pointed me to his blog post from November 12, 2015, “Top 50 UK Website Security Report.” His post says, in part,

Across the top 50 sites, a number of important findings were made:

• On average, when visiting a top 50 U.K. website, your browser will execute 19 scripts

• The top UK website executed 125 unique scripts when requested

His blog continued with a particularly scary observation:

15 of the top 50 sites (i.e. 30 percent) were running vulnerable versions of web-server code at time of testing. Microsoft IIS version 7.5 was the most prominent vulnerable version reported with known software vulnerabilities going back more than five years.

How many scripts are running on your browser from how many external servers? According to Jason’s research, if you visit the BBC website, your browser might be running 92 scripts pushed to it from 11 different servers. The Daily Mail? 127 scripts from 35 servers. The Financial Times? 199 scripts from 31 servers. The New Yorker? 113 scripts from 33 sites. The Economist? 185 scripts from 46 sites. The New York Times? 76 scripts from 29 servers. And Forbes, 100 scripts from 49 servers.

Most of those servers and scripts are benign. But if they’re not, they’re not. The headline on Ars Technica on March 15, 2016, says it all: “Big-name sites hit by rash of malicious ads spreading crypto ransomware.” The story begins,

Mainstream websites, including those published by The New York Times, the BBC, MSN, and AOL, are falling victim to a new rash of malicious ads that attempt to surreptitiously install crypto ransomware and other malware on the computers of unsuspecting visitors, security firms warned.

The tainted ads may have exposed tens of thousands of people over the past 24 hours alone, according to a blog post published Monday by Trend Micro. The new campaign started last week when “Angler,” a toolkit that sells exploits for Adobe Flash, Microsoft Silverlight, and other widely used Internet software, started pushing laced banner ads through a compromised ad network.

According to a separate blog post from Trustwave’s SpiderLabs group, one JSON-based file being served in the ads has more than 12,000 lines of heavily obfuscated code. When researchers deciphered the code, they discovered it enumerated a long list of security products and tools it avoided in an attempt to remain undetected.

Let me share my favorite news website hack story, because of its sheer audacity. According to Jason’s blog, ad delivery systems can be turned into malware delivery systems, and nobody might ever know:

If we take one such example in March 2016, one attacker waited patiently for the domain ‘brentsmedia[.]com’ to expire, registered in Utah, USA, a known ad network content provider. The domain in question had expired ownership for 66 days, was then taken over by an attacker in Russia (Pavel G Astahov) and 1 day later was serving up malicious ads to visitors of sites including the BBC, AOL & New York Times. No-one told any of these popular websites until the malicious ads had already appeared.

Jason recently published an article on this subject in SC Magazine, “Brexit leads to pageviews — pageviews lead to malware.” Check it out. And be aware that when you visit a trusted news website, you have no idea what code is being executed on your computer, what that code does, and who wrote that code.


NetGear blinked – will continue VueZone video cloud service

Thank you, NetGear, for taking care of your valued customers. On July 1, the company announced that it would be shutting down the proprietary back-end cloud services required for its VueZone cameras to work – turning them into expensive camera-shaped paperweights. See “Throwing our IoT investment in the trash thanks to NetGear.”

The next day, I was contacted by the company’s global communications manager. He defended the policy, arguing that NetGear was not only giving 18 months’ notice of the shutdown, but they are “doing our best to help VueZone customers migrate to the Arlo platform by offering significant discounts, exclusive to our VueZone customers.” See “A response from NetGear regarding the VueZone IoT trashcan story.”

And now, the company has done a 180° turn. NetGear will not turn off the service, at least not at this time. Well done. Here’s the email that came a few minutes ago. The good news for VueZone customers is that they can keep using their cameras. On the other hand, let’s not party too heartily. The danger posed by proprietary cloud services driving IoT devices remains. When the vendor decides to turn the service off, all you have is recycle-ware and, potentially, one heck of a migration issue.

Subject: VueZone Services to Continue Beyond January 1, 2018

Dear valued VueZone customer,

On July 1, 2016, NETGEAR announced the planned discontinuation of services for the VueZone video monitoring product line, which was scheduled to begin as of January 1, 2018.

Since the announcement, we have received overwhelming feedback from our VueZone customers expressing a desire for continued services and support for the VueZone camera system. We have heard your passionate response and have decided to extend service for the VueZone product line. Although NETGEAR no longer manufactures or sells VueZone hardware, NETGEAR will continue to support existing VueZone customers beyond January 1, 2018.

We truly appreciate the loyalty of our customers and we will continue our commitment of delivering the highest quality and most innovative solutions for consumers and businesses. Thank you for choosing us.

Best regards,

The NETGEAR VueZone Team

July 19, 2016


The Birth of the Internet Plaque at Stanford University

In the “you learn something every day” department: I discovered today that there’s a plaque at Stanford honoring the birth of the Internet. The plaque was dedicated on July 28, 2005, and is in the Gates Computer Science Building.

You can read all about the plaque, and see it more clearly, on J. Noel Chiappa’s website. His name is on the plaque.

Here’s what the plaque says. Must check it out during my next trip to Palo Alto.


BIRTH OF THE INTERNET

THE ARCHITECTURE OF THE INTERNET AND THE DESIGN OF THE CORE NETWORKING PROTOCOL TCP (WHICH LATER BECAME TCP/IP) WERE CONCEIVED BY VINTON G. CERF AND ROBERT E. KAHN DURING 1973 WHILE CERF WAS AT STANFORD’S DIGITAL SYSTEMS LABORATORY AND KAHN WAS AT ARPA (LATER DARPA). IN THE SUMMER OF 1976, CERF LEFT STANFORD TO MANAGE THE PROGRAM WITH KAHN AT ARPA.

THEIR WORK BECAME KNOWN IN SEPTEMBER 1973 AT A NETWORKING CONFERENCE IN ENGLAND. CERF AND KAHN’S SEMINAL PAPER WAS PUBLISHED IN MAY 1974.

CERF, YOGEN K. DALAL, AND CARL SUNSHINE WROTE THE FIRST FULL TCP SPECIFICATION IN DECEMBER 1974. WITH THE SUPPORT OF DARPA, EARLY IMPLEMENTATIONS OF TCP (AND IP LATER) WERE TESTED BY BOLT BERANEK AND NEWMAN (BBN), STANFORD, AND UNIVERSITY COLLEGE LONDON DURING 1975.

BBN BUILT THE FIRST INTERNET GATEWAY, NOW KNOWN AS A ROUTER, TO LINK NETWORKS TOGETHER. IN SUBSEQUENT YEARS, RESEARCHERS AT MIT AND USC-ISI, AMONG MANY OTHERS, PLAYED KEY ROLES IN THE DEVELOPMENT OF THE SET OF INTERNET PROTOCOLS.

KEY STANFORD RESEARCH ASSOCIATES AND FOREIGN VISITORS

  • VINTON CERF
  • DAG BELSNES
  • RONALD CRANE
  • BOB METCALFE
  • YOGEN DALAL
  • JUDITH ESTRIN
  • RICHARD KARP
  • GERARD LE LANN
  • JAMES MATHIS
  • DARRYL RUBIN
  • JOHN SHOCH
  • CARL SUNSHINE
  • KUNINOBU TANNO

DARPA

  • ROBERT KAHN

COLLABORATING GROUPS

BOLT BERANEK AND NEWMAN

  • WILLIAM PLUMMER
  • GINNY STRAZISAR
  • RAY TOMLINSON

MIT

  • NOEL CHIAPPA
  • DAVID CLARK
  • STEPHEN KENT
  • DAVID P. REED

NDRE

  • YNGVAR LUNDH
  • PAAL SPILLING

UNIVERSITY COLLEGE LONDON

  • FRANK DEIGNAN
  • MARTINE GALLAND
  • PETER HIGGINSON
  • ANDREW HINCHLEY
  • PETER KIRSTEIN
  • ADRIAN STOKES

USC-ISI

  • ROBERT BRADEN
  • DANNY COHEN
  • DANIEL LYNCH
  • JON POSTEL

ULTIMATELY, THOUSANDS IF NOT TENS TO HUNDREDS OF THOUSANDS HAVE CONTRIBUTED THEIR EXPERTISE TO THE EVOLUTION OF THE INTERNET.

DEDICATED JULY 28, 2005


Internet over Carrier Pigeon? There’s a standard for that

There are standards for everything, it seems. And those of us who work on Internet things are often amused (or bemused) by what comes out of the Internet Engineering Task Force (IETF). An oldie but a goodie is a document from 1999, RFC-2549, “IP over Avian Carriers with Quality of Service.”

An RFC, or Request for Comment, is what the IETF calls a standards document. (And yes, I’m browsing my favorite IETF pages during a break from doing “real” work. It’s that kind of day.)

RFC-2549 updates RFC-1149, “A Standard for the Transmission of IP Datagrams on Avian Carriers.” That older standard did not address Quality of Service. I’ll leave it for you to enjoy both those documents, but let me share this part of RFC-2549:

Overview and Rational

The following quality of service levels are available: Concorde, First, Business, and Coach. Concorde class offers expedited data delivery. One major benefit to using Avian Carriers is that this is the only networking technology that earns frequent flyer miles, plus the Concorde and First classes of service earn 50% bonus miles per packet. Ostriches are an alternate carrier that have much greater bulk transfer capability but provide slower delivery, and require the use of bridges between domains.

The service level is indicated on a per-carrier basis by bar-code markings on the wing. One implementation strategy is for a bar-code reader to scan each carrier as it enters the router and then enqueue it in the proper queue, gated to prevent exit until the proper time. The carriers may sleep while enqueued.

Most years, the IETF publishes so-called April Fools’ RFCs. The best list of them I’ve seen is on Wikipedia. If you’re looking to take a work break, give ’em a read. Many of them are quite clever! However, I still like RFC-2549 the best.

A prized part of my library is “The Complete April Fools’ Day RFCs,” compiled by Thomas Limoncelli and Peter Salus. Sadly, this collection stops at 2007. Still, it’s a great coffee table book to leave lying around for when people like Bob Metcalfe, Tim Berners-Lee or Al Gore come by to visit.


A response from NetGear regarding the VueZone IoT trashcan story

Thank you, NetGear, for the response to my July 11 opinion essay for NetworkWorld, “Throwing our IoT investment in the trash thanks to NetGear.” In that story, I used the example of our soon-to-be-obsolete VueZone home video monitoring system: At the end of 2017, NetGear is turning off the back-end servers that make VueZone work – and so all the hardware will become fancy camera-shaped paperweights.

The broader message of the story is that every IoT device tied into a proprietary back-end service will turn into recycle-ware if (or when) the service provider chooses to turn it off. My friend Jason Perlow picked up this theme in his story published on July 12 on ZDNet, “All your IoT devices are doomed” and included a nice link to my NetworkWorld story. As Jason wrote,

First, it was Aether’s smart speaker, the Cone. Then, it was the Revolv smart hub. Now, it appears NetGear’s connected home wireless security cameras, VueZone, is next on the list.

I’m sure I’ve left out more than a few others that have slipped under the radar. It seems like every month an Internet of Things (IoT) device becomes abandonware after its cloud service is discontinued.

Many of these devices once disconnected from the cloud become useless. They can’t be remotely managed, and some of them stop functioning as standalone (or were never capable of it in the first place). Are these products going end-of-life too soon? What are we to do about this endless pile of e-waste that seems to be the inevitable casualty of the connected-device age?

I would like to publicly acknowledge NetGear for sending a quick response to my story. Apparently — and contrary to what I wrote — the company did offer a migration path for existing VueZone customers. I can’t find the message anywhere, but can’t ignore the possibility that it was sucked into the spamverse.

Here is the full response from Nathan Papadopulos, Global Communications & Strategic Marketing for NetGear:

Hello Alan,

I am writing in response to your recent article about disposing of IoT products. As you may know, the VueZone product line came to Netgear as part of our acquisition of Avaak, Inc. back in 2012, and is the predecessor of the current Arlo security system. Although we wanted to avoid interruptions of the VueZone services as much as possible, we are now faced with the need to discontinue support for the camera line. VueZone was built on technologies which are now outdated and a platform which is not scalable. Netgear has since shifted our resources to building better, more robust products which are the Arlo system of security cameras. Netgear is doing our best to help VueZone customers migrate to the Arlo platform by offering significant discounts, exclusive to our VueZone customers.

1. On July 1, 2016, Netgear officially announced the discontinuation of VueZone services to VueZone customers. Netgear has sent out an email notification to the entire VueZone customer base with the content in the “Official End-of-Services Announcement.” Netgear is providing the VueZone customers with an 18-month notice, which means that the actual effective date of this discontinuation of services will be on January 1, 2018.

2. Between July 2 and July 6, 26,000+ customers who currently have an active VueZone base station have received an email with an offer to purchase an Arlo 4-camera kit. There will be two options for them to choose from:

a. Standard Arlo 4-camera kit for $299.99

b. Refurbished Arlo 4-camera kit for $149.99

Both refurbished and new Arlo systems come with the NETGEAR limited 1-year hardware warranty. The promotion will run until the end of July 31, 2016.

It appears NetGear is trying to do the right thing, though they lose points for offering the discounted migration path for less than one month. Still, the fact remains that obsolescence of service-dependent IoT devices is a big problem. Some costly devices will cease functioning if the service goes down; others will lose significant functionality.

And thank you, Jason, for the new word: Abandonware.


Coding in the Fast Lane: The Multi-Threaded Multi-Core World of AMD64

I wrote five contributions for an ebook from AMD Developer Central — and forgot entirely about it! The book, called “Surviving and Thriving in a Multi-Core World: Taking Advantage of Threads and Cores on AMD64,” popped up in this morning’s Google Alerts report. I have no idea why!

Here are the pieces that I wrote for the book, published in 2006. Darn, they still read well! Other contributors include my friends Anderson Bailey, Alexa Weber Morales and Larry O’Brien.

  • Driving in the Fast Lane: Multi-Core Computing for Programmers, Part 1 (page 5)
  • Driving in the Fast Lane: Multi-Core Computing for Programmers, Part 2 (page 8)
  • Coarse-Grained Vs. Fine-Grained Threading for Native Applications, Part 1 (p. 37)
  • Coarse-Grained Vs. Fine-Grained Threading for Native Applications, Part 2 (p. 40)
  • Device Driver & BIOS Development for AMD Systems (p. 87)

I am still obsessed with questionable automotive analogies. The first article begins with:

The main road near my house, called Skyline Drive, drives me nuts. For several miles, it’s a quasi-limited access highway. But for some inexplicable reason, it keeps alternating between one and two lanes in each direction. In the two-lane part, traffic moves along swiftly, even during rush hour. In the one-lane part, the traffic merges back together, and everything crawls to a standstill. When the next two-lane part appears, things speed up again.

Two lanes are better than one — and not just because they can accommodate twice as many cars. What makes the two-lane section better is that people can overtake. In the one-lane portion (which has a double-yellow line, so there’s no passing), traffic is limited to the slowest truck’s speed, or to little-old-man-peering-over-the-steering-wheel-of-his-Dodge-Dart speed. Wake me when we get there. But in the two-lane section, the traffic can sort itself out. Trucks move to the right, cars pass on the left. Police and other priority traffic weave in and out, using both lanes depending on which has more capacity at any particular moment. Delivery services with a convoy of trucks will exploit both lanes to improve throughput. The entire system becomes more efficient, and net flow of cars through those two-lane sections is considerably higher.

Okay, you’ve figured out that this is all about dual-core and multi-core computing, where cars are analogous to application threads, and the lanes are analogous to processor cores.

I’ll have to admit that my analogy is somewhat simplistic, and purists will say that it’s flawed, because an operating system has more flexibility to schedule tasks in a single-core environment under a preemptive multiprocessing environment. But that flexibility comes at a cost. Yes, if I were really modeling a microprocessor using Skyline Drive, cars would be able to pass each other in the single-lane section, but only if the car in front were to pull over and stop.

Okay, enough about cars. Let’s talk about dual-core and multi-core systems, why businesses are interested in buying them, and what implications all that should have for software developers like us.
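
In code, the “two lanes” become a thread pool sized to the number of cores, so independent tasks don’t queue behind a slow one. Here’s a minimal Java sketch, with placeholder workloads standing in for real tasks:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class Lanes {
        public static void main(String[] args) throws Exception {
            // One "lane" per core: independent tasks run in parallel
            int cores = Runtime.getRuntime().availableProcessors();
            ExecutorService pool = Executors.newFixedThreadPool(cores);

            List<Future<Long>> results = new ArrayList<>();
            for (int task = 0; task < 8; task++) {
                final long n = 1_000_000L * (task + 1); // placeholder workload
                results.add(pool.submit(() -> {
                    long sum = 0;
                    for (long i = 0; i < n; i++) sum += i;
                    return sum;
                }));
            }
            for (Future<Long> f : results) System.out.println(f.get());
            pool.shutdown();
        }
    }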

Download and enjoy the book – it’s not gated and entirely free.


SharePoint 2016 On-Premises – Better than ever with a bright future

Excellent story about SharePoint in Computerworld this week. It gives encouragement to those who prefer to run SharePoint in their own data centers (on-premises), rather than in the cloud. In “The Future of SharePoint,” Brian Alderman writes,

In case you missed it, on May 4 Microsoft made it loud and clear it has resuscitated SharePoint On-Premises and there will be future versions, even beyond SharePoint Server 2016. However, by making you aware of the scenarios most appropriate for On-Premises and the scenarios where you can benefit from SharePoint Online, Microsoft is going to remain adamant about allowing you to create the perfect SharePoint hybrid deployment.

The future of SharePoint begins with SharePoint Online, meaning changes, features and functionality will first be deployed to SharePoint Online, and then rolled out to your SharePoint Server On-Premises deployment. This approach isn’t much of a surprise, being that SharePoint Server 2016 On-Premises was “engineered” from SharePoint Online.

Brian was writing about a post on the Microsoft SharePoint blog, and one I had overlooked (else I’d have written about it back in May). In the post, “SharePoint Server 2016—your foundation for the future,” the SharePoint Team says,

We remain committed to our on-premises customers and recognize the need to modernize experiences, patterns and practices in SharePoint Server. While our innovation will be delivered to Office 365 first, we will provide many of the new experiences and frameworks to SharePoint Server 2016 customers with Software Assurance through Feature Packs. This means you won’t have to wait for the next version of SharePoint Server to take advantage of our cloud-born innovation in your datacenter.

The first Feature Pack will be delivered through our public update channel starting in calendar year 2017, and customers will have control over which features are enabled in their on-premises farms. We will provide more detail about our plans for Feature Packs in coming months.

In addition, we will deliver a set of capabilities for SharePoint Server 2016 that address the unique needs of on-premises customers.

Now, make no mistake: The emphasis at Microsoft is squarely on Office 365 and SharePoint Online. Or, as the company says, SharePoint Server is “powering your journey to the mobile-first, cloud-first world.” However, it is clear that SharePoint On-Premises will continue for some period of time. Later in the blog post, in the FAQ, this is stated quite definitively:

Is SharePoint Server 2016 the last server release?

No, we remain committed to our customer’s on-premises and do not consider SharePoint Server 2016 to be the last on-premises server release.

The best place to learn about SharePoint 2016 is at BZ Media’s SPTechCon, returning to San Francisco from Dec. 5-8. (I am the Z of BZ Media.) SPTechCon, the SharePoint Technology Conference, offers more than 80 technical classes and tutorials — presented by the most knowledgeable instructors working in SharePoint today — to help you improve your skills and broaden your knowledge of Microsoft’s collaboration and productivity software.

SPTechCon will feature the first conference sessions on SharePoint 2016. Be there! Learn more at http://www.sptechcon.com.


Photo and artwork guidelines for people, products, logos and screen shots

If you are asked to submit a photograph, screen shot or a logo to a publication or website, there’s the right way and the less-right way. Here are some suggestions that I wrote several years ago for BZ Media for use in lots of situations — in SD Times, for conferences, and so on.

While they were written for the days of print publications, these are still good guidelines for websites, blogs and other digital publishing media.

General Suggestions

  • Photos need to be high resolution. Bitmaps that would look great on a Web page will look dreadful in print. The recommended minimum size for a bitmap file should be two inches across by three inches high, at a resolution of 300 dpi — that is, 600×900 pixels, at the least. A smaller photograph may be usable, but frankly, it will probably not be.
  • Photos need to be in a high-color format. The best formats are high-resolution JPEG files (.jpg) and TIFF (.tif) files. Or camera RAW if you can. Avoid GIF files (.gif) because they are only 256 colors. However, in case of doubt, send the file in and hope for the best.
  • Photos should be in color. A color photograph will look better than a black-and-white photograph — but if all you have is B&W, send it in. As far as electronic files go, a 256-color image doesn’t reproduce well in print, so please use 24-bit or higher color depth. If the website wants B&W, they can convert a color image easily.
  • Don’t edit or alter the photograph. Please don’t crop it or modify it in Photoshop unless requested to do so. Just send the original image, and let the art director or photo editor handle the cropping and other post-processing.
  • Do not paste the image into a Word or PowerPoint document. Send the image as a separate file.

Logos

  • Send logos as vector-based EPS files (such as an Adobe Illustrator file with fonts converted to outlines) if possible. If a vector-based EPS file is not available, send a 300 dpi TIFF, JPEG or Photoshop EPS file (i.e., one that’s at least two inches long). Web-resolution logos are hard to resize, and often aren’t usable.

Screen Shots

  • Screen shots should be the native bitmap file or a lossless format. A native bitmapped screen capture from Windows will be a huge .BMP file. This may be converted to a compressed TIFF file, or compressed to a .ZIP file for emailing. PNG is also a good lossless format and is quite acceptable.
  • Do not convert a screen capture to JPEG or GIF. JPEGs in particular make terrible screen shots due to the compression algorithms; solid color areas may become splotchy, and text can become fuzzy. Screen captures on other platforms should also be lossless files, typically in TIFF or PNG. (For one way to keep a capture lossless, see the sketch below.)
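
As a hedged illustration of keeping a capture lossless from end to end, here’s a small Java sketch that grabs the screen and writes a PNG; the output filename is arbitrary:

    import java.awt.Rectangle;
    import java.awt.Robot;
    import java.awt.Toolkit;
    import java.awt.image.BufferedImage;
    import java.io.File;
    import javax.imageio.ImageIO;

    public class Screenshot {
        public static void main(String[] args) throws Exception {
            // Capture the full screen at native resolution
            Rectangle screen = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
            BufferedImage capture = new Robot().createScreenCapture(screen);
            // PNG is lossless, so text and solid colors stay crisp
            ImageIO.write(capture, "png", new File("screenshot.png"));
        }
    }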

Hints for better-looking portraits

  • Strive for a professional appearance. The biggest element is a clean, uncluttered background. You may also wish to have the subject wear business casual or formal clothing, such as a shirt with a collar instead of a T-shirt. If you don’t have a photo like that, send what you have.
  • Side or front natural light is the best and most flattering. Taking pictures outdoors with overcast skies is best; a picture outdoors on a sunny day is also good, but direct overhead sunlight (near noon) is too harsh. If possible, keep away from indoor lighting, especially ceiling or fluorescent lights. Avoid unpleasant backlighting by making sure the subject isn’t standing between the camera and a window or lamp.
  • If you must use electronic flash… Reduce red-eye by asking the subject to look at the photographer, not at the camera. (Off-camera flash is better than on-camera flash.) Eliminate harsh and unpleasant shadows by ensuring that the subject isn’t standing or sitting within three feet of a wall, bookcase or other background objects. Another problem is white-out: If the camera is too close to the subject, the picture will be too bright and have too much contrast.
  • Maintain at least six feet separation between the camera and the subject, and three feet (or more) from the background. If the subject is closer than six feet to the camera, his/her facial features will be distorted, and the results will be unattractive. For best results, hold the camera more than six feet from the subject. It’s better to be farther away and use the camera’s optical zoom, rather than to shoot a close-up from a few feet away.
  • Focus on his/her eyes. If the eyes are sharp, the photo is probably okay. If the eyes aren’t sharp (but let’s say the nose or ears are), the photo looks terrible. That’s because people look at the eyes first.

Crash! Down goes Google Calendar — cloud services are not perfect

Cloud services crash. Of course, non-cloud services crash too: a server in your data center can go down. At least there you can do something; if it’s a critical system, you can plan for redundancy and failover.

Not so much with cloud services, as this morning’s failure of Google Calendar clearly shows. The photo shows Google’s status dashboard as of 6:53am on Thursday, June 30.

I wrote about crashes at Amazon Web Services and Apple’s MobileMe back in 2008 in “When the cloud was good, it was very good. But when it was bad it was rotten.”

More recently, in 2011, I covered another AWS failure in “Skynet didn’t take down Amazon Web Services.”

Overall, cloud services are quite reliable. But they are not perfect, and it’s a mistake to think that just because they are offered by huge corporations, they will be error-free and offer 100% uptime. Be sure to work that into your plans, especially if you and your employees rely upon public cloud services to get your job done, or if your customers interact with you through cloud services.


MEF LSO Hackathon at Euro16 brings together open source, open standards

The MEF recently conducted its second LSO Hackathon at a Rome event called Euro16. You can read my story about it here in DiarioTi: LSO Hackathons Bring Together Open Standards, Open Source.

Alas, my coding skills are too rusty for a Hackathon, unless the objective is to develop a fully buzzword compliant implementation of “Hello World.” Fortunately, there are others with better skills, as well as a broader understanding of today’s toughest problems.

Many Hackathons are thinly veiled marketing exercises by companies, designed to find ways to get programmers hooked on their tools, platforms, APIs, etc. Not all! One of the most interesting Hackathons is from the MEF, an industry group that drives communications interoperability. As a standards-defining organization (SDO), the MEF wants to help carriers and equipment vendors design products/services ready for the next generation of connectivity. That means building on a foundation of SDN (software defined networks), NFV (network functions virtualization), LSO (lifecycle service orchestration) and CE2.0 (Carrier Ethernet 2.0).

To make all this happen:

  • What the MEF does: Create open standards for architectures and specifications.
  • What vendors, carriers and open source projects do: Write software to those specifications.
  • What the Hackathon does: Give everyone a chance to work together, make sure the code is truly interoperable, and find areas where the written specs might be ambiguous.

Thus, the MEF LSO Hackathons. They bring together a wide swath of the industry to move beyond the standards documents and actually write and test code that implements those specs.

As mentioned above, the MEF just completed its second Hackathon at Euro16. The first LSO Hackathon was at last year’s MEF GEN15 annual conference in Dallas. Here’s my story about it in Telecom Ramblings: The MEF LSO Hackathon: Building Community, Swatting Bugs, Writing Code.

The third LSO Hackathon will be at this year’s MEF annual conference, MEF16, in Baltimore, Nov. 7-10. I will be there as an observer – alas, without the up-to-date, practical skills to be a coding participant.


When do we want automated emails? Now!

I can hear the protesters. “What do we want? Faster automated emails! When do we want them? In under 20 nanoseconds!”

Some things have to be snappy. A Web page must load fast, or your customers will click away. Moving the mouse has to move the cursor without pauses or hesitations. Streaming video should buffer rarely and unobtrusively; it’s almost always better to temporarily degrade the video quality than to pause the playback. And of course, for a touch interface to work well, it must be snappy, which Apple has learned with iOS, and which Google learned with Project Butter.

The same is true with automated emails. They should be generated and transmitted immediately, that is, in under a minute.

I recently went to book a night’s stay at a Days Inn, a part of the Wyndham Hotel Group, and so I had to log into my Wyndham account. Bad news: I couldn’t remember the password. So, I used the password retrieval system, giving my account number and info. The website said to check my e-mail for the reset link. Kudos: That’s a lot better than saying “We’ll mail you your password,” and then sending it in plain text!

So, I flipped over to my e-mail client. Checked for new mail. Nothing. Checked again. Nothing. Checked again. Nothing. Checked the spam folder. Nothing. Checked for new mail. Nothing. Checked again. Nothing.

I submitted the request for the password reset at 9:15 a.m. The link appeared in my inbox at 10:08 a.m. By that time, I had already booked the stay with Best Western. Sorry, Days Inn! You snooze, you lose.

What happened? The e-mail header didn’t show a transit delay, so we can’t blame the Internet. Rather, it took nearly an hour for the email to be uploaded from the originating server. This is terrible customer service, plain and simple.

It’s not merely Wyndham. When I purchase something from Amazon, the confirmation e-mail generally arrives in less than 30 seconds. When I purchase from Barnes & Noble, a confirmation e-mail can take an hour. The worst is Apple: Confirmations of purchases from the iTunes Store can take three days to appear. Three days!

It’s time to examine your policies for generating automated e-mails. You do have policies, right? I would suggest a delay of no more than one minute between the user’s action that triggers an e-mail and the delivery of that message to the SMTP server.

Set the policy. Automated emails should go out in seconds — certainly in under one minute. Design for that and test for that. More importantly, audit the policy on a regular basis, and monitor actual performance. If password resets or order confirmations are taking 53 minutes to hit the Internet, you have a problem.
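One way to monitor actual performance: pull a raw message and compare its Date header (set when the message was generated) against its earliest Received header (stamped when the first server accepted it). Here’s a minimal sketch using Python’s standard email package; the filename is hypothetical, and real-world Received headers vary, so treat this as a starting point rather than a finished monitor.

    from email import message_from_binary_file
    from email.utils import parsedate_to_datetime

    # Load a raw message saved to disk (hypothetical filename).
    with open("reset.eml", "rb") as f:
        msg = message_from_binary_file(f)

    # Date: is set when the message is composed/generated.
    composed = parsedate_to_datetime(msg["Date"])

    # Servers prepend Received: headers, so the last one in the list is the
    # first SMTP hop; its timestamp follows the final semicolon.
    first_hop_stamp = " ".join(msg.get_all("Received")[-1].rsplit(";", 1)[1].split())
    first_hop = parsedate_to_datetime(first_hop_stamp)

    print("Delay before first SMTP hop:", first_hop - composed)

If that delta routinely exceeds a minute, the problem is on your side of the wire, just as it was with Wyndham.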


Celebrating Ada Lovelace and doubling the talent pool

Despite some recent progress, women are still woefully underrepresented in technical fields such as software development. There are many academic programs to bring girls into STEM (science, technology, engineering and math) at various stages in their education, from grade school to high school to college. Corporations are trying hard.

It’s not enough. We all need to try harder.

On Oct. 11, 2016, we will celebrate Ada Lovelace Day, honoring the first computer programmer, male or female. Augusta Ada King-Noel, Countess of Lovelace, wrote what is regarded as the first computer algorithm, for Charles Babbage’s Analytical Engine, in the mid-1800s.

According to the website Finding Ada, this date doesn’t represent her birthday, which is Dec. 10. Rather, they say, “The date is arbitrary, chosen in an attempt to make the day maximally convenient for the most number of people. We have tried to avoid major public holidays, school holidays, exam season, and times of the year when people might be hibernating.” I’d like to think that the scientifically minded Ada Lovelace would find this amusing.

There are great organizations focused on promoting women in technology, such as Women in Technology International (WITI) and the Anita Borg Institute. There are cool projects, like the Wiki Edit-a-Thon sponsored by Brown University, which seeks to correct the historic (and inaccurate) underrepresentation of female scientists in Wikipedia.

Those are good efforts. They still aren’t enough.

Are women good at STEM fields, including software development? Yes. But all too often, they are gender-stereotyped into non-coding parts of the field—when they are hired at all. And certainly the hyper-competitive environment in many tech teams, and the death-march culture, is not friendly to anyone (male or female) who wants to have a life outside the startup.

Let me share the Anita Borg Institute’s 10 best practices to foster retention of women in technical roles:

  • Collect, analyze and report retention data as it pertains to women in technical roles.
  • Formally train managers in best practices, and hold them accountable for retention.
  • Embed collaboration in the corporate culture to encourage diverse ideas.
  • Offer training programs that raise awareness of and counteract microinequities and unconscious biases.
  • Provide development and visibility opportunities to women that increase technical credibility.
  • Fund and support workshops and conferences that focus on career path experiences and challenges faced by women technologists.
  • Establish mentoring programs on technical and career development.
  • Sponsor employee resource groups for mutual support and networking.
  • Institute flexible work arrangements and tools that facilitate work/life integration.
  • Enact employee-leave policies, and provide services that support work/life integration.

Does your organization have a solid representation of women in technical jobs (not only in technical departments)? Are those women given equal pay for equal work? Are women provided with solid opportunities for professional growth and career advancement? Are you following any of the above best practices?

If so, that’s great news. I’d love to hear about it and help tell your story.


A good HR department is the No. 1 secret for a successful startup

It’s not intellectual property. It’s not having code warriors who can turn pizza into algorithms. It’s not even having great angel investors. If you want a successful startup that’s going to keep you in the headlines for your technology and market prowess, you need a great Human Resources department.

Whether your organization has three employees, 30 or 300, it’s a company. That means a certain level of professionalism in administering it. Yes, tech companies love to be led by hotshot engineers who often brag about their inexperience as CEOs. Yes, those companies are often the darlings of the venture capital community. Yes, those CEOs get lots of visibility in the technology media, the financial media and most importantly, social media.

That is not enough. That’s explained very well in Claire Cain Miller’s essay in The New York Times, “Yes, Silicon Valley, Sometimes You Need More Bureaucracy.”

Miller focuses on the 2014 GitHub scandal, where a lack of professionalism in HR led to deep problems in hiring, management and culture.

“GitHub is not unusual. Tech startups with 100 or fewer employees have half as many personnel professionals as companies of the same size in other industries, according to data from PayScale, which makes compensation software and analyzed about 2,830 companies,” Miller writes.

Is HR something that’s simply soft and squishy, a distraction from the main business of cranking out code and generating viral marketing? No. It’s a core function of every business that’s large enough to have employees.

Miller cites a study that found that companies with personnel departments were nearly 40% less likely to fail than the norm, and nearly 40% more likely to go public. That 36-page study, “Organizational Blueprints for Success in High-Tech Startups,” from the University of California, Berkeley, was published in 2002, but provides some interesting food for thought.

The authors, James Baron and Michael Hannan, wrote,

It is by no means uncommon to see a founder spend more time and energy fretting about the scalability of the phone system or IT platform than about the scalability of the culture and practices for managing employees, even in cases where that same founder would declare with great passion and sincerity that ‘people are the ultimate source of competitive advantage in my business.’

The study continues,

Any plan for launching a new enterprise should include a road map for evolving the organizational structure and HR system, which parallels the timeline for financial, technological, and growth milestones. We have yet to meet an entrepreneur who told us, on reflection, he or she believes they spent too much time worrying about people issues in the early days of their venture.

What does that mean for you?

• If you are part of the leadership team of a startup or small company, look beyond the tech industry for best practices in human resources management. Just because other small tech firms gloss over HR doesn’t mean that you should. In fact, perhaps better HR is a way to out-innovate your competitors.

• If you are looking at joining a startup or a small company, look at the HR department and the culture. If HR seems casual or ad hoc, and if everyone in the company looks the same, perhaps that’s a company not poised for long-term success. Look for a culture that cares about having a healthy and genuinely diverse workforce—and for policies that talk about ways to resolve problems.

Human resources are as important as technology and financial resources. Without the right leadership in all three areas, you’re in for a rough ride.


Enterprise risks when an employee can’t find a BYOD phone

A lost Bring Your Own Device (BYOD) smartphone or tablet presents several types of danger. Many IT professionals and security specialists think about only some of them, yet all are problematic. Does your company have policies about lost personal devices?

  • If you have those policies, what are they?
  • Does the employee know about those policies?
  • Does the employee know how to notify the correct people in case his or her device is lost?

Let’s say you have policies. Let’s say the employee calls the security office and says, “My personal phone is gone. I use it to access company resources, and I don’t think it was securely locked.” What happens?

Does the company have all the information necessary to take the proper actions, including the telephone number, carrier, manufacturer and model, serial number, and other characteristics? Who gets notified? How long do you wait before taking an irreversible action? Can the security desk respond effectively, and instantly, including nights, weekends and holidays?
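To answer those questions, the security desk needs the device data on file before the phone goes missing. Here’s a minimal sketch, as a Python dataclass, of the kind of record that might be kept; the field names are illustrative assumptions, not a standard schema.

    from dataclasses import dataclass

    @dataclass
    class ByodDevice:
        employee: str            # who to call back, and whose access to suspend
        phone_number: str
        carrier: str             # needed to request a line suspension
        manufacturer: str
        model: str
        serial_number: str       # or IMEI, for blocking the hardware
        mdm_enrolled: bool       # can the device be remote-locked or wiped?
        escalation_contact: str  # who gets paged, nights and weekends included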

If you don’t have those policies — with people and knowledge to make them effective — you’ve got a serious problem.

Read my latest story in NetworkWorld, “Dude, where’s my phone? BYOD means enterprise security exposure.” It discusses the four biggest obvious threats from a lost BYOD device, and what you can do to address those threats.


KFC’s Watt-a-Box jolts the fast food industry in India

“Would you like amps with that?” Perhaps that’s the new side-dish question when ordering fast food. Yes, I’ll have three pieces of extra crispy chicken, potato wedges, coleslaw, unsweetened iced tea and a cell-phone charging box.

News out of India is that KFC (which many of us grew up calling Kentucky Fried Chicken) has introduced the Watt-a-Box, which says on its side “Charge your phone while experiencing finger lickin’ good food.” (That last part may be debatable.)

According to the Times of India,

NEW DELHI: KFC garnered a lot of accolades for its recently launched 5-in-1 Meal Box. And the fast-food chain has now introduced an all new ‘gadgety’ variant of the same box.

The limited edition box comes with a built-in power bank. Dubbed as ‘Watt a Box,’ it lets you charge your smartphone as you go about enjoying your meal.

KFC has said that a few lucky customers at select KFC stores in Mumbai and Delhi will get a chance to have their 5-in-1 Meal served in ‘Watt a Box’. Along with this, users can also participate in an online contest on KFC India’s Facebook page and win more of these limited edition boxes.

We are lacking a number of details. Is the box’s charger removable and reusable, or is it a one-time-use thing? If so, what a waste of electronics and battery tech. What about disposing of or recycling the battery? And, eww, will everything get finger-lickin’ greasy?

The Watt-a-Box. Watt an idea.


I’m rich from the Apple Kindle eBooks Antitrust Settlement

This just in — literally, at 8:58am on June 21 — an $8.50 credit from Amazon, paid for by Apple. I am trying to restrain my excitement, but in reality, it’s nice to get a few bucks back.

This payout has been pending for a few months. Well, a few years. This is Apple’s second payout from the antitrust settlement; the first was in 2014. Read “Apple’s $400M E-Book Payout: How Much You’ll Get and When” by Jeff John Roberts in Forbes, which explains:

The payments will mark the end of a long, strange antitrust story in which Apple and publishers tried to challenge the industry powerhouse, Amazon, with a new pricing system. Ironically, Amazon is still the dominant player in e-books today while Apple barely matters. Now Apple will pay $400 million to consumers—most of which will be spent at Amazon. Go figure.

I agree with that assessment: Apple lost both the battle (the antitrust pricing lawsuit) and the war (to be the big player in digital books). Sure, $400 million is pocket change to Apple, which is reported to be hoarding more than $200 billion in cash. But still, it’s gotta hurt.

Here’s what Amazon said in its email:

Your Credit from the Apple eBooks Antitrust Settlement Is Ready to Use

Dear Alan Zeichick,

You now have a credit of $8.50 in your Amazon account. Apple, Inc. (Apple) funded this credit to settle antitrust lawsuits brought by State Attorneys General and Class Plaintiffs about the price of electronic books (eBooks). As a result of this Settlement, qualifying eBook purchases from any retailer are eligible for a credit. You previously received an email informing you that you were eligible for this credit. The Court in charge of these cases has now approved the Apple Settlement. If you did not receive that email or for more information about your credit, please visit www.amazon.com/applebooksettlement.

You don’t have to do anything to claim your credit, we have already added it to your Amazon account. We will automatically apply your available credit to your purchase of qualifying items through Amazon, an Amazon device or an Amazon app. The credit applied to your purchase will appear as a gift card in your order summary and in your account history. In order to spend your credit, please visit the Kindle bookstore or Amazon. If your account does not reflect this credit, please contact Amazon customer service.

Your credit is valid for one year and will expire after June 24, 2017, by order of the Court. If you have not used it, we will remind you of your credit before it expires.

Thank you for being a Kindle customer.

The Amazon Kindle Team


Happy World WiFi Day!

WiFi is the present and future of local area networking. Forget about families getting rid of the home phone. The real cable-cutters are dropping the Cat-5 Ethernet in favor of IEEE 802.11 Wireless Local Area Networks, generally known as WiFi. Let’s celebrate World WiFi Day!

There are no Cat-5 cables connected in my house and home office. Not one. And no Ethernet jacks either. (By contrast, when we moved into our house in the Bay Area in the early 1990s, I wired nearly every room with Ethernet jacks.) There’s a box of Ethernet cables, but I haven’t touched them in years. Instead, it’s all WiFi. (Technically, WiFi refers to industry products that are compatible with the IEEE 802.11 specification, but for our casual purposes here, it’s all the same thing.)

My 21” iMac (circa 2011) has an Ethernet port. I’ve never used it. My MacBook Air (also circa 2011) doesn’t have an Ethernet port at all; I used to carry a USB-to-Ethernet dongle, but it disappeared a long time ago. It’s not missed. My tablets (iOS, Android and Kindle) are WiFi-only for connectivity. Life is good.

The first-ever World WiFi Day is today, June 20, 2016. It was declared by the Wireless Broadband Alliance to

be a global platform to recognize and celebrate the significant role Wi-Fi is playing in getting cities and communities around the world connected. It will champion exciting and innovative solutions to help bridge the digital divide, with Connected City initiatives and new service launches at its core.

Sadly, the World WiFi Day initiative is not about the wire-free convenience of Alan’s home office and personal life. Rather, it’s about bringing Internet connectivity to third-world, rural, poor, or connectivity-disadvantaged areas. According to the organization, here are eight completed projects:

  • KT – KT Giga Island – connecting islands to the mainland through advanced networks
  • MallorcaWiFi – City of Palma – Wi-Fi on the beach
  • VENIAM – Connected Port @ Leixões Porto, Portugal
  • ISOCEL – Isospot – Building a Wi-Fi hotspot network in Benin
  • VENIAM – Smart City @ Porto, Portugal
  • Benu Networks – Carrier Wi-Fi Business Case
  • MCI – Free Wi-Fi for Arbaeen
  • Fon – After the wave: Japan and Fon’s disaster support procedure

It’s a worthy cause. Happy World WiFi Day, everyone!


The legacy application decommissioning ceremony

I once designed and coded a campus parking pass management system for an East Coast university. If you had a faculty, staff, student or visitor parking sticker for the campus, it was processed using my green-screen application, which went online in 1983. The university used the mainframe program with minimal changes for about a decade, until a new client/server parking system was implemented.

Today, that sticker application exists on a nine-track tape reel hanging on my wall — and probably nowhere else.

Decommissioning the parking-sticker app was relatively straightforward for the data center team, though of course I hope that it was emotionally traumatic. Data about the stickers was stored in several tables. One contained information about humans: name, address, phone number, relationship with the university. The other was about vehicles: make, year and color; license plate number; date of sticker assignment; sticker type and serial number; expiration date; date of cancellation. We handled some interesting exceptions. For example, some faculty were issued “floating” stickers that weren’t assigned to specific vehicles. That sort of thing.
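Rendered as modern Python dataclasses, those two tables might have looked something like the sketch below. The field names are reconstructions from the description above, not the original 1983 schema.

    from dataclasses import dataclass
    from datetime import date
    from typing import Optional

    @dataclass
    class Person:
        name: str
        address: str
        phone: str
        affiliation: str          # faculty, staff, student or visitor

    @dataclass
    class VehicleSticker:
        make: str
        year: int
        color: str
        license_plate: str
        sticker_type: str
        sticker_serial: str
        assigned_on: date
        expires_on: date
        cancelled_on: Optional[date] = None  # set only if cancelled
        owner: Optional[Person] = None       # None for "floating" faculty stickers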

Fortunately, historical info in the sticker system was not needed past a year or two. While important for campus security (“Who does that car parked in a no-parking zone belong to?”), it wasn’t data that needed to be retained for any length of time for legal or compliance reasons. Shutting off the legacy application was as simple as, well, shutting off the legacy application.

It’s not always that simple. Other software on campus in the 1980s — and much of the software that your team writes — needed to be retained, sometimes according to campus internal regulations, other times due to government or industry rules. How long do you need to keep payroll data? Transaction data for sales from your website? Bids for products and services, and the documentation that explains how the bids were solicited?

Any time you get into regulated industries, you have this issue. Financial services, aerospace, safety-oriented embedded systems, insurance, human resources, or medical: Information must be retained for compliance, and must be produced on demand for auditors, litigators’ queries during eDiscovery, regulatory investigations, even court subpoenas.

That can make it hard — very hard — to turn off an application you no longer need. Even if the software is recording no new transactions, retention policies might necessitate keeping it alive for years. Maybe even decades, depending on the type of data being retained, and on the regulatory requirements of your industry. Think about drug records from pharmaceutical companies, or component sourcing for automobile manufacturers.

Data and application retention has many enterprise implications. For example, before you deploy an application and its data onto a cloud provider or SaaS platform, you should ask: Will that application and its data need to be retained? If so, will the provider still be around to provide access to them? If not, make sure there’s a plan to bring the systems in-house (even if they are no longer needed), archive the data outside the application in a way that conforms with regulatory requirements for retention and access, and only then decommission the application.

A word of caution: I don’t know how long nine-track tapes last, especially if they are not well shielded. My 20-year-old tape was not protected against heat or magnetism — hey, it was thrown into a box. There’s a better-than-good chance it is totally unreadable. Don’t rely upon unproven technology or suppliers for your own data archive, especially if the data must be retained for compliance purposes.


Blast from the past: Facebook’s tech infrastructure from 2008

Fire up the WABAC Machine, Mr. Peabody: In June 2008, I wrote a piece for MIT Technology Review explaining “How Facebook Works.”

The story started with this:

Facebook is a wonderful example of the network effect, in which the value of a network to a user is exponentially proportional to the number of other users that network has.

Facebook’s power derives from what Jeff Rothschild, its vice president of technology, calls the “social graph”–the sum of the wildly various connections between the site’s users and their friends; between people and events; between events and photos; between photos and people; and between a huge number of discrete objects linked by metadata describing them and their connections.

Facebook maintains data centers in Santa Clara, CA; San Francisco; and Northern Virginia. The centers are built on the backs of three tiers of x86 servers loaded up with open-source software, some that Facebook has created itself.

Let’s look at the main facility, in Santa Clara, and then show how it interacts with its siblings.

Read the whole story here… and check out Facebook’s current Open Source project pages too.


The glacial pace of cellular security standards: From 3GPP to 5G

Security standards for cellular communications are pretty much invisible. The security standards, created by groups like the 3GPP, play out behind the scenes, embedded into broader cellular protocols like 3G, 4G, LTE and the oft-discussed forthcoming 5G. Due to the nature of the security and other cellular specs, they evolve very slowly and deliberately; it’s a snail-like pace compared to, say, WiFi or Bluetooth.

Why the glacial pace? One reason is that cellular standards of all sorts must be carefully designed and tested in order to work in a transparent global marketplace. There are also a huge number of participants in the value chain, from handset makers to handset firmware makers to radio manufacturers to tower equipment to carriers… the list goes on and on.

Another reason why cellular software, including security protocols and algorithms, evolves slowly is that it’s all bound up in large platform versions. The current cellular security system is unlikely to change significantly before the roll-out of 5G… and even then, older devices will continue to use the security protocols embedded in their platform, unless a bug forces a software patch. Those security protocols cover everything from authentication of the cellular device to the tower, to authentication of the tower to the device, to encryption of voice and data traffic.

We can only hope that end users will move swiftly to 5G. Why? Because 4G and older platforms aren’t incredibly secure. Sure, they are good enough today, but that’s only “good enough.” The downside is that everything is pretty fuzzy when it comes to what 5G will actually offer… or even how many 5G standards there will be.

Read more in my story in Pipeline Magazine, “Wireless Security Standards.”


Paying a steep price in Bitcoins for security lapses, thanks to ransomware

Ransomware is a huge problem that causes real harm to businesses and individuals. Technology service providers are gearing up to fight these cyberattacks – and that’s coming none too soon.

Ransomware is a type of cyberattack where bad actors gain access to a system, such as a consumer’s desktop or a corporate server. The attack vector might be a piece of malware attached to an email, a compromised website that runs a script to install the malware, or a document containing a malicious macro that downloads it.

In most ransomware attacks, the malware encrypts the user’s data and then demands an untraceable ransom. When the ransom is paid, the hackers promise to either decrypt the data or provide the user with a key to decrypt it. Because the data is encrypted, even removing the malware from the computer will not restore system functionality; typically, the victim has to restore the entire system from a backup or pay the ransom and hope for the best.

As cyberattacks go, ransomware has proven to be extremely effective at both frustrating users and obtaining ransom money for the attackers.

I was asked to write a story for Telecom Ramblings about ransomware. The particular focus of the assignment was on how it affects Asia-Pacific countries, but the info is applicable everywhere: “What We Can Do About Ransomware – Today and Tomorrow.”


Open source is eating carrier OSS and BSS stacks, and that’s a good thing

Forget vendor lock-in: Carrier operation support systems (OSS) and business support systems (BSS) are going open source. And so are many of the other parts of the software stack that drive the end-to-end services within and between carrier networks.

That’s the message from TM Forum Live, one of the most important conferences for the telecommunications carrier industry.

Held in Nice, France, from May 9-12, 2016, TM Forum Live is produced by TM Forum, a key organization in the carrier universe.

TM Forum works closely with other industry groups, like the MEF, OpenDaylight and OPNFV. I am impressed by how so many open-source projects, standards-defining bodies and vendor consortia are collaborating in great detail to improve interoperability at many levels. The key to making that work: open source.

You can read more about open source and collaboration between these organizations in my NetworkWorld column, “Open source networking: The time is now.”

While I’m talking about TM Forum Live, let me give a public shout-out to:

Pipeline Magazine – this is the best publication, bar none, for the OSS, BSS, digital transformation and telecommunications service provider space. At TM Forum Live, I attended their annual Innovation Awards, which is the best-prepared, best-vetted awards program I’ve ever seen.

Netcracker Technology — arguably the top vendor in providing software tools for telecommunications and cable companies. They are leading the charge for the agile reinvention of a traditionally slow-moving industry. I’d like to thank them for hosting a delicious press-and-analyst dinner at the historic Hotel Negresco – wow.

Looking forward to next year’s TM Forum Live, May 15-18, 2017.