The newest issue of the second-best software development publication is out – and it’s a doozy. You’ll definitely want to read the July/August 2016 issue of Java Magazine.

(The #1 publication in this space is my own Software Development Times. Yeah, SD Times rules.)

Here is how Andrew Binstock, editor-in-chief of Java Magazine, describes the latest issue:

…in which we look at enterprise Java – not so much at Java EE as a platform, but at individual services that can be useful as part of a larger solution. For example, we examine JSON-P, the two core Java libraries for parsing JSON data; JavaMail, the standalone library for sending and receiving email messages; and JASPIC, which is a custom way to handle security, often used with containers. For Java EE fans, one of the leaders of the JSF team discusses in considerable detail the changes being delivered in the upcoming JSF 2.3 release.

We also show off JShell from Java 9, which is an interactive shell (or REPL) useful for testing Java code snippets. It will surely become one of the most used features of the new language release, especially for testing code interactively without having to set up and run an entire project.

And we continue our series on JVM languages with JRuby, the JVM implementation of the Ruby scripting language. The article’s author, Charlie Nutter, who implemented most of the language, discusses not only the benefits of JRuby but how it became one of the fastest implementations of Ruby.

For new-to-intermediate programmers, we deliver more of our in-depth tutorials. Michael Kölling concludes his two-part series on generics by explaining the use of and logic behind wildcards. And a book excerpt on NIO.2 illustrates advanced uses of files, paths, and directories, including an example that demonstrates how to monitor a directory for changes to its files.

In addition, we have our usual code quiz with its customary detailed solutions, a book review of a new text on writing maintainable code, an editorial about some of the challenges of writing code using only small classes, and the overview of a Java Enhancement Proposal (JEP) for a Java linker. A linker in Java? Have a look.
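About that NIO.2 excerpt: the directory-monitoring example is built on the WatchService API that arrived in Java 7. The book has its own listing; here is a minimal independent sketch of the same idea (hypothetical code, not the excerpt’s):

```java
import java.nio.file.*;

public class DirWatcher {
    public static void main(String[] args) throws Exception {
        Path dir = Paths.get(args.length > 0 ? args[0] : ".");
        WatchService watcher = FileSystems.getDefault().newWatchService();
        dir.register(watcher,
                StandardWatchEventKinds.ENTRY_CREATE,
                StandardWatchEventKinds.ENTRY_MODIFY,
                StandardWatchEventKinds.ENTRY_DELETE);
        while (true) {
            WatchKey key = watcher.take();           // blocks until something changes
            for (WatchEvent<?> event : key.pollEvents()) {
                System.out.println(event.kind().name() + ": " + event.context());
            }
            if (!key.reset()) break;                 // directory no longer accessible
        }
    }
}
```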

The story I particularly recommend is “Using the Java APIs for JSON processing.” David Delabassée covers the Java API for JavaScript Object Notation Processing (JSR-353) and its two parts, one of which is the high-level object model API, and the other the lower-level streaming API.
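To see what those two parts look like in practice, here is a minimal hypothetical sketch against the javax.json API; the data and class name are mine, not the article’s:

```java
import javax.json.Json;
import javax.json.JsonObject;
import javax.json.stream.JsonParser;
import java.io.StringReader;

public class JsonpDemo {
    public static void main(String[] args) {
        String json = "{\"name\":\"Java Magazine\",\"issue\":\"July/August 2016\"}";

        // Object model API: read the whole document into an in-memory tree.
        JsonObject obj = Json.createReader(new StringReader(json)).readObject();
        System.out.println(obj.getString("name"));

        // Streaming API: pull parser events one at a time, no tree in memory.
        JsonParser parser = Json.createParser(new StringReader(json));
        while (parser.hasNext()) {
            JsonParser.Event event = parser.next();
            if (event == JsonParser.Event.KEY_NAME) {
                System.out.println("key: " + parser.getString());
            }
        }
    }
}
```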

It’s a solid issue. Read it – and subscribe, it’s free!
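And if you are curious what JShell feels like, here is a short hypothetical session; the prompt and output formatting may vary across Java 9 builds:

```
$ jshell
jshell> int square(int x) { return x * x; }
|  created method square(int)

jshell> square(12)
$2 ==> 144

jshell> /exit
|  Goodbye
```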

Here’s a popular article that I wrote on email security for Sophos’ “Naked Security” blog.

“5 things you should know about email unsubscribe links before you click” starts with:

We all get emails we don’t want, and cleaning them up can be as easy as clicking ‘unsubscribe’ at the bottom of the email. However, some of those handy little links can cause more trouble than they solve. You may end up giving the sender a lot of information about you, or even an opportunity to infect you with malware.

Read the whole article here.

News websites are an irresistible target for hackers because they are so popular. Why? Because they are trusted brands, and because — by their very nature — they contain many external links and use lots of outside content providers and analytics/tracking services. It doesn’t take much to corrupt one of those websites, or one of the myriad partner sites they rely upon, like ad networks, content feeds or behavioral trackers.

Malware injected into any well-trafficked news website could potentially infect tremendous numbers of people with ransomware, keyloggers, zombie code, or worse. Alarmist? Perhaps, but with good reason. News websites, both traditional media (like the Chicago Tribune and the BBC) and new-media platforms (such as BuzzFeed or Business Insider), attract a huge number of visitors, especially when there is a breaking news story of great interest, like a natural disaster, political event or celebrity shenanigans.

Publishing companies are not technology companies. They are content providers who do their honest best to offer a secure experience, but can’t be responsible for external links. In fact, many say so right in their terms of use statements or privacy policies. What they can be responsible for are the third-party networks that provide content or services to their platforms, but in reality, the search for profits and/or a competitive advantage outweighs any other considerations. And of course, their platforms can be hacked as well.

According to a BBC story, news sites in Russia, including the Moscow Echo Radio Station, the opposition newspaper New Times, and the Kommersant business newspaper, were hacked back in March 2012. In November 2014, the Syrian Electronic Army claimed to have hacked news sites, including Canada’s CBC News.

Also in November 2014, one of the U.K.’s most popular sites, The Telegraph, tweeted, “A part of our website run by a third-party was compromised earlier today. We’ve removed the component. No Telegraph user data was affected.”

A year earlier, in January 2013, the New York Times self-reported, “Hackers in China Attacked The Times for Last 4 Months.” The story said that, “The attackers first installed malware — malicious software — that enabled them to gain entry to any computer on The Times’s network. The malware was identified by computer security experts as a specific strain associated with computer attacks originating in China.”

Regional news outlets can also be targets. On September 18, 2015, CBS Local in San Francisco reported, “Hackers took control of the five news websites of Palo Alto-based Embarcadero Media Group on Thursday night… The websites of Palo Alto Weekly, The Almanac, Mountain View Voice and Pleasanton Weekly were all reportedly attacked at about 10:30 p.m. Thursday.”

I talked recently with Jason Steer of Menlo Security, a security company based in Menlo Park, Calif. He put it very clearly:

You are taking active code from a source you didn’t request, and you are running it inside your PC and your network, without any inspection whatsoever. Because of the high volumes of users, it only takes a small number of successes to make the hacking worthwhile. Antivirus can’t really help here, either consumer or enterprise. Antivirus may not detect ransomware being installed from a site you visit, or malicious activity from a bad advertisement or bad JavaScript.

Jason pointed me to his blog post from November 12, 2015, “Top 50 UK Website Security Report.” His post says, in part,

Across the top 50 sites, a number of important findings were made:

• On average, when visiting a top 50 U.K. website, your browser will execute 19 scripts

• The top UK website executed 125 unique scripts when requested

His blog continued with a particularly scary observation:

15 of the top 50 sites (i.e. 30 percent) were running vulnerable versions of web-server code at time of testing. Microsoft IIS version 7.5 was the most prominent vulnerable version reported with known software vulnerabilities going back more than five years.

How many scripts are running on your browser from how many external servers? According to Jason’s research, if you visit the BBC website, your browser might be running 92 scripts pushed to it from 11 different servers. The Daily Mail? 127 scripts from 35 servers. The Financial Times? 199 scripts from 31 servers. The New Yorker? 113 scripts from 33 sites. The Economist? 185 scripts from 46 sites. The New York Times? 76 scripts from 29 servers. And Forbes, 100 scripts from 49 servers.
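You can run a crude version of this census yourself. Here’s a hypothetical Java sketch using the open-source jsoup library to count external script tags in a page’s static markup and the distinct hosts serving them; note that scripts injected dynamically at runtime, often the majority, won’t show up in a static count:

```java
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

import java.net.URI;
import java.util.HashSet;
import java.util.Set;

public class ScriptCensus {
    public static void main(String[] args) throws Exception {
        // Fetch and parse the page's static HTML (no JavaScript execution).
        Document page = Jsoup.connect("https://www.example.com/").get();

        Set<String> hosts = new HashSet<>();
        int scripts = 0;
        for (Element script : page.select("script[src]")) {
            scripts++;
            String src = script.absUrl("src"); // resolves relative URLs
            if (!src.isEmpty()) {
                hosts.add(URI.create(src).getHost());
            }
        }
        System.out.println(scripts + " external scripts served from "
                + hosts.size() + " distinct hosts");
    }
}
```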

Most of those servers and scripts are benign. But if they’re not, they’re not. The headline on Ars Technica on March 15, 2016, says it all: “Big-name sites hit by rash of malicious ads spreading crypto ransomware.” The story begins,

Mainstream websites, including those published by The New York Times, the BBC, MSN, and AOL, are falling victim to a new rash of malicious ads that attempt to surreptitiously install crypto ransomware and other malware on the computers of unsuspecting visitors, security firms warned.

The tainted ads may have exposed tens of thousands of people over the past 24 hours alone, according to a blog post published Monday by Trend Micro. The new campaign started last week when “Angler,” a toolkit that sells exploits for Adobe Flash, Microsoft Silverlight, and other widely used Internet software, started pushing laced banner ads through a compromised ad network.

According to a separate blog post from Trustwave’s SpiderLabs group, one JSON-based file being served in the ads has more than 12,000 lines of heavily obfuscated code. When researchers deciphered the code, they discovered it enumerated a long list of security products and tools it avoided in an attempt to remain undetected.

Let me share my favorite news website hack story, because of its sheer audacity. According to Jason’s blog, ad delivery systems can be turned into malware delivery systems, and nobody might ever know:

If we take one such example in March 2016, one attacker waited patiently for the domain ‘brentsmedia[.]com’ to expire, registered in Utah, USA, a known ad network content provider. The domain in question had expired ownership for 66 days, was then taken over by an attacker in Russia (Pavel G Astahov) and 1 day later was serving up malicious ads to visitors of sites including the BBC, AOL & New York Times. No-one told any of these popular websites until the malicious ads had already appeared.

Jason recently published an article on this subject in SC Magazine, “Brexit leads to pageviews — pageviews lead to malware.” Check it out. And be aware that when you visit a trusted news website, you have no idea what code is being executed on your computer, what that code does, and who wrote that code.

Thank you, NetGear, for taking care of your valued customers. On July 1, the company announced that it would be shutting down the proprietary back-end cloud services required for its VueZone cameras to work – turning them into expensive camera-shaped paperweights. See “Throwing our IoT investment in the trash thanks to NetGear.”

The next day, I was contacted by the company’s global communications manager. He defended the policy, arguing that NetGear was not only giving 18 months’ notice of the shutdown, but they are “doing our best to help VueZone customers migrate to the Arlo platform by offering significant discounts, exclusive to our VueZone customers.” See “A response from NetGear regarding the VueZone IoT trashcan story.”

And now, the company has done a 180° turn. NetGear will not turn off the service, at least not at this time. Well done. Here’s the email that came a few minutes ago. The good news for VueZone customers is that they can keep using their cameras. On the other hand, let’s not party too heartily. The danger posed by proprietary cloud services driving IoT devices remains. When the vendor decides to turn one off, all you have is recycle-ware and, potentially, one heck of a migration issue.

Subject: VueZone Services to Continue Beyond January 1, 2018

Dear valued VueZone customer,

On July 1, 2016, NETGEAR announced the planned discontinuation of services for the VueZone video monitoring product line, which was scheduled to begin as of January 1, 2018.

Since the announcement, we have received overwhelming feedback from our VueZone customers expressing a desire for continued services and support for the VueZone camera system. We have heard your passionate response and have decided to extend service for the VueZone product line. Although NETGEAR no longer manufactures or sells VueZone hardware, NETGEAR will continue to support existing VueZone customers beyond January 1, 2018.

We truly appreciate the loyalty of our customers and we will continue our commitment of delivering the highest quality and most innovative solutions for consumers and businesses. Thank you for choosing us.

Best regards,

The NETGEAR VueZone Team

July 19, 2016

In the “you learn something every day” department: Discovered today that there’s a plaque at Stanford honoring the birth of the Internet. The plaque was dedicated on July 28, 2005, and is in the Gates Computer Science Building.

You can read all about the plaque, and see it more clearly, on J. Noel Chiappa’s website. His name is on the plaque.

Here’s what the plaque says. Must check it out during my next trip to Palo Alto.


BIRTH OF THE INTERNET

THE ARCHITECTURE OF THE INTERNET AND THE DESIGN OF THE CORE NETWORKING PROTOCOL TCP (WHICH LATER BECAME TCP/IP) WERE CONCEIVED BY VINTON G. CERF AND ROBERT E. KAHN DURING 1973 WHILE CERF WAS AT STANFORD’S DIGITAL SYSTEMS LABORATORY AND KAHN WAS AT ARPA (LATER DARPA). IN THE SUMMER OF 1976, CERF LEFT STANFORD TO MANAGE THE PROGRAM WITH KAHN AT ARPA.

THEIR WORK BECAME KNOWN IN SEPTEMBER 1973 AT A NETWORKING CONFERENCE IN ENGLAND. CERF AND KAHN’S SEMINAL PAPER WAS PUBLISHED IN MAY 1974.

CERF, YOGEN K. DALAL, AND CARL SUNSHINE WROTE THE FIRST FULL TCP SPECIFICATION IN DECEMBER 1974. WITH THE SUPPORT OF DARPA, EARLY IMPLEMENTATIONS OF TCP (AND IP LATER) WERE TESTED BY BOLT BERANEK AND NEWMAN (BBN), STANFORD, AND UNIVERSITY COLLEGE LONDON DURING 1975.

BBN BUILT THE FIRST INTERNET GATEWAY, NOW KNOWN AS A ROUTER, TO LINK NETWORKS TOGETHER. IN SUBSEQUENT YEARS, RESEARCHERS AT MIT AND USC-ISI, AMONG MANY OTHERS, PLAYED KEY ROLES IN THE DEVELOPMENT OF THE SET OF INTERNET PROTOCOLS.

KEY STANFORD RESEARCH ASSOCIATES AND FOREIGN VISITORS

  • VINTON CERF
  • DAG BELSNES
  • RONALD CRANE
  • BOB METCALFE
  • YOGEN DALAL
  • JUDITH ESTRIN
  • RICHARD KARP
  • GERARD LE LANN
  • JAMES MATHIS
  • DARRYL RUBIN
  • JOHN SHOCH
  • CARL SUNSHINE
  • KUNINOBU TANNO

DARPA

  • ROBERT KAHN

COLLABORATING GROUPS

BOLT BERANEK AND NEWMAN

  • WILLIAM PLUMMER
  • GINNY STRAZISAR
  • RAY TOMLINSON

MIT

  • NOEL CHIAPPA
  • DAVID CLARK
  • STEPHEN KENT
  • DAVID P. REED

NDRE

  • YNGVAR LUNDH
  • PAAL SPILLING

UNIVERSITY COLLEGE LONDON

  • FRANK DEIGNAN
  • MARTINE GALLAND
  • PETER HIGGINSON
  • ANDREW HINCHLEY
  • PETER KIRSTEIN
  • ADRIAN STOKES

USC-ISI

  • ROBERT BRADEN
  • DANNY COHEN
  • DANIEL LYNCH
  • JON POSTEL

ULTIMATELY, THOUSANDS IF NOT TENS TO HUNDREDS OF THOUSANDS HAVE CONTRIBUTED THEIR EXPERTISE TO THE EVOLUTION OF THE INTERNET.

DEDICATED JULY 28, 2005

There are standards for everything, it seems. And those of us who work on Internet things are often amused (or bemused) by what comes out of the Internet Engineering Task Force (IETF). An oldie but a goodie is a document from 1999, RFC-2549, “IP over Avian Carriers with Quality of Service.”

An RFC, or Request for Comment, is what the IETF calls a standards document. (And yes, I’m browsing my favorite IETF pages during a break from doing “real” work. It’s that kind of day.)

RFC-2549 updates RFC-1149, “A Standard for the Transmission of IP Datagrams on Avian Carriers.” That older standard did not address Quality of Service. I’ll leave it for you to enjoy both those documents, but let me share this part of RFC-2549:

Overview and Rational

The following quality of service levels are available: Concorde, First, Business, and Coach. Concorde class offers expedited data delivery. One major benefit to using Avian Carriers is that this is the only networking technology that earns frequent flyer miles, plus the Concorde and First classes of service earn 50% bonus miles per packet. Ostriches are an alternate carrier that have much greater bulk transfer capability but provide slower delivery, and require the use of bridges between domains.

The service level is indicated on a per-carrier basis by bar-code markings on the wing. One implementation strategy is for a bar-code reader to scan each carrier as it enters the router and then enqueue it in the proper queue, gated to prevent exit until the proper time. The carriers may sleep while enqueued.

Most years, the IETF publishes so-called April Fool’s RFCs. The best list of them I’ve seen is on Wikipedia. If you’re looking to take a work break, give ’em a read. Many of them are quite clever! However, I still like RFC-2549 the best.

A prized part of my library is “The Complete April Fools’ Day RFCs,” compiled by Thomas Limoncelli and Peter Salus. Sadly this collection stops at 2007. Still, it’s a great coffee table book to leave lying around for when people like Bob Metcalfe, Tim Berners-Lee or Al Gore come by to visit.

Thank you, NetGear, for the response to my July 11 opinion essay for NetworkWorld, “Throwing our IoT investment in the trash thanks to NetGear.” In that story, I used the example of our soon-to-be-obsolete VueZone home video monitoring system: At the end of 2017, NetGear is turning off the back-end servers that make VueZone work – and so all the hardware will become fancy camera-shaped paperweights.

The broader message of the story is that every IoT device tied into a proprietary back-end service will be turned into recycleware if (or when) the service provider chooses to turn it off. My friend Jason Perlow picked up this theme in his story published on July 12 on ZDNet, “All your IoT devices are doomed,” and included a nice link to my NetworkWorld story. As Jason wrote,

First, it was Aether’s smart speaker, the Cone. Then, it was the Revolv smart hub. Now, it appears NetGear’s connected home wireless security cameras, VueZone, is next on the list.

I’m sure I’ve left out more than a few others that have slipped under the radar. It seems like every month an Internet of Things (IoT) device becomes abandonware after its cloud service is discontinued.

Many of these devices once disconnected from the cloud become useless. They can’t be remotely managed, and some of them stop functioning as standalone (or were never capable of it in the first place). Are these products going end-of-life too soon? What are we to do about this endless pile of e-waste that seems to be the inevitable casualty of the connected-device age?

I would like to publicly acknowledge NetGear for sending a quick response to my story. Apparently — and contrary to what I wrote — the company did offer a migration path for existing VueZone customers. I can’t find the message anywhere, but can’t ignore the possibility that it was sucked into the spamverse.

Here is the full response from Nathan Papadopulos, Global Communications & Strategic Marketing for NetGear:

Hello Alan,

I am writing in response to your recent article about disposing of IoT products. As you may know, the VueZone product line came to Netgear as part of our acquisition of Avaak, Inc. back in 2012, and is the predecessor of the current Arlo security system. Although we wanted to avoid interruptions of the VueZone services as much as possible, we are now faced with the need to discontinue support for the camera line. VueZone was built on technologies which are now outdated and a platform which is not scalable. Netgear has since shifted our resources to building better, more robust products which are the Arlo system of security cameras. Netgear is doing our best to help VueZone customers migrate to the Arlo platform by offering significant discounts, exclusive to our VueZone customers.

1. On July 1, 2016, Netgear officially announced the discontinuation of VueZone services to VueZone customers. Netgear has sent out an email notification to the entire VueZone customer base with the content in the “Official End-of-Services Announcement.” Netgear is providing the VueZone customers with an 18-month notice, which means that the actual effective date of this discontinuation of services will be on January 1, 2018.

2. Between July 2 and July 6, 26,000+ customers who currently have an active VueZone base station have received an email with an offer to purchase an Arlo 4-camera kit. There will be two options for them to choose from:

a. Standard Arlo 4-camera kit for $299.99

b. Refurbished Arlo 4-camera kit for $149.99

Both refurbished and new Arlo systems come with the NETGEAR limited 1-year hardware warranty. The promotion will run until the end of July 31, 2016.

It appears NetGear is trying to do the right thing, though they lose points for offering the discounted migration path for less than one month. Still, the fact remains that obsolescence of service-dependent IoT devices is a big problem. Some costly devices will cease functioning if the service goes down; others will lose significant functionality.

And thank you, Jason, for the new word: Abandonware.

I wrote five contributions for an ebook from AMD Developer Central — and forgot entirely about it! The book, called “Surviving and Thriving in a Multi-Core World: Taking Advantage of Threads and Cores on AMD64,” popped up in this morning’s Google Alerts report. I have no idea why!

Here are the pieces that I wrote for the book, published in 2006. Darn, they still read well! Other contributors include my friends Anderson Bailey, Alexa Weber Morales and Larry O’Brien.

  • Driving in the Fast Lane: Multi-Core Computing for Programmers, Part 1 (page 5)
  • Driving in the Fast Lane: Multi-Core Computing for Programmers, Part 2 (page 8)
  • Coarse-Grained Vs. Fine-Grained Threading for Native Applications, Part 1 (p. 37)
  • Coarse-Grained Vs. Fine-Grained Threading for Native Applications, Part 2 (p. 40)
  • Device Driver & BIOS Development for AMD Systems (p. 87)

I am still obsessed with questionable automotive analogies. The first article begins with:

The main road near my house, called Skyline Drive, drives me nuts. For several miles, it’s a quasi-limited access highway. But for some inexplicable reason, it keeps alternating between one and two lanes in each direction. In the two-lane part, traffic moves along swiftly, even during rush hour. In the one-lane part, the traffic merges back together, and everything crawls to a standstill. When the next two-lane part appears, things speed up again.

Two lanes are better than one — and not just because they can accommodate twice as many cars. What makes the two-lane section better is that people can overtake. In the one-lane portion (which has a double-yellow line, so there’s no passing), traffic is limited to the slowest truck’s speed, or to little-old-man-peering-over-the-steering-wheel-of-his-Dodge-Dart speed. Wake me when we get there. But in the two-lane section, the traffic can sort itself out. Trucks move to the right, cars pass on the left. Police and other priority traffic weave in and out, using both lanes depending on which has more capacity at any particular moment. Delivery services with a convoy of trucks will exploit both lanes to improve throughput. The entire system becomes more efficient, and net flow of cars through those two-lane sections is considerably higher.

Okay, you’ve figured out that this is all about dual-core and multi-core computing, where cars are analogous to application threads, and the lanes are analogous to processor cores.

I’ll have to admit that my analogy is somewhat simplistic, and purists will say that it’s flawed, because an operating system has more flexibility to schedule tasks in a single-core environment under a preemptive multiprocessing environment. But that flexibility comes at a cost. Yes, if I were really modeling a microprocessor using Skyline Drive, cars would be able to pass each other in the single-lane section, but only if the car in front were to pull over and stop.

Okay, enough about cars. Let’s talk about dual-core and multi-core systems, why businesses are interested in buying them, and what implications all that should have for software developers like us.
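To make the analogy concrete in code (not an excerpt from the book, just a hypothetical sketch), here’s the road simulated with a Java thread pool; the pool size plays the role of the number of lanes:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class LaneDemo {
    // Each "car" is a small unit of CPU-bound work; lanes = pool threads.
    static void run(int lanes, int cars) throws InterruptedException {
        ExecutorService road = Executors.newFixedThreadPool(lanes);
        long start = System.nanoTime();
        for (int i = 0; i < cars; i++) {
            road.submit(() -> {
                long x = 0;                          // a car traversing the segment
                for (int j = 0; j < 50_000_000; j++) x += j;
                return x;
            });
        }
        road.shutdown();
        road.awaitTermination(10, TimeUnit.MINUTES);
        System.out.printf("%d lane(s): %d ms%n", lanes,
                (System.nanoTime() - start) / 1_000_000);
    }

    public static void main(String[] args) throws InterruptedException {
        run(1, 8);  // the one-lane section of Skyline Drive
        run(4, 8);  // the multi-lane section, assuming four cores
    }
}
```

On a multi-core machine, the four-lane run should finish in a fraction of the one-lane time; on a single core, the two runs converge, which is exactly the point of the analogy.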

Download and enjoy the book – it’s not gated and entirely free.

Excellent story about SharePoint in Computerworld this week. It gives encouragement to those who prefer to run SharePoint in their own data centers (on-premises), rather than in the cloud. In “The Future of SharePoint,” Brian Alderman writes,

In case you missed it, on May 4 Microsoft made it loud and clear it has resuscitated SharePoint On-Premises and there will be future versions, even beyond SharePoint Server 2016. However, by making you aware of the scenarios most appropriate for On-Premises and the scenarios where you can benefit from SharePoint Online, Microsoft is going to remain adamant about allowing you to create the perfect SharePoint hybrid deployment.

The future of SharePoint begins with SharePoint Online, meaning changes, features and functionality will first be deployed to SharePoint Online, and then rolled out to your SharePoint Server On-Premises deployment. This approach isn’t much of a surprise, being that SharePoint Server 2016 On-Premises was “engineered” from SharePoint Online.

Brian was writing about a post on the Microsoft SharePoint blog, one I had overlooked (else I’d have written about it back in May). In the post, “SharePoint Server 2016—your foundation for the future,” the SharePoint Team says,

We remain committed to our on-premises customers and recognize the need to modernize experiences, patterns and practices in SharePoint Server. While our innovation will be delivered to Office 365 first, we will provide many of the new experiences and frameworks to SharePoint Server 2016 customers with Software Assurance through Feature Packs. This means you won’t have to wait for the next version of SharePoint Server to take advantage of our cloud-born innovation in your datacenter.

The first Feature Pack will be delivered through our public update channel starting in calendar year 2017, and customers will have control over which features are enabled in their on-premises farms. We will provide more detail about our plans for Feature Packs in coming months.

In addition, we will deliver a set of capabilities for SharePoint Server 2016 that address the unique needs of on-premises customers.

Now, make no mistake: The emphasis at Microsoft is squarely on Office 365 and SharePoint Online. Or, as the company says, SharePoint Server is “powering your journey to the mobile-first, cloud-first world.” However, it is clear that SharePoint On-Premises will continue for some period of time. Later in the blog post, in the FAQ, this is stated quite definitively:

Is SharePoint Server 2016 the last server release?

No, we remain committed to our customers’ on-premises deployments and do not consider SharePoint Server 2016 to be the last on-premises server release.

The best place to learn about SharePoint 2016 is at BZ Media’s SPTechCon, returning to San Francisco from Dec. 5-8. (I am the Z of BZ Media.) SPTechCon, the SharePoint Technology Conference, offers more than 80 technical classes and tutorials — presented by the most knowledgeable instructors working in SharePoint today — to help you improve your skills and broaden your knowledge of Microsoft’s collaboration and productivity software.

SPTechCon will feature the first conference sessions on SharePoint 2016. Be there! Learn more at http://www.sptechcon.com.

If you are asked to submit a photograph, screen shot or logo to a publication or website, there’s the right way and the less-right way. Here are some suggestions that I wrote several years ago for BZ Media for use in lots of situations — in SD Times, for conferences, and so on.

While they were written for the days of print publications, these are still good guidelines for websites, blogs and other digital publishing media.

General Suggestions

  • Photos need to be high resolution. Bitmaps that would look great on a Web page will look dreadful in print. The minimum size for a bitmap file should be two inches across by three inches high, at a resolution of 300 dpi — that is, 600×900 pixels, at the least. A smaller photograph may be usable, but frankly, it will probably not be.
  • Photos need to be in a high-color format. The best formats are high-resolution JPEG files (.jpg) and TIFF (.tif) files. Or camera RAW if you can. Avoid GIF files (.gif) because they are only 256 colors. However, in case of doubt, send the file in and hope for the best.
  • Photos should be in color. A color photograph will look better than a black-and-white photograph — but if all you have is B&W, send it in. As far as electronic files go, a 256-color image doesn’t reproduce well in print, so please use 24-bit or higher color depth. If the website wants B&W, they can convert a color image easily.
  • Don’t edit or alter the photograph. Please don’t crop it or modify it using Photoshop or anything else, unless requested to do so. Just send the original image, and let the art director or photo editor handle the cropping and other post-processing.
  • Do not paste the image into a Word or PowerPoint document. Send the image as a separate file.

Logos

  • Send logos as vector-based EPS files (such as an Adobe Illustrator file with fonts converted to outlines) if possible. If a vector-based EPS file is not available, send a 300 dpi TIFF, JPEG or Photoshop EPS file (i.e., one that’s at least two inches long). Web-resolution logos are hard to resize, and often aren’t usable.

Screen Shots

  • Screen shots should be the native bitmap file or a lossless format. A native bitmapped screen capture from Windows will be a huge .BMP file. This may be converted to a compressed TIFF file, or compressed to a .ZIP file for emailing. PNG is also a good lossless format and is quite acceptable.
  • Do not convert a screen capture to JPEG or GIF. JPEGs in particular make terrible screen shots due to the compression algorithms; solid color areas may become splotchy, and text can become fuzzy. Screen captures on other platforms should also be lossless files, typically in TIFF or PNG. A minimal conversion sketch follows this list.
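If you have a BMP capture in hand and a Java runtime nearby, a lossless conversion is a few lines with the standard ImageIO classes (the file names here are placeholders):

```java
import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.io.File;

public class ToPng {
    public static void main(String[] args) throws Exception {
        // Read the native BMP screen capture and re-save it losslessly as PNG.
        BufferedImage capture = ImageIO.read(new File("screenshot.bmp"));
        ImageIO.write(capture, "png", new File("screenshot.png"));
    }
}
```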

Hints for better-looking portraits

  • Strive for a professional appearance. The biggest element is a clean, uncluttered background. You may also wish to have the subject wear business casual or formal clothing, such as a shirt with a collar instead of a T-shirt. If you don’t have a photo like that, send what you have.
  • Side or front natural light is the best and most flattering. Taking pictures outdoors with overcast skies is best; a picture outdoors on a sunny day is also good, but direct overhead sunlight (near noon) is too harsh. If possible, keep away from indoor lighting, especially ceiling or fluorescent lights. Avoid unpleasant backlighting by making sure the subject isn’t standing between the camera and a window or lamp.
  • If you must use electronic flash… Reduce red-eye by asking the subject to look at the photographer, not at the camera. (Off-camera flash is better than on-camera flash.) Eliminate harsh and unpleasant shadows by ensuring that the subject isn’t standing or sitting within three feet of a wall, bookcase or other background objects. Another problem is white-out: If the camera is too close to the subject, the picture will be too bright and have too much contrast.
  • Maintain at least six feet separation between the camera and the subject, and three feet (or more) from the background. If the subject is closer than six feet to the camera, his/her facial features will be distorted, and the results will be unattractive. For best results, hold the camera more than six feet from the subject. It’s better to be farther away and use the camera’s optical zoom, rather than to shoot a close-up from a few feet away.
  • Focus on his/her eyes. If the eyes are sharp, the photo is probably okay. If the eyes aren’t sharp (but let’s say the nose or ears are), the photo looks terrible. That’s because people look at the eyes first.

Cloud services crash. Of course, non-cloud services crash too — a server in your data center can go down. At least there you can do something, or if it’s a critical system, you can plan with redundancies and failover.

Not so much with cloud services, as this morning’s failure of Google Calendar clearly shows. The photo shows Google’s status dashboard as of 6:53am on Thursday, June 30.

I wrote about crashes at Amazon Web Services and Apple’s MobileMe back in 2008 in “When the cloud was good, it was very good. But when it was bad it was rotten.”

More recently, in 2011, I covered another AWS failure in “Skynet didn’t take down Amazon Web Services.”

Overall, cloud services are quite reliable. But they are not perfect, and it’s a mistake to think that just because they are offered by huge corporations, they will be error-free and offer 100% uptime. Be sure to work that into your plans, especially if you and your employees rely upon public cloud services to get your job done, or if your customers interact with you through cloud services.

The MEF recently conducted its second LSO Hackathon at a Rome event called Euro16. You can read my story about it here in DiarioTi: LSO Hackathons Bring Together Open Standards, Open Source.

Alas, my coding skills are too rusty for a Hackathon, unless the objective is to develop a fully buzzword compliant implementation of “Hello World.” Fortunately, there are others with better skills, as well as a broader understanding of today’s toughest problems.

Many Hackathons are thinly veiled marketing exercises by companies, designed to find ways to get programmers hooked on their tools, platforms, APIs, etc. Not all! One of the most interesting Hackathons is from the MEF, an industry group that drives communications interoperability. As a standards defining organization (SDO), the MEF wants to help carriers and equipment vendors design products/services ready for the next generation of connectivity. That means building on a foundation of SDN (software defined networks), NFV (network functions virtualization), LSO (lifecycle service orchestration) and CE2.0 (Carrier Ethernet 2.0).

To make all this happen:

  • What the MEF does: Create open standards for architectures and specifications.
  • What vendors, carriers and open source projects do: Write software to those specifications.
  • What the Hackathon does: Give everyone a chance to work together, make sure the code is truly interoperable, and find areas where the written specs might be ambiguous.

Thus, the MEF LSO Hackathons. They bring together a wide swath of the industry to move beyond the standards documents and actually write and test code that implements those specs.

As mentioned above, the MEF just completed its second Hackathon at Euro16. The first LSO Hackathon was at last year’s MEF GEN15 annual conference in Dallas. Here’s my story about it in Telecom Ramblings: The MEF LSO Hackathon: Building Community, Swatting Bugs, Writing Code.

The third LSO Hackathon will be at this year’s MEF annual conference, MEF16, in Baltimore, Nov. 7-10. I will be there as an observer – alas, without the up-to-date, practical skills to be a coding participant.

I can hear the protesters. “What do we want? Faster automated emails! When do we want them? In under 20 nanoseconds!”

Some things have to be snappy. A Web page must load fast, or your customers will click away. Moving the mouse has to move the cursor without pauses or hesitations. Streaming video should buffer rarely and unobtrusively; it’s almost always better to temporarily degrade the video quality than to pause the playback. And of course, for a touch interface to work well, it must be snappy, which Apple has learned with iOS, and which Google learned with Project Butter.

The same is true with automated emails. They should be generated and transmitted immediately — that is, in under a minute.

I recently went to book a night’s stay at a Days Inn, a part of the Wyndham Hotel Group, and so I had to log into my Wyndham account. Bad news: I couldn’t remember the password. So, I used the password retrieval system, giving my account number and info. The website said to check my e-mail for the reset link. Kudos: That’s a lot better than saying “We’ll mail you your password,” and then sending it in plain text!

So, I flipped over to my e-mail client. Checked for new mail. Nothing. Checked again. Nothing. Checked again. Nothing. Checked the spam folder. Nothing. Checked for new mail. Nothing. Checked again. Nothing.

I submitted the request for the password reset at 9:15 a.m. The link appeared in my inbox at 10:08 a.m. By that time, I had already booked the stay with Best Western. Sorry, Days Inn! You snooze, you lose.

What happened? The e-mail header didn’t show a transit delay, so we can’t blame the Internet. Rather, it took nearly an hour for the email to be uploaded from the originating server. This is terrible customer service, plain and simple.

It’s not merely Wyndham. When I purchase something from Amazon, the confirmation e-mail generally arrives in less than 30 seconds. When I purchase from Barnes & Noble, a confirmation e-mail can take an hour. The worst is Apple: Confirmations of purchases from the iTunes Store can take three days to appear. Three days!

It’s time to examine your policies for generating automated e-mails. You do have policies, right? I would suggest a delay of no more than one minute between when the user performs an action that generates an e-mail and when the message is delivered to the SMTP server.

Set the policy. Automated emails should go out in seconds — certainly in under one minute. Design for that and test for that. More importantly, audit the policy on a regular basis, and monitor actual performance. If password resets or order confirmations are taking 53 minutes to hit the Internet, you have a problem.
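The auditing piece doesn’t have to be elaborate. Here’s a rough Java sketch, using the JavaMail API against a saved raw message file, that measures the lag between when a message was generated (its Date header) and when the receiving server accepted it (the newest Received header); the class name and input path are hypothetical:

```java
import javax.mail.Session;
import javax.mail.internet.MailDateFormat;
import javax.mail.internet.MimeMessage;
import java.io.FileInputStream;
import java.util.Date;
import java.util.Properties;

public class MailLatency {
    public static void main(String[] args) throws Exception {
        // Parse a raw RFC 822 message saved to disk (e.g., reset-email.eml).
        Session session = Session.getDefaultInstance(new Properties());
        MimeMessage msg = new MimeMessage(session, new FileInputStream(args[0]));

        Date sent = msg.getSentDate();                   // the Date: header
        String[] received = msg.getHeader("Received");   // newest hop listed first
        // The timestamp follows the last semicolon of the Received header.
        String stamp = received[0].substring(received[0].lastIndexOf(';') + 1).trim();
        Date delivered = new MailDateFormat().parse(stamp);

        long lagSeconds = (delivered.getTime() - sent.getTime()) / 1000;
        System.out.println("Generation-to-delivery lag: " + lagSeconds + " seconds");
    }
}
```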

Despite some recent progress, women are still woefully underrepresented in technical fields such as software development. There are many academic programs to bring girls into STEM (science, technology, engineering and math) at various stages in their education, from grade school to high school to college. Corporations are trying hard.

It’s not enough. We all need to try harder.

On Oct. 11, 2016, we will celebrate Ada Lovelace Day, honoring the first computer programmer — male or female. Augusta Ada King-Noel, Countess of Lovelace, wrote the algorithms for Charles Babbage’s Analytical Engine in the mid-1800s.

According to the website Finding Ada, this date doesn’t represent her birthday, which is Dec. 10. Rather, they say, “The date is arbitrary, chosen in an attempt to make the day maximally convenient for the most number of people. We have tried to avoid major public holidays, school holidays, exam season, and times of the year when people might be hibernating.” I’d like to think that the scientifically minded Ada Lovelace would find this amusing.

There are great organizations focused on promoting women in technology, such as Women in Technology International (WITI) and the Anita Borg Institute. There are cool projects, like the Wiki Edit-a-Thon sponsored by Brown University, which seeks to correct the historic (and inaccurate) underrepresentation of female scientists in Wikipedia.

Those are good efforts. They still aren’t enough.

Are women good at STEM fields, including software development? Yes. But all too often, they are gender-stereotyped into non-coding parts of the field—when they are hired at all. And certainly the hyper-competitive environment in many tech teams, and the death-march culture, is not friendly to anyone (male or female) who wants to have a life outside the startup.

Let me share the Anita Borg Institute’s 10 best practices to foster retention of women in technical roles:

  • Collect, analyze and report retention data as it pertains to women in technical roles.
  • Formally train managers in best practices, and hold them accountable for retention.
  • Embed collaboration in the corporate culture to encourage diverse ideas.
  • Offer training programs that raise awareness of and counteract microinequities and unconscious biases.
  • Provide development and visibility opportunities to women that increase technical credibility.
  • Fund and support workshops and conferences that focus on career path experiences and challenges faced by women technologists.
  • Establish mentoring programs on technical and career development.
  • Sponsor employee resource groups for mutual support and networking.
  • Institute flexible work arrangements and tools that facilitate work/life integration.
  • Enact employee-leave policies, and provide services that support work/life integration.

Does your organization have a solid representation of women in technical jobs (not only in technical departments)? Are those women given equal pay for equal work? Are women provided with solid opportunities for professional growth and career advancement? Are you following any of the above best practices?

If so, that’s great news. I’d love to hear about it and help tell your story.

It’s not intellectual property. It’s not having code warriors who can turn pizza into algorithms. It’s not even having great angel investors. If you want a successful startup that’s going to keep you in the headlines for your technology and market prowess, you need a great Human Resources department.

Whether your organization has three employees, 30 or 300, it’s a company. That means a certain level of professionalism in administering it. Yes, tech companies love to be led by hotshot engineers who often brag about their inexperience as CEOs. Yes, those companies are often the darlings of the venture capital community. Yes, those CEOs get lots of visibility in the technology media, the financial media and most importantly, social media.

That is not enough. That’s explained very well in Claire Cain Miller’s essay in The New York Times, “Yes, Silicon Valley, Sometimes You Need More Bureaucracy.”

Miller focuses on the 2014 GitHub scandal, where a lack of professionalism in HR led to deep problems in hiring, management and culture.

“GitHub is not unusual. Tech startups with 100 or fewer employees have half as many personnel professionals as companies of the same size in other industries, according to data from PayScale, which makes compensation software and analyzed about 2,830 companies,” Miller writes.

Is HR something that’s simply soft and squishy, a distraction from the main business of cranking out code and generating viral marketing? No. It’s a core function of every business that’s large enough to have employees.

Miller cites a study that found that companies with personnel departments were nearly 40% less likely to fail than the norm, and nearly 40% more likely to go public. That 36-page study, “Organizational Blueprints for Success in High-Tech Startups,” from the University of California, Berkeley, was published in 2002, but provides some interesting food for thought.

The authors, James Baron and Michael Hannan, wrote,

It is by no means uncommon to see a founder spend more time and energy fretting about the scalability of the phone system or IT platform than about the scalability of the culture and practices for managing employees, even in cases where that same founder would declare with great passion and sincerity that ‘people are the ultimate source of competitive advantage in my business.’

The study continues,

Any plan for launching a new enterprise should include a road map for evolving the organizational structure and HR system, which parallels the timeline for financial, technological, and growth milestones. We have yet to meet an entrepreneur who told us, on reflection, he or she believes they spent too much time worrying about people issues in the early days of their venture.

What does that mean for you?

• If you are part of the leadership team of a startup or small company, look beyond the tech industry for best practices in human resources management. Just because other small tech firms gloss over HR doesn’t mean that you should. In fact, having better HR might be a better way to out-innovate your competitors.

• If you are looking at joining a startup or a small company, look at the HR department and the culture. If HR seems casual or ad hoc, and if everyone in the company looks the same, perhaps that’s a company not poised for long-term success. Look for a culture that cares about having a healthy and genuinely diverse workforce—and for policies that talk about ways to resolve problems.

Human resources are as important as technology and financial resources. Without the right leadership in all three areas, you’re in for a rough ride.

There are several types of dangers presented by a lost Bring Your Own Device (BYOD) smartphone or tablet. Many IT professionals and security specialists think only about some of them. They are all problematic. Does your company have policies about lost personal devices?

  • If you have those policies, what are they?
  • Does the employee know about those policies?
  • Does the employee know how to notify the correct people in case his or her device is lost?

Let’s say you have policies. Let’s say the employee calls the security office and says, “My personal phone is gone. I use it to access company resources, and I don’t think it was securely locked.” What happens?

Does the company have all the information necessary to take the proper actions, including the telephone number, carrier, manufacturer and model, serial number, and other characteristics? Who gets notified? How long do you wait before taking an irreversible action? Can the security desk respond in an effective way? Can it respond instantly, including nights, weekends and holidays?

If you don’t have those policies — with people and knowledge to make them effective — you’ve got a serious problem.

Read my latest story in NetworkWorld, “Dude, where’s my phone? BYOD means enterprise security exposure.” It discusses the four biggest obvious threats from a lost BYOD device, and what you can do to address those threats.

“Would you like amps with that?” Perhaps that’s the new side-dish question when ordering fast food. Yes, I’ll have three pieces of extra crispy chicken, potato wedges, cole slaw, unsweet iced tea and a cell-phone charging box.

News out of India is that KFC (which many of us grew up calling Kentucky Fried Chicken) has introduced the Watt-a-Box, which says on its side “Charge your phone while experiencing finger lickin’ good food.” (That last part may be debatable.)

According to the Times of India,

NEW DELHI: KFC garnered a lot of accolades for its recently launched 5-in-1 Meal Box. And the fast-food chain has now introduced an all new ‘gadgety’ variant of the same box.

The limited edition box comes with a built-in power bank. Dubbed as ‘Watt a Box,’ it lets you charge your smartphone as you go about enjoying your meal.

KFC has said that a few lucky customers at select KFC stores in Mumbai and Delhi will get a chance to have their 5-in-1 Meal served in ‘Watt a Box’. Along with this, users can also participate in an online contest on KFC India’s Facebook page and win more of these limited edition boxes.

We are lacking a number of details. Is the box’s charger removable and reusable, or is it a one-time-use thing? If so, what a waste of electronics and battery tech. What about disposing of or recycling the battery? And — eww — will everything get finger-lickin’ greasy?

The Watt-a-Box. Watt an idea.

This just in — literally, at 8:58am on June 21 — an $8.50 credit from Amazon, paid for by Apple. I am trying to restrain my excitement, but in reality, it’s nice to get a few bucks back.

This payout has been pending for a few months. Well, a few years. This is Apple’s second payout from the antitrust settlement; the first was in 2014. Read “Apple’s $400M E-Book Payout: How Much You’ll Get and When” by Jeff John Roberts in Forbes, which explains,

The payments will mark the end of a long, strange antitrust story in which Apple and publishers tried to challenge the industry powerhouse, Amazon, with a new pricing system. Ironically, Amazon is still the dominant player in e-books today while Apple barely matters. Now Apple will pay $400 million to consumers—most of which will be spent at Amazon. Go figure.

I agree with that assessment: Apple lost both the battle (the antitrust pricing lawsuit) and the war (to be the big player in digital books). Sure, $400 million is pocket change to Apple, which is reported to be hoarding more than $200 billion in cash. But still, it’s gotta hurt.

Here’s what Amazon said in its email:

Your Credit from the Apple eBooks Antitrust Settlement Is Ready to Use

Dear Alan Zeichick,

You now have a credit of $8.50 in your Amazon account. Apple, Inc. (Apple) funded this credit to settle antitrust lawsuits brought by State Attorneys General and Class Plaintiffs about the price of electronic books (eBooks). As a result of this Settlement, qualifying eBook purchases from any retailer are eligible for a credit. You previously received an email informing you that you were eligible for this credit. The Court in charge of these cases has now approved the Apple Settlement. If you did not receive that email or for more information about your credit, please visit www.amazon.com/applebooksettlement.

You don’t have to do anything to claim your credit, we have already added it to your Amazon account. We will automatically apply your available credit to your purchase of qualifying items through Amazon, an Amazon device or an Amazon app. The credit applied to your purchase will appear as a gift card in your order summary and in your account history. In order to spend your credit, please visit the Kindle bookstore or Amazon. If your account does not reflect this credit, please contact Amazon customer service.

Your credit is valid for one year and will expire after June 24, 2017, by order of the Court. If you have not used it, we will remind you of your credit before it expires.

Thank you for being a Kindle customer.

The Amazon Kindle Team

WiFi is the present and future of local area networking. Forget about families getting rid of the home phone. The real cable-cutters are dropping the Cat-5 Ethernet in favor of IEEE 802.11 Wireless Local Area Networks, generally known as WiFi. Let’s celebrate World WiFi Day!

There are no Cat-5 cables connected in my house and home office. Not one. And no Ethernet jacks either. (By contrast, when we moved into our house in the Bay Area in the early 1990s, I wired nearly every room with Ethernet jacks.) There’s a box of Ethernet cables, but I haven’t touched them in years. Instead, it’s all WiFi. (Technically, WiFi refers to industry products that are compatible with the IEEE 802.11 specification, but for our casual purposes here, it’s all the same thing.)

My 21” iMac (circa 2011) has an Ethernet port. I’ve never used it. My MacBook Air (also circa 2011) doesn’t have an Ethernet port at all; I used to carry a USB-to-Ethernet dongle, but it disappeared a long time ago. It’s not missed. My tablets (iOS, Android and Kindle) are WiFi-only for connectivity. Life is good.

The first-ever World WiFi Day is today — June 20, 2016. It was declared by the Wireless Broadband Alliance to

be a global platform to recognize and celebrate the significant role Wi-Fi is playing in getting cities and communities around the world connected. It will champion exciting and innovative solutions to help bridge the digital divide, with Connected City initiatives and new service launches at its core.

Sadly, the World WiFi Day initiative is not about the wire-free convenience of Alan’s home office and personal life. Rather, it’s about bringing Internet connectivity to third-world, rural, poor, or connectivity-disadvantaged areas. According to the organization, here are eight completed projects:

  • KT – KT Giga Island – connecting islands to the mainland through advanced networks
  • MallorcaWiFi – City of Palma – Wi-Fi on the beach
  • VENIAM – Connected Port @ Leixões Porto, Portugal
  • ISOCEL – Isospot – Building a Wi-Fi hotspot network in Benin
  • VENIAM – Smart City @ Porto, Portugal
  • Benu Networks – Carrier Wi-Fi Business Case
  • MCI – Free Wi-Fi for Arbaeen
  • Fon – After the wave: Japan and Fon’s disaster support procedure

It’s a worthy cause. Happy World WiFi Day, everyone!

I once designed and coded a campus parking pass management system for an East Coast university. If you had a faculty, staff, student or visitor parking sticker for the campus, it was processed using my green-screen application, which went online in 1983. The university used the mainframe program with minimal changes for about a decade, until a new client/server parking system was implemented.

Today, that sticker application exists on a nine-track tape reel hanging on my wall — and probably nowhere else.

Decommissioning the parking-sticker app was relatively straightforward for the data center team, though of course I hope that it was emotionally traumatic. Data about the stickers was stored in several tables. One contained information about humans: name, address, phone number, relationship with the university. The other was about vehicles: make, year and color; license plate number; date of sticker assignment; sticker type and serial number; expiration date; date of cancellation. We handled some interesting exceptions. For example, some faculty were issued “floating” stickers that weren’t assigned to specific vehicles. That sort of thing.
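For the curious, here is roughly what those two tables would look like sketched as modern Java records; the field names are from memory and the types are hypothetical, a far cry from the original mainframe schema:

```java
import java.time.LocalDate;

public class ParkingSchema {
    // One row per human: who they are and how they relate to the university.
    record Person(String name, String address, String phone, String affiliation) {}

    // One row per vehicle/sticker assignment; a faculty "floating" sticker
    // simply leaves the vehicle fields null.
    record Sticker(String make, Integer year, String color, String plate,
                   String type, String serial,
                   LocalDate assigned, LocalDate expires, LocalDate cancelled) {}

    public static void main(String[] args) {
        Person p = new Person("A. Student", "12 Campus Way", "555-0100", "student");
        Sticker s = new Sticker("Dodge", 1974, "green", "ABC-123",
                "student", "S-4411", LocalDate.of(1983, 9, 1),
                LocalDate.of(1984, 8, 31), null);
        System.out.println(p + "\n" + s);
    }
}
```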

Fortunately, historical info in the sticker system was not needed past a year or two. While important for campus security (“Who does that car parked in a no-parking zone belong to?”), it wasn’t data that needed to be retained for any length of time for legal or compliance reasons. Shutting off the legacy application was as simple as, well, shutting off the legacy application.

It’s not always that simple. Other software on campus in the 1980s — and much of the software that your team writes — needed to be retained, sometimes according to campus internal regulations, other times due to government or industry rules. How long do you need to keep payroll data? Transaction data for sales from your website? Bids for products and services, and the documentation that explains how the bids were solicited?

Any time you get into regulated industries, you have this issue. Financial services, aerospace, safety-oriented embedded systems, insurance, human resources, or medical: Information must be retained for compliance, and must be produced on demand by auditors, queries from litigators during eDiscovery, regulatory investigations, even court subpoenas.

That can make it hard — very hard — to turn off an application you no longer need. Even if the software is recording no new transactions, retention policies might necessitate keeping it alive for years. Maybe even decades, depending on the type of data being retained, and on the regulatory requirements of your industry. Think about drug records from pharmaceutical companies, or component sourcing for automobile manufacturers.

Data and application retention has many enterprise implications. For example: Before you deploy an application and its data onto a cloud provider or SaaS platform, you should ask: Will that application and its data need to be retained? If so, will the provider still be around and provide access to it? If not, you need to make sure there’s a plan to bring the systems in-house (even if they are no longer needed) to archive the data outside the application in a way that conforms with regulatory requirements for retention and access, and then you can decommission the application.

A word of caution: I don’t know how long nine-track tapes last, especially if they are not well shielded. My 20-year-old tape was not protected against heat or magnetism — hey, it was thrown into a box. There’s a better-than-even chance it is totally unreadable. Don’t rely upon unproven technology or suppliers for your own data archive, especially if the data must be retained for compliance purposes.

Fire up the WABAC Machine, Mr. Peabody: In June 2008, I wrote a piece for MIT Technology Review explaining “How Facebook Works.”

The story started with this:

Facebook is a wonderful example of the network effect, in which the value of a network to a user is exponentially proportional to the number of other users that network has.

Facebook’s power derives from what Jeff Rothschild, its vice president of technology, calls the “social graph”–the sum of the wildly various connections between the site’s users and their friends; between people and events; between events and photos; between photos and people; and between a huge number of discrete objects linked by metadata describing them and their connections.

Facebook maintains data centers in Santa Clara, CA; San Francisco; and Northern Virginia. The centers are built on the backs of three tiers of x86 servers loaded up with open-source software, some that Facebook has created itself.

Let’s look at the main facility, in Santa Clara, and then show how it interacts with its siblings.
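The social-graph idea is simpler than it sounds. Here’s a toy sketch in Python showing heterogeneous objects joined by typed, metadata-bearing edges; it’s a classroom illustration with made-up node types, emphatically not how Facebook stores anything.

    from collections import defaultdict

    # Heterogeneous nodes: people, events, photos.
    nodes = {
        "u:alice": {"kind": "user",  "name": "Alice"},
        "u:bob":   {"kind": "user",  "name": "Bob"},
        "e:party": {"kind": "event", "name": "Launch party"},
        "p:img42": {"kind": "photo", "name": "Group shot"},
    }

    edges = defaultdict(list)      # adjacency lists of typed edges

    def link(src, rel, dst, **meta):
        edges[src].append((rel, dst, meta))
        edges[dst].append((rel, src, meta))   # undirected, for simplicity

    link("u:alice", "friend",    "u:bob", since=2007)
    link("u:alice", "attended",  "e:party")
    link("p:img42", "taken_at",  "e:party")
    link("u:bob",   "tagged_in", "p:img42")

    # One-hop traversal: everything directly connected to Alice.
    for rel, dst, meta in edges["u:alice"]:
        print(rel, "->", nodes[dst]["name"], meta or "")

The interesting part is what the quote hints at: the value is not in any one node but in the web of typed connections among them.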

Read the whole story here… and check out Facebook’s current Open Source project pages too.

Security standards for cellular communications are pretty much invisible. Created by groups like the 3GPP, they play out behind the scenes, embedded into broader cellular protocols like 3G, 4G, LTE and the oft-discussed forthcoming 5G. By their nature, the security and other cellular specs evolve slowly and deliberately; it’s a snail-like pace compared to, say, WiFi or Bluetooth.

Why the glacial pace? One reason is that cellular standards of all sorts must be carefully designed and tested in order to work transparently in a global marketplace. There are also a huge number of participants in the value chain, from handset makers to handset firmware makers to radio manufacturers to tower equipment vendors to carriers… the list goes on and on.

Another reason why cellular software, including security protocols and algorithms, evolves slowly is that it’s all bound up in large platform versions. The current cellular security system is unlikely to change significantly before the roll-out of 5G… and even then, older devices will continue to use the security protocols embedded in their platform, unless a bug forces a software patch. Those security protocols cover everything from authentication of the cellular device to the tower, to authentication of the tower to the device, to encryption of voice and data traffic.
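To make that concrete: the heart of device-to-tower and tower-to-device authentication is mutual challenge-response against a secret shared by the SIM and the carrier. Here’s a drastically simplified sketch in Python. Real 3G/4G AKA uses the MILENAGE algorithm set, sequence numbers and separately derived ciphering keys, none of which is modeled here.

    import hashlib
    import hmac
    import secrets

    # The shared secret: burned into the SIM, also held by the carrier.
    SHARED_KEY = secrets.token_bytes(16)

    def respond(key, challenge):
        # Prove knowledge of the key without ever transmitting it.
        return hmac.new(key, challenge, hashlib.sha256).digest()

    # The network authenticates the device...
    net_challenge = secrets.token_bytes(16)
    device_answer = respond(SHARED_KEY, net_challenge)    # computed on the SIM
    assert hmac.compare_digest(device_answer, respond(SHARED_KEY, net_challenge))

    # ...and the device authenticates the network. This second direction
    # is what 3G added; 2G GSM skipped it, which is what made fake-tower
    # attacks possible.
    dev_challenge = secrets.token_bytes(16)
    network_answer = respond(SHARED_KEY, dev_challenge)
    assert hmac.compare_digest(network_answer, respond(SHARED_KEY, dev_challenge))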

We can only hope that end users will move swiftly to 5G. Why? Because 4G and older platforms aren’t all that secure. Sure, they are good enough today, but that’s only “good enough.” The downside is that everything is pretty fuzzy when it comes to what 5G will actually offer… or even how many 5G standards there will be.

Read more in my story in Pipeline Magazine, “Wireless Security Standards.”

Ransomware is a huge problem that causes real harm to businesses and individuals. Technology service providers are gearing up to fight these cyberattacks – and that’s coming none too soon.

Ransomware is a type of cyberattack where bad actors gain access to a system, such as a consumer’s desktop or a corporate server. The attack vector might be a piece of malware attached to an email, a corrupted website that runs a script to install the malware, or a document containing a malicious macro that downloads it.

In most ransomware attacks, the malware encrypts the user’s data and then demands an untraceable ransom. Once the ransom is paid, the hackers promise, they will either decrypt the data or provide the user with a key to decrypt it. Because the data is encrypted, even removing the malware from the computer will not restore system functionality; typically, the victim has to restore the entire system from a backup or pay the ransom and hope for the best.
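Here’s a tiny illustration of why cleanup alone can’t save you; it’s a sketch using the third-party cryptography package (pip install cryptography), with made-up data. Once files are encrypted under a key that lives only on the attacker’s server, deleting the malware accomplishes nothing.

    from cryptography.fernet import Fernet

    # In a real attack this key is generated by (or sent to) the attacker
    # and never stored on the victim's machine.
    attacker_key = Fernet.generate_key()

    victim_data = b"contents of quarterly-report.xlsx ..."
    ciphertext = Fernet(attacker_key).encrypt(victim_data)

    # The victim's disk now holds only `ciphertext`. Removing the malware
    # changes nothing: recovery requires attacker_key -- or a good backup.
    assert Fernet(attacker_key).decrypt(ciphertext) == victim_data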

As cyberattacks go, ransomware has proven to be extremely effective at both frustrating users and obtaining ransom money for the attackers.

I was asked to write a story for Telecom Ramblings about ransomware. The particular focus of the assignment was on how it affects Asia-Pacific countries, but the info is applicable everywhere: “What We Can Do About Ransomware – Today and Tomorrow.”

Forget vendor lock-in: Carrier operation support systems (OSS) and business support systems (BSS) are going open source. And so are many of the other parts of the software stack that drive the end-to-end services within and between carrier networks.

That’s the message from TM Forum Live, one of the most important conferences for the telecommunications carrier industry.

Held in Nice, France, from May 9-12, 2016, TM Forum Live is produced by TM Forum, a key organization in the carrier universe.

TM Forum works closely with other industry groups, like the MEF, OpenDaylight and OPNFV. I am impressed by how so many open-source projects, standards-defining bodies and vendor consortia are collaborating at a very detailed level to improve interoperability at many, many levels. The key to making that work: Open source.

You can read more about open source and collaboration between these organizations in my NetworkWorld column, “Open source networking: The time is now.”

While I’m talking about TM Forum Live, let me give a public shout-out to:

Pipeline Magazine – this is the best publication, bar none, for the OSS, BSS, digital transformation and telecommunications service provider space. At TM Forum Live, I attended their annual Innovation Awards, which is the best-prepared, best-vetted awards program I’ve ever seen.

Netcracker Technology — arguably the top vendor in providing software tools for telecommunications and cable companies. They are leading the charge for the agile reinvention of a traditionally slow-moving industry. I’d like to thank them for hosting a delicious press-and-analyst dinner at the historic Hotel Negresco – wow.

Looking forward to next year’s TM Forum Live, May 15-18, 2017.

San Francisco – Apple’s Worldwide Developer Conference 2016 had plenty of developers, and plenty of WWDC news about updated operating systems, redesigned apps, sexy APIs, the expansion of Apple Pay and a long-awaited version of Siri for the Macintosh.

Call me underwhelmed. There was nothing, nothing, nothing, to make me stand up and cheer. Nothing inspired me to reach for my wallet. (Yes, I know it’s a developer conference, but still.) I’m an everyday Apple user who is typing this on a MacBook Air, who reads news and updates Facebook on an iPad mini, and who carries an iPhone as my primary mobile phone. Yawn.

If you haven’t read all the announcements from Apple this week, or didn’t catch the WWDC keynote live or streaming, Wired has the best single-story write-up.

Arguably the biggest “news” is that Apple has changed its desktop operating system naming convention again. It used to be Mac OS, then Mac OS X, then just OS X. Now it is macOS. The next version will be macOS 10.12 “Sierra.” Yawn.

I am pleased that Siri, Apple’s voice recognition software, is finally coming to the Mac. However, Siri itself is not impressive. It’s terrible for dictation – Dragon is better. On the iPhone, it misinterprets commands far more often than Microsoft’s Cortana does, and its sphere of influence is pretty limited: It can launch third-party apps, for example, but can’t control them because the APIs are locked down.

Will Siri on macOS be better? We can be hopeful, since Apple will provide some API access. Still, I give Microsoft the edge with Cortana, and both are light-years behind Amazon’s Alexa software for the Echo family of smart home devices.

There are updates to iOS, but they are mainly window dressing. There’s tighter integration between iOS and the Mac, but none of it is going to move the needle. Use an iPhone to unlock a Mac? Copy-paste from iOS to the Mac? The ability to hide built-in Apple apps on the phone? A new look for some of the apps? Nice incremental upgrades. No excitement.

Apple Watch. I haven’t paid much attention to watchOS, which is being upgraded, because I can’t get excited about the Apple Watch until next-generation hardware has multiple-day battery life and an always-on time display. Until then, I’ll stick with my Pebble Time, thank you.

There are other areas where I don’t have much of an opinion, like the updates to Apple Pay and Apple’s streaming music services. Similarly, I don’t have much experience with Apple TV and tvOS. Those may be important. Or maybe not. Since my focus is on business computing, and I don’t use those products personally, they fall outside my domain.

So why were these announcements from WWDC so — well — uninspiring? Perhaps Apple is hitting a dry patch. Perhaps they need to find a new product category to dominate; remember, Apple doesn’t invent things, it “thinks different” and enters and captures markets by creating stylish products that are often better than other companies’ clunky first-gen offerings. That’s been true in desktop computers, notebooks, smartphones, tablets, smart watches, cloud services and streaming music – Apple didn’t invent those categories, and was not first to market, not even close.

Apple needs to do something bold to reignite excitement and to truly usher in the Tim Cook era. Bringing Siri to the desktop, redesigning its Maps app, using the iPhone to unlock your desktop Mac, and a snazzy Minnie Mouse watch face don’t move the needle.

I wonder what we’ll see at WWDC 2017. Hopefully a game-changer.

You’ve gotta be there! Michael Huerta was just announced as Grand Opening Keynote at InterDrone, the industry’s most important drone conference.

BZ Media’s InterDrone will be Sept 7-9, 2016, in Las Vegas. (I am the “Z” of BZ Media.)

InterDrone 2015 was attended by 2,797 commercial drone professionals from all 50 states and 48 countries, and InterDrone 2016 will be even bigger!

New for 2016, InterDrone offers three targeted conferences under one roof:

Drone TechCon: For Drone Builders, OEMs and Developers

Content will focus on advanced flying dynamics, chips and boards, airframe and payload considerations, hardware/software integration, sensors, power and software development.

Drone Enterprise: For Flyers, Buyers and Drone Service Businesses

Classes focus on enterprise applications such as precision agriculture, surveying, mapping, infrastructure inspection, law enforcement, package delivery and search and rescue.

Drone Cinema: For Aerial Photographers and Videographers

Class content includes drone use for real estate and resort marketing, action sports and movie filming, news gathering – and any professional activity where the quality of the image is paramount.

A little about Mr. Huerta, the Grand Opening Keynote:

Michael P. Huerta is the Administrator of the Federal Aviation Administration. He was sworn into office on January 7, 2013, for a five-year term. Michael is responsible for the safety and efficiency of the largest aerospace system in the world. He oversees a $15.9 billion budget, more than 47,000 employees, and is focused on ensuring the agency and its employees are the best prepared and trained professionals to meet the growing demands and requirements of the industry. Michael also oversees the FAA’s NextGen air traffic control modernization program as the United States shifts from ground-based radar to state-of-the-art satellite technology.

See you at InterDrone 2016!

Have you done your backups lately? If not… now is the time, thanks to ransomware. Ransomware is a huge problem that’s causing real harm to businesses and individuals. Technology service providers are gearing up to fight these cyberattacks – and that’s coming none too soon.

In March 2016, Methodist Hospital reported that it was operating in an internal state of emergency after a ransomware attack encrypted files on its file servers. The data on those servers was inaccessible to the Kentucky-based hospital’s doctors and administrators unless the hackers were paid about $1,600 in Bitcoin.

A month earlier, a hospital in Los Angeles paid about $17,000 in ransom money to recover its data after a similar hack attack. According to the CEO of Hollywood Presbyterian Medical Center, Allen Stefanek, “The quickest and most efficient way to restore our systems and administrative functions was to pay the ransom and obtain the decryption key.”

As far as we know, no lives have been lost due to ransomware. Even so, the attacks keep coming – and consumers and businesses are often left with no choice but to pay the ransom, usually in untraceable Bitcoins.

The culprit in many of the attacks — but not all of them — is a sophisticated trojan called Locky. First appearing in early 2016, Locky is described by Avast as using top-class features, “such as a domain generation algorithm, custom encrypted communication, TOR/BitCoin payment, strong RSA-2048+AES-128 file encryption and can encrypt over 160 different file types, including virtual disks, source codes and databases.” Multiple versions of Locky are on the Internet today, which makes fighting it particularly frustrating. Another virulent ransomware trojan is CryptoLocker, which works in a similar way.
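That “domain generation algorithm” deserves a moment. Here’s a minimal sketch of the concept in Python; it is not Locky’s actual algorithm, and it emits only reserved .example names. The malware and its operators independently derive the same rendezvous domains from a shared seed and the date, so defenders must predict and block hundreds of domains while the attacker needs to register only one.

    import hashlib
    from datetime import date

    def daily_domains(seed, day, count=5):
        # Both the malware and its operator can compute this list offline.
        domains = []
        for i in range(count):
            material = "{}|{}|{}".format(seed, day.isoformat(), i)
            digest = hashlib.sha256(material.encode()).hexdigest()
            domains.append(digest[:12] + ".example")   # .example never resolves
        return domains

    print(daily_domains("toy-seed", date(2016, 6, 1)))

Security teams run the same computation in reverse: pre-generate the candidate domains and feed them to DNS blocklists.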


Beyond the ransom demands, of course, there are other concerns. Once the malware has access to the user or server data… what’s to prevent it from scanning for passwords, bank account information, or other types of sensitive intellectual property? Or deleting files in a way where they can’t be retrieved? Nothing. Nothing at all. And even if you pay the ransom, there’s no guarantee that you’ll get your files back. The only true solution to ransomware is prevention.

Read about how to prevent ransomware in my essay for Upgrade Magazine, “What we can do about ransomware – today and tomorrow.” And do your backups!
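Since backups only beat ransomware if they actually restore, here’s a minimal sanity-check sketch: record a SHA-256 manifest when the backup is made, and re-verify it before you ever need it. The paths and file names are illustrative.

    import hashlib
    import json
    from pathlib import Path

    def manifest(root):
        # Map each file's relative path to the SHA-256 of its contents.
        root = Path(root)
        return {
            str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
            for p in sorted(root.rglob("*")) if p.is_file()
        }

    def snapshot(root, out):
        Path(out).write_text(json.dumps(manifest(root), indent=2))

    def verify(root, saved):
        # Return files that changed (or vanished) since the snapshot.
        old = json.loads(Path(saved).read_text())
        new = manifest(root)
        return [f for f in old if new.get(f) != old[f]]

    # snapshot("/backups/2016-06-01", "backup.manifest")
    # print(verify("/backups/2016-06-01", "backup.manifest"))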

Get used to new names. Instead of Dell the computer company, think Dell Technologies. Instead of EMC, think Dell EMC. So far, it seems that VMware won’t be renamed Dell VMware, but one never can tell. (They’ve come a long way since PC’s Limited.)

What’s in a name? Not much. What’s in an acquisition of this magnitude (US$67 billion)? In this case, lots of synergies.

Within the Dell corporate structure, EMC will find a stable, predictable management team. Michael Dell is a thoughtful leader, and is unlikely to do anything stupid with EMC’s technology, products, branding and customer base. The industry shouldn’t expect bet-the-business moonshots. Satisfied customers should expect more-of-the-same, but with Dell’s deep pockets to fuel innovation.

Dell’s private ownership is another asset. Without the distraction of stock prices and quarterly reporting, managers don’t have to worry about beating the Street. They can focus on beating competitors.

EMC and Dell have partnered to develop technology and products since 2001. While the partnership dissolved in 2011, the synergies remained… and now will be locked in, obviously, by the acquisition. That means new products for physical data centers, the cloud, and hybrid environments, all boosted by Dell. Similarly, EMC offers tons of professional services; the Dell relationship will only expand those opportunities.

Nearly everyone will be a winner… everyone, that is, except for Dell/EMC’s biggest competitors, like HPE and IBM. They must be quaking in their boots.

The Panama Papers should be a wake-up call to every CEO, COO, CTO and CIO in every company.

Yes, it’s good that alleged malfeasance by governments and big institutions came to light. However, it’s also clear that many companies simply take for granted that their confidential information will remain confidential. This includes data that’s shared within the company, as well as information that’s shared with trusted external partners, such as law firms, financial advisors and consultants. We’re talking everything from instant messages to emails, from documents to databases, from passwords to billing records.

Clients of Mossack Fonseca, the hacked Panamanian law firm, erroneously thought the firm’s documents were well protected. How well protected are your documents and IP held by your company’s law firms and other partners? It’s a good question, and shadow IT makes the problem worse. Much worse.

Read why in my column in NetworkWorld: Fight corporate data loss with secure, easy-to-use collaboration tools.

No smart software would make the angry customer less angry. No customer relationship management platform could understand the problem. No sophisticated HubSpot or Salesforce or Marketo algorithm could comprehend that a piece of artwork, brought to a nationwide framing store location in October, wouldn’t be finished before Christmas – as promised. While an online order tracking system would keep the customer informed, it wouldn’t keep the customer satisfied.

Customer Experience Management (CEM). That’s the hot new buzzword for directly engaging the customer. Contrast that with Customer Relationship Management (CRM), which is more about the back-end tracking of customers, leads and orders.

Think about how Amazon.com or FedEx or Netflix keep you constantly informed about what’s happening with your products and services. They have realized that the key to customer success lies equally in product/service excellence and communications excellence. When I was a kid, you mailed a check and an order form to Sears Roebuck, and a few weeks later a box showed up in the mail. That was great customer service in the 1960s and 1970s. No more. We demand communications. Proactive communications. Effective, empathetic communications.

One of the best ways to make an unhappy customer happy is to empower a human to do whatever it takes to get things right. If possible, that should be the first person the customer talks to, so the problem gets solved as quickly as possible, and without adding “dropped calls” or “too many transfers” to the litany of complaints. A CEM platform should be designed with this in mind.

I’ve written a story about the non-software factors required for effective CEM platforms for Pipeline Magazine. Read the story: “CEM — Now with Humans!”