Excellent story about SharePoint in ComputerWorld this week. It gives encouragement to those who prefer to run SharePoint in their own data centers (on-premises), rather than in the cloud. In “The Future of SharePoint,” Brian Alderman writes,

In case you missed it, on May 4 Microsoft made it loud and clear it has resuscitated SharePoint On-Premises and there will be future versions, even beyond SharePoint Server 2016. However, by making you aware of the scenarios most appropriate for On-Premises and the scenarios where you can benefit from SharePoint Online, Microsoft is going to remain adamant about allowing you to create the perfect SharePoint hybrid deployment.

The future of SharePoint begins with SharePoint Online, meaning changes, features and functionality will first be deployed to SharePoint Online, and then rolled out to your SharePoint Server On-Premises deployment. This approach isn’t much of a surprise, being that SharePoint Server 2016 On-Premises was “engineered” from SharePoint Online.

Brian was writing about a post on the Microsoft SharePoint blog, and one I had overlooked (else I’d have written about it back in May). In the post, “SharePoint Server 2016—your foundation for the future,” the SharePoint Team says,

We remain committed to our on-premises customers and recognize the need to modernize experiences, patterns and practices in SharePoint Server. While our innovation will be delivered to Office 365 first, we will provide many of the new experiences and frameworks to SharePoint Server 2016 customers with Software Assurance through Feature Packs. This means you won’t have to wait for the next version of SharePoint Server to take advantage of our cloud-born innovation in your datacenter.

The first Feature Pack will be delivered through our public update channel starting in calendar year 2017, and customers will have control over which features are enabled in their on-premises farms. We will provide more detail about our plans for Feature Packs in coming months.

In addition, we will deliver a set of capabilities for SharePoint Server 2016 that address the unique needs of on-premises customers.

Now, make no mistake: The emphasis at Microsoft is squarely on Office 365 and SharePoint Online. Or, as the company says, SharePoint Server is “powering your journey to the mobile-first, cloud-first world.” However, it is clear that SharePoint On-Premises will continue for some period of time. Later in the blog post, in the FAQ, this is stated quite definitively:

Is SharePoint Server 2016 the last server release?

No, we remain committed to our on-premises customers and do not consider SharePoint Server 2016 to be the last on-premises server release.

The best place to learn about SharePoint 2016 is at BZ Media’s SPTechCon, returning to San Francisco from Dec. 5-8. (I am the Z of BZ Media.) SPTechCon, the SharePoint Technology Conference, offers more than 80 technical classes and tutorials — presented by the most knowledgeable instructors working in SharePoint today — to help you improve your skills and broaden your knowledge of Microsoft’s collaboration and productivity software.

SPTechCon will feature the first conference sessions on SharePoint 2016. Be there! Learn more at http://www.sptechcon.com.

Was it a software failure? The recent fatal crash of a Tesla in Autopilot mode is worrisome, but it’s too soon to blame Tesla’s software. According to Tesla on June 30, here’s what happened:

What we know is that the vehicle was on a divided highway with Autopilot engaged when a tractor trailer drove across the highway perpendicular to the Model S. Neither Autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky, so the brake was not applied. The high ride height of the trailer combined with its positioning across the road and the extremely rare circumstances of the impact caused the Model S to pass under the trailer, with the bottom of the trailer impacting the windshield of the Model S. Had the Model S impacted the front or rear of the trailer, even at high speed, its advanced crash safety system would likely have prevented serious injury as it has in numerous other similar incidents.

We shall have to await the results of the NHTSA investigation to learn more. Even if it does prove to be a software failure, at least the software can be improved to try to avoid similar incidents in the future.

By coincidence, a story that I wrote about the security issues related to advanced vehicles, “Connected and Autonomous Cars Are Wonderful and a Safety-Critical Security Nightmare,” was published today, July 1, on CIO Story. The piece was written several weeks ago, and said,

The good news is that government and industry standards are attempting to address the security issues with connected cars. The bad news is that those standards don’t address security directly; rather, they merely prescribe good software-development practices that should result in secure code. That’s not enough, because those processes don’t address security-related flaws in the design of vehicle systems. Worse, those standards are a hodge-podge of different regulations in different countries, and they don’t address the complexity of autonomous, self-driving vehicles.

Today, commercially available autonomous vehicles can parallel park by themselves. Tomorrow, they may be able to drive completely hands-free on highways, or drive themselves to parking lots without any human on board. The security issues, the hackability issues, are incredibly frightening. Meanwhile, companies as diverse as BMW, General Motors, Google, Mercedes, Tesla and Uber are investing billions of dollars into autonomous, self-driving car technologies.

Please read the whole story here.

Cloud services crash. Of course, non-cloud services crash too — a server in your data center can go down. At least there you can do something about it, or if it’s a critical system you can plan ahead with redundancies and failover.

Not so much with cloud services, as this morning’s failure of Google Calendar clearly shows. The photo shows Google’s status dashboard as of 6:53am on Thursday, June 30.

I wrote about crashes at Amazon Web Services and Apple’s MobileMe back in 2008 in “When the cloud was good, it was very good. But when it was bad it was rotten.”

More recently, in 2011, I covered another AWS failure in “Skynet didn’t take down Amazon Web Services.”

Overall, cloud services are quite reliable. But they are not perfect, and it’s a mistake to think that just because they are offered by huge corporations, they will be error-free and offer 100% uptime. Be sure to work that into your plans, especially if you and your employees rely upon public cloud services to get your job done, or if your customers interact with you through cloud services.

The MEF recently conducted its second LSO Hackathon at a Rome event called Euro16. You can read my story about it here in DiarioTi: LSO Hackathons Bring Together Open Standards, Open Source.

Alas, my coding skills are too rusty for a Hackathon, unless the objective is to develop a fully buzzword compliant implementation of “Hello World.” Fortunately, there are others with better skills, as well as a broader understanding of today’s toughest problems.

Many Hackathons are thinly veiled marketing exercises by companies, designed to find ways to get programmers hooked on their tools, platforms, APIs, etc. Not all! One of the most interesting Hackathons is from the MEF, an industry group that drives communications interoperability. As a standards defining organization (SDO), the MEF wants to help carriers and equipment vendors design products/services ready for the next generation of connectivity. That means building on a foundation of SDN (software defined networks), NFV (network functions virtualization), LSO (lifecycle service orchestration) and CE2.0 (Carrier Ethernet 2.0).

To make all this happen:

  • What the MEF does: Create open standards for architectures and specifications.
  • What vendors, carriers and open source projects do: Write software to those specifications.
  • What the Hackathon does: Give everyone a chance to work together, make sure the code is truly interoperable, and find areas where the written specs might be ambiguous.

Thus, the MEF LSO Hackathons. They bring together a wide swath of the industry to move beyond the standards documents and actually write and test code that implements those specs.

As mentioned above, the MEF just completed its second Hackathon at Euro16. The first LSO Hackathon was at last year’s MEF GEN15 annual conference in Dallas. Here’s my story about it in Telecom Ramblings: The MEF LSO Hackathon: Building Community, Swatting Bugs, Writing Code.

The third LSO Hackathon will be at this year’s MEF annual conference, MEF16, in Baltimore, Nov. 7-10. I will be there as an observer – alas, without the up-to-date, practical skills to be a coding participant.

I can hear the protesters. “What do we want? Faster automated emails! When do we want them? In under 20 nanoseconds!”

Some things have to be snappy. A Web page must load fast, or your customers will click away. Moving the mouse has to move the cursor without pauses or hesitations. Streaming video should buffer rarely and unobtrusively; it’s almost always better to temporarily degrade the video quality than to pause the playback. And of course, for a touch interface to work well, it must be snappy, which Apple has learned with iOS, and which Google learned with Project Butter.

The same is true with automated emails. They should be generated and transmitted immediately — that is, in under a minute.

I recently went to book a night’s stay at a Days Inn, a part of the Wyndham Hotel Group, and so I had to log into my Wyndham account. Bad news: I couldn’t remember the password. So, I used the password retrieval system, giving my account number and info. The website said to check my e-mail for the reset link. Kudos: That’s a lot better than saying “We’ll mail you your password,” and then sending it in plain text!!

So, I flipped over to my e-mail client. Checked for new mail. Nothing. Checked again. Nothing. Checked again. Nothing. Checked the spam folder. Nothing. Checked for new mail. Nothing. Checked again. Nothing.

I submitted the request for the password reset at 9:15 a.m. The link appeared in my inbox at 10:08 a.m. By that time, I had already booked the stay with Best Western. Sorry, Days Inn! You snooze, you lose.

What happened? The e-mail header didn’t show a transit delay, so we can’t blame the Internet. Rather, it took nearly an hour for the email to be uploaded from the originating server. This is terrible customer service, plain and simple.

It’s not merely Wyndham. When I purchase something from Amazon, the confirmation e-mail generally arrives in less than 30 seconds. When I purchase from Barnes & Noble, a confirmation e-mail can take an hour. The worst is Apple: Confirmations of purchases from the iTunes Store can take three days to appear. Three days!

It’s time to examine your policies for generating automated e-mails. You do have policies, right? I would suggest a delay of no more than one minute between when the user performs an action that generates an e-mail and when the message is handed to the SMTP server.

Set the policy. Automated emails should go out in seconds — certainly in under one minute. Design for that and test for that. More importantly, audit the policy on a regular basis, and monitor actual performance. If password resets or order confirmations are taking 53 minutes to hit the Internet, you have a problem.
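For what it’s worth, here is a minimal sketch of how you might audit that delay yourself. It’s my own illustration, not Wyndham’s code; it uses only Python’s standard email library, and the saved-message filename is hypothetical. It compares the message’s Date: header with the timestamp each mail server stamped into a Received: header.

```python
# Minimal sketch: estimate where an automated email spent its time, by
# comparing the Date: header with the timestamps in the Received: headers.
# My own illustration; the filename below is hypothetical.
from email import policy
from email.parser import BytesParser
from email.utils import parsedate_to_datetime

def delivery_delays(path):
    """Print the delay between composition (Date:) and each recorded hop."""
    with open(path, "rb") as f:
        msg = BytesParser(policy=policy.default).parse(f)

    composed = parsedate_to_datetime(str(msg["Date"]))
    print(f"Composed: {composed}")

    # Each server prepends its own Received: header, so the last one in the
    # list is the earliest hop: the originating server handing off the mail.
    for received in reversed(msg.get_all("Received", [])):
        stamp = str(received).rsplit(";", 1)[-1].strip()  # timestamp follows the final semicolon
        try:
            hop_time = parsedate_to_datetime(stamp)
        except (TypeError, ValueError):
            continue  # skip malformed or nonstandard headers
        print(f"  hop at {hop_time}  (+{hop_time - composed} after composition)")

if __name__ == "__main__":
    delivery_delays("password_reset.eml")  # a message saved from your mail client
```

If the first hop is already 50 minutes after composition, the problem is the originating mail queue, not the Internet.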

Despite some recent progress, women are still woefully underrepresented in technical fields such as software development. There are many academic programs to bring girls into STEM (science, technology, engineering and math) at various stages in their education, from grade school to high school to college. Corporations are trying hard.

It’s not enough. We all need to try harder.

On Oct. 11, 2016, we will celebrate Ada Lovelace Day, honoring the first computer programmer — male or female. Augusta Ada King-Noel, Countess of Lovelace, wrote the algorithms for Charles Babbage’s Analytical Engine in the mid-1800s.

According to the website Finding Ada, this date doesn’t represent her birthday, which is Dec. 10. Rather, they say, “The date is arbitrary, chosen in an attempt to make the day maximally convenient for the most number of people. We have tried to avoid major public holidays, school holidays, exam season, and times of the year when people might be hibernating.” I’d like to think that the scientifically minded Ada Lovelace would find this amusing.

There are great organizations focused on promoting women in technology, such as Women in Technology International (WITI) and the Anita Borg Institute. There are cool projects, like the Wiki Edit-a-Thon sponsored by Brown University, which seeks to correct the historic (and inaccurate) underrepresentation of female scientists in Wikipedia.

Those are good efforts. They still aren’t enough.

Are women good at STEM fields, including software development? Yes. But all too often, they are gender-stereotyped into non-coding parts of the field—when they are hired at all. And certainly the hyper-competitive environment in many tech teams, and the death-march culture, is not friendly to anyone (male or female) who wants to have a life outside the startup.

Let me share the Anita Borg Institute’s 10 best practices to foster retention of women in technical roles:

  • Collect, analyze and report retention data as it pertains to women in technical roles.
  • Formally train managers in best practices, and hold them accountable for retention.
  • Embed collaboration in the corporate culture to encourage diverse ideas.
  • Offer training programs that raise awareness of and counteract microinequities and unconscious biases.
  • Provide development and visibility opportunities to women that increase technical credibility.
  • Fund and support workshops and conferences that focus on career path experiences and challenges faced by women technologists.
  • Establish mentoring programs on technical and career development.
  • Sponsor employee resource groups for mutual support and networking.
  • Institute flexible work arrangements and tools that facilitate work/life integration.
  • Enact employee-leave policies, and provide services that support work/life integration.

Does your organization have a solid representation of women in technical jobs (not only in technical departments)? Are those women given equal pay for equal work? Are women provided with solid opportunities for professional growth and career advancement? Are you following any of the above best practices?

If so, that’s great news. I’d love to hear about it and help tell your story.

It’s not intellectual property. It’s not having code warriors who can turn pizza into algorithms. It’s not even having great angel investors. If you want a successful startup that’s going to keep you in the headlines for your technology and market prowess, you need a great Human Resources department.

Whether your organization has three employees, 30 or 300, it’s a company. That means a certain level of professionalism in administering it. Yes, tech companies love to be led by hotshot engineers who often brag about their inexperience as CEOs. Yes, those companies are often the darlings of the venture capital community. Yes, those CEOs get lots of visibility in the technology media, the financial media and most importantly, social media.

That is not enough. That’s explained very well in Claire Cain Miller’s essay in The New York Times, “Yes, Silicon Valley, Sometimes You Need More Bureaucracy.”

Miller focuses on the 2014 GitHub scandal, where a lack of professionalism in HR led to deep problems in hiring, management and culture.

“GitHub is not unusual. Tech startups with 100 or fewer employees have half as many personnel professionals as companies of the same size in other industries, according to data from PayScale, which makes compensation software and analyzed about 2,830 companies,” Miller writes.

Is HR something that’s simply soft and squishy, a distraction from the main business of cranking out code and generating viral marketing? No. It’s a core function of every business that’s large enough to have employees.

Miller cites a study that found that companies with personnel departments were nearly 40% less likely to fail than the norm, and nearly 40% more likely to go public. That 36-page study, “Organizational Blueprints for Success in High-Tech Startups,” from the University of California, Berkeley, was published in 2002, but provides some interesting food for thought.

The authors, James Baron and Michael Hannan, wrote,

It is by no means uncommon to see a founder spend more time and energy fretting about the scalability of the phone system or IT platform than about the scalability of the culture and practices for managing employees, even in cases where that same founder would declare with great passion and sincerity that ‘people are the ultimate source of competitive advantage in my business.’

The study continues,

Any plan for launching a new enterprise should include a road map for evolving the organizational structure and HR system, which parallels the timeline for financial, technological, and growth milestones. We have yet to meet an entrepreneur who told us, on reflection, he or she believes they spent too much time worrying about people issues in the early days of their venture.

What does that mean for you?

• If you are part of the leadership team of a startup or small company, look beyond the tech industry for best practices in human resources management. Just because other small tech firms gloss over HR doesn’t mean that you should. In fact, having better HR might be a way to out-innovate your competitors.

• If you are looking at joining a startup or a small company, look at the HR department and the culture. If HR seems casual or ad hoc, and if everyone in the company looks the same, perhaps that’s a company not poised for long-term success. Look for a culture that cares about having a healthy and genuinely diverse workforce—and for policies that talk about ways to resolve problems.

Human resources are as important as technology and financial resources. Without the right leadership in all three areas, you’re in for a rough ride.

Today’s serendipitous discovery: A blog post about the Enterprise Software Development Conference (ESDC), produced by BZ Media in March 2010. I was the conference chair of that event; our goal was to try to replicate the wonderful SD West conference, which CMP had discontinued the year before. (I am the “Z” of BZ Media.)

Unfortunately, ESDC was not viable from a business perspective, so we only ran it one time. Even so, we had a great conference, and the attendees, presenters and exhibitors were delighted with the event’s quality and technical content.

One of our top exhibitors was OutSystems. Mike Jones, one of their executives, wrote about the conference in a thoughtful blog post, “ESDC Retrospective.” Mike started with

Last week, the OutSystems team attended the Enterprise Software Development Conference (ESDC) in San Mateo California. This is the first year for this show and, as Alan Zeichick notes, it takes up where the old SD West conference left off. As gold sponsors of the show, we got to both attend the sessions and talk to the conference attendees at the OutSystems booth. I just wanted to share a few highlights & take-aways from the show.

One of his cited highlights was

Another highlight: Kent Beck‘s keynote on “Responsive Design: Efficiency Through Safety.”  This was the first time I had heard Kent speak. He started off by referencing Ed Yourdon‘s work on Systems Design and how it led him to try and distill his own working process for design. This was the premise for his presentation. My take-away was that no matter what you do, your design will change. I think we all accept this as fact – especially for application software. Kent then explained his techniques to reduce the risk when making design changes. For each of his examples I found myself thinking ‘This is not really a problem with the Agile Platform because the TrueChange™ engine will keep you from breaking stuff you did not intend to break, allowing you to move very fast with little risk.” If you are hand-coding, then Kent’s four techniques (as described here by Alan Zeichick) to reduce risk when making change is great advice, but why do that if you don’t have to? BTW, I think Kent would love the Agile Platform.

Thanks, Mike, for the thoughtful writeup. Hard to believe ESDC was more than six years ago. (Read the whole post here.)

I am often looking for these symbols and can’t find them. So here they are for English language Mac keyboards, in a handy blog format. They all use the Option key.

Note: The Option key is not the Command key, which is marked with the ⌘ (looped square) symbol. Rather, the Option key sits between Control and Command on many (most?) Mac keyboards. These key combinations won’t work on the numeric keypad; you have to use the main part of the keyboard.

The case of the letter/key pressed with the Option key matters. For example, Option+v is the root √ and Option+V (in other words, Option+Shift+v) is the diamond ◊. Another example: Option+7 is the paragraph ¶ and Option+& (that is, Option+Shift+7) is the double dagger ‡. You may simply copy/paste the symbols, if that’s more convenient.

These key combinations should work in most modern Mac applications, and be visible in most typefaces. No guarantees. Your mileage may vary.

SYMBOLS

¡ Option+1 (inverted exclamation)
¿ Option+? (inverted question)
« Option+\ (open double angle quote)
» Option+| (close double angle quote)
© Option+g (copyright)
® Option+r (registered copyright)
™ Option+2 (trademark)
¶ Option+7 (paragraph)
§ Option+6 (section)
• Option+8 (dot)
· Option+( (small dot)
◊ Option+V (diamond)
– Option+- (en-dash)
— Option+_ (em-dash)
† Option+t (dagger)
‡ Option+& (double dagger)
¢ Option+4 (cent)
£ Option+3 (pound)
¥ Option+y (yen)
€ Option+@ (euro)

ACCENTS AND SPECIAL LETTERS

ó Ó Option+e then letter (acute)
ô Ô Option+i then letter (circumflex)
ò Ò Option+` then letter (grave)
õ Õ Option+n then letter (tilde)
ö Ö Option+u then letter (umlaut)
å Å Option+a or Option+A (a-ring)
ø Ø Option+o or Option+O (o-slash)
æ Æ Option+’ or Option+” (ae ligature)
œ Œ Option+q or Option+Q (oe ligature)
fi Option+% (fi ligature)
fl Option+^ (fl ligature)
ç Ç Option+c or Option+C (cedilla)
ß Option+s (double-s)

MATH AND ENGINEERING

÷ Option+/ (division)
± Option++ (plus/minus)
° Option+* (degrees)
¬ Option+l (logical not)
≠ Option+= (not equal)
≥ Option+> (greater or equal)
≤ Option+< (less or equal)
√ Option+v (root)
∞ Option+5 (infinity)
≈ Option+x (approximately equal)
∆ Option+j (delta)
Σ Option+w (sigma)
Ω Option+z (ohm)
π Option+p (pi)
µ Option+m (micro)
∂ Option+d (derivative)
∫ Option+b (integral)

Summary of Recall Trends. Source: SRR.

The costs of an automobile recall can be immense for an OEM automobile or light truck manufacturer – and potentially ruinous for a member of the industry’s supply chain. Think about the ongoing Takata airbag scandal, which Bloomberg says could cost US$24 billion. General Motors’ ignition locks recall may have reached $4.1 billion. In 2001, the exploding Firestone tires on the Ford Explorer cost $3 billion to recall. The list goes on and on. That’s all about hardware problems. What about bits and bytes?

Until now, it’s been difficult to quantify the impact of software defects on the automotive industry. Thanks to a new analysis from SRR called “Industry Insights for the Road Ahead: Automotive Warranty and Recall Report 2016,” we have a good handle on this elusive area.

According to the report, there were 63 software-related vehicle recalls from late 2012 to June 2015. That’s based on data from the United States’ National Highway Traffic Safety Administration (NHTSA). The SRR report derived that count of 63 software-related recalls using this methodology (p. 22),

To classify a recall as a software component recall, SRR searched the “Defect Summary” and “Corrective Action” fields of NHTSA’s Recall flat file for the term “software.” SRR’s inquiry captured descriptions of software-related defects identified specifically as such, as well as defects that were to be fixed by updating or changing a vehicle’s software.
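Purely as an illustration of that kind of keyword screen (this is my own sketch, not SRR’s actual code, and the CSV column names are assumptions), the filter could be as simple as:

```python
# Illustrative sketch of a "software" keyword screen over a recall flat file
# exported to CSV. Not SRR's code; the column names here are assumptions.
import csv

def software_related_recalls(path, keyword="software"):
    """Return rows whose defect summary or corrective action mentions the keyword."""
    matches = []
    with open(path, newline="", encoding="utf-8", errors="replace") as f:
        for row in csv.DictReader(f):
            text = " ".join([
                (row.get("DEFECT_SUMMARY") or ""),
                (row.get("CORRECTIVE_ACTION") or ""),
            ]).lower()
            if keyword in text:
                matches.append(row)
    return matches

if __name__ == "__main__":
    recalls = software_related_recalls("nhtsa_recalls.csv")  # hypothetical export
    print(f"{len(recalls)} software-related recall records found")
```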

That led to this analysis (p. 22),

Since the end of 2012, there has been a marked increase in recall activity due to software issues. For the primary light vehicle makes and models we studied, 32 unique software-related recalls affected about 3.6 million vehicles from 2005–2012. However, in a much shorter time period from the end of 2012 to June 2015, there were 63 software-related recalls affecting 6.4 million more vehicles.

And continuing (p. 23),

From less than 5 percent of all recalls in 2011, software-related recalls have risen to almost 15 percent in 2015. Overall, the amount of unique campaigns involving software has climbed dramatically, with nine times as many in 2015 than in 2011…

No surprises there given the dramatically increased complexity of today’s connected vehicles, with sophisticated internal networks, dozens of ECUs (electronic control units with microprocessors, memory, software and network connections), and extensive remote connectivity.

These software defects are not occurring only in systems where one expects to find sophisticated microprocessors and software, such as engine management controls and Internet-connected entertainment platforms. Microprocessors are being used to analyze everything from the driver’s position and state of alertness, to road hazards, to lane changes — and to offer advanced features such as automatic parallel parking.

Where in the car are the software-related vehicle recalls? Since 2006, says the report, recalls have been prompted by defects in areas as diverse as locks/latches, power train, fuel system, vehicle speed control, air bags, electrical systems, engine and engine cooling, exterior lighting, steering, hybrid propulsion – and even the parking brake system.

That’s not all — because not every software defect results in a public and costly recall. That’s the last resort, from the OEM’s perspective. Whenever possible, the defects are either ignored by the vehicle manufacturer, or quietly addressed by a software update next time the car visits a dealer. (If the car doesn’t visit an official dealer for service, the owner may never know that a software update is available.) Says the report (p. 25),

In addition, SRR noted an increase in software-related Technical Service Bulletins (TSB), which identify issues with specific components, yet stop short of a recall. TSBs are issued when manufacturers provide recommended procedures to dealerships’ service departments for fixing problematic components.

A major role of the NHTSA is to record and analyze vehicle failures, and attempt to determine the cause. Not all failures result in a recall, or even in a TSB. However, they are tracked by the agency via Early Warning Reporting (EWR). Explains the report (p. 26),

In 2015, three new software-related categories reported data for the first time:

• Automatic Braking, listed on 21 EWR reports, resulting in 26 injuries and 1 fatality

• Electronic Stability, listed on 6 EWR reports, resulting in 7 injuries and 1 fatality

• Forward Collision Avoidance, listed in 1 EWR report, resulting in 1 injury and no fatalities

The bottom line here, beyond protecting life and property, is the bottom line for the automobile and its supply chain. As the report says in its conclusion (p. 33),

Suppliers that help OEMs get the newest software-aided components to market should be prepared for the increased financial exposure they could face if these parts fail.

About the Report

“Industry Insights for the Road Ahead: Automotive Warranty and Recall Report 2016” was published by SRR: Stout Risius Ross, which offers global financial advisory services. SRR has been in the automotive industry for 25 years, and says, “SRR professionals have more automotive experience in these service areas than any other advisory firm, period.”

This brilliant report — which is free to download in its entirety — was written by Neil Steinkamp, a Managing Director at SRR. He has extensive experience in providing a broad range of business and financial advice to corporate executives, risk managers, in-house counsel and trial lawyers. Mr. Steinkamp has provided consulting services and has been engaged as an expert in numerous matters involving automotive warranty and recall costs. His practice also includes consulting services for automotive OEMs, suppliers and their advisors regarding valuation, transactions and disputes.

I once designed and coded a campus parking pass management system for an East Coast university. If you had a faculty, staff, student or visitor parking sticker for the campus, it was processed using my green-screen application, which went online in 1983. The university used the mainframe program with minimal changes for about a decade, until a new client/server parking system was implemented.

Today, that sticker application exists on a nine-track tape reel hanging on my wall — and probably nowhere else.

Decommissioning the parking-sticker app was relatively straightforward for the data center team, though of course I hope that it was emotionally traumatic. Data about the stickers was stored in several tables. One contained information about humans: name, address, phone number, relationship with the university. The other was about vehicles: make, year and color; license plate number; date of sticker assignment; sticker type and serial number; expiration date; date of cancellation. We handled some interesting exceptions. For example, some faculty were issued “floating” stickers that weren’t assigned to specific vehicles. That sort of thing.
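The original was a 1983 green-screen mainframe app, so purely to illustrate the shape of that data model in modern terms, here is a rough sketch; the field names are my reconstruction, not the actual schema.

```python
# Rough reconstruction of the two tables described above, purely illustrative;
# the 1983 schema certainly did not look like this.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Person:
    name: str
    address: str
    phone: str
    affiliation: str              # faculty, staff, student or visitor

@dataclass
class Sticker:
    serial_number: str
    sticker_type: str             # e.g., a "floating" faculty sticker
    assigned: date
    expires: date
    cancelled: Optional[date] = None
    holder: Optional[Person] = None
    # Vehicle details; left empty for floating stickers not tied to one car
    make: Optional[str] = None
    model_year: Optional[int] = None
    color: Optional[str] = None
    license_plate: Optional[str] = None
```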

Fortunately, historical info in the sticker system was not needed past a year or two. While important for campus security (“Who does that car parked in a no-parking zone belong to?”), it wasn’t data that needed to be retained for any length of time for legal or compliance reasons. Shutting off the legacy application was as simple as, well, shutting off the legacy application.

It’s not always that simple. Other software on campus in the 1980s — and much of the software that your team writes — needed to be retained, sometimes according to campus internal regulations, other times due to government or industry rules. How long do you need to keep payroll data? Transaction data for sales from your website? Bids for products and services, and the documentation that explains how the bids were solicited?

Any time you get into regulated industries, you have this issue. Financial services, aerospace, safety-oriented embedded systems, insurance, human resources, or medical: Information must be retained for compliance, and must be produced on demand by auditors, queries from litigators during eDiscovery, regulatory investigations, even court subpoenas.

That can make it hard — very hard — to turn off an application you no longer need. Even if the software is recording no new transactions, retention policies might necessitate keeping it alive for years. Maybe even decades, depending on the type of data being retained, and on the regulatory requirements of your industry. Think about drug records from pharmaceutical companies, or component sourcing for automobile manufacturers.

Data and application retention has many enterprise implications. For example: Before you deploy an application and its data onto a cloud provider or SaaS platform, you should ask: Will that application and its data need to be retained? If so, will the provider still be around and provide access to it? If not, you need a plan to bring the system back in-house (even if it is no longer needed), archive the data outside the application in a way that conforms with regulatory requirements for retention and access, and then decommission the application.

A word of caution: I don’t know how long nine-track tapes last, especially if they are not well shielded. My 20-year-old tape was not protected against heat or magnetism — hey, it was thrown into a box. There’s a better-than-good chance it is totally unreadable. Don’t rely upon unproven technology or suppliers for your own data archive, especially if the data must be retained for compliance purposes.

Fire up the WABAC Machine, Mr. Peabody: In June 2008, I wrote a piece for MIT Technology Review explaining “How Facebook Works.”

The story started with this:

Facebook is a wonderful example of the network effect, in which the value of a network to a user is exponentially proportional to the number of other users that network has.

Facebook’s power derives from what Jeff Rothschild, its vice president of technology, calls the “social graph”–the sum of the wildly various connections between the site’s users and their friends; between people and events; between events and photos; between photos and people; and between a huge number of discrete objects linked by metadata describing them and their connections.

Facebook maintains data centers in Santa Clara, CA; San Francisco; and Northern Virginia. The centers are built on the backs of three tiers of x86 servers loaded up with open-source software, some that Facebook has created itself.

Let’s look at the main facility, in Santa Clara, and then show how it interacts with its siblings.

Read the whole story here… and check out Facebook’s current Open Source project pages too.

San Francisco – Apple’s Worldwide Developer Conference 2016 had plenty of developers. Plenty of WWDC news about updated operating systems, redesigned apps, sexy APIs, expansion of Apple Pay and a long-awaited version of Siri for the Macintosh.

Call me underwhelmed. There was nothing, nothing, nothing, to make me stand up and cheer. Nothing inspired me to reach for my wallet. (Yes, I know it’s a developer conference, but still.) I’m an everyday Apple user who is typing this on a MacBook Air, who reads news and updates Facebook on an iPad mini, and who carries an iPhone as my primary mobile phone. Yawn.

If you haven’t read all the announcements from Apple this week, or didn’t catch the WWDC keynote live or streaming, Wired has the best single-story write-up.

Arguably the biggest “news” is that Apple has changed its desktop operating system naming convention again. It used to be Mac OS, then Mac OS X, then just OS X. Now it is macOS. The next version will be macOS 10.12 “Sierra.” Yawn.

I am pleased that Siri, Apple’s voice recognition software, is finally coming to the Mac. However, Siri itself is not impressive. It’s terrible for dictation – Dragon is better. On the iPhone, it misinterprets commands far more than Microsoft’s Cortana, and its sphere of influence is pretty limited: It can launch third-party apps, for example, but can’t control them because the APIs are locked down.

Will Siri on macOS be better? We can be hopeful, since Apple will provide some API access. Still, I give Microsoft the edge with Cortana, and both are lightyears behind Amazon’s Alexa software for the Echo family of smart home devices.

There are updates to iOS, but they are mainly window dressing. There’s tighter integration between iOS and the Mac, but none of those are going to move the needle. Use an iPhone to unlock a Mac? Copy-paste from iOS to the Mac? Be able to hide built-in Apple apps on the phone? Some of the apps have a new look? Nice incremental upgrades. No excitement.

Apple Watch. I haven’t paid much attention to watchOS, which is being upgraded, because I can’t get excited about the Apple Watch until next-generation hardware has multiple-day battery life and an always-on time display. Until then, I’ll stick with my Pebble Time, thank you.

There are other areas where I don’t have much of an opinion, like the updates to Apple Pay and Apple’s streaming music services. Similarly, I don’t have much experience with Apple TV and tvOS. Those may be important. Or maybe not. Since my focus is on business computing, and I don’t use those products personally, they fall outside my domain.

So why were these announcements from WWDC so — well — uninspiring? Perhaps Apple is hitting a dry patch. Perhaps they need to find a new product category to dominate; remember, Apple doesn’t invent things, it “thinks different” and enters and captures markets by creating stylish products that are often better than other companies’ clunky first-gen offerings. That’s been true in desktop computers, notebooks, smartphones, tablets, smart watches, cloud services and streaming music – Apple didn’t invent those categories, and was not first to market, not even close.

Apple needs to do something bold to reignite excitement and to truly usher in the Tim Cook era. Bringing Siri to the desktop, redesigning its Maps app, using the iPhone to unlock your desktop Mac, and a snazzy Minnie Mouse watch face, don’t move the needle.

I wonder what we’ll see at WWDC 2017. Hopefully a game-changer.

I am hoovering directly from the blog of my friend Arthur Hicken, the Code Curmudgeon:

Last week Alan Zeichick and I did a webinar for Parasoft on automotive cybersecurity. Now Alan thinks that cybersecurity is an odd term, especially as it applies to automotive and I mostly agree with him. But appsec is also pretty poorly fitted to automotive so maybe we should be calling it AutoSec. Feel free to chime-in using the comments below or on twitter.

I guess the point is that as cars get more complicated and get more “smart” parts and get more connected (The connected car) as part of the “internet of things”, you will start to see more and more automotive security breaches occurring. From taking over the car to stealing data to triggering airbags we’ve already had several high-profile incidents which you can see in my IoT Hall-of-Shame.

To help out we’ve put together a high-level overview of a 7-point plan to get you started. In the near future we’ll be diving into detail on each of these topics, including how standards can help you not only get quality but safety and security, the role of black-box, pen-test, and DAST as well as how to get ahead of the curve and harden your vehicle software using (SAST) and hybrid testing (IAST).

The webinar was recorded for your convenience, so be sure and check it out. If you have automotive software topics that are near and dear to your heart, be sure to let me know in the comments or on Twitter or Facebook.

Okay, the webinar was back in February, but the info didn’t appear on my blog then. Here it is now. My apologies for the oversight. Watch and enjoy the webinar!

No smart software would make the angry customer less angry. No customer relationship management platform could understand the problem. No sophisticated HubSpot or Salesforce or Marketo algorithm could comprehend that a piece of artwork, brought to a nationwide framing store location in October, wouldn’t be finished before Christmas – as promised. While an online order tracking system would keep the customer informed, it wouldn’t keep the customer satisfied.

Customer Experience Management (CEM). That’s the hot new buzzword for directly engaging the customer. Contrast that with Customer Relationship Management (CRM), which is more about the back-end tracking of customers, leads and orders.

Think about how Amazon.com or FedEx or Netflix keep you constantly informed about what’s happening with your products and services. They have realized that the key to customer success is equally product/service excellence and communications excellence. When I was a kid, you mailed a check and an order form to Sears Roebuck, and a few weeks later a box showed up in the mail. That was great customer service in the 1960s and 1970s. No more. We demand communications. Proactive communications. Effective, empathetic communications.

One of the best ways to make an unhappy customer happy is to empower a human to do whatever it takes to get things right. If possible, that should be the first person the customer talks to, so the problem gets solved as quickly as possible, and without adding “dropped calls” or “too many transfers” to the litany of complaints. A CEM platform should be designed with this in mind.

I’ve written a story about the non-software factors required for effective CEM platforms for Pipeline Magazine. Read the story: “CEM — Now with Humans!”

Let’s explore the causes of slow website loads. There are obviously some delays that are beyond our control — like the user being on a very slow mobile connection.

For the most part, our website’s load time is entirely up to us as developers and administrators. We need to do everything possible to accelerate the experience, and in fact I would argue that load time may be the single most important aspect of your site. That’s especially true of your home page, but also of other pages, especially if there are deep links to them from search engines, other Internet sites, or your own marketing emails and tweets.

We used to say that the biggest cause of slow websites was large images, especially too-large images that are downloaded to the browser and dynamically resized. Those are real issues, even today, and you should optimize your site to push out small graphics, instead of very large images. Images are no longer the main culprit, however.
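As a quick sanity check on image weight, a short script can list every image on a page along with its reported size. This is my own rough sketch, not something from the GoDaddy article; it assumes the third-party requests and beautifulsoup4 packages, and the URL is a placeholder.

```python
# Rough sketch: list a page's images and their reported sizes, to spot
# oversized downloads. Assumes the requests and beautifulsoup4 packages;
# the URL below is a placeholder.
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

def image_weights(page_url):
    """Print each image URL on the page with its Content-Length, if reported."""
    html = requests.get(page_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")

    for img in soup.find_all("img"):
        src = img.get("src")
        if not src:
            continue
        full_url = urljoin(page_url, src)
        head = requests.head(full_url, allow_redirects=True, timeout=10)
        size = head.headers.get("Content-Length")
        label = f"{int(size) / 1024:.0f} KB" if size else "size unknown"
        print(f"{label:>12}  {full_url}")

if __name__ == "__main__":
    image_weights("https://www.example.com/")  # substitute your own home page
```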

Read my recent article in the GoDaddy Garage, “Are slow website load times costing you money and pageviews?” to see the five main causes of slow website loads, and get some advice about what to do about them.

Barcelona, Mobile World Congress 2016—IoT success isn’t about device features, like long-life batteries, factory-floor sensors and snazzy designer wristbands. The real power, the real value, of the Internet of Things is in the data being transmitted from devices to remote servers, and from those remote servers back to the devices.

“Is it secret? Is it safe?” Gandalf asks Frodo in the “Lord of the Rings” movies about the seductive One Ring to Rule Them All. He knows that the One Ring is the ultimate IoT wearable: Sure, the wearer is uniquely invisible, but he’s also vulnerable because the ring’s communications can be tracked and hijacked by the malicious Nazgûl and their nation/state sponsor of terrorism.

Wearables, sensors, batteries, cool apps, great wristbands. Sure, those are necessary for IoT success, but the real trick is to provision reliable, secure and private communications that Black Riders and hordes of nasty Orcs can’t intercept. Read all about it in my NetworkWorld column, “We need secure network infrastructure – not shiny rings – to keep data safe.”

A hackathon – like the debut LSO Hackathon held in November 2015 at the MEF’s GEN15 conference – is where magic happens. It’s where theory turns into practice, and the state of the art advances. Dozens of techies sitting in a room, hunched over laptops, scribbling on whiteboards, drinking excessive quantities of coffee and Diet Coke. A hubbub of conversation. Focus. Laughter. A sense of challenge.

More than 50 network and/or software experts joined the first-ever LSO Hackathon, representing a very diverse group of 20 companies. They were asked to focus on two Reference Points of the MEF’s Lifecycle Service Orchestration (LSO) Reference Architecture. As explained by the MEF’s Director of Certification and Strategic Programs, one of the architects of the LSO Hackathon series, these included:

  • LSO Adagio, which defines the element management reference point needed to manage network resources, including element view management functions
  • LSO Presto, which defines the network management reference point needed to manage the network infrastructure, including network view management functions

Read more about the LSO Hackathon in my story in Telecom Ramblings, “Building Community, Swatting Bugs, Writing Code.”

Las Vegas, December 2015 — Get ready for Bimodal IT. That’s the message from the Gartner Application, Architecture, Development & Integration Summit (AADI). It wasn’t a subtle message. Bimodal was a veritable drumbeat, pounded home over and over again in keynotes, classes, and one-on-one meetings with Gartner analysts. We’re going to be hearing a lot about bimodal development, from Gartner and the industry, because it’s a message that really describes what many of us are encountering today.

To quote Gartner’s official definition:

Bimodal IT is the practice of managing two separate, coherent modes of IT delivery, one focused on stability and the other on agility. Mode 1 is traditional and sequential, emphasizing safety and accuracy. Mode 2 is exploratory and nonlinear, emphasizing agility and speed.

Gartner sees that we create and manage two different types of projects. Some, Mode 1, being very serious, very methodical, bet-the-business projects that must be done right using formal processes, and others, Mode 2, being more opportunistic, quicker, more agile. That’s not to say that Mode 1 projects can’t be agile, and that Mode 2 projects can’t be big and significant. However, we all know that there’s a big difference between launching an initiative to implement a Black Friday sale on our website or designing a new store-locator mobile app, vs. rolling out a GAAP-compliant accounting system or migrating critical systems to the cloud.

You might argue that there’s nothing revolutionary here with bimodal, and if you did, you would be right. Nobody ever claimed that all IT projects, including software development, are the same, and should be managed the same way. What Gartner has done is provide a clear vocabulary for understanding, categorizing, and communicating project differences more efficiently.

Read more about this in my story “Mode 1, Mode 2: Gartner Preaches Bimodal Development at AADI,” published on the Parasoft blog.

Your app’s user interface is terrible. Your business plan is flawed. Your budget is a pipe dream. Your code isn’t efficient. Clients are unhappy with your interpersonal skills. Your meetings are too long. You don’t seem to get along with your developers. You are hard to work with. You are being kicked off the task force because you aren’t adding any value. The tone of your e-mail was too informal. Your department is being given to someone else. No, we won’t need you for this project. No, we don’t need you at all.

We all get feedback. Usually it’s a combination of good and bad. There’s praise and helpful criticism. Sometimes the feedback is about our company, sometimes about our project, sometimes about our team, and sometimes, well, about us. Sometimes we take the feedback in good stride. Other times, we get hurt and angry—and don’t listen. Speaking for myself, I tend to get defensive when given feedback that’s less than glowingly effusive.

Own your feedback. A short paper published by Harvard Business Review, called “Difficult Conversations 2.0: Thanks for the Feedback,” can help.

“In the realm of feedback, the receiver—not the giver—is the key player in the exchange. Here’s how to become a world-class receiver,” write Douglas Stone and Sheila Heen, founders of Triad Consulting Group. The pair teach negotiation at Harvard Law School, and have written a couple of great books, Thanks for the Feedback: The Science and Art of Receiving Feedback Well (Even When it is Off Base, Unfair, Poorly-Delivered, and Frankly, You’re Not in the Mood) and Difficult Conversations: How to Discuss What Matters Most.

I heartily recommend both of Stone & Heen’s books. For now, let me share some of the wisdom in the five-page paper, which is focused somewhat on feedback from managers, but which is broadly applicable to all types of feedback. Stone & Heen write:

By becoming a skillful receiver of feedback, you can achieve three important benefits:

• Take charge of your life-long learning: When we get better at receiving feedback, we take charge of our own learning and can accelerate our growth.

• Improve our relationships: The way we handle feedback has an impact on our relationships.

• Reduce stress and anxiety: For the more sensitive among us, there’s one more important benefit: getting better at receiving feedback reduces stress and anxiety.

The authors say that it’s only natural to evaluate feedback, determine what is accurate and inaccurate, and then focus on what we see as inaccurate:

You can find something wrong with just about any feedback you get. Maybe it doesn’t address the constraints you’re under, it’s outdated, biased, coming from only a few people, or only part of the story. The problem is, when we focus on what’s wrong with the feedback, we lose sight of what might be right about it; and there is also almost always something right about it.

They continue:

Receiving feedback well doesn’t mean that you always have to take the feedback or agree with the assessment. But it does mean engaging in order to first truly understand the feedback, and then deciding what to do about it.

We receive feedback every day. The feedback might be about us from our managers. It might be about products from customer comments left on an open forum, or sent via Twitter. Feedback can be elating—everyone loves a five-star review. It can also be painful and debilitating. As developers, techies, managers and humans, let’s get better at receiving it.

 

Software-defined networks and Network Functions Virtualization will redefine enterprise computing and change the dynamics of the cloud. Data thefts and professional hacks will grow, and development teams will shift their focus from adding new features to hardening against attacks. Those are two of my predictions for 2015.

Big Security: As 2014 came to a close, huge credit-card breaches from retailers like Target faded into the background. Why? The Sony Pictures hack, and the release of an incredible amount of corporate data, made us ask a bigger question: “What is all that information doing on the network anyway?” Attackers took off with Sony Pictures’ spreadsheets about executive salaries, confidential e-mails about actors and actresses, and much, much more.

What information could determined, professional hackers make off with from your own company? If it’s on the network, if it’s on a server, then it could be stolen. And if hackers can gain access to your cloud systems (perhaps through social engineering, perhaps by exploiting bugs), then it’s game over. From pre-released movies and music albums by artists like Madonna, to sensitive healthcare data and credit-card numbers, if it’s on a network, it’s fair game.

No matter where you turn, vulnerabilities are everywhere. Apple patched a hole in its Network Time Protocol implementation. Who’d have thought attackers would use NTP? GitHub has new security flaws. ICANN has scary security flaws. Microsoft released flawed updates. Inexpensive Android phones and tablets are found to have backdoor malware baked right into the devices. I believe that 2015 will demonstrate that attackers can go anywhere and steal anything.

That’s why I think that savvy development organizations will focus on reviewing their new code and existing applications, prioritizing security over adding new functionality. It’s not fun, but it’s 100% necessary.

Big Cloud: Software-defined networking and Network Functions Virtualization are reinventing the network. The fuzzy line between intranet and Internet is getting fuzzier. Cloud Ethernet is linking the data center directly to the cloud. The network edge and core are indistinguishable. SDN and NFV are pushing functions like caching, encryption, load balancing and firewalls into the cloud, improving efficiency and enhancing the user experience.

In the next year, mainstream enterprise developers will begin writing (and rewriting) back-end applications to specifically target and leverage SDN/NFV-based networks. The question of whether the application is going to run on-premises or in the cloud will cease to be relevant. In addition, as cloud providers become more standards-based and interoperable, enterprises will gain more confidence in that model of computing. Get used to cloud APIs; they are the future.

Looking to boost your job skills? Learn about SDN and NFV. Want to bolster your development team’s efforts? Study your corporate networking infrastructure, and tailor your efforts to matching the long-term IT plans. And put security first—both of your development environments and your deployed applications.

Big Goodbye: The tech media world is constantly changing, and not always for the better. The biggest change is the sunsetting of Dr. Dobb’s Journal, a website for serious programmers, and an enthusiastic bridge between the worlds of computer science and enterprise computing. After 38 years in print and online, the website will continue, but no new articles or content will be commissioned or published.

DDJ was the greatest programming magazine ever. There’s a lot that can be said about its sad demise, and I will refer you to two people who are quite eloquent on the subject: Andrew Binstock, the editor of DDJ, and Larry O’Brien, SD Times columnist and former editor of Software Development Magazine, which was folded into DDJ a long time ago.

Speaking as a long-time reader—and as one of the founding judges of DDJ’s Jolt Awards—I can assure you that Dr. Dobb’s will be missed.

For development teams, cloud computing is enthralling. Where’s the best place for distributed developers, telecommuters and contractors to reach the code repository? In the cloud. Where do you want the high-performance build servers? At a cloud host, where you can commandeer CPU resources as needed. Storing artifacts? Use cheap cloud storage. Hosting the test harness? The cloud has tremendous resources. Load testing? The cloud scales. Management of beta sites? Cloud. Distribution of finished builds? Cloud. Access to libraries and other tools? Other than the primary IDE itself, cloud. (I’m not a fan of working in a browser, sorry.)

Sure, a one-person dev team can store an entire software development environment on a huge workstation or a convenient laptop. Sure, a corporation or government that has exceptional concerns or extraordinary requirements may choose to host its own servers and tools. In most cases, however, there are undeniable benefits to cloud-oriented development, and if developers aren’t there today, they will be soon. My expectation is that new projects and teams will launch on the cloud. Existing projects and teams will remain on their current dev platforms (and on-prem) until there’s a good reason to make the switch.

The economics are unassailable, the convenience is unparalleled, and both performance and scalability can’t be matched by in-house code repositories. Security in the cloud may also outmatch most organizations’ internal software development servers too.

We have read horror stories about the theft of millions of credit cards and other personal data, medical data, business documents, government diplomatic files, e-mails and so-on. It’s all terrible and unlikely to stop, as the recent hacking of Sony Pictures demonstrates.

What we haven’t heard about, through all these hacks, is the broad theft of source code, and certainly not thefts from hosted development environments. Such hacks would be bad, not only because proprietary source code contains trade secrets, but also because the source can be reverse-engineered to reveal attack vulnerabilities. (Open-source projects also can be reverse-engineered, of course, but that is expected and in fact encouraged.)

Even worse than reverse-engineering of stolen source code would be unauthorized and undetected modifications to a codebase. Can you imagine if hackers could infiltrate an e-commerce system’s hosted code and inject a back door or keylogger? You get the idea.

I am not implying that cloud-based software development systems are more secure than on-premises systems. I am also not implying the inverse. My instinct is to suggest that hosted cloud dev systems are as safe, or safer, than internal data center systems. However, there’s truly no way to know.

A recent report from the analyst firm Technology Business Research (TBR) took this stance, arguing that security for cloud-based services will end up being better than security at local servers and data centers. While not speaking specifically to software development, the report concluded, “Security remains the driving force behind cloud vendor adoption, while the emerging trends of hybrid IT and analytics, and the associated security complications they bring to the table, foreshadow steady and growing demand for cloud professional services over the next few years.”

Let me close by drawing your attention to a competition aimed at startups innovating in the cloud. The Clouded Leopard’s Den is for young companies looking for Series A or B/C funding, and it offers tools and resources to help them grow, attract publicity, and possibly even find new funding. If you work at a cloud startup, check it out!


Cloud-based storage is amazing. Simply amazing. That’s especially true when you are talking about data from end users who access your applications via the public Internet.

If you store data in your local data center, you have the best control over it. You can place it close to your application servers. You can amortize it as a long-term asset. You can see it, touch it and secure it—or at least, have full control over security.

There are downsides, of course, to maintaining your own on-site data storage. You have to back it up. You have to plan for disasters. You have to anticipate future capacity requirements through budgeting and advance purchases. You have to pay for the data center itself, including real estate, electricity, heating, cooling, racks and other infrastructure. Operationally you have to pipe that data to and from your remote end users through your own connections to the Internet or to cloud application servers.

By contrast, cloud storage is very appealing. You pay only for what you use. You can hold service providers to service-level guarantees. You can pay the cloud provider to replicate the storage in various locations, so customers and end-users are closer to their data. You can pay for security, for backups, for disaster recovery provisions. And if you find that performance isn’t sufficient, you can migrate to another provider or order up a faster pipe. That’s a lot easier, cheaper and faster than ripping-and-replacing outdated storage racks in your own data center.

Gotta say, if I were setting up a new application for use by off-site users (whether customers or employees), I’d lean toward cloud storage. In most cases, the costs are comparable, and the operational convenience can’t be beat.

Plus, if you are at a startup, a monthly storage bill is easier to work with than a large initial outlay for on-site storage infrastructure.

Case closed? No, not exactly. On-site still has some tricks up its sleeve. If your application servers are on-site, local storage is faster to access. If your users are within your own building or campus, you can keep everything within your local area network.

There also may be legal advantages to maintaining and using on-site storage. For compliance purposes, you know exactly where the data is at all times. You can set up your own intrusion detection systems and access logs, rather than relying upon the access controls offered by the cloud provider. (If your firm isn’t good at security, of course, you may want to trust the cloud provider over your own IT department.)

On that subject: Lawsuits. In her story, “Eek! Lawyers are Coming After Your Fitbit!,” Sharon Fisher writes about insurance attorneys issuing subpoenas for a client’s Fitbit data to show that she wasn’t truly as injured as she claimed. The issue here isn’t only about wearables or healthcare. It’s also about access. “Will legal firms be able to subpoena your cloud provider if that’s where your fitness data is stored? How much are they going to fight to protect you?” Fisher asks.

Say a hostile attorney wants to subpoena some of your data. If the storage is in your own data center, the subpoena comes to your company, where your own legal staff can advise whether to respond by complying or fighting the subpoena.

Yet if the data is stored in the cloud, attorneys or government officials could come after you, or they could try to get access by serving a subpoena on the cloud service provider. Of course, encryption might prevent the cloud provider from complying. Still, this is a new concern, especially given the broad subpoena powers granted to prosecutors, litigating attorneys and government agencies.
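
To make that encryption point concrete, here is a minimal sketch of client-side encryption before upload, written in Java. It assumes you keep the AES key in your own key store; the class and field names are mine, not from any particular cloud SDK. If only ciphertext ever reaches the provider, there is no plaintext for the provider to hand over.

    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;
    import javax.crypto.spec.GCMParameterSpec;
    import java.nio.charset.StandardCharsets;
    import java.security.SecureRandom;

    // Hypothetical example: encrypt a record with AES-GCM before it leaves your network.
    // The key never goes to the cloud provider, so the provider holds only ciphertext.
    public class ClientSideEncryption {
        public static void main(String[] args) throws Exception {
            KeyGenerator keyGen = KeyGenerator.getInstance("AES");
            keyGen.init(256);
            SecretKey key = keyGen.generateKey(); // in practice, load this from your own key store or HSM

            byte[] iv = new byte[12];             // 96-bit nonce, as recommended for GCM
            new SecureRandom().nextBytes(iv);

            Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
            cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
            byte[] ciphertext = cipher.doFinal(
                    "sensitive fitness record".getBytes(StandardCharsets.UTF_8));

            // Store iv + ciphertext with the cloud provider; without the key,
            // the provider cannot produce readable data in response to a subpoena.
            System.out.println("Uploading " + (iv.length + ciphertext.length) + " encrypted bytes");
        }
    }

Encrypting before upload doesn’t make the legal questions go away, but it changes what a subpoena served on the provider can actually yield.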

It’s something to talk to your corporate counsel about. Bring your legal eagles into the conversation.

SEYTON
The tests, my lord, have failed.

MACBETH
I should have used a promise;
There would have been an object ready made.
Tomorrow, and tomorrow, and tomorrow,
Loops o’er this petty code in endless mire,
To the last iteration of recorded time;
And all our tests have long since found
Their way to dusty death. Shout, shout, brief handle!
Thine’s but a ghoulish shadow, an empty layer
That waits in vain to play upon this stage;
And then is lost, ignored. Yours is a tale
Told by an idiot, full of orphaned logic
Signifying nothing.

Those are a few words from a delightful new book, “If Hemingway Wrote JavaScript,” by Angus Croll. For example, the nugget above is “Macbeth’s Last Callback, after a soliloquy from Macbeth by William Shakespeare.”

Literary gems and nifty algorithms abound in this code-dripping 200-page tome from No Starch Press. Croll, a member of the UI framework team at Twitter, has been writing about famous authors writing JavaScript since 2012, and now has collected and expanded the entries into a book that will be amusing to read or gift this holiday season. (He also has a serious technical blog about JavaScript, but where’s the fun in that?)

Read and wonder as you see how Dan Brown, author of “The Da Vinci Code,” would code a Fibonacci sequence generator. How Jack Kerouac would calculate factorials. How J.D. Salinger and Tupac Shakur would determine if numbers are happy or inconsolable. How Dylan Thomas would muse on refactoring. How Douglas Adams of “Hitchhiker’s Guide to the Galaxy” fame would generate prime numbers. How Walt Whitman would perform acceptance tests. How J.K. Rowling would program a routine called mumbleMore. How Edgar Allan Poe would describe a commonplace programming task:

Once upon a midnight dreary, while I struggled with JQuery,
Sighing softly, weak and weary, troubled by my daunting chore,
While I grappled with weak mapping, suddenly a function wrapping
Formed a closure, gently trapping objects that had gone before.

Twenty-five famous authors, lots of JavaScript, lots of prose and poetry. What’s not to like? Put “If Hemingway Wrote JavaScript” on your shopping list.

Let’s move from JavaScript to C, specifically the 7th Underhanded C Contest. If you are a brilliantly bad C programmer, you might win a US$200 gift certificate to the popular online store ThinkGeek. The organizer, Prof. Scott Craver of Binghamton University in New York, explains:

The goal of the contest is to write code that is as readable, clear, innocent and straightforward as possible, and yet it must fail to perform at its apparent function. To be more specific, it should do something subtly evil. Every year, we will propose a challenge to coders to solve a simple data processing problem, but with covert malicious behavior. Examples include miscounting votes, shaving money from financial transactions, or leaking information to an eavesdropper. The main goal, however, is to write source code that easily passes visual inspection by other programmers.

The specific challenge for 2014 is to write a surveillance subroutine that looks proper but leaks data. The deadline is Jan. 1, 2015, more or less. See the Underhanded C website; be sure to read the FAQ!

Malicious agents can crash a website by mounting a DDoS—a distributed denial-of-service attack—against a server. So can sloppy programmers.

Take, for example, the National Weather Service’s website, operated by the United States National Oceanic and Atmospheric Administration, or NOAA. On August 29, the service went down, hard, as a single rogue Android app overwhelmed NOAA’s servers.

As far as anyone knows, there was nothing deliberately malicious about the Android app, and of course there is nothing specific to Android in this situation. However, the app in question was making service requests of the NOAA server’s public APIs every few milliseconds. With hundreds, thousands or tens of thousands of instances of that app running simultaneously, the NOAA system collapsed.

There is plenty of blame to go around. Let’s start with the app developer.

Certainly the app developer was sloppy, sloppy, sloppy. I can imagine that the app worked great in testing, when only one or two instances of the app were running at any one time on a simulator or on actual devices. Scale it up—boom! This is a case where manual code reviews might have found the problem. Maybe not.

Alternatively, the app developer could have checked to see if the public APIs it required (such as NOAA’s weather API) could handle the anticipated load. However, if the coders didn’t write the software correctly, load testing may not have sufficed. For example, say that the design of the app was to pull data every 10 seconds. If the programmers accidentally set up the data retrieval to pull the data every 10 milliseconds, the load would be 1,000x greater than anticipated. Every 10 seconds, no problem. Every 10 milliseconds, big problem. Boom!
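
To see how easy that mistake is to make, here is a minimal sketch in plain Java. The real app’s code isn’t public, so the names and structure here are hypothetical; the point is that the intended design and the accidental flood differ by a single time unit.

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    // Hypothetical poller: the class, method and interval are mine, not the real app's.
    public class ForecastPoller {
        public static void main(String[] args) {
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

            // Intended design: request the forecast every 10 seconds.
            // scheduler.scheduleAtFixedRate(ForecastPoller::fetchForecast, 0, 10, TimeUnit.SECONDS);

            // The bug: same number, wrong unit. The app now polls 1,000 times
            // more often than designed, and every installed copy does the same.
            scheduler.scheduleAtFixedRate(ForecastPoller::fetchForecast, 0, 10, TimeUnit.MILLISECONDS);
        }

        private static void fetchForecast() {
            // Stands in for the HTTP request to the public weather API.
            System.out.println("Requesting forecast at " + System.currentTimeMillis());
        }
    }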

This is a nasty bug, to be sure. Compilers, libraries and test systems would all verify that the software ran correctly, because it did run correctly. In the scenario I’ve painted, it simply wasn’t coded to meet the design. The bug might have been spotted if someone noticed a very high number of external API calls, or again, perhaps during a manual code review. Otherwise, it’s not hard to see how it would slip through the cracks.

Let’s talk about NOAA now. In 2004, the weather service beefed up its Internet capacity in anticipation of Hurricane Charley, contracting with Akamai to host some of its busiest Web pages, using distributed edge caching to reduce the load. This worked well, and Akamai continued to work with NOAA. It’s unclear whether Akamai also fronted public API calls; my guess is that those were passed straight through to the National Weather Service servers.

NOAA’s biggest problem is that it has little control over external applications that use its public APIs. Even so, Akamai was still in the circuit and, fortunately, was able to help with the response to the Aug. 29 accidental DDoS situation. At that time, the National Weather Service put out a bulletin on its NIDS messaging service that said:

TO – ALL CUSTOMERS SUBJECT – POINT FORECAST ISSUES. WE ARE PROVIDING NOTICE TO ALL THAT NIDS HAS IDENTIFIED AN ABUSING ANDROID APP THAT IS IMPACTING FORECAST.WEATHER.GOV. WE HAVE FORCED ALL SITES TO ZONES WHILE WE WORK WITH THE DEVELOPER. AKAMAI IS BEING ENGAGED TO BLOCK THE APPLICATION. WE CONTINUE TO WORK ON THIS ISSUE AND APPRECIATE YOUR PATIENCE AS WE WORK TO RESOLVE THIS ISSUE.

Kudos to NOAA for responding quickly and transparently to this issue. Still, this appalling situation—that a single misbehaving app could cripple such a vital service—is unacceptable. Imagine if this had been a malicious attack, rather than an accidental coding error, and if the attacker had been able to modify the attack in real time to get around Akamai’s attempts to block the traffic.

What could NOAA have done differently? For best results, DDoS attacks must be blocked within the network before they reach (and overwhelm) the server. Therefore, DDoS detection and blocking systems should already have been in place.

For example, such a system can detect a potential attack by spotting abnormally high volumes of requests from a specific app, raise alarms, and drop those requests (which is fast and takes few resources) instead of servicing them (which is slow and takes more). Perfect? No. DDoS scenarios are nasty and messy. No matter how you slice it, though, a single misbehaving app should never be able to crash your server. A rough sketch of that request-dropping idea follows.
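
This is a minimal, hypothetical sketch in Java of a per-client request limiter; the class name and limits are mine, and a real deployment would do this in the network layer with far more sophistication than a fixed-window counter.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.AtomicInteger;

    // Fixed-window request counter, keyed by API key or app identifier.
    public class SimpleRateLimiter {
        private final int maxRequestsPerWindow;
        private final long windowMillis;
        private final Map<String, Window> windows = new ConcurrentHashMap<>();

        public SimpleRateLimiter(int maxRequestsPerWindow, long windowMillis) {
            this.maxRequestsPerWindow = maxRequestsPerWindow;
            this.windowMillis = windowMillis;
        }

        // Returns true if the request should be serviced, false if it should be dropped.
        public boolean allow(String clientKey) {
            long now = System.currentTimeMillis();
            Window w = windows.compute(clientKey, (key, old) ->
                    (old == null || now - old.start >= windowMillis) ? new Window(now) : old);
            return w.count.incrementAndGet() <= maxRequestsPerWindow;
        }

        private static final class Window {
            final long start;
            final AtomicInteger count = new AtomicInteger();
            Window(long start) { this.start = start; }
        }
    }

Before servicing a request, the API front end would call allow() with the client’s key and simply drop, or return an error for, anything over the limit; dropping is cheap, servicing is not.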

HTML browser virtualization, not APIs, may be the best way to mobilize existing enterprise applications like SAP ERP, Oracle E-Business Suite or Microsoft Dynamics.

At least, that’s the perspective of Capriza, a company offering a SaaS-based mobility platform that uses a cloud-based secure virtualized browser to screen-scrape data and context from the enterprise application’s Web interface. That data is then sent to a mobile device (like a phone or tablet), where it’s rendered and presented through Capriza’s app.

The process is bidirectional: New transactional data can be entered into the phone’s Capriza app, which transmits it to the cloud-based platform. The Capriza cloud, in turn, opens up a secure virtual browser session with the enterprise software and performs the transaction.

The Capriza platform, which I saw demonstrated last week, is designed for employees to access enterprise applications from their Android or Apple phones, or from tablets.

The platform isn’t cheap – it’s licensed on a per-seat, per-enterprise-application basis, and you can expect a five-digit or six-digit annual cost, at the least. However, Capriza is solving a pesky problem.

Think about the mainstream way to deploy a mobile application that accesses big enterprise back-end platforms. Of course, if the enterprise software vendor offers a mobile app, and if that app meets your needs, that’s the way to go. What if the vendor doesn’t have a mobile app – or if the software is homegrown? The traditional approach would be to open up some APIs allowing custom mobile apps to access the back-end systems.

That approach is fraught with peril. It takes a long time. It’s expensive. It could destabilize the platform. It’s hard to ensure security, and often it’s a challenge to synchronize API access policies with client/server or browser-based access policies and ACLs. Even if you can license the APIs from an enterprise software vendor, how comfortable are you exposing them over the public Internet — or even through a VPN?

That’s why I like the Capriza approach of using a virtual browser to access the existing Web-based interface. In theory (and probably in practice), the enterprise software doesn’t have to be touched at all. Since the Capriza SaaS platform has each mobile user log into the enterprise software using the user’s existing Web interface credentials, there should be no security policies and ACLs to replicate or synchronize.

In fact, you can think of Capriza as an intentional man-in-the-middle for mobile users, translating mobile transactions to and from Web transactions on the fly, in real time.
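
To illustrate the general pattern (not Capriza’s actual implementation, which isn’t public), here is a rough Java sketch using a headless Selenium-driven browser; the URLs, element names and selectors are hypothetical, and the Selenium dependency plus a chromedriver binary are assumed.

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;
    import org.openqa.selenium.chrome.ChromeOptions;

    // Conceptual sketch of a server-side "virtual browser" bridge: log into an
    // existing Web UI with the user's own credentials, scrape one value, and
    // return it to a mobile client. All URLs and element names are made up.
    public class VirtualBrowserBridge {
        public static String fetchOpenOrderTotal(String user, String password) {
            ChromeOptions options = new ChromeOptions();
            options.addArguments("--headless=new");   // no visible browser window
            WebDriver driver = new ChromeDriver(options);
            try {
                driver.get("https://erp.example.internal/login");
                driver.findElement(By.name("username")).sendKeys(user);
                driver.findElement(By.name("password")).sendKeys(password);
                driver.findElement(By.id("loginButton")).click();

                driver.get("https://erp.example.internal/orders/open");
                return driver.findElement(By.cssSelector(".order-total")).getText();
            } finally {
                driver.quit();   // tear down the virtual browser session
            }
        }
    }

A mobile client would call something like this on the server side and get back just the scraped value, never the full Web page, which is roughly what translating mobile transactions to and from Web transactions means in practice.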

As the company explains it, “Capriza helps companies leverage their multi-million dollar investments in existing enterprise software and leapfrog into the modern mobile era. Rather than recreate the wheel trying to make each enterprise application run on a mobile device, Capriza breaks complex, über business processes into mini ones. Its approach bypasses the myriad of tools, SDKs, coding, integration and APIs required in traditional mobile app development approaches, avoiding the perpetual cost and time requirements, risk and questionable ROI.”

It certainly looks like Capriza wins this week’s game of Buzzword Bingo. Despite the marketing jargon, however, the technology is sound, and Capriza has real customers—and recently landed a US$27 million investment. That means we’re going to see a lot more of this solution.

Can Capriza do it all? Well, no. It works best on plain vanilla Web sites; no Flash, no Java, no embedded apps. While it’s somewhat resilient, changes to an internal Web site can break the screen-scraping technology. And while the design process for new mobile integrations doesn’t require a real programmer, the designer must be very proficient with the enterprise application, and model all the pathways through the software. This can be tricky to design and test.

Plus, of course, you have to be comfortable letting a third-party SaaS platform act as the man-in-the-middle to your business’s most sensitive applications.

Bottom line: If you are mobilizing enterprise software — either commercial or home-grown — that allows browser access, Capriza offers a solution worth considering.

You’ve gotta read “Data Divination: Big Data Strategies,” Pam Baker’s new book about Big Data.

Actually, let me change my recommendation. If you are a techie looking for suggestions on how to configure your Hadoop installation or optimize the storage throughput in your NAS array, this isn’t the book for you. Rather, this is the book for your business-side manager or partner, who is looking to understand not only what Big Data is, but how to really apply data analysis to business problems.

One of the challenges with Big Data is simply understanding it. The phrase is extremely broad and quite nebulous. Yet behind the overhyping of Big Data, there are genuine use cases that demonstrate that looking at your business’ data in a new way can transform your business. It is real, and it is true.

Baker is the editor of the “Fierce Big Data” website. She deconstructs the concept by dispensing with the jargon and the, well, overly smug Big Data worship that one finds in a lot of the literature and in material pushed out by the vendors. With a breezy style that reflects her background as a technology journalist, Baker uses clear examples and lots of interviews to make her points.

What will you learn? To start with, “Data Divination” teaches you how to ask good questions. After all, if you don’t ask, you won’t learn anything from all that data and all those reports. Whether it’s predictive analytics or trend spotting or real-time analysis, she helps you understand which data is valuable and which isn’t. That’s why this book is best for the executive and business-side managers, who are the ultimate beneficiaries of your enterprise’s Big Data investments.

This book goes beyond other books on the subject, which could generally be summarized either as too fluffy and cheerleading, or as myopically focused on implementation details of specific Big Data architectures. For example, there is a lengthy chapter on the privacy implications of data gathering and data analysis, the sort of chapter that a journalist would write, but an engineer wouldn’t even think about.

Once you’ve finished with the basics, Baker jumps into several fascinating use cases: in healthcare, in the security industry, in government and law enforcement, in small business, in agriculture, in transportation, in energy, in retail, in manufacturing, and so on. Those are the most interesting parts of the book, and each use case has takeaways that could apply to any industry. Baker is to be commended for digging into the noteworthy challenges that Big Data attempts to help businesses overcome.

It’s a good book. Read it. And tell your business partner, CIO or even CEO to read it too.

Thirty seconds. That’s about how long a mobile user will spend with your game before deciding if he or she will continue using it. Thirty seconds. Maybe a minute. If you haven’t engaged the customer by then, forget it.

That’s according to Graeme Warring, COO of 2XL Games LLC, a game startup based in Phoenix. Speaking at an investor conference here today sponsored by AZ TechBeat, Warring explained that while the mobile games market is exploding, it’s getting harder and harder to make money in it.

One culprit, especially with mobile games, is the new free-to-play business model. Gamers can download the mobile app at no cost. They have, therefore, little or no emotional investment. They might try the game. They might not try it. They might play for 30 seconds or a minute. There’s no sense of guilt to drive them to engage with the software for hours or days, and then be inspired to use in-app payments to improve the gaming experience.

By contrast, consider a console game, such as for Sony’s PlayStation 4 or Microsoft’s Xbox One. A typical game might cost US$60. The gamer has done his/her research before making that purchase. Thanks to the emotional and financial investment, he/she is going to make a serious effort to play that game.

“It’s problem transference,” explained Warring. Who owns the problem of ensuring that the player gives the game a serious try? For an expensive console game, it’s the player’s problem. For a free-to-play mobile game, it’s your problem as the game developer.

Getting the player to engage requires an outstanding initial experience. Don’t require a steep learning curve; the era of preliminary in-game tutorials is long gone. Get the player involved instantly, and make it a fun and rewarding experience. Later, and only later, should you try to monetize through in-app purchases. Whether it’s a new weapon for a shoot-em-up, or grippier tires for a racing game, or more lives and candy and prizes, those become appealing only after the player is hooked and engaged.

Warring and other speakers at the AZ TechBeat conference made the point that the best-selling, top-revenue-producing games come from a small number of firms. They insist, however, that there are tremendous opportunities to make a smaller game, perhaps one that costs less than $5 million to create and market, and to make a profit from the investment.

Marketing is key. Expect to spend as much on marketing as on development, “and be prepared to burn through that budget,” the speakers insisted. That may mean social media; it may mean licensing arrangements. To that end, they suggest that instead of creating your own new brand and attracting a new audience, you may do better licensing an existing brand and a proven audience. Making a motorcycle racing game? License and tie it in with an existing motorcycle event, if you can. Such a tie-in might be expensive, but it might bootstrap downloads and maybe even help attract investors.

That, in turn, will buy you 30 seconds. Make the most of it.

First Impressions of the Apple Watch: Surprised that it’s not called the iWatch. The user interface looks surprisingly cool. Distressed that the Apple Watch needs to be charged every day, but if the docking station is sufficiently easy to use, it shouldn’t be a deal breaker.

The watches look like real watches, beautiful as well as functional. The pricing of US$349 and up doesn’t scare me. The long delay for the release—not until early 2015—gives competitors like Motorola and Samsung a great opportunity to respond and seize the initiative. I hope that by the release date, Apple Watch will work with Android phones (and maybe Windows Phone), not only iPhones.

First Impressions of Pay-to-Yelp: The Ninth Circuit Court of Appeals in San Francisco ruled that Yelp did not extort businesses by changing how business reviews appeared on its site based on their advertising status. For example, regarding Dr. Tracy Chan, a dentist who never bought ads from the company and had no agreement with Yelp requiring impartial treatment, the court wrote:

We begin with Chan, who alleges that Yelp extorted her by removing positive reviews from her Yelp page. Chan asserts that she was deprived of the benefit of the positive reviews Yelp users posted to Yelp’s website, and that, had she received the benefits of the positive reviews, they would have counteracted the negative reviews other users posted. But Chan had no pre-existing right to have positive reviews appear on Yelp’s website. She alleges no contractual right pursuant to which Yelp must publish positive reviews, nor does any law require Yelp to publish them. By withholding the benefit of these positive reviews, Yelp is withholding a benefit that Yelp makes possible and maintains. It has no obligation to do so, however.

This sets a scary precedent that could affect all for-profit businesses that both provide a forum for user feedback and benefit in some way from that feedback. For example, an electronics reseller will undoubtedly sell more products if the reviews of those products are positive. There is nothing to stop such a reseller from removing negative reviews of products it wants to sell (such as those with high profit margins or where the manufacturer offers incentives), or removing positive reviews of other products. While I never had much faith in online reviews, whether of books, hotels or big-screen TVs, I will have even less faith in them now.

First Impressions of COBOL: Well, okay, it’s not a first impression, but let us revisit last week’s column, where I talked about job opportunities for young COBOL developers. Kevin Nitert, a 26-year-old developer from the Netherlands, responded, “While it’s very true [COBOL] is easy to learn, the problem is that most companies work directly on the mainframe or ISPF. So learning COBOL is only one part; you have to know about the mainframe environment as well and learn things about JCL and REXX.”

I totally agree and should have talked about the environment. It is easy to learn COBOL on your own or with online training. Picking up the mainframe and environment is much harder. It’s been my experience that employers bringing in employees to work on legacy systems expect to do such training themselves, especially if those employees are young and were hired for their aptitude, not for their specific legacy skills with the platform.

To be honest, it wouldn’t take long to bring newbies up to speed on REXX (Restructured Extended Executor, a sophisticated scripting and job-control language) and ISPF (Interactive System Productivity Facility, a development tool chain for IBM’s z-series mainframes).

Once upon a time, back when dinosaurs roamed the planet, I learned COBOL. While I never wrote any deployed applications in the language, I did use it to teach an undergraduate course in computer science for business majors, back in the early 1980s. Those poor students, who submitted their programs on punch cards for an IBM System/370, complained that the class was the most time-consuming of their undergraduate studies, often requiring lots of late nights at the campus computer center.

Hint: If you are using a card punch, it’s important to have excellent typing skills.

Today, of course, COBOL is more likely to be a punchline. Sure, there was a resurgence of interest in the language about 15 years ago, when everyone was concerned about the Y2K problem. After that, legacy systems running COBOL disappeared from the public eye. Everyone would much rather use Java or C# or Python or Ruby or Objective-C; that’s where the jobs are, right?

Don’t be so quick to discount COBOL, especially if you are looking for a job, or think that you might be some day. There are lots of COBOL applications running all over the world, and more and more COBOL experts are retiring every day.

I’d like to share some information from one of the biggest COBOL vendors: Micro Focus. Bear in mind that Micro Focus’ interest is self-serving, but with that said, I find the information fascinating.

According to the company:

A poll of academic leaders from 119 universities across the world saw more than half (58%) say they believed COBOL programming should be on their curriculum, with 54% estimating the demand for COBOL programming skills would increase or stay the same over the next 10 years. That’s a far cry from today’s reality. Of the 27% confirming COBOL programming was part of their curriculum, only 18% had it as a core part of the course, while the remaining 9% made it an elective component.

The company has a lot of bullet points about COBOL, which I’ll share verbatim:

  • COBOL supports 90% of Fortune 500 business systems every day
  • 70% of all critical business logic and data is written in COBOL
  • COBOL connects 500 million mobile phone users every day
  • COBOL applications manage the care of 60 million patients every day
  • COBOL powers 85% of all daily business transactions processed
  • COBOL applications move 72,000 shipping containers every day and process 85% of port transactions
  • 95% of all ATM transactions use COBOL
  • COBOL enables 96,000 vacations to be booked every year
  • COBOL powers 80% of all point-of-sale transactions
  • There are 200 more COBOL transactions per day than Google + You Tube searches worldwide
  • $2 trillion worth of mainframe applications in corporations are written in COBOL
  • 1.5 million new lines of COBOL code are written every day
  • 5 billion lines of new COBOL code are developed every year
  • The total investment in COBOL technologies, staff and hardware is estimated at $5 trillion
  • An estimated 2 million people are currently working in COBOL

Micro Focus cites The Aberdeen Group, Giga Information Group, Database and Network Journal, The COBOL Report, SearchEngineWatch.com, Tactical Strategy Group, and the Future of COBOL report as sources for those points.

Again, this might mean job opportunities. FedTech Magazine reports:

In March [2014], the Office of Personnel Management released its “Strategic Information Technology Plan,” a vision for revamping the agency’s IT operations, including plans to automate paper-based processes, invest in data analytics and consolidate numerous databases. But maintaining the mainframe hardware and software systems that currently support retirement processes is expensive; according to the agency’s plan, OPM anticipates costs will increase 10 percent to 15 percent annually “as personnel with the necessary coding expertise retire and cannot easily be replaced.”

The United States military agrees. According to Terry Halvorsen, CIO of the Department of the Navy, it is often more cost-effective to maintain COBOL systems than to replace them:

In many cases, however, maintaining an older system that fully supports our mission makes more sense than upgrading it or buying a new system that runs the risk of degrading our mission and requires a large investment. The [Department of the Navy] and industry still use a programming language developed in 1959 to operate major business functions in finance and personnel. In fact, nearly three quarters of the world’s business transactions are done in that venerable language, COBOL. Now with an estimated 200 billion lines of code, it still works. Part of the reason for this is it’s an easy-to-use language, it is fast and it processes data quickly. It is in use worldwide and has proven so stable and reliable that many Fortune 500 companies—Capital One, for example—still hire and train COBOL programmers.

COBOL is easy to learn… and the jobs are there.