Java Magazine home page

I’m back in the saddle again, if by “saddle” you mean editing a magazine. Today, I took over the helm of Oracle’s Java Magazine, one of the world’s leading publications for software developers, with about 260,000 subscribers. The previous editor in chief, Andrew Binstock, moved on after five years to work on other projects; he leaves big shoes to fill.

As Andrew wrote,

This is my last issue of Java Magazine. After five very enjoyable years at the helm, I’m ready to take on other challenges, including getting back to working on my preferred coding projects. I will surely pop up here and there with articles and reviews (likely even in this magazine). If you’ve enjoyed my work, I invite you to follow me on Twitter (@platypusguy) or to reach out to me on LinkedIn, where I accept all invitations. At the moment, I am currently participating in interviewing prospective successors and I’ll make sure that Oracle has a good person in place to carry on. From the bottom of my heart, thank you all for being readers; and to many of you, I send additional gratitude for your thoughtful comments and suggestions over the years. It’s been truly an honor.

I am honored to follow in Andrew’s footsteps.

Chapter One: Christine Hall

Should the popular Linux operating system be referred to as “Linux” or “GNU/Linux”? It’s a thing, or at least it used to be, writes my friend Christine Hall in her aptly named article, “Is It Linux or GNU/Linux?”, published in Linux Journal on May 11:

Some may remember that the Linux naming convention was a controversy that raged from the late 1990s until about the end of the first decade of the 21st century. Back then, if you called it “Linux”, the GNU/Linux crowd was sure to start a flame war with accusations that the GNU Project wasn’t being given due credit for its contribution to the OS. And if you called it “GNU/Linux”, accusations were made about political correctness, although operating systems are pretty much apolitical by nature as far as I can tell.

Christine (aka Bride of Linux) quotes a number of learned people, including Steven J. Vaughan-Nichols, one of the top experts in the politics of open-source software – and a frequent critic of the antics of Richard M. Stallman (aka RMS), who founded the Free Software Foundation and insists that everyone call the software GNU/Linux.

Here’s what Steven (aka SJVN) said in the article:

“Enough already”, he said. “RMS tried, and failed, to create an operating system: Hurd. He and the Free Software Foundation’s endless attempts to plaster his GNU name to the work of Linus Torvalds and the other Linux kernel developers is disingenuous and an insult to their work. RMS gets credit for EMACS, GPL, and GCC. Linux? No.”

Another humble luminary sought out by Christine: Yours truly.

“For me it’s always, always, always, always Linux,” said Alan Zeichick, an analyst at Camden Associates who frequently speaks, consults and writes about open-source projects for the enterprise. “One hundred percent. Never GNU/Linux. I follow industry norms.”

To make a long story short: In the article, the consensus was for Linux, not GNU/Linux.

Chapter Two: figosdev

But then someone going by the handle “figosdev” authored a rebuttal, “Debunking the Usual Omission of GNU,” published on Techrights. To make a long story short, he believes that the operating system should be called GNU/Linux. Here’s my favorite part of figosdev’s missive (which was written in all lower-case):

ive heard about gnu and linux about a million times in over a decade. as of today ive heard of alan zeichick once, and camden associates (what do they even do?) once. im just going to call them linux, its the more popular term.

Riiight. figosdev has never heard of me; fine (I’m the founder of SD Times, but figosdev probably never heard of that either). On the other hand, at least figosdev knows my name. I have no idea who figosdev is, except to infer that he/she/it is a developer on the fig component compiler project, since he/she/it is hiding behind a handle. And that brings me to

Chapter Three: Richi Jennings

Christine Hall’s article sparked a lively debate on Twitter. As part of it, my friend Richi Jennings (quoted in the original article) tweeted:

Let’s end the story here, at least for now. Linux forever!

The purchase order looks legitimate, but does it have all the proper approvals? Many lawyers reviewed this draft contract, so is this the latest version? Can we prove that this essential document hasn’t been tampered with before I sign it? Can we prove that these two versions of a document are absolutely identical?

Blockchain might be able to help solve these kinds of everyday trust issues related to documents, especially when they are PDFs—data files created using the Portable Document Format. Blockchain technology is best known for securing financial transactions, including powering new financial instruments such as Bitcoin. But blockchain’s ability to increase trust will likely find enterprise use cases in common, non-financial information exchanges such as these document workflows.

Joris Schellekens, a software engineer and PDF expert at iText Software in Ghent, Belgium, recently presented his ideas for blockchain-supported documents at Oracle Code Los Angeles. Oracle Code is a series of free events around the world created to bring developers together to share fresh thinking and collaborate on ideas like these.

PDF’s Power and Limitations

The PDF file format was created in the early 1990s by Adobe Systems. PDF was a way to share richly formatted documents whose visual layout, text, and graphics would look the same, no matter which software created them or where they were viewed or printed. The PDF specification became an international standard in 2008.

Early on, Adobe and other companies built security features into PDF files, including password protection, encryption, and digital signatures. In theory, the digital signatures should be able to prove who created, or at least who encrypted, a PDF document. However, depending on the hashing algorithm used, it’s not so difficult to subvert those protections to, for example, change a date/time stamp or even the document content, says Schellekens. His company, iText Software, markets a software development kit and APIs for creating and manipulating PDFs.

“The PDF specification contains the concept of an ID tuple,” or an immutable sequence of data, says Schellekens. “This ID tuple contains timestamps for when the file was created and when it was revised. However, the PDF spec is vague about how to implement these when creating the PDF.”

Even in the case of an unaltered PDF, the protections apply to the entire document, not to various parts of it. Consider a document that must be signed by multiple parties. Since not all certificate authorities store their private keys with equal vigilance, you might lack confidence about who really modified the document (e.g. signed it), at which times, and in which order. Or, you might not be confident that there were no modifications before or after someone signed it.

A related challenge: Signatures on a digital document generally must be made serially, one at a time. The PDF specification doesn’t allow for a document to be signed in parallel by several people (as is common with contract reviews and signatures) and then merged together.
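To make the tamper-evidence idea concrete, here is a minimal sketch in plain Java (my illustration, not iText’s API) of computing a SHA-256 fingerprint of a PDF file. Record that fingerprint on a blockchain when the document is finalized, and anyone can later recompute it to prove the file hasn’t changed, or that two copies are identical:

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.security.MessageDigest;

    public class PdfFingerprint {
        // Hash the entire file; if even one byte of the PDF changes, the
        // digest changes, which is what makes it useful as a tamper-evidence
        // record on a blockchain (or anywhere else).
        static String sha256Hex(Path pdf) throws Exception {
            byte[] digest = MessageDigest.getInstance("SHA-256")
                    .digest(Files.readAllBytes(pdf));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        }

        public static void main(String[] args) throws Exception {
            // "contract.pdf" is just a placeholder name for this sketch.
            System.out.println(sha256Hex(Paths.get("contract.pdf")));
        }
    }

Real deployments would hash a canonical form of the document and sign the digest, but the principle is the same.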

Blockchain has the potential to solve such document problems, and several others besides. Read more in my story for Forbes, “Can Blockchain Solve Your Document And Digital Signature Headaches?”

Albert Einstein famously said, “Everything should be made as simple as possible, but not simpler.” Agile development guru Venkat Subramaniam has a knack for taking that insight and illustrating just how desperately the software development process needs the lessons of Professor Einstein.

As the keynote speaker at the Oracle Code event in Los Angeles—the first in a 14-city tour of events for developers—Subramaniam describes the art of simplicity, and why and how complexity becomes the enemy. While few would argue that complex is better, that’s what we often end up creating, because complex applications or source code may make us feel smart. But if someone says our software design or core algorithm looks simple, well, we feel bad—perhaps the problem was easy and obvious.

Subramaniam, who’s president of Agile Developer and an instructional professor at the University of Houston, urges us instead to take pride in coming up with a simple solution. “It takes a lot of courage to say, ‘we don’t need to make this complex,’” he argues. (See his full keynote, or register for an upcoming Oracle Code event.)

Simplicity Is Not Simple

Simplicity is hard to define, so let’s start by considering what simple is not, says Subramaniam. In most cases, our first attempts at solving a problem won’t be simple at all. The most intuitive solution might be overly verbose, or inefficient, or perhaps difficult to understand, even by its programmers after the fact.

Simple is not clever. Clever software, or clever solutions, may feel worthwhile, and might cause people to pat developers on the back. But ultimately, it’s hard to understand, and can be hard to change later. “Clever code is self-obfuscating,” says Subramaniam, meaning that it can be incomprehensible. “Even programmers can’t understand their clever code a week later.”

Simple is not necessarily familiar. Subramaniam insists that we are drawn to the old, comfortable ways of writing software, even when those methods are terribly inefficient. He mentioned someone who wrote code with 70 “if/then” questions in a series—because it was familiar. But it certainly wasn’t simple, and would be nearly impossible to debug or modify later. Something that we’re not familiar with may actually be simpler than what we’re comfortable with. To fight complexity, Subramaniam recommends learning new approaches and staying up with the latest thinking and the latest paradigms.
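As a hypothetical illustration of that point (my example, not one of Subramaniam’s), compare a familiar if/else ladder with a simple lookup table; both work, but only one stays readable as the cases pile up:

    import java.util.Map;

    public class StatusMessages {
        // The familiar way: an if/else ladder that only grows over time.
        static String messageVerbose(int code) {
            if (code == 200) return "OK";
            else if (code == 404) return "Not Found";
            else if (code == 500) return "Server Error";
            // ...imagine dozens more branches here...
            else return "Unknown";
        }

        // The simpler way: the data is the logic.
        private static final Map<Integer, String> MESSAGES =
                Map.of(200, "OK", 404, "Not Found", 500, "Server Error");

        static String messageFor(int code) {
            return MESSAGES.getOrDefault(code, "Unknown");
        }
    }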

Simple is not over-engineered. Sometimes you can overthink the problem. Perhaps that means trying to develop a generalized algorithm that can be reused to solve many problems, when the situation calls for a fast, basic solution to a single problem. Subramaniam cited Occam’s Razor: When choosing between two solutions, the simplest may be the best.

Simple is not terse. Program source code should be concise, which means that it’s small but also clearly communicates the programmer’s intent. By contrast, something that’s terse may still execute correctly when compiled into software, but the human understanding may be lost. “Don’t confuse terse with concise,” warns Subramaniam. “Both are really small, but terse code is waiting to hurt you when you least expect it.”
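Here’s a hypothetical example of the difference (mine, not Subramaniam’s). Both methods below are small; only one tells the next reader what it is for:

    public class OrderCodes {
        // Terse: compact, but the reader must reverse-engineer the intent.
        static boolean v(String s) {
            return s != null && s.matches("[A-Z]{2}\\d{5}");
        }

        // Concise: just as small in spirit, and the intent is explicit.
        static boolean isValidOrderCode(String code) {
            return code != null && code.matches("[A-Z]{2}\\d{5}");
        }
    }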

Read more in my essay, “Practical Advice To Whip Complexity And Develop Simpler Software.”

Open source software (OSS) offers many benefits for organizations large and small—not the least of which is the price tag, which is often zero. Zip. Nada. Free-as-in-beer. Beyond that compelling price tag, what you often get with OSS is a lack of a hidden agenda. You can see the project, you can see the source code, you can see the communications, you can see what’s going on in the support forums.

When OSS goes great, everyone is happy, from techies to accounting teams. Yes, the legal department may want to scrutinize the open source license to make sure your business is compliant, but in most well-performing scenarios, the lawyers are the only ones frowning. (But then again, the lawyers frown when scrutinizing commercial closed-source software license agreements too, so you can’t win.)

The challenge with OSS is that it can be hard to manage, especially when something goes wrong. Depending on the open source package, there can be a lot of mysteries, which can make ongoing support, including troubleshooting and performance tuning, a real challenge. That’s because OSS is complex.

It’s not like you can say, well, here’s my Linux distribution on my server. Oh, and here’s my open source application server, and my open source NoSQL database, and my open source logging suite. In reality, those bits of OSS may come from separate OSS projects, which may (or may not) have been tested for how well they work together.

A separate challenge is that because OSS is often free-as-in-beer, the software may not be in the corporate inventory. That’s especially common if the OSS is in the form of a library or an API that might be built into other applications you’ve written yourself. The OSS might be invisible but with the potential to break or cause problems down the road.

You can’t manage what you don’t know about

When it comes to OSS, there may be a lot you don’t know about, such as those license terms or interoperability gotchas. Worse, there can be maintenance issues — and security issues. Ask yourself: Does your organization know all the OSS it has installed on servers on-prem or in the cloud? Coded into custom applications? Are you sure that all patches and fixes have been installed (and installed correctly), even on virtual machine templates, and that there are no security vulnerabilities?
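As one small, hedged illustration of getting that visibility, the sketch below walks a Java application’s classpath and prints the name and version each JAR declares in its manifest. It is nowhere near a real software inventory or license-scanning tool, but it shows how much embedded open source sits quietly inside a single application:

    import java.io.File;
    import java.util.jar.Attributes;
    import java.util.jar.JarFile;
    import java.util.jar.Manifest;

    public class ClasspathInventory {
        public static void main(String[] args) throws Exception {
            // Walk every JAR on the classpath and report what it claims to be.
            String classpath = System.getProperty("java.class.path");
            for (String entry : classpath.split(File.pathSeparator)) {
                if (!entry.endsWith(".jar")) continue;
                try (JarFile jar = new JarFile(entry)) {
                    Manifest mf = jar.getManifest();
                    Attributes a = (mf == null) ? null : mf.getMainAttributes();
                    String title = (a == null) ? null
                            : a.getValue(Attributes.Name.IMPLEMENTATION_TITLE);
                    String version = (a == null) ? null
                            : a.getValue(Attributes.Name.IMPLEMENTATION_VERSION);
                    System.out.printf("%s -> %s %s%n", entry, title, version);
                }
            }
        }
    }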

In my essay “The six big gotchas: The impact of open source on data centers,” we’ll dig into the key topics: License management, security, patch management, maximizing uptime, maximizing performance, and supporting the OSS.

Stupidity. Incompetence. Negligence. The huge, unprecedented data breach at Equifax has dominated the news cycle, infuriating IT managers, security experts, legislators, and attorneys — and scaring consumers. It appears that sensitive personally identifiable information (PII) on 143 million Americans was exfiltrated, as well as PII on some non-US nationals.

There are many troubling aspects. Reports say the tools that consumers can use to see whether they are affected by the breach are inaccurate. Articles say that by using those tools, consumers waive their rights to sue Equifax. Some worry that Equifax will actually make money off this by selling affected consumers its credit-monitoring services.

Let’s look at the technical aspects, though. While details about the breach are still widely lacking, two bits of information are making the rounds. One is that Equifax followed terrible password practices, allowing hackers to easily gain access to at least one server. Another is that there was a flaw in a piece of open-source software; a patch had been available for months, yet Equifax didn’t apply it.

The veracity of those two possible causes of the breach is unclear. Even so, they point to a troubling pattern of utter irresponsibility by Equifax’s IT and security operations teams.

Bad Password Practices

Username “admin.” Password “admin.” That’s often the default for hardware, like a home WiFi router. The first thing any owner should do is change both the username and password. Every IT professional knows that. Yet the fine techies at Equifax, or at least their Argentina office, didn’t know that. According to well-known security writer Brian Krebs, earlier this week,

Earlier today, this author was contacted by Alex Holden, founder of Milwaukee, Wisc.-based Hold Security LLC. Holden’s team of nearly 30 employees includes two native Argentinians who spent some time examining Equifax’s South American operations online after the company disclosed the breach involving its business units in North America.

It took almost no time for them to discover that an online portal designed to let Equifax employees in Argentina manage credit report disputes from consumers in that country was wide open, protected by perhaps the most easy-to-guess password combination ever: “admin/admin.”

What’s more, writes Krebs,

Once inside the portal, the researchers found they could view the names of more than 100 Equifax employees in Argentina, as well as their employee ID and email address. The “list of users” page also featured a clickable button that anyone authenticated with the “admin/admin” username and password could use to add, modify or delete user accounts on the system.

and

A review of those accounts shows all employee passwords were the same as each user’s username. Worse still, each employee’s username appears to be nothing more than their last name, or a combination of their first initial and last name. In other words, if you knew an Equifax Argentina employee’s last name, you also could work out their password for this credit dispute portal quite easily.

Idiots.
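The fix for that particular failure isn’t exotic. Here is a hedged sketch of the kind of check any credential-setting code should enforce; it’s illustrative only, since real systems also need hashing, complexity rules, lockouts and the rest:

    public class PasswordPolicy {
        // Reject exactly the patterns Krebs described: a password equal to
        // the username, to the default "admin", or to the user's last name.
        static boolean isAcceptable(String username, String lastName, String password) {
            if (password == null || password.length() < 12) return false;
            if (password.equalsIgnoreCase(username)) return false;
            if (password.equalsIgnoreCase(lastName)) return false;
            if (password.equalsIgnoreCase("admin")) return false;
            return true;
        }
    }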

Patches Are Important, Kids

Apache Struts is a well-regarded open source framework for creating Web applications. It’s excellent — I’ve used it myself — but like all software, it can have bugs. One such defect was discovered in March 2017 and was assigned the identifier CVE-2017-5638. A patch was issued within days by the Struts team. Yet Equifax never installed that patch.

Even so, the company is blaming the U.S. breach on that defect:

Equifax has been intensely investigating the scope of the intrusion with the assistance of a leading, independent cybersecurity firm to determine what information was accessed and who has been impacted. We know that criminals exploited a U.S. website application vulnerability. The vulnerability was Apache Struts CVE-2017-5638. We continue to work with law enforcement as part of our criminal investigation, and have shared indicators of compromise with law enforcement.

Keeping up with vulnerability reports, and applying patches right away, is essential for good security. Everyone knows this. Including, I’m sure, Equifax’s IT team. There is no excuse. Idiots.

For programmers, a language style guide is essential for learning a language’s standards. A style guide also can resolve potential ambiguities in syntax and usage. Interestingly, though, the official Code Conventions for the Java Programming Language guide has not been updated since April 20, 1999 – long before Oracle bought Sun Microsystems. In fact, the page is listed as for “Archival Purposes Only.”

What’s up with that? I wrote to Andrew Binstock (@PlatypusGuy), the editor-in-chief of Oracle Java Magazine. In the November/December 2016 issue of the magazine, Andrew explained that according to the Java team, the Code Conventions guide was meant as an internal coding guide – not as an attempt to standardize the language.

Instead of Code Conventions, Mr. B recommends the Google Java Style Guide as a “full set of well-reasoned Java coding guidelines.” So there you have it: If you want good Java guidelines, look to Google — not to Oracle. Here’s the letter and the response.
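For a flavor of what that guide prescribes (from memory, so treat the guide itself as the authority): two-space indentation, braces that open on the same line, lowerCamelCase members, and UPPER_SNAKE_CASE constants.

    // Formatted roughly per the Google Java Style Guide (illustrative only).
    public class OrderReport {
      private static final int MAX_RETRIES = 3; // constants: UPPER_SNAKE_CASE

      private final String customerName; // fields and methods: lowerCamelCase

      public OrderReport(String customerName) {
        this.customerName = customerName;
      }

      public String summaryLine() { // braces open on the same line
        return "Report for " + customerName;
      }
    }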

Be paranoid! When you visit a website for the first time, it can learn a lot about you. If you have cookies on your computer from one of the site’s partners, it can see what else you have been doing. And it can place cookies onto your computer so it can track your future activities.

Many (or most?) browsers have some variation of “private” browsing mode. In that mode, websites shouldn’t be able to read cookies stored on your computer, and they shouldn’t be able to place permanent cookies onto your computer. (They think they can place cookies, but those cookies are deleted at the end of the session.)

Those settings aren’t good enough, because they are either all or nothing, and offer a poor balance between ease-of-use and security/privacy. The industry can and must do better. See why in my essay on NetworkWorld, “We need a better Private Browsing Mode.”

 

Are you a coder? Architect? Database guru? Network engineer? Mobile developer? User-experience expert? If you have hands-on tech skills, get those hands dirty at a Hackathon.

Full disclosure: Years ago, I thought Hackathons were, well, silly. If you’ve got the skills and extra energy, put them to work coding your own mobile apps. Do a startup! Make some dough! Contribute to an open-source project! Do something productive instead of taking part in coding contests!

Since then, I’ve seen the light, because it’s clear that Hackathons are a win-win-win.

  • They are a win for techies, because they get to hone their abilities, meet people, and learn stuff.
  • They are a win for Hackathon sponsors, because they often give the latest tools, platforms and APIs a real workout.
  • They are a win for the industry, because they help advance the creation and popularization of emerging standards.

One upcoming Hackathon that I’d like to call attention to: The MEF LSO Hackathon will be at the upcoming MEF16 Global Networking Conference, in Baltimore, Nov. 7-10. The work will support Third Network service projects that are built upon key OpenLSO scenarios and OpenCS use cases for constructing Layer 2 and Layer 3 services. You can read about a previous MEF LSO Hackathon here.

Build your skills! Advance the industry! Meet interesting people! Sign up for a Hackathon!

What’s it going to mean for Java? When Oracle purchased Sun Microsystems, that was one of the biggest questions on the minds of many software developers, and indeed, the entire industry. In an April 2009 blog post, “Oracle, Sun, Winners, Losers,” written when the deal was announced (it closed in January 2010), I predicted,

Winner: Java. Java is very important to Sun. Expect a lot of investment — in the areas that are important to Oracle.

Loser: The Java Community Process. Oracle is not known for openness. Oracle is not known for embracing competitors, or for collaborating with them to create markets. Instead, Oracle is known to play hardball to dominate its markets.

Looks like I called that one correctly. While Oracle continues to invest in Java, it’s not big on true engagement with the community (aka the Java Community Process). In a story in SD Times, “Java EE awaits its future,” published July 20, 2016, Alex Handy writes about what to expect at the forthcoming JavaOne conference, including the state of Java EE:

When Oracle purchased Sun Microsystems in 2010, the immediate worry in the marketplace was that the company would become a bad actor around Java. Six years later, it would seem that these fears have come true—at least in part. The biggest new platform for Java, Android, remains embroiled in ugly litigation between Google and Oracle.

Despite outward appearances of a danger for mainstream Java, however, it’s undeniable that the OpenJDK has continued along apace, almost at the same rate of change IT experienced at Sun. When Sun open-sourced the OpenJDK under the GPL before it was acquired by Oracle, it was, in a sense, ensuring that no single entity could control Java entirely, as with Linux.

Java EE, however, has lagged behind in its attention from Oracle. Java EE 7 arrived two years ago, and it’s already out of step with the new APIs introduced in OpenJDK 8. The executive committee at the Java Community Process is ready to move the enterprise platform along its road map. Yet something has stopped Java EE dead in its tracks at Oracle. JSR 366 laid out the foundations for this next revision of the platform in the fall of 2015. One would never know that, however, by looking at the Expert Committee mailing lists at the JCP: Those have been completely silent since 2014.

Alex continues,

One person who’s worried that JavaOne won’t reveal any amazing new developments in Java EE is Reza Rahman. He’s a former Java EE evangelist at Oracle, and is now one of the founders of the Java EE Guardians, a group dedicated to goading Oracle into action, or going around them entirely.

“Our principal goal is to move Java EE forward using community involvement. Our biggest concern now is if Oracle is even committed to delivering Java EE. There are various ways of solving it, but the best is for Oracle to commit to and deliver Java EE 8,” said Rahman.

His concerns come from the fact that the Java EE 8 specification has been, essentially, stalled by lack of action on Oracle’s part. The specification leads for the project are stuck in a sort of limbo, with their last chunk of work completed in December, followed by no indication of movement inside Oracle.

Alex quotes an executive at Red Hat, Craig Muzilla, who seems justifiably pessimistic:

The only thing standing in the way of evolving Java EE right now, said Muzilla, is Oracle. “Basically, what Oracle does is they hold the keys to the [Test Compatibility Kit] for certifying in EE, but in terms of creating other ways of using Java, other runtime environments, they don’t have anything other than their name on the language,” he said.

Java is still going strong. Oracle’s commitment to the community and the process – not so much. This is one “told you so” that I’m not proud of, not one bit.

The newest issue of the second-best software development publication is out – and it’s a doozy. You’ll definitely want to read the July/August 2016 issue of Java Magazine.

(The #1 publication in this space is my own Software Development Times. Yeah, SD Times rules.)

Here is how Andrew Binstock, editor-in-chief of Java Magazine, describes the latest issue:

…in which we look at enterprise Java – not so much at Java EE as a platform, but at individual services that can be useful as part of a larger solution. For example, we examine JSON-P, the two core Java libraries for parsing JSON data; JavaMail, the standalone library for sending and receiving email messages; and JASPIC, which is a custom way to handle security, often used with containers. For Java EE fans, one of the leaders of the JSF team discusses in considerable detail the changes being delivered in the upcoming JSF 2.3 release.

We also show off JShell from Java 9, which is an interactive shell (or REPL) useful for testing Java code snippets. It will surely become one of the most used features of the new language release, especially for testing code interactively without having to set up and run an entire project.

And we continue our series on JVM languages with JRuby, the JVM implementation of the Ruby scripting language. The article’s author, Charlie Nutter, who implemented most of the language, discusses not only the benefits of JRuby but how it became one of the fastest implementations of Ruby.

For new to intermediate programmers, we deliver more of our in-depth tutorials. Michael Kölling concludes his two-part series on generics by explaining the use of and logic behind wildcards in generics. And a book excerpt on NIO.2 illustrates advanced uses of files, paths, and directories, including an example that demonstrates how to monitor a directory for changes to its files.

In addition, we have our usual code quiz with its customary detailed solutions, a book review of a new text on writing maintainable code, an editorial about some of the challenges of writing code using only small classes, and the overview of a Java Enhancement Proposal (JEP) for a Java linker. A linker in Java? Have a look.
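To give a taste of the JShell feature mentioned above, here is roughly what a session looks like (a sketch; the exact prompts and result labels can vary by build):

    jshell> int answer = 6 * 7
    answer ==> 42

    jshell> String greet(String name) { return "Hello, " + name; }
    |  created method greet(String)

    jshell> greet("Duke")
    $3 ==> "Hello, Duke"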

The story I particularly recommend is “Using the Java APIs for JSON processing.” David Delabasseé covers the Java API for JSON (JavaScript Object Notation) Processing, JSR 353, and its two parts, one of which is a high-level object model API and the other a lower-level streaming API.
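As a quick, hedged sketch of those two styles (using the javax.json API from JSR 353; the article’s own examples may differ, and you need a JSON-P implementation on the classpath):

    import java.io.StringReader;
    import javax.json.Json;
    import javax.json.JsonObject;
    import javax.json.stream.JsonParser;

    public class JsonpDemo {
        public static void main(String[] args) {
            String json = "{\"name\":\"Duke\",\"language\":\"Java\"}";

            // High-level object model API: read the whole document into a tree.
            JsonObject obj = Json.createReader(new StringReader(json)).readObject();
            System.out.println(obj.getString("name")); // Duke

            // Lower-level streaming API: pull parser events one at a time.
            JsonParser parser = Json.createParser(new StringReader(json));
            while (parser.hasNext()) {
                if (parser.next() == JsonParser.Event.KEY_NAME) {
                    System.out.println("key: " + parser.getString());
                }
            }
        }
    }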

It’s a solid issue. Read it – and subscribe, it’s free!

The MEF recently conducted its second LSO Hackathon at a Rome event called Euro16. You can read my story about it here in DiarioTi: LSO Hackathons Bring Together Open Standards, Open Source.

Alas, my coding skills are too rusty for a Hackathon, unless the objective is to develop a fully buzzword compliant implementation of “Hello World.” Fortunately, there are others with better skills, as well as a broader understanding of today’s toughest problems.

Many Hackathons are thinly veiled marketing exercises by companies, designed to find ways to get programmers hooked on their tools, platforms, APIs, etc. Not all! One of the most interesting Hackathons is from the MEF, an industry group that drives communications interoperability. As a standards defining organization (SDO), the MEF wants to help carriers and equipment vendors design products/services ready for the next generation of connectivity. That means building on a foundation of SDN (software defined networks), NFV (network functions virtualization), LSO (lifecycle service orchestration) and CE2.0 (Carrier Ethernet 2.0).

To make all this happen:

  • What the MEF does: Create open standards for architectures and specifications.
  • What vendors, carriers and open source projects do: Write software to those specifications.
  • What the Hackathon does: Give everyone a chance to work together, make sure the code is truly interoperable, and find areas where the written specs might be ambiguous.

Thus, the MEF LSO Hackathons. They bring together a wide swath of the industry to move beyond the standards documents and actually write and test code that implements those specs.

As mentioned above, the MEF just completed its second Hackathon at Euro16. The first LSO Hackathon was at last year’s MEF GEN15 annual conference in Dallas. Here’s my story about it in Telecom Ramblings: The MEF LSO Hackathon: Building Community, Swatting Bugs, Writing Code.

The third LSO Hackathon will be at this year’s MEF annual conference, MEF16, in Baltimore, Nov. 7-10. I will be there as an observer – alas, without the up-to-date, practical skills to be a coding participant.

Today’s serendipitous discovery: A blog post about the Enterprise Software Development Conference (ESDC), produced by BZ Media in March 2010. I was the conference chair of that event; our goal was to try to replicate the wonderful SD West conference, which CMP had discontinued the year before. (I am the “Z” of BZ Media.)

Unfortunately, ESDC was not viable from a business perspective, so we only ran it one time. Even so, we had a great conference, and the attendees, presenters and exhibitors were delighted with the event’s quality and technical content.

One of our top exhibitors was OutSystems. Mike Jones, one of their executives, wrote about the conference in a thoughtful blog post, “ESDC Retrospective.” Mike started with

Last week, the OutSystems team attended the Enterprise Software Development Conference (ESDC) in San Mateo California. This is the first year for this show and, as Alan Zeichick notes, it takes up where the old SD West conference left off. As gold sponsors of the show, we got to both attend the sessions and talk to the conference attendees at the OutSystems booth. I just wanted to share a few highlights & take-aways from the show.

One of his cited highlights was

Another highlight: Kent Beck’s keynote on “Responsive Design: Efficiency Through Safety.” This was the first time I had heard Kent speak. He started off by referencing Ed Yourdon’s work on Systems Design and how it led him to try and distill his own working process for design. This was the premise for his presentation. My take-away was that no matter what you do, your design will change. I think we all accept this as fact – especially for application software. Kent then explained his techniques to reduce the risk when making design changes. For each of his examples I found myself thinking ‘This is not really a problem with the Agile Platform because the TrueChange™ engine will keep you from breaking stuff you did not intend to break, allowing you to move very fast with little risk.’ If you are hand-coding, then Kent’s four techniques (as described here by Alan Zeichick) to reduce risk when making change is great advice, but why do that if you don’t have to? BTW, I think Kent would love the Agile Platform.

Thanks, Mike, for the thoughtful writeup. Hard to believe ESDC was more than six years ago. (Read the whole post here.)

Fire up the WABAC Machine, Mr. Peabody: In June 2008, I wrote a piece for MIT Technology Review explaining “How Facebook Works.”

The story started with this:

Facebook is a wonderful example of the network effect, in which the value of a network to a user is exponentially proportional to the number of other users that network has.

Facebook’s power derives from what Jeff Rothschild, its vice president of technology, calls the “social graph”–the sum of the wildly various connections between the site’s users and their friends; between people and events; between events and photos; between photos and people; and between a huge number of discrete objects linked by metadata describing them and their connections.

Facebook maintains data centers in Santa Clara, CA; San Francisco; and Northern Virginia. The centers are built on the backs of three tiers of x86 servers loaded up with open-source software, some that Facebook has created itself.

Let’s look at the main facility, in Santa Clara, and then show how it interacts with its siblings.

Read the whole story here… and check out Facebook’s current Open Source project pages too.

A hackathon – like the debut LSO Hackathon held in November 2015 at the MEF’s GEN15 conference – is where magic happens. It’s where theory turns into practice, and the state of the art advances. Dozens of techies sitting in a room, hunched over laptops, scribbling on whiteboards, drinking excessive quantities of coffee and Diet Coke. A hubbub of conversation. Focus. Laughter. A sense of challenge.

More than 50 network and/or software experts joined the first-ever LSO Hackathon, representing a very diverse group of 20 companies. They were asked to focus on two Reference Points of the MEF’s Lifecycle Service Orchestration (LSO) Reference Architecture. As explained by the MEF’s Director of Certification and Strategic Programs, one of the architects of the LSO Hackathon series, these included:

  • LSO Adagio, which defines the element management reference point needed to manage network resources, including element view management functions
  • LSO Presto, which defines the network management reference point needed to manage the network infrastructure, including network view management functions

Read more about the LSO Hackathon in my story in Telecom Ramblings, “Building Community, Swatting Bugs, Writing Code.”

SEYTON
The tests, my lord, have failed.

MACBETH
I should have used a promise;
There would have been an object ready made.
Tomorrow, and tomorrow, and tomorrow,
Loops o’er this petty code in endless mire,
To the last iteration of recorded time;
And all our tests have long since found
Their way to dusty death. Shout, shout, brief handle!
Thine’s but a ghoulish shadow, an empty layer
That waits in vain to play upon this stage;
And then is lost, ignored. Yours is a tale
Told by an idiot, full of orphaned logic
Signifying nothing.

Those are a few words from a delightful new book, “If Hemingway Wrote JavaScript,” by Angus Croll. For example, the nugget above is “Macbeth’s Last Callback,” after a soliloquy from Macbeth by William Shakespeare.

Literary gems and nifty algorithms abound in this code-dripping 200-page tome from No Starch Press. Croll, a member of the UI framework team at Twitter, has been writing about famous authors writing JavaScript since 2012, and now has collected and expanded the entries into a book that will be amusing to read or gift this holiday season. (He also has a serious technical blog about JavaScript, but where’s the fun in that?)

Read and wonder as you see how Dan Brown, author of “The Da Vinci Code,” would code a Fibonacci sequence generator. How Jack Kerouac would calculate factorials. How J.D. Salinger and Tupac Shakur would determine if numbers are happy or inconsolable. How Dylan Thomas would muse on refactoring. How Douglas Adams of “Hitchhiker’s Guide to the Galaxy” fame would generate prime numbers. How Walt Whitman would perform acceptance tests. How J.K. Rowling would program a routine called mumbleMore. How Edgar Allan Poe would describe a commonplace programming task:

Once upon a midnight dreary, while I struggled with JQuery,
Sighing softly, weak and weary, troubled by my daunting chore,
While I grappled with weak mapping, suddenly a function wrapping
Formed a closure, gently trapping objects that had gone before.

Twenty-five famous authors, lots of JavaScript, lots of prose and poetry. What’s not to like? Put “If Hemingway Wrote JavaScript” on your shopping list.

Let’s move from JavaScript to C, or specifically the 7th Underhanded C Contest. If you are a brilliantly bad C programmer, you might win a US$200 gift certificate to the popular online store ThinkGeek. The organizer, Prof. Scott Craver of Binghamton University in New York, explains:

The goal of the contest is to write code that is as readable, clear, innocent and straightforward as possible, and yet it must fail to perform at its apparent function. To be more specific, it should do something subtly evil. Every year, we will propose a challenge to coders to solve a simple data processing problem, but with covert malicious behavior. Examples include miscounting votes, shaving money from financial transactions, or leaking information to an eavesdropper. The main goal, however, is to write source code that easily passes visual inspection by other programmers.

The specific challenge for 2014 is to write a surveillance subroutine that looks proper but leaks data. The deadline is Jan. 1, 2015, more or less. See the Underhanded C website; be sure to read the FAQ!

I like this new Microsoft. Satya Nadella’s Microsoft. Yes, the CEO needs to improve his public speaking skills, at least when talking to women’s conferences. Yet when you look at the company’s recent activities, you see lots of significant moves toward openness, a very positive focus on personal productivity, and even inventiveness.

That’s not to say that Microsoft is firing on all cylinders. There is too much focus on Windows as the universal platform, when not every problem needs Windows as a solution. There is too much of a focus on having its own mobile platform, where Windows Phone is spinning its wheels and can’t get traction against platforms that are, quite frankly, better. Innovation is lacking in many of Microsoft’s older enterprise products, from Windows Server to Exchange to Dynamics. And Microsoft isn’t doing itself any favors by pushing Surface Pro and competing against its loyal OEM partners—thereby undermining the foundations of its success.

That said, I like some of Microsoft’s most recent initiatives. While it’s possible that some of them were conceived under former CEO Steve Ballmer, they are helping demonstrate that Microsoft is back in the game.

Some examples of success so far:

  • Microsoft Band. Nobody saw this low-cost, high-functionality fitness band coming, and it took the wind out of the Apple Watch and Samsung Gear. The Band is attractive, functional, and most importantly, cross-platform. Of course, it works best at present with Windows Phone, but it does work with Android and iOS. That’s unexpected, and given the positive reviews of Band, I’m very impressed. It makes me think: If Zune had been equally open, would it have had a chance? (Umm. Probably not.)
  • Office Mobile. The company dropped the price of its Office suite for iPhone, Android, Windows Phone and iPad to the best possible price: free. Unlike in the past, the mobile apps aren’t crippled unless you tie them to an Office 365 license for your Windows desktop. You can view, edit and print Word, Excel and PowerPoint documents; use OneNote; and even use the Lync communications platform. Whether Microsoft realized that mobile users are a different breed, or whether it saw the opportunity to use mobile as a loss leader, it’s hard to say. This change is welcome, however, and has added to Microsoft’s karma credit.
  • Microsoft Sway. Another “didn’t see it coming” launch, Sway is a new presentation program that will be part of the Office suite. It’s not PowerPoint; it’s geared toward online presentations, not slide shows. The company writes: “Sway’s built-in design engine takes the hassle out of formatting your content by putting all of it into a cohesive layout as you create. This means that from the first word, image, Tweet, or graphic you add, your Sway is already being formed for you. This is thanks to a lot of Microsoft Research technology we’ve brought together in the background. As you add more of your content, Sway continues to analyze and arrange it based on the algorithms and design styles we’ve incorporated.” That’s not PowerPoint—and it’s perfect for today’s Web and mobility viewing.
  • .NET Core is open source. Nadella said that Microsoft was committed, and the release of .NET Core to GitHub is a big deal. Why did the company do this? Two reasons, according to Immo Landwerth: “Lay the foundation for a cross-platform .NET. Build and leverage a stronger ecosystem.” Cross-platform .NET? That would indeed be welcome news, because after all, there should be nothing Windows-specific about the .NET sandbox. Well, nothing technical. Marketing-wise, it was all about customer lock-in to Windows.
  • Microsoft is removing the lock-in—or at least, some of the lock-in. That’s good for customers, of course, but could be scary for Microsoft—unless it ensures that if customers have a true choice of platforms, they intentionally choose Windows. For that to be the case, the company will have to step up its game. That is, no more Windows 8-style fiascos.

Microsoft is truly on the right track, after quite a few years of virtual stagnation and playing catch-up. It’s good that they’re back in the game and getting stuff done.

Malicious agents can crash a website by launching a DDoS—a distributed denial-of-service attack—against a server. So can sloppy programmers.

Take, for example, the National Weather Service’s website, operated by the United States National Oceanic and Atmospheric Administration, or NOAA. On August 29, the service went down, hard, as a single rogue Android app overwhelmed NOAA’s servers.

As far as anyone knows, there was nothing deliberately malicious about the Android app, and of course there is nothing specific to Android in this situation. However, the app in question was making service requests of the NOAA server’s public APIs every few milliseconds. With hundreds, thousands or tens of thousands of instances of that app running simultaneously, the NOAA system collapsed.

There is plenty of blame to go around. Let’s start with the app developer.

Certainly the app developer was sloppy, sloppy, sloppy. I can imagine that the app worked great in testing, when only one or two instances of the app were running at any one time on a simulator or on actual devices. Scale it up—boom! This is a case where manual code reviews may have found the problem. Maybe not.

Alternatively, the app developer could have checked to see if the public APIs it required (such as NOAA’s weather API) could handle the anticipated load. However, if the coders didn’t write the software correctly, load testing may not have sufficed. For example, say that the design of the app was to pull data every 10 seconds. If the programmers accidentally set up the data retrieval to pull the data every 10 milliseconds, the load would be 1,000x greater than anticipated. Every 10 seconds, no problem. Every 10 milliseconds, big problem. Boom!
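Here is a hedged sketch (my code, not the actual app’s) of how easy that mistake is to make with a scheduled poller; the only difference between the benign version and the server-crushing one is a single TimeUnit argument:

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class WeatherPoller {
        public static void main(String[] args) {
            ScheduledExecutorService scheduler =
                    Executors.newSingleThreadScheduledExecutor();
            Runnable fetchForecast = () -> {
                // Call the public weather API here (omitted in this sketch).
            };

            // As designed: one request every 10 seconds.
            scheduler.scheduleAtFixedRate(fetchForecast, 0, 10, TimeUnit.SECONDS);

            // The bug: same numbers, wrong unit -- 1,000x the intended load.
            // scheduler.scheduleAtFixedRate(fetchForecast, 0, 10, TimeUnit.MILLISECONDS);
        }
    }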

This is a nasty bug, to be sure. Compilers, libraries, test systems, all would verify that the software ran correctly, because it did run correctly. In the scenario I’ve painted, it simply wasn’t coded to meet the design. The bug might have been spotted if someone noticed a very high number of external API calls, or again, perhaps during a manual code review. Otherwise, it’s not hard to see how it would slip through the cracks.

Let’s talk about NOAA now. In 2004, the weather service beefed up its Internet loads in anticipation of Hurricane Charley, contracting with Akamai to host some of its busiest Web pages, using distributed edge caching to reduce the load. This worked well, and Akamai continued to work with NOAA. It’s unclear if Akamai also fronted public API calls; my guess is that those were passed straight through to the National Weather Service servers.

NOAA’s biggest problem is that it has little control over external applications that use its public APIs. Even so, Akamai was still in the circuit and, fortunately, was able to help with the response to the Aug. 29 accidental DDoS situation. At that time, the National Weather Service put out a bulletin on its NIDS messaging service that said:

TO – ALL CUSTOMERS SUBJECT – POINT FORECAST ISSUES. WE ARE PROVIDING NOTICE TO ALL THAT NIDS HAS IDENTIFIED AN ABUSING ANDROID APP THAT IS IMPACTING FORECAST.WEATHER.GOV. WE HAVE FORCED ALL SITES TO ZONES WHILE WE WORK WITH THE DEVELOPER. AKAMAI IS BEING ENGAGED TO BLOCK THE APPLICATION. WE CONTINUE TO WORK ON THIS ISSUE AND APPRECIATE YOUR PATIENCE AS WE WORK TO RESOLVE THIS ISSUE.

Kudos to NOAA for responding quickly and transparently to this issue. Still, this appalling situation—that a single DDoS attack could cripple such a vital service—is unacceptable. Imagine if this had been a malicious attack, rather than an accidental coding error, and if the attacker was able to modify the attack in real time to go around Akamai’s attempts to block the traffic.

What could NOAA have done differently? For best results, DDoS attacks must be blocked within the network before they reach (and overwhelm) the server. Therefore, DDoS detection and blocking systems should already have been in place.

For example, such a system can detect potential attacks by spotting abnormally high volumes of requests from a specific app, raise alarms, and drop those requests (which is fast and takes few resources) instead of servicing them (which is slow and takes more resources). Perfect? No. DDoS scenarios are nasty and messy. No matter how you slice it, though, a single misbehaving app should never be able to crash your server.
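To illustrate the “drop, don’t serve” idea, here is a toy fixed-window counter per client, written as a hedged sketch; real DDoS mitigation belongs in the network tier and is far more sophisticated than this:

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.AtomicInteger;

    public class SimpleRateLimiter {
        private final int maxRequestsPerWindow;
        private final long windowMillis;
        private final ConcurrentHashMap<String, AtomicInteger> counts = new ConcurrentHashMap<>();
        private volatile long windowStart = System.currentTimeMillis();

        SimpleRateLimiter(int maxRequestsPerWindow, long windowMillis) {
            this.maxRequestsPerWindow = maxRequestsPerWindow;
            this.windowMillis = windowMillis;
        }

        // Returns true if the request should be served, false if it should be
        // dropped cheaply (and perhaps counted toward an alarm threshold).
        boolean allow(String clientId) {
            long now = System.currentTimeMillis();
            if (now - windowStart > windowMillis) { // crude window reset
                counts.clear();
                windowStart = now;
            }
            AtomicInteger counter = counts.computeIfAbsent(clientId, k -> new AtomicInteger());
            return counter.incrementAndGet() <= maxRequestsPerWindow;
        }
    }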

You’ve gotta read “Data Divination: Big Data Strategies,” Pam Baker’s new book about Big Data.

Actually, let me change my recommendation. If you are a techie and you are looking for suggestions on how to configure your Hadoop installation or optimize the storage throughput in your NAS array, this isn’t the book for you. Rather, this is the book for your business-side manager or partner, who is looking to understand not only what Big Data is, but really really learn how to apply data analysis to business problems.

One of the challenges with Big Data is simply understanding it. The phrase is extremely broad and quite nebulous. Yet behind the overhyping of Big Data, there are genuine use cases that demonstrate that looking at your business’ data in a new way can transform your business. It is real, and it is true.

Baker is the editor of the “Fierce Big Data” website. She deconstructs the concept by dispensing with the jargon and the, well, overly smug Big Data worship that one finds in a lot of the literature and that vendors push out. With a breezy style that reflects her background as a technology journalist, Baker uses clear examples and lots of interviews to make her points.

What will you learn? To start with, “Data Divination” teaches you how to ask good questions. After all, if you don’t ask, you won’t learn anything from all that data and all those reports. Whether it’s predictive analytics or trend spotting or real-time analysis, she helps you understand which data is valuable and which isn’t. That’s why this book is best for the executive and business-side managers, who are the ultimate beneficiaries of your enterprise’s Big Data investments.

This book goes beyond other books on the subject, which could generally be summarized either as too fluffy and cheerleading, or as myopically focused on implementation details of specific Big Data architectures. For example, there is a lengthy chapter on the privacy implications of data gathering and data analysis, the sort of chapter that a journalist would write, but an engineer wouldn’t even think about.

Once you’ve finished with the basics, Baker jumps into several fascinating use cases: in healthcare, in the security industry, in government and law enforcement, in small business, in agriculture, in transportation, in energy, in retail, in manufacturing, and so on. Those are the most interesting parts of the book, and each use case has takeaways that could apply to any industry. Baker is to be commended for digging into the noteworthy challenges that Big Data attempts to help businesses overcome.

It’s a good book. Read it. And tell your business partner, CIO or even CEO to read it too.

Where do your employees go to find shared data? If it’s external data, probably an external search engine, like Google (which apparently holds 67.6% of the U.S. market) or Bing (18.7%) or one of the niche players.

What about internal corporate data? If your organization uses a platform like Microsoft’s SharePoint, that platform includes a pretty robust search engine. You can use SharePoint to find documents stored inside the SharePoint database, external documents linked to it, and conversations and informal data hosted by SharePoint. SharePoint’s search incorporates elements of FAST, a search product Microsoft acquired in 2008, as well as elements of Bing. It’s quite good.

What if you are not a SharePoint shop, or if you are in a shop that hasn’t rolled SharePoint out to every portion of the organization?  You probably don’t have any good way for employees to find structured and unstructured documents, as well as data. You’ve got information in Dropbox. In Box.com. In Lotus Notes, maybe. In private Facebook groups. In Yammer (another Microsoft acquisition, by the way). In Ribose, a neat startup. Any number of places that might be on enterprise servers or cloud services, and I’m not even talking about the myriad code repositories that you may have, from ClearCase to Perforce to Subversion to GitHub.

All of those sources are good. There are reasons to use each of them for document sharing and collaboration and source-code development. That’s the problem. Like the classic potato chip advertisements say, you can’t eat just one.

Even in a small company, the number of legitimate sharing platforms can proliferate like weeds. As organizations grow, the potential places to stash information can grow exponentially, especially if there is a culture that allows for end users or line-of-business departments to roll out ad hoc solutions. Add mobile, and the problem explodes.

This is a governance problem: How do you ensure that data is accounted for, check that external sharing solutions are secure, or even detect if information has been stolen or tampered with?

This is a productivity problem: How much time is wasted by employees looking for information?

This is a business problem: How much money is wasted, or how much work must be duplicated or redone because data can’t be found?

This is a Big Data problem: How can you analyze it if you can’t find it?

The answer has to be a smarter intranet portal. In a recent essay by the Nielsen Norman Group, usability experts Patty Caya and Kara Pernice write that “Intranet portals are the hub of the enterprise universe.”

The trick is to discover it, index it, and make it available to authorized users—without stifling productivity. That includes data from applications that your developers are creating and maintaining.

Microsoft has evolved considerably, moving from its early days selling developer tools, through its era focusing on Windows and Office, its run as a server software maker, and its first iteration as a cloud/online services company. Despite all the myriad changes, it’s always been true that Microsoft does not excel at innovation.

In fact, when the company focuses on innovation, it often misses with its products and pricing. Features are implemented badly, bugs proliferate, messages are muddled and strategy appears non-existent.

This confuses customers, annoys developers and frustrates partners.

When, by contrast, Microsoft focuses on execution, it does much, much better. Software and services are about getting the details right, and that means understanding the customers, not slamming out a bewildering product that has state-of-the-art technology but doesn’t make sense to anyone.

This is true whether you are talking about operating systems like Windows, or back-end products like Bing or SharePoint, or mobile phones. The new, innovative, visionary, ground-breaking products (or product upgrades) nearly always disappoint.

Reading new CEO Satya Nadella’s letter to his employees, I am concerned that Microsoft doesn’t understand that customers want excellent products. That means execution more than it means innovation.

Nadella’s letter, called “Bold Ambition & Our Core,” was published on July 10. Right up front, Nadella says, “The day I took on my new role I said that our industry does not respect tradition – it only respects innovation.”

That scares me. I think he misses the point.

Nadella writes,

At our core, Microsoft is the productivity and platform company for the mobile-first and cloud-first world. We will reinvent productivity to empower every person and every organization on the planet to do more and achieve more.

What does it mean to reinvent productivity? I’m sure it means more than carrying around a Microsoft Surface Pro 3 device that tries to be both a notebook computer and a tablet, but doesn’t truly succeed in either configuration.

Nadella continues,

Productivity for us goes well beyond documents, spreadsheets and slides. We will reinvent productivity for people who are swimming in a growing sea of devices, apps, data and social networks. We will build the solutions that address the productivity needs of groups and entire organizations as well as individuals by putting them at the center of their computing experiences.

It’s a beautiful concept – but so far, Microsoft’s bread and butter has been specifically documents, spreadsheets and slides. Is he talking about SharePoint and Yammer?

In the 3,000-word missive, Nadella spends a lot of time talking about specific areas. He talks about “digital work and life experiences,” which are productivity enhancers designed for the mobile-first and cloud-first world. He talks about context-rich connections between experiences, such as with the Cortana app on Windows Phone. He talks about the cloud, where

the combination of Azure and Windows Server makes us the only company with a public, private and hybrid cloud platform that can power modern business. We will transform the return on IT investment by enabling enterprises to combine their existing datacenters and our public cloud into one cohesive infrastructure backplane.

Nadella also talks about Xbox:

The single biggest digital life category, measured in both time and money spent, in a mobile-first world is gaming. We are fortunate to have Xbox in our family to go after this opportunity with unique and bold innovation. Microsoft will continue to vigorously innovate and delight gamers with Xbox.

What’s missing from Nadella’s call-to-arms letter? You won’t read much specifically about Windows Phone, about notebooks and desktop computers, about desktop Windows, or even traditional Office.

You also won’t see much about execution, about delivering excellent products. All I read is innovate, innovate, innovate. Ideas are nice, Mr. Nadella, but I’d like to see a company that actually delights its customers, instead of frustrating them with its latest upgrades.

If your developers aren’t enrolled in developer relations programs, they will grow old and stale. They will become moldy. They will pine for the Good Old Days and opine endlessly about the irrelevance of new tools, new platforms, new paradigms and new ideas. No matter their brilliance today, they will become obsolescent.

You can’t let that happen!

Developer relations programs are all over the map, literally. Some are focused on operating systems – see those from Apple and Microsoft. Some are about back-end platforms, like programs from IBM or Oracle. Some are tied to very specific products.

It’s hard to know where your developers will get the best value. Let’s take a very simplistic case. If you have a bright programmer who is working to integrate back-end Oracle databases with Windows servers, should she be a member of the Oracle Technology Network or MSDN? Likely both; it doesn’t hurt to sign up. But where should she spend her time?

It’s tricky to make that call, and it largely depends on both the developers’ self-starter motivation and your own corporate culture. Some developer programs are free, but others aren’t, with prices ranging from a hundred dollars per year to thousands of dollars. Do you offer to cover the costs of belonging to the developer program for each architect, designer, coder or tester who wants to sign up – or do they have to jump through hoops that send the message that the programs aren’t important (or that the employee isn’t worth the investment)?

Let’s say that you are running a Windows shop, and being a Windows Server guru is seen as essential for career growth. Clearly, your bright programmer should grow and enhance her skills as a Windows expert. While deep Oracle expertise is still valuable, it might be a secondary investment of her time.

Of course, if your team is seen as an Oracle shop, and the Microsoft aspect is seen as secondary, she should invest her time in Oracle technologies.

The scenario above is too simple. There’s no reason that the bright programmer can’t participate in two developer programs. However, what’s a reasonable ceiling? Two? Three? Five? Ten? If developers spread themselves too thin, it’s hard to gain deep expertise. To my mind, a developer should engage with 3-5 developer programs, probably no more; depending on the situation, though, perhaps only one or two would be appropriate. If you have a developer who doesn’t see any benefit in belonging to a developer relations program, look at where he or she is spending time. There may be local user groups that provide the same level of engagement. But if you have someone who doesn’t want to engage at all in the larger world beyond his or her team — who doesn’t see the value of building deep expertise in products or platforms — you should be concerned.

Early in 2014, the market research firm Evans Data Corp. conducted a study on developer relations programs. They asked developers, “What most motivates you to seek solutions from developer programs?” The answers should not surprise anyone:

  • 35.5%: Need to upgrade from existing, outdated technology
  • 24.3%: Present skillset is insufficient
  • 21.7%: Present toolsets are insufficient
  • 8.5%: Anticipating future problems
  • 6.3%: Need to match or beat competition
  • 3.8%: Other

I’d keep an eye on those who answered “present skillset is insufficient.” Those employees are investing in themselves — and they are going places!

GOOGLE I/O 2014, SAN FRANCISCO — What is Android? It’s hard to know these days, and I’m not sure if that’s good or not. We all know what happened when Microsoft began seeing Windows as a common operating system for everything from embedded systems to desktops to phones to servers. By trying to be reasonably good at everything, Windows lost its way and ceased being the best platform for anything.

Once upon a time, Android was a free operating system for smartphones, conceived of as a rival for Symbian and (believe it or not) Windows Mobile. Google purchased Android Inc. in 2005; the Open Handset Alliance launched in 2007; and the first smartphone running Android appeared in 2008. Today, Android-based phones dominate the market, with the most visible handset makers being Samsung and LG. Some estimates show that at the end of 2013, more than 81% of all smartphones were running Android.

From its origins in smartphones, it was natural that Android would expand to tablets. Although no Android tablet has emerged as a clear market leader, there are many manufacturers, from Samsung to Amazon to Google to Asus. While Android has decisively eclipsed Apple’s iPhone in the smartphone market, the iPad still defines tablets.

What else? Android is now an operating system for head-mounted displays, smartwatches, wearables, televisions and automotive entertainment systems.

We’re all familiar with Google Glass, which is based on Android. The company is working hard to recruit developers to build Glassware. This spring, Google announced Android Wear, which is described as “your key to a multiscreen world,” especially if one of those screens will be a smartwatch. A few companies, including LG, Samsung and Motorola, have announced watches.

Remember Google TV? It was not a success in the market. The replacement, announced this week here at the annual Google I/O developer conference, is called Android TV. According to Google, “Thousands of apps in the Google Play Store are already optimized for TVs.”

Google is clearly interested in cars, and not only because it wants to build self-driving vehicles. A few aftermarket audio system makers have used off-the-shelf Android as the driver in replacement automotive head units. This week, Google announced Android Auto as a competitor to Apple’s iOS-focused CarPlay. As with smartphones, Google set up a vendor alliance — in this case, the Open Automotive Alliance — to develop industry specifications and to drive alliances with car manufacturers.

From the looks of things, Android is now intended to become a general-purpose operating system: good for embedded, small-footprint, app-based, highly connected devices.

Google’s emphasis, though, isn’t on the hardware, but on that increasingly multiscreen world. With screens spanning the wrist, phone, tablet, head-mounted displays and televisions, Android looks to be everywhere. And that means that Google Play will be everywhere. Thus Google advertisements everywhere too. I mean, duh.

I guess that’s the future of computing: Android Everywhere.

SAN FRANCISCO — I expected a new version of OS X, the operating system for Mac desktops and notebooks. I expected a new version of iOS, the operating system for iPhones and iPads. I did not expect a new programming language. Yet that’s what we got at Apple’s Worldwide Developers Conference, held here this week. And I’m delighted.

Along with the previews of OS X 10.10 “Yosemite” and iOS 8, Apple showed Swift, a language that runs inside its familiar Xcode integrated development environment.

Swift is designed primarily for safety. As Apple puts it:

Swift eliminates entire classes of unsafe code. Variables are always initialized before use, arrays and integers are checked for overflow, and memory is managed automatically. Syntax is tuned to make it easy to define your intent—for example, simple three-character keywords define a variable (var) or constant (let). The safe patterns in Swift are tuned for the powerful Cocoa and Cocoa Touch APIs. Understanding and properly handling cases where objects are nil is fundamental to the frameworks, and Swift code makes this extremely easy. Adding a single character can replace what used to be an entire line of code in Objective-C. This all works together to make building iOS and Mac apps easier and safer than ever before.

Obviously we have not had a chance to work with Swift directly. However, it appears to be a very easy language to learn, especially for developers who are used to Objective-C. It’s designed for Apple’s Cocoa and Cocoa Touch libraries, is built with the familiar and stable LLVM compiler, and uses the same runtime as Objective-C.
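
To make Apple’s description concrete, here is a minimal sketch of the constant, variable and optional syntax described in the preliminary documentation. The names are invented for illustration, and since nobody outside Apple has shipped Swift code yet, details could change before release:

// Based on Apple's preliminary Swift documentation; treat as illustrative.
// `let` declares a constant, `var` declares a variable; both must be
// initialized before use.
let maximumRetries = 3
var attemptsSoFar = 1

// An optional type makes "this value may be nil" explicit in the type system.
var middleName: String? = nil

// Safe unwrapping: the first branch runs only when a value is present.
if let name = middleName {
    println("Middle name: \(name)")
} else {
    println("No middle name on file")
}

// Type inference and string interpolation.
let status = "Attempt \(attemptsSoFar) of \(maximumRetries)"
println(status)

Note how the optional type forces the nil case to be handled explicitly; that is the sort of thing Apple means by eliminating “entire classes of unsafe code.”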

Apple says Swift programs are fast. The company claims that a complex object sort runs about 2.8x faster in Objective-C than in Python, and about 3.9x faster in Swift than in Python.

Learn more about the Swift announcement, as well as a ton of new APIs, in Rob Marvin’s article, “Apple announces Swift programming language, new SDK and developer features at WWDC.”

A neat feature of Swift is what Apple calls “playgrounds,” where you can see how the code runs as you write it—without building a new version. That’s cool. Swift isn’t ready yet, but you can check out Apple’s preliminary documentation, including a tour and language guide.

Swift is a cross-platform language, if you define “cross platform” to mean “both of Apple’s platforms.” Yes, you can use Swift to write apps for OS X and for iOS, and because it uses the same runtime as Objective-C, you should be able to run Swift applications on older versions of those operating systems, not only OS X 10.10 and iOS 8. What you can’t do, and probably never will do, is run Swift apps on competing platforms. No Android, no Windows Phone. Given that the language targets the Cocoa and Cocoa Touch libraries, that’s clearly no surprise.

Most mobile app developers already target iOS first, even though there are more Android devices than iPhones and iPads. Swift won’t change that, and in fact it will likely make iOS development even more attractive.

On the desktop/notebook side, developers writing native software probably target Windows first because of the huge installed base of machines. It’s unlikely that Swift will change behaviors there. But then again, writing desktop/notebook software truly is becoming a niche activity.

There are lots of reasons to use Git as your source-code management system. Whether it’s used as a primary system or in conjunction with an existing legacy repository, I’m going to argue that if you’re not using Git now, you should at least be testing it out.

Basics of Git: It is open source, and runs on Linux, Unix and Windows servers. It is stable. It is solid. It is fast. It is supported by just about every major tool vendor. Developers love Git. Managers love Git.

Not long ago, much of the world standardized on Concurrent Versions System (CVS) as its version control system. Then Subversion (SVN) came along, and the world standardized on that. Yes, yes, I know there are dozens of other version control systems, ranging from Microsoft’s Visual SourceSafe and Team Foundation Server to IBM Rational’s ClearCase. Those have always been niche products. Some are very successful niche products, but the industry standards have been CVS and SVN for years.

Along came Git, designed by Linus Torvalds in 2005, now headed up by Junio Hamano. For a brief history of Git, read “The Legacy of Linus Torvalds: Linux, Git, and One Giant Flamethrower,” by Robert McMillan, published in Wired in November 2012. For the official history, see the Git website.

What’s so wonderful about Git? I’ll answer in two ways: industry support and impressive functionality.

For industry support, let me refer you to two new articles by SD Times’ Lisa Morgan. Those stories inspired this column. The first is “How to get Git into the enterprise,” and the other is “Git smart about tools: A Buyers Guide.” You’ll see that nearly every major industry player supports Git—even competing SCM systems have worked to ensure interoperability. That’s a heck of an endorsement, and shows the stability and maturity of the platform.

Don’t take my word for the impressive functionality. Instead, let me quote other bloggers.

Tobias Günther: “Work Offline: What if you want to work while you’re on the move? With a centralized VCS like Subversion or CVS, you’re stranded if you’re not connected to the central repository. With Git, almost everything is possible simply on your local machine: make a commit, browse your project’s complete history, merge or create branches… Git lets you decide where and when you want to work.”

Stephen Ball: “Resolving conflicts is way easier (than SVN): In Git, if I have a private branch from a branch that has been updated with new (conflicting) commits, I can rebase its commits one at a time against the public destination branch. I can resolve conflicts as they arise between my code and the current codebase. This makes dealing with conflicts easy because I get the context of the conflict (my commit message) and only see one conflict at a time.

“In SVN if I merge a branch against another and there are a lot of conflicts, there’s nothing I can do but resolve them all at the same time. What a mess.”

Scott Chacon: “There are tons of fantastic and powerful features in Git that help with debugging, complex diffing and merging, and more. There is also a great developer community to tap into and become a part of and a number of really good free resources online to help you learn and use Git…

“I want to share with you the concept that you can think about version control not as a necessary inconvenience that you need to put up with in order to collaborate, but rather as a powerful framework for managing your work separately in contexts, for being able to switch and merge between those contexts quickly and easily, for being able to make decisions late and craft your work without having to pre-plan everything all the time. Git makes all of these things easy and prioritizes them and should change the way you think about how to approach a problem in any of your projects and version control itself.”

Nicola Paolucci: “If you don’t like speed, being productive and more reliable coding practices, then you shouldn’t use Git.”

Peter Cho: “Most developers would be delighted if they can change their workflow to use Git. Switching over early would be more ideal unless, of course, your SCM relies on a large network of dependent applications. If it’s not viable to change SCM systems, I would highly recommend using it on future projects.

“Git is infamous for having a large suite of tools that even seasoned users need months to master. However, getting into the fundamentals of Git is simple if you’re trying to switch over from SVN or CVS. So give it a try sometime.”

Thomas Koch: “Somebody probably already recommended you to switch to Git, because it’s the best VCS. I’d like to go a step further now and talk about the risk you’re taking if you won’t switch soon. By still using SVN (if you’re using CVS you’re doomed anyway), you communicate the following: We’re ignorant about the fact that the rest of the (free) world switched to Git. We don’t invest time to train our developers in new technologies. We don’t care to provide the best development infrastructure. We’re not used to collaborate with external contributors. We’re not aware how much Subversion sucks and that Subversion does not support any decent development process. Yes, our development process most certainly sucks too.”

Günther also wrote, “Go With the Flow: Only dead fish swim with the stream. And sometimes, clever developers do, too. Git is used by more and more well-known companies and Open Source projects: Ruby On Rails, jQuery, Perl, Debian, the Linux Kernel, and many more. A large community often is an advantage by itself because an ecosystem evolves around the system. Lots of tutorials, tools (do I have to mention Tower?) and services make Git even more attractive.”

I’m sure there are arguments against Git. Nearly all the ones I’ve heard have come to me via competing source-code management vendors, not from developers who have actually tried Git for at least one pilot project. If you aren’t using Git, check it out. It’s the present and future of version control systems.

SOUTH SAN FRANCISCO, CALIFORNIA — Writing software would be oh, so much simpler if we didn’t have all those darned choices. HTML5 or native apps? Windows Server in the data center or Windows Azure in the cloud? Which Linux distro? Java or C#? Continuous Integration? Continuous Delivery? Git or Subversion or both? NoSQL? Which APIs? Node.js? Follow-the-sun?

In a panel discussion on real-world software delivery bottlenecks, “complexity” was suggested as a main challenge. The panel, held here at the SDLC Acceleration Summit, pointed out that the complexity of constantly evaluating new technologies, techniques and choices can bring uncertainty and doubt and consume valuable mental bandwidth—and those might sometimes negate the benefits of staying on the cutting edge. (Pictured: My friend Arthur Hicken, aka “The Code Curmudgeon,” chief evangelist at Parasoft, which sponsored the event.)

I was the moderator. Sitting on the panel were David Intersimone from Embarcadero Technologies; Paul Dhaliwal from 383 Media; Andrew Binstock, editor of Dr. Dobb’s Journal; and Norman Buck from SQS.

Choices are not simple. Merely keeping up with the latest technologies can consume tons of time: not only reading resources like SD Times, but also following your favorite Twitter feeds, reading sites like Stack Overflow, meeting thought leaders at conferences and, of course, hearing vendor pitches.

While complexity can be overwhelming, the truth is that we can’t opt out. We must keep up with the latest platforms and changes. We must have a mobile strategy. Yes, you can choose to ignore, say, the recent advances in cloud computing, Web APIs and service virtualization, but if you do so, you’re potentially missing out on huge benefits. Yes, technologies like Software Defined Networking (SDN) and OpenFlow may not seem applicable to you today, but odds are that they will be soon. Ignore them now and play catch-up later.

Complexity is not new. If you were writing FORTRAN code back in the 1970s, you had choices of libraries. Developing client/server software for NetWare or AIX? Building with Oracle? We have always had complexity and choices in platforms, tools, methodologies, databases and libraries. We always had to ensure that our code ran (and ran properly) on a variety of different targets, including a wide range of browsers, Java runtimes, rendering engines and more.

Yet today the number of combinations and permutations seems to be significantly greater than at any time in the past. Clouds, virtual machines, mobile devices, APIs, tools. Perhaps we need a new abstraction layer. In any case, though, complexity is a root cause of our challenges with software delivery. We must deal with it.

Carla-Schroder“I tried working for some tech companies like Microsoft, Tektronix, IBM, and Intel. What a fiasco. I can’t count how many young men with way less experience and skills than me snagged the good fun hands-on tech jobs, while I got stuck doing some kind of crap customer service job. I still remember this guy who got hired as a desktop technician. He was in his 30s, but in bad health, always red and sweaty and breathing hard. It took him forever to do the simplest task, like connecting a monitor or printer. He didn’t know much and was usually wrong, but he kept his job. I busted my butt to show I was serious and already had a good skill set, and would work my tail off to excel, and they couldn’t see past that I wasn’t male. So I got the message, mentally told them to eff off and stuck with freelancing.”

So writes Carla Schroder in her blog post, “My Nerd Life: Too Loud, Too Funny, Too Smart, Too Fat” on linux.com. Her story is an important one for female techies – and all techies. Read it.

As I write this on Friday, Apr. 19, it’s been a rough week. A tragic week. Boston is on lockdown, as the hunt for the suspected Boston Marathon bombers continues. Explosion at a fertilizer plant in Texas. Killings in Syria. Suicide bombings in Iraq. And much more besides.

The Boston incident struck me hard. Not only as a native New Englander who loves that city, and not only because I have so many friends and family there, but also because I was near Copley Square only a week earlier. My heart goes out to all of the past week’s victims, in Boston and worldwide.

Changing the subject entirely: I’d like to share some data compiled by Black Duck Software and North Bridge Venture Partners. This is their seventh annual report about open source software (OSS) adoption. The notes are analysis from Black Duck and North Bridge.

How important will the following trends be for open source over the next 2-3 years?

#1 Innovation (88.6%)
#2 Knowledge and Culture in Academia (86.4%)
#3 Adoption of OSS into non-technical segments (86.3%)
#4 OSS Development methods adopted inside businesses (79.3%)
#5 Increased awareness of OSS by consumers (71.9%)
#6 Growth of industry specific communities (63.3%)

Note: Over 86% of respondents ranked Innovation and Knowledge and Culture of OSS in Academia as important/very important.

How important are the following factors to the adoption and use of open source? Ranked in response order:

#1 – Better Quality
#2 – Freedom from vendor lock-in
#3 – Flexibility, access to libraries of software, extensions, add-ons
#4 – Elasticity, ability to scale at little cost or penalty
#5 – Superior security
#6 – Pace of innovation
#7 – Lower costs
#8 – Access to source code

Note: Quality jumped to #1 this year, from third place in 2012.

How important are the following factors when choosing between using open source and proprietary alternatives? Ranked in response order:

#1 – Competitive features/technical capabilities
#2 – Security concerns
#3 – Cost of ownership
#4 – Internal technical skills
#5 – Familiarity with OSS Solutions
#6 – Deployment complexity
#7 – Legal concerns about licensing

Note: A surprising result was that “Formal Commercial Vendor Support” was ranked as the least important factor; 12% of respondents ranked it as unimportant. Support has traditionally been held as an important requirement by large IT organizations, but with awareness of OSS rising, that requirement is rapidly diminishing.

When hiring new software developers, how important are the following aspects of open source experience? Ranked in response order:

2012
#1 – Variety of projects
#2 – Code contributions
#3 – Experience with major projects
#4 – Experience as a committer
#5 – Community management experience

2013
#1 – Experience with relevant/specific projects
#2 – Code contributions
#3 – Experience with a variety of projects
#4 – Experience as a committer
#5 – Community management experience

Note: The 2013 results signal a shift to “deep vs. broad experience” where respondents are most interested in specific OSS project experience vs. a variety of projects, which was #1 in 2012.

There is a lot more data in the Future of Open Source 2013 survey. Go check it out. 

Git, the open-source version control system, is becoming popular with enterprise developers. Or so it appears not only from anecdotal evidence I hear from developers all the time, but also from a new marketing study from CollabNet.

The study, called “The State of Git in the Enterprise,” was conducted by InformationWeek, but was paid for by CollabNet, which coincidentally sells tools and services to help development teams use Git. You should bear that in mind when interpreting the study, which you can only receive by giving CollabNet your contact information.

That said, there are five interesting findings in the January 2013 study, which surveyed 248 development and business technology professionals at companies with 100 or more employees who use source code management tools:

First: Most developers are not using or planning to use Git. But among those that do, usage is split between on-premises deployments and public or private clouds.

How do you deploy (or intend to deploy by the end of 2013) Git?

On premises: 30%
Private cloud/virtualized: 23%
Public cloud: 10%
Don’t use/do not intend to use: 54%

Second: What best describes your use of Git today?

Git is our corporate standard: 5%
Git is one of several SCMs we use: 20%
Still kicking the tires on Git: 18%
Not currently using Git: 57%

Third: What do you like about Git?

Power branching/merging: 61%
Network performance: 53%
Everyone seems to be using it: 35%
It’s our corporate standard: 13%

Fourth: How do you conduct code reviews?

Automated and manual: 46%
Manual only: 24%
Manual, but only occasionally: 17%
Automated only: 7%
Not at all: 6%

Fifth: By the end of 2013, which SCM tools do you plan to use?

Microsoft TFS/VSS: 33%
Subversion: 32%
Git: 27%
IBM ClearCase: 22%
CVS: 21%
Perforce: 11%
Mercurial: 7%
None: 4%

Some of these technologies have been around for a long time. For example, CVS first appeared in 1986. CollabNet started Subversion in 2000, and it’s now a top-level Apache project. By contrast, Git’s initial release was only in 2005, and it flew under the radar for years before getting traction. Git’s rise to the third position in this study is impressive.

What is going on at Google? I’m not sure, and neither are the usual pundits.

Last week, Google announced that Andy Rubin, the long-time head of the Android team, is moving to another role within the company, and will be replaced by Sundar Pichai — the current head of the company’s Chrome efforts.

To quote from Larry Page’s post:

Having exceeded even the crazy ambitious goals we dreamed of for Android—and with a really strong leadership team in place—Andy’s decided it’s time to hand over the reins and start a new chapter at Google. Andy, more moonshots please!

Going forward, Sundar Pichai will lead Android, in addition to his existing work with Chrome and Apps. Sundar has a talent for creating products that are technically excellent yet easy to use—and he loves a big bet. Take Chrome, for example. In 2008, people asked whether the world really needed another browser. Today Chrome has hundreds of millions of happy users and is growing fast thanks to its speed, simplicity and security. So while Andy’s a really hard act to follow, I know Sundar will do a tremendous job doubling down on Android as we work to push the ecosystem forward. 

What is the real story? The obvious speculation is that Google may have too many mobile platforms, and may look to merge the Android and Chrome OS operating systems.

Ryan Tate of Wired wrote, in “Andy Rubin and the Great Narrowing of Google,”

The two operating system chiefs have long clashed as part of a political struggle between Rubin’s Android and Pichai’s Chrome OS, and the very different views of the future each man espouses. The two operating systems, both based on Linux, are converging, with Android growing into tablets and Chrome OS shrinking into smaller and smaller laptops, including some powered by chips using the ARM architecture popular in smartphones.

Tate continues,

There’s a certain logic to consolidating the two operating systems, but it does seem odd that the man in charge of Android – far and away the more successful and promising of the two systems – did not end up on top. And there are hints that the move came as something of a surprise even inside the company; Rubin’s name was dropped from a SXSW keynote just a few days before the Austin, Texas conference began.

Other pundits seem equally confused. Hopefully, we’ll know what’s going on soon. Registration for Google’s I/O conference opened – and closed – on March 13. If you blinked, you missed it. We’ll obviously be covering the Android side of this at our own AnDevCon conference, coming to Boston on May 28-31.