Some software developers manage without 1,3,7-trimethyl-1H-purine-2,6(3H,7H)-dione. I have no idea how they do it. Haven’t they read the requirements document, which clearly states that all IT professionals must consistently consume massive quantities of caffeine at all times?

How can you be agile without coffee? My apologies, but tea, hot chocolate, Diet Coke and Mr Pibb simply don’t cut it. And don’t get me started about Dr Pepper. There’s got to be something in the Carnegie Mellon CMMI about caffeine.

As part of the lead-up to last September’s iPhone/iPad DevCon in San Diego, we surveyed our attendees about their favorite coffee spots. This is clearly a North American-centric survey, and we make no claims as to its statistical validity. However, we learned that (shudder) most of our attendees prefer Starbucks.

Starbucks: 53.2%
Dunkin’ Donuts: 10.5%
Peet’s: 9.7%
Caribou Coffee: 5.6%
Tim Horton’s: 3.2%
Coffee Bean & Tea Leaf: 1.6%
Seattle’s Best: 0.8%
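Out of curiosity, the published shares can be totaled to see how much of the vote went elsewhere. A quick sketch, using only the figures above (“everything else” covers the zero-response chains and any answers we didn’t list):

```python
# Survey shares as published above, in percent.
shares = {
    "Starbucks": 53.2,
    "Dunkin' Donuts": 10.5,
    "Peet's": 9.7,
    "Caribou Coffee": 5.6,
    "Tim Horton's": 3.2,
    "Coffee Bean & Tea Leaf": 1.6,
    "Seattle's Best": 0.8,
}

listed = round(sum(shares.values()), 1)
print(f"listed chains: {listed}%")             # 84.6%
print(f"everything else: {round(100 - listed, 1)}%")  # 15.4%
```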

The good news is that Dunkin’ Donuts came in second, albeit a not-very-close second, narrowly edging out Peet’s, a small chain that originated in Berkeley, Calif.

I’ve never visited Caribou Coffee, which operates in the eastern and midwestern United States, but it fared reasonably well, followed by the Canadian donut chain Tim Horton’s, and two other chains, Coffee Bean and Seattle’s Best.

Surprisingly, five chains we had on our survey received zero responses: Tully’s, Coffee Republic, Port City Java, Coffee Beanery and McDonald’s McCafé. Yes, the Golden Arches is reinventing itself as a coffee shop, complete with free WiFi. No, mobile developers don’t care.

No, we did not ask how many attendees don’t consume coffee at all.

Z Trek Copyright (c) Alan Zeichick

Microsoft has many, many enemies. Microsoft is threatened on the Internet front by Google, on smartphones by Apple, on developer tools by IBM Rational, on databases by Oracle, and on game platforms by Sony and Nintendo.

Yet the earliest Undesirable No. 1 was Novell. From the early 1980s, its NetWare platform defined small-business local area networks. The network operating system was ubiquitous and reliable, but it was also expensive, difficult to develop applications for, and forced most businesses to rely on resellers and consultants to manage their LANs.

Microsoft saw an opportunity to offer a simpler solution, and Windows NT Server ate Novell’s lunch. Sure, Windows NT Server was less efficient and less stable than NetWare, but small businesses could manage Windows NT themselves (and that was huge) and could write their own server-side applications (which was also huge).

Write a Windows application or develop an NLM? Work with a NetWare reseller or buy Windows off the shelf? Use NetWare’s IPX/SPX or use a network that spoke TCP/IP? Bye-bye, NetWare.

Since its long-ago NetWare-centric glory days, Novell has become a hodgepodge of technologies. It bought Unix Systems Laboratories and sold part of it to SCO. It developed GroupWise, an email platform that always seemed to have great promise, but which never could get a foothold and which was pummeled by Microsoft’s Exchange and IBM’s Lotus Notes. Novell also created Novell Directory Services, but that was taken down by Microsoft’s Active Directory. The company bought WordPerfect and created an office suite, but nobody even noticed.

Where Novell has excelled lately is with Linux, thanks to its purchase of SuSE in 2003. But jumping into Linux also put Novell squarely in Microsoft’s crosshairs yet again, as during that time Linux was beginning to make serious inroads against Windows Server, particularly for Web servers. The enterprise-class SuSE Linux was a much bigger threat to Windows Server than Red Hat or other Linux distros.

Now we learn that Novell is being purchased by Attachmate, best known for its mainframe terminal emulators and host integration systems. Okay, I’ll admit – I didn’t see that coming. Last week, if you’d asked me to name 25 potential acquirers of Novell, Attachmate wouldn’t be on that list. Heck, if I’d written a list of 250 likely buyers, Attachmate wouldn’t have made that list either.

Microsoft is simultaneously buying intellectual property from Novell. The 8-K investment notification paperwork filed by Novell on Nov. 21 said,

Also on November 21, 2010, Novell entered into a Patent Purchase Agreement (the “Patent Purchase Agreement”) with CPTN Holdings LLC, a Delaware limited liability company and consortium of technology companies organized by Microsoft Corporation (“CPTN”). The Patent Purchase Agreement provides that, upon the terms and subject to the conditions set forth in the Patent Purchase Agreement, Novell will sell to CPTN all of Novell’s right, title and interest in 882 patents (the “Assigned Patents”) for $450 million in cash (the “Patent Sale”).
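For scale, the filing’s own numbers work out to roughly half a million dollars per patent. A back-of-the-envelope calculation, nothing more:

```python
# Figures from the 8-K quoted above.
sale_price = 450_000_000  # dollars
patents = 882

per_patent = sale_price / patents
print(f"${per_patent:,.0f} per patent")  # about $510,204
```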

This raises many questions. It is unclear what those patents cover, what role, if any, Microsoft may have played in Attachmate’s decision to buy Novell, and whether there are any side agreements between the two companies. (It’s unlikely that Microsoft would have been able to buy Novell itself, because that would raise many, many antitrust issues.)

It’s also unclear what’s going to happen to Novell’s assets, other than those mysterious patents. Will Attachmate want them all? Will it sell some of Novell’s business lines to other companies (such as selling SuSE Linux to VMware, which appears to be a persistent rumor)?

Neil Sedaka wrote in 1962 that “breaking up is hard to do.” In this case, I suspect that Novell will be broken up into lots of little pieces. The big winner here is Microsoft, which will finally have seen one of its oldest enemies not merely defeated – but utterly destroyed.

Z Trek Copyright (c) Alan Zeichick

What a day I’ve chosen to write this Take – it’s late afternoon on Thursday, Aug. 26, and today the Dow Jones Industrial Average closed below 10,000 again. Coincidentally, we’ve been sweltering here in the San Francisco Bay Area with an unusual heat wave (bring me lemonade, stat!), and I’ve been trying to help a young man find a software engineering job.

The young man – the boyfriend of a friend’s daughter – has a solid resume, given that he’s only been out of college for a few years, with experience in software development, testing, quality assurance, second-level support and network operations. He’s versatile, too, with skills in Java, C, C++, C#, HTML and Flash, and has been using both Visual Studio and Eclipse. Even more important, he’s been programming in Facebook Markup Language, and has worked with Google AdWords and Google Analytics.

How do you find a good, solid job with a background like that? He’s working all the social-networking angles, including Facebook and LinkedIn. Family friends (like me) are making introductions. He’s schmoozing every chance he gets.

It’s tough out there for people looking for employment, even if you’re a bright young software engineer in Silicon Valley. He’s thinking about doing some volunteer work to keep himself busy until the right job (or any job) in his field comes along.

My young friend isn’t the only one looking for work in the Valley. Another friend – a guy in his mid-50s – is also looking for work in technical marketing or product management. With a gold-plated resume, a winning personality and great references, he’s having trouble getting anyone’s attention.

What do you tell job-hunters in this economy, beyond advising them to meet people, meet people, meet people and meet more people?

Like many of my friends and colleagues, I’ve got two smartphones. One of them is an Apple iPhone 4. Oooh. The other is an HTC EVO running Android. Aaah.

Both of those smartphones are great to use (though it’s sometimes disconcerting when switching between them). Each is stuffed full of different native functionality and third-party applications that add to my productivity and are also fun to use.

But, like many readers of my blog, I’m not merely a consumer of smartphone technology. My 9-to-5 responsibilities include software development, IT management and business strategy. The Android and iOS ecosystems represent significant opportunities and unique challenges in each of those areas.

That’s where our iPhone/iPad DevCon and AnDevCon come in. We’ve created those events to offer you – and your colleagues – the best independent education available about those platforms.

AnDevCon, the Android Developer Conference, is coming to San Francisco on March 7-9, 2011. iPhone/iPad DevCon is a month later on the other side of the country. Please join us in Boston from April 4-6, 2011.

Whether you’re building applications, deploying iOS or Android devices or managing mobile systems, these conferences are for you.

The workshops, classes and speakers for AnDevCon are already posted – go check them out! We’re still finalizing the program for iPhone/iPad DevCon and will put the session descriptions online soon.

Hope to see you at one of those conferences in the spring – or maybe both. After all, how many smartphones are you carrying around?

Z Trek Copyright (c) Alan Zeichick

SD Times is changing to a monthly publishing schedule, and is being redesigned into a standard magazine size.

When we launched SD Times in February 2000, it was the first-ever newspaper of record for the software development industry. Being a newspaper meant that we needed to publish often; we determined that twice a month was perfect for getting you the news without overburdening your mailbox.

Times change. Over the past several years, more and more of our readers have been getting the latest news from the SDTimes.com website, from our weekly News on Monday e-newsletter, from our RSS feeds, and even from links we post on Twitter.

We’ve learned that by the time most subscribers received their printed issue of SD Times, or opened up our twice-monthly digital edition, they had already read the news. Readers were breezing quickly past the news sections, focusing instead on columns, special reports and other in-depth features.

Therefore, we’re changing to reflect how our readers interact with SD Times. We’re separating the breaking news part of SD Times, which requires immediacy, from the more analytical part… which doesn’t.

Beginning in January 2011, the editorial team will continue to report breaking news on SDTimes.com, News on Monday, through the RSS feeds and using Twitter — just like it does today. Meanwhile, the now-monthly SD Times print/digital magazine will present longer articles that explore the news and what it means, as well as in-depth features on trends, meaningful interviews, thoughtful opinions and insightful commentary.

We’re excited about the changes in SD Times, and are confident that you’ll enjoy the new frequency and format.

Z Trek Copyright (c) Alan Zeichick

I don’t know a single software developer who doesn’t profess a commitment to quality – and who believes anything except that he or she designs, writes and publishes solid, secure applications filled with clean, efficient code.

I don’t know a single development team manager who won’t insist that his/her team writes great software – and who would be mightily offended if you suggest otherwise.

I don’t know a single IT professional who gets up in the morning and says, “I’m going to do a really lousy job today.”

Yet software has bugs. Platforms have vulnerabilities. Applications sometimes don’t meet requirements. Systems experience crashes, Blue Screens of Death, kernel panics. Hackers find a way to penetrate networks, servers, websites, applications and databases.

Clearly there’s a disconnect. The death of the great Watts Humphrey (see “‘Father of Software Quality’ Watts Humphrey dies at 83”) got me thinking about these issues. I hope this news got you thinking about quality, too.

The challenge isn’t that our teams suck. It’s not that we write crappy code. It’s not that our architects and designers don’t care about security and application performance; it’s not that our programmers are idiots; it’s not that our testers are asleep at the switch.

We don’t have bugs because we’re losers. It’s not because we don’t use the right agile methods, or because we don’t care, or because we don’t use the right software suites or “best of breed” solutions, or because we haven’t “built a culture of quality.”

There’s no silver bullet. The truth is that writing complex software is hard, and our modern platforms and protocols are very complex. No matter how hard we try, bugs and inefficiencies will always creep in – and in terms of vulnerabilities, there’s always something we haven’t thought of. So, for any non-trivial application, we’ll always be fixing and patching.

Let’s say it again: There is no silver bullet.

What we need to remember, and what we need to communicate to our teams, is that we acknowledge that quality is hard. So what? It’s our job. We must constantly find ways to do better, learn from our mistakes, stay responsive to our customers and their requirements, and keep doing the best job that we can.

Z Trek Copyright (c) Alan Zeichick

It’s amazing to think that the laser has only been around for about 50 years. So much depends on lasers, from the optical pickups in our CD and DVD drives to laser printers to laser pointers to laser eye surgery to lasers driving optical fiber networks to laser mice for our laptops.

Last week, I received a short press release from HRL Laboratories, a research laboratory jointly owned by Boeing and General Motors:

On November 23 during an employee event at its Malibu facilities, HRL Laboratories will be recognized as an IEEE Milestone in Electrical Engineering and Computing signifying where the first working laser was demonstrated more than 50 years ago.

HRL will receive a plaque from IEEE marking the historic event: “On this site in May 1960 Theodore Maiman built and operated the first laser. A number of teams around the world were trying to construct this theoretically anticipated device from different materials. Maiman’s was based on a ruby rod optically pumped by a flash lamp. The laser was a transformative technology in the 20th century and continues to enjoy wide application in many fields of human endeavor.”

Since its first demonstration, more than 55,000 patents involving the laser have been granted in the United States, according to IEEE.

Looking around my office, I see lasers, lasers and more lasers. I remember playing with them in high school and college physics classes, totally fascinated by the concept of light amplification by stimulated emission of radiation. This was a big deal.

Early lasers were large, expensive and cranky. And now they’re tiny, cheap throw-away electronics. On my desk is a red laser pointer built into a pen given away at a booth at Oracle OpenWorld. I have a 30 milliwatt 532 nanometer green laser that I use to aim telescopes. How much would those lasers have cost in, oh, 1970?

Thank you, Dr. Maiman.

Read more about the first lasers on the HRL Laboratories site.

Z Trek Copyright (c) Alan Zeichick

Apple makes really cool, really sexy notebook computers. Last week, the company unveiled two models of its ultra-lightweight MacBook Air – one weighing a featherweight 2.3 pounds, the other a mere 2.9 pounds. The emotional right side of my brain is demanding, “Buy one! Buy one now!” while the analytical left side is screaming, “Shut up! You don’t need one!”

While my inner monologue rages, let’s turn our cerebral hemispheres to a stealthy announcement by Apple:

As of the release of Java for Mac OS X 10.6 Update 3, the Java runtime ported by Apple and that ships with Mac OS X is deprecated. Developers should not rely on the Apple-supplied Java runtime being present in future versions of Mac OS X. The Java runtime shipping in Mac OS X 10.6 Snow Leopard, and Mac OS X 10.5 Leopard, will continue to be supported and maintained through the standard support cycles of those products.

What that means, according to casual statements by Apple (including an email allegedly from Steve Jobs), is that the company is tired of maintaining its own Java runtime – especially since that runtime is often based on out-of-date specifications.

The Steve Jobs email says,

Sun (now Oracle) supplies Java for all other platforms. They have their own release schedules, which are almost always different than ours, so the Java we ship is always a version behind. This may not be the best way to do it.

Why was Apple maintaining its own Java? Historically, the Mac was a niche platform unworthy of much attention from Sun – and when you coupled that with Apple’s well-known penchant for complete control over its software, it made sense for Apple to do its own port.

Of course, things are different today. While still dwarfed by Windows sales, the Mac is a much more significant presence in the desktop market, particularly in the consumer space. But that still raises the question: Who should maintain particular Java runtimes, such as the one for Mac OS X?

Should it be Oracle, as the owner of the Java specifications – but if so, what’s the financial motivation? Or should it be the platform owner, in this case Apple, who clearly has a vested interest in making sure that Mac OS X sports a first-class, up-to-date Java SE runtime?

In my view, it’s Apple’s responsibility — not Oracle’s — to maintain Java for the Mac.

Since Oracle hasn’t said that it will take over maintaining the Mac version of Java, it’s hard to understand why Apple has unilaterally deprecated its Java port. Unless, of course, the company is saying that Java is irrelevant to its plans. Both sides of my brain are in agreement that this is the real message here.

Z Trek Copyright (c) Alan Zeichick

We’re long past the simplicity and naivety of Java’s “write once, run anywhere,” and of a vision for a universal programming language.

Java was always about business. The Java Community Process was never a forum for interested parties to develop the ideal programming language and an ideal runtime. It was a place for competitors and partners to work together to create a platform that would suit their strategic interests. For a long time, the JCP was about competing effectively against Microsoft.

The JCP has become much diminished from the heady days of the early and even mid 2000s. Debates about Enterprise JavaBeans, about JavaServer Pages, about Plain Old Java Objects… they seem so long ago. Remember the angst about application portability between WebLogic, WebSphere, Oracle, JBoss, and whether Apache should have access to the validation suite?

Once upon a time, that mattered. Or it seemed to matter. It’s been a long, long time since I’ve had a conversation with anyone complaining about a lack of compliance with the Java EE specs, or angered by proprietary extensions.

The industry has accepted that Java, as specified by the JCP, is merely one layer of specifications upon which an application server stack is created.

Yet there’s a strong desire, I believe, on the part of most players in the Java community to maintain some unity. It would be a shame were the Java Community Process to fracture, to have its major backers walk away, abandon the technology or create a permanent fork. Just as all the various Linux distros are built on a common kernel, so too should the many implementations of Java remain built on a core set of specs from the JCP. Otherwise… what is Java?

That’s why I’m delighted at the bilateral agreement between Oracle and IBM – the two biggest stakeholders in enterprise Java. They’ve agreed on how the JCP should be structured more transparently in the post-acquisition world. They’ve agreed that the OpenJDK should remain a common core for Java.

While that agreement is between only two players, it’s reasonable to expect that others in the Java world would go along. They would be foolish not to. Let’s not be naïve: Java is only important as long as it’s a multi-vendor initiative. That’s its strength. Nobody, not IBM, not Oracle, is going to throw that away.

Z Trek Copyright (c) Alan Zeichick

One hundred million issues of SD Times! That’s one heck of a milestone.
For many developers and IT professionals, it’s easy to forget the low-level underpinnings of today’s computers. A personal computer isn’t a computation device. It’s a communicator, office productivity tool, entertainment center and shopping aid.
What we see—whether we’re browsing with Firefox, coding with Visual Studio, listening to our favorite iTunes playlist or munching on a digital image with Photoshop—is what looks like a real world. Fonts. Colors. Pictures. Things moving. Stuff that responds when you touch it, whether with a mouse or with a finger.
This is all a façade. Underneath our pretty GUI, under all that AJAX code, behind the iOS user interface, behind Safari and Firefox and Internet Explorer, there’s hardware. Chips. Analog-to-Digital and Digital-to-Analog converters. Power supplies. Voltage levels. Ones and zeros.
We try hard to forget it’s there, but our profession is turning abstract binary patterns inside a computer into something that matters in the real world. If this reminds you of “The Matrix,” well, it should.
The 100,000,000th issue (in binary) of SD Times, of course, is the 256th issue (in decimal). Busting out of the eight-bit byte, SD Times continues to move into the future. A future that, except for embedded development and some specialized applications that require bit-twiddling, has nearly completely abstracted the computer out of computing.
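For anyone who wants to check the arithmetic, the conversion is a one-liner in most languages; a Python sketch:

```python
# "Issue 100,000,000" read as a binary numeral.
issue_binary = "100000000"
issue_decimal = int(issue_binary, 2)  # parse the string in base 2
print(issue_decimal)  # 256, i.e. 2**8
```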
In honor of our official 100,000,000th issue’s publication date of Oct. 15, 2010, I’m going to wear one of my favorite t-shirts this Friday: “There are 10 types of people in the world: Those that understand binary, and those that don’t.”
Sadly, that t-shirt (and my prized HP-16C calculator) are about as close as I ever get to binary these days. It’ll have to do.
Z Trek Copyright (c) Alan Zeichick

Last week was incredible. We held our debut iPhone/iPad DevCon in beautiful La Jolla, a village in San Diego. So, please forgive me if I’m on a bit of a mobile high.

(As I write this, it’s Friday, and I’m both working my way through several days’ accumulated email and trying to get readjusted to life in foggy San Francisco instead of basking in sunny Southern California.)

Two highlights of the conference were keynotes from Mike Lee, “The World’s Toughest Programmer,” and Aaron Hillegass, the founder of the Big Nerd Ranch.

Mike appeared for his keynote dressed in a pirate costume – and exhorted everyone in the audience to focus on things that are worth doing, and to do those things with excellence. Don’t waste your time building me-too apps or cranking out garbage, he said – write software that matters.

Aaron’s point was that we’re living in the golden age of mobile computing, similar to the way that the mid-to-late 1980s were a golden age for the Internet. There are more opportunities and ideas today for mobile developers and entrepreneurs than there ever have been – or ever will be again. So, he advised, now’s the time to seize the moment and write software that matters.

Hmm. Do I sense a common thread there?

Our second iPhone/iPad DevCon will be in Boston from April 4-6, 2011. We’re also creating a similar conference for Android developers. AnDevCon is coming to the San Francisco area from Mar. 7-9, 2011.

Hope to see you at one of them – or at both!

Z Trek Copyright (c) Alan Zeichick

The first ZTrek post was on Sept. 22, 2006. The 500th was on May 9, 2008. In that post, entitled “Opus 500,” I calculated that post #1000 would be on Dec. 25, 2009. Whoops.
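For what it’s worth, the “Opus 500” prediction was straight linear extrapolation: assume the next 500 posts take as long as the first 500 did. A sketch of that math:

```python
from datetime import date

first_post = date(2006, 9, 22)  # post #1
post_500 = date(2008, 5, 9)     # post #500

# If the next ~500 posts take as long as the first ~500 did...
span = post_500 - first_post    # 595 days
predicted_1000 = post_500 + span
print(predicted_1000)  # 2009-12-25
```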
The pace has slowed for several reasons.
1. I’m so busy!
2. I share many thoughts on Facebook that would otherwise have been blog posts.
3. I contribute to some other blogs, and they draw attention away from this one.
No matter. The point is, I enjoy the blog. I hope you do too.
Z Trek Copyright (c) Alan Zeichick

OOW + OD + J1 = Big. Wow. This week was the Oracle triple play – Oracle OpenWorld, Oracle Develop and JavaOne. This is the first year, of course, that Java is owned by Oracle, and therefore, it’s the first year that JavaOne is an Oracle production instead of an event thrown by Sun Microsystems.
How many people went to the combo event? By my rough estimate, 6.4 trillion. Yes, I know. I may be exaggerating. The official figure is 40,000 attendees, 1,800 classes and 450 exhibitors strewn across multiple convention centers and hotels encompassing both the South of Market and Union Square sections of San Francisco.
But frankly, the event blew out San Francisco – itself not a small town. This is the type of event that is normally found in Las Vegas, and in fact, seeing all the tents (filling up city streets, with traffic diverted), and seeing all the attendees with aching feet, I was reminded of COMDEX events back in the 1980s.
The big difference, of course, is that COMDEX heralded the growth of the personal computer industry. Before it got too big and imploded, attendees flocked to COMDEX to see new stuff, preferably from little tiny companies that might, just might, become big some day.
By contrast, the Oracle trifecta was all about celebrating the growth of the Oracle industry. Oracle products, Oracle partners, Oracle this, Oracle that, 11 out of the leading 11 companies use Oracle, blah blah blah. A prime example of the emphasis on the company itself was the Moscone North shrine to the 33rd America’s Cup yacht race, won earlier this year by the Oracle BMW Racing team.
The message: Oracle plays to win. Despite the crowds, craziness and confusion, OOW + OD + J1 demonstrated not only that Oracle is huge, but that it is advancing and dominating nearly every enterprise computing front.
Want proof? At the show there were lots of announcements around Oracle’s packaged applications, but that’s only the beginning. The hyperkinetic company also unveiled a Java roadmap, declared its commitment to open-source developer communities, upgraded its middleware, put a new kernel into its Linux distribution, launched a new generation of 16-core SPARC processors, introduced a new set of massive database server hardware, revved the SunRay terminals, demoed a new MySQL release candidate, enhanced its storage products, hyped its plans for the cloud… the list goes on and on.
Oh, yes, Oracle also talked about its jaw-dropping financials. It announced quarterly revenue up 48% to US$7.5 billion, profit up 10% to $1.19 billion, and software license sales up 25%.
It was an impressive week – except, of course, for non-IT folks who live or work in San Francisco, who had to navigate around Oracle’s mega-conference. I’m sure that Oracle’s competitors weren’t too happy either. Larry Ellison’s ego might be huge, but then again, so is his company.
Z Trek Copyright (c) Alan Zeichick

My head is full of mobility – and that’s a good thing. But sometimes juggling all the technology can be a bit overwhelming. In one pocket, you’ll often find a new Apple iPhone 4, newly upgraded to iOS 4.1. In another pocket, an HTC Evo 4G handset runs the Android 2.2 operating system. A Kindle e-book reader from Amazon.com lives in my briefcase. And at home there’s an HP netbook running Moblin Linux on the kitchen counter and a WiFi-equipped iPad in the living room.

(And that’s not even mentioning an out-of-date BlackBerry sitting in a charger. I don’t use that one very often these days — I suppose it’s time to get a new one.)

Why does Alan have so many mobile devices? If you answered, “Because he’s a total geek!” well, you’d be right. I love my technology toys. And while new software is fun to play with, nothing (nothing!) beats the thrill of mastering new hardware.

Another reason to have all these toys is because, well, SD Times readers are building applications for mobile devices, and it’s important to have hands-on familiarity with them.

Many of you are familiar with our iPhone/iPad Developer Conference, coming to San Diego at the end of this month. We recently announced an Android Developer Conference, March 7-9, 2011, in the S.F. Bay Area.

A few months ago, we developed our first mobile application, and many people have downloaded and are enjoying the free SD Times Newsreader for iPhone.

I’m now proud to share our new SD Times Newsreader for iPad with you. It’s available for free download from the Apple App Store. It’s a whole new codebase, and offers a totally different reading experience from the iPhone app. Check it out!

And now, hmm, which mobile device should I play with next?

Z Trek Copyright (c) Alan Zeichick

It’s scary watching water-tankers flying over your house… and dropping their loads only a couple of blocks away onto a towering ball of flame that we could see – and feel.
My home is less than a mile from the gas-line explosion that happened in San Bruno, Calif., on Thursday afternoon, Sept. 9. San Bruno is a sleepy little bedroom community near San Francisco International Airport. Part of the city is industrial, but most is residential. The Crestmoor area, where the high-pressure gas main exploded, is a nice, tree-lined neighborhood we drive past every day.
Fortunately, my family is safe, and our home is safe. But this is a huge disaster. Many people were killed and injured. More than 50 homes were destroyed, more than 100 houses damaged. Perhaps you’ve seen the photos or video on television or on the Internet. It’s unreal. Thankfully, modern technology helped keep people in touch during the evening, and throughout the night.
Of course you can’t count on consumer-grade telecommunications during man-made or natural disasters. Phone lines, power lines, cellular transmission towers, all are vulnerable to damage or overloading. The only technologies that aren’t affected are free-standing radio systems, such as those used by emergency services or amateur radio operators. But there’s no doubt that the miracle of modern technology plays a vital role.
One of the first things that my wife, my son and I did after the explosion was get onto Facebook and post that “we’re okay.” That helped our far-flung friends and family who saw “San Bruno disaster” on the news and had no idea if we were affected. After all, this was a story that was covered not only in Northern Calif., but also across the nation and around the globe.
Using Facebook, email, text messaging and calls to mobile phones, we (and many others) inventoried our friends, to make sure that everyone was okay and to see if anyone needed help. While technology doesn’t help directly – it didn’t prevent the fire, help put out the fire, save lives or save homes – it did comfort the community and the affected.
Without tools like Facebook or email, the uncertainty would have been greater, and the heartache even deeper.
This has been a terrifying experience. As a former firefighter myself (back in the 1980s), I’ve trained for disasters, but have never seen or imagined anything on this scale. Frankly, we’re overwhelmed.
Our hearts go out to everyone affected by this terrible tragedy.
Z Trek Copyright (c) Alan Zeichick

It’s late afternoon on Thursday, Aug. 26, and today the Dow Jones Industrial Average closed below 10,000 again. Coincidentally, we’ve been sweltering here in the San Francisco Bay Area with a unusual heat wave (bring me lemonade, stat!), and I’ve been trying to help a young man find a software engineering job.

The young man – the boyfriend of a friend’s daughter – has an solid resume. Not bad, given that he’s only been out of college for a few years, with experience in software development, testing, quality assurance, second-level support and network operations. He’s versatile, too, skills in Java, C, C++, C#, HTML and Flash, and has been using both Visual Studio and Eclipse. Even more important he’s been programming Facebook Markup Language, and has worked with Google AdWords and Google Analytics.

How do you find a good, solid job with a background like that? He’s working all the social-networking angles, including Facebook and LinkedIn. Family friends (like me) are making introductions. He’s schmoozing every chance he gets.

It’s tough out there for people looking for employment, even if you’re a bright young software engineer in Silicon Valley. He’s thinking about doing some volunteer work to keep himself busy until the right job (or any job) in his field comes along.

My young friend isn’t the only one looking for work in the Valley. Another friend – a guy in his mid-50s – is also looking for work in technical marketing or product management. With a gold-plated resume, a winning personality and great references, he’s having trouble getting anyone’s attention.

What do you tell job-hunters in this economy, beyond advising them to meet people, meet people, meet people and meet more people?

Z Trek Copyright (c) Alan Zeichick

It seems as if all the major IT companies are going bonkers. What’s going on? Is there something in the water?
Think about all the odd behavior that we’ve seen lately. Is there a pattern? To mention just a few, in alphabetical order:
Apple – The master of marketing screwed up. Yes, there is a problem with the iPhone 4 antenna design – an exposed metal antenna is a bad idea, because being touched by a human body changes an antenna’s performance. However, anyone can make a mistake. The real problem was Apple’s bizarre response, which turned a minor hardware issue into a major news story. Stupid.
Google – The company’s unofficial motto is “Don’t Be Evil.” The company’s plan to bypass net neutrality with a private deal with Verizon seems to be contrary to much of what Google stands for. When you add that to the never-ending series of inquiries about privacy violations and the Street View service, you have to wonder if the folks running the Googleplex are getting delusions of grandeur.
Hewlett-Packard – We may never know the real story behind the forced resignation of CEO Mark Hurd. Was the guy a visionary leader who turned the company around but made a silly mistake on his expense account? Or was he a self-obsessed cost-cutter who was ousted by a board that was looking for an excuse to get rid of him? The story seemingly changes every day. Is scandal the new HP Way?
Microsoft – Cancelling its Kin smartphones in June, just 48 days after their introduction, was an incredible admission of failure. But what do you expect when Microsoft is also trying to promote its late-to-market Windows Phone 7 platform? It’s unclear that there was ever a good reason for Microsoft to buy Danger in 2008. What were they thinking? If anything?
Oracle – After being essentially silent for months about its plans for Sun’s open source software, the company suddenly takes two actions. First, it kills OpenSolaris, and then it sues Google for violating some patents regarding the use of a Java Virtual Machine inside Android. What’s Larry Ellison up to? Is he trying to monetize his acquisition, or is he doing a favor for his buddy Steve Jobs?
All this makes you wonder if there are any grown-ups minding the store.
Z Trek Copyright (c) Alan Zeichick

The Android platform is gaining market share and mindshare with amazing speed. While it clearly trails the iPhone, which has a large head start in installed base and number of independent applications, Android is sprinting to close the gap.
Why is Android moving so fast? From the consumer side, it’s about choice. If you buy an iPhone, you get one model (with a choice of colors and amount of memory). One size must fit all. Want a physical keyboard? No. Want a choice of carriers? If you’re in the United States, no. There’s one handset currently sold with iOS 4, and it’s from Apple and runs on AT&T.
By contrast, you have a whole consortium of innovators pushing Android – and building on top of it. From multiple carriers like Verizon, T-Mobile and AT&T, to competing manufacturers like HTC, Samsung and Motorola, everyone is free to build on the Android experience with different hardware features and add-on functions. That means consumers get choices galore.
Developers, too, have plenty of flexibility. Apple imposes strict limits on what developers can put into their iPhone apps – not only to ensure that the functions don’t crash the phone and aren’t malicious, but also to make sure they don’t compete against what Apple wants to sell. With Android, it’s the Wild Wild West. The market is wide open.
When you couple the broad base of manufacturer and carrier support with the open model for app distribution, you get unparalleled opportunities for developers and entrepreneurs.
That’s why BZ Media – the company behind SD Times – is launching AnDevCon: The Android Developer Conference, March 7-9, 2011.
Join us for three days in San Mateo. We’ll have dozens of technical classes and workshops focused on three topic areas: programming for Android, using Android software in the enterprise, and how to market applications in the Android Market.
If you want to learn all about Android development, please join us at AnDevCon. Hope to see you there!
PS: If you’re an expert in Android development and are an experienced instructor, the Call for Speakers for AnDevCon is open through mid-September.
Z Trek Copyright (c) Alan Zeichick

We talk so much about agile processes, which are clearly well-suited to small and mid-sized projects. But what about scaling agile for big projects? This is a topic that’s often debated, but in my opinion at least, hasn’t been completely settled.
By big projects, I don’t mean the workload of the deployed application (i.e., lots of transactions per second), or even of the importance of the deployed application (i.e., a business-critical application). I mean the size and complexity of the application.
Think about a software development project that has, oh, 10 million source lines of C#, C++ or Java code, or maybe more than 300,000 function points. Or maybe 5x or 10x that size.
We’re talking about large software teams, complex requirements and a significant cost. It’s not a happy little Web application that’s being thrown together with a couple of dozen people and rolling out next month – it’s going to take a while, and it’s going to be expensive.
Many large projects use a heavyweight process like the Rational Unified Process. But can they be agile too? Can you successfully combine the flexibility of Extreme Programming with a requirements-first RUP project? RUP already specifies iterative development, but how much of Scrum can scale up to large projects? Is the answer to use Kanban? Or to say bye-bye, agile?
When discussing this question with SD Times columnists Andrew Binstock and Larry O’Brien, Larry said that, when it comes to the problem of scaling agility for large projects, “it’s not the methodology, but the management. An aligned, self-correcting team is far more likely in a smaller business where there’s an investment and personal relationship from the low to the high. Is a scrum standup going to be successful for a team assigned to execute a doubtful policy about which they can give no meaningful criticism?”
Now, consider how agile plays out when launching a large project. As Andrew Binstock suggested, some questions are:
• How do you decompose large projects into agile-size subprojects?
• How do you balance the inevitable need for requirements with agility’s commitment to changeability?
• How do you do frequent releases that a user can give you useful feedback on?
Larry gently pointed out, though, that my premise might be flawed – or rather, somewhat out of date. Hey, what do you expect from a mainframe guy? He said, “With service-oriented architectures and the ease with which frameworks and libraries are pulled in, fewer companies think of themselves as dealing with very large codebases. The reality is that the enterprise ecosystem is still very large, but subsystems seem to be more loosely coupled than in the past. The result is that most teams perceive themselves as working with only a few tens of thousands of lines of code, for which agile techniques are fine.”
Z Trek Copyright (c) Alan Zeichick

I’m a mainframe guy. Cut my teeth writing COBOL, PL/I and FORTRAN on the IBM System/370. CICS is my friend. Was playing with virtual machines long, long before there was anything called “DOS” or “Windows” or “Linux.” My office closet is filled with punch cards and old nine-track tapes, all probably unreadable today. One of the happiest days of my professional life was trading in an old TeleVideo 925 monochrome terminal for a brand-new 3279 color display.
If you listen to just about any marketer in the software development or IT industry, mainframes are always described as legacy systems – with the implication that only a total loser would continue to use such an outdated piece of junk.
By casually repeating terms like “legacy system,” or buying into the phrase “legacy modernization” for projects that integrate mainframes with other platforms like Java and .NET, everyone perpetuates the marketing myth that mainframes are bad. That they’re relics whose time has come and gone. That the goal of any IT professional should be to replace every mainframe with something else – anything else.
I say, “Bah, humbug. Nonsense. Fiddlesticks. Balderdash.”
A legacy system is an old method, technology, computer system, or application program that continues to be used, typically because it still functions for the users’ needs, even though newer technology or more efficient methods of performing a task are now available. A legacy system may include procedures or terminology which are no longer relevant in the current context, and may hinder or confuse understanding of the methods or technologies used.
In many situations, there is no more efficient tool for solving a business problem than a mainframe. Mainframes are just as current, just as new, just as relevant and just as useful as any other modern, state-of-the-art IT platform. Mainframes are not legacy systems.
Now, are some mainframe applications legacies? Yes. Any application that hasn’t been properly maintained becomes obsolescent. If you’re having to do extensive wrapping around an old COBOL or RPG program that nobody understands in order to keep it running, then you’ve got a problem. But the problem isn’t that it’s running on a mainframe. The problem is that the software wasn’t properly documented and that your engineers weren’t properly trained.
A 30-year-old undocumented C# program running on .NET, or a 30-year-old undocumented C++ program running on Solaris or a 30-year-old undocumented Java program running on WebLogic will be just as “legacy” as a 30-year-old CICS program running on z/OS.
Today, IBM released a new family of mainframes, called the zEnterprise 196. I don’t know much about it – I haven’t touched a mainframe since the early 1980s. But I do know one thing: It’s not a legacy system.
Z Trek Copyright (c) Alan Zeichick

What will software development be like in the year 2020? It would be easy to draw a straight line from ten years ago through today, and see where it goes a decade from now.
Ten years ago: Hosted applications through ASPs (application service providers) were getting started, but had little impact. Today: Hosted applications through the cloud and SaaS providers are having some impact on enterprise data centers, particularly in smaller companies. Ten years from now: Hosted applications will be mainstream, and IT managers will have to justify running applications on-premise.
Ten years ago: The Web was everything, and browsers were how desktops and mobile devices (in their limited way) dealt with Internet-based services. Today: Desktops are browser devices, but mobile devices increasingly use apps to manipulate Internet services as diverse as Facebook, newspapers and enterprise resources. Ten years from now: Apps will have taken over mobile devices entirely, and “walled garden” apps will be a significant presence on the enterprise desktop. The browser will be far less important than it is today.
Ten years ago: Distributed development teams were just starting to leverage Internet bandwidth, hosted SCM systems and collaboration systems – but even so, most developers lived in their IDEs. Today: The value of collaboration tools has been proven, and in many organizations, sophisticated ALM suites have turned the stand-alone developer into an endangered species. Ten years from now: More and more ALM functionality will migrate onto servers, particularly hosted servers across the Internet. IDEs will be turning into front-end apps. Source code and metadata will live in cyberspace.
Ten years ago: Most serious enterprise developers worked with native compiled languages, with the primary exceptions of Web script, Visual Basic and Java. Today: Managed languages like Java, C#, Perl, PHP and Python rule the enterprise, with C/C++ and other native languages being seen as specialist tools for those who need to stay close to the hardware. Ten years from now: With the exception of device developers, the world will belong to managed runtimes and virtual machines.
Ten years ago: Databases meant a SQL-based relational database from a company like Oracle or IBM. Today: While most enterprise data is still in a large SQL-based RDBMS, such as Oracle Database, DB2 or SQL Server, many development teams have embraced lighter-weight alternatives like MySQL and are playing with NoSQL alternatives. Ten years from now: Most enterprise data will still be in giant relational databases, but there will be more acceptance of those alternatives.
Ten years ago: The most important members of a software development team were its programmers; testers got no respect. Today: The most important members of a team are seen as its architects; testers get no respect. Ten years from now: The most important members of the team will be its agile coaches and champions; testers still will get no respect.
Ten years ago: Software development was seen as a wonderful career, even after the dot-com implosion. Today: Software development is a wonderful career, but the recession has affected many enterprise jobs. Ten years from now: New tools will empower less-technical professionals to build applications, but software development will still be a wonderful career, as we take on the hard problems that nobody else can solve.
Ten years ago: SD Times launched. Today: On July 15, 2010, we celebrate the publication of our 250th issue. Ten years from now: The future’s so bright, we’ll have to wear shades.
Z Trek Copyright (c) Alan Zeichick

When you think about a modern software monoculture, which company do you think of first? Chances are that it’s Apple. However, if I asked that question between, say, 1995 and 2007, you probably would have said Microsoft.
In agriculture, a monoculture is when too much of a region plants exactly the same crops. If there’s a disease or pest that destroys that crop, the entire region is in big trouble. Similarly, if the economics of that crop change – like a price collapse – everyone is in trouble too. That’s why diversity is often healthier and more sustainable at the macroeconomic level.
However, the problem with a monoculture is that it’s an attractive nuisance. If all your neighbors are planting a certain crop and are making a fortune, you probably want to do that too. In other words, while monocultures are bad for society as a whole, they’re often better for individuals – at least until something goes wrong.
Microsoft’s dominance over the past couple of decades turned into a monoculture. Vast numbers of consumers and enterprises standardized on Windows and Office, because that’s what they knew, that’s what was in stores, that’s where the applications were, and because for them personally, going with the flow seemed to be the right choice.
While there were alternatives, like Unix and Linux and the Macintosh, those remained niche products (especially on corporate desktops), because a monoculture rewards going with the flow and jumping on the bandwagon. Monocultures foster a lack of competition and a desire to play it safe. Nobody wants to upset the bandwagon. And thus, real innovation at Microsoft didn’t make it into Windows and Office – leaving room for the Macintosh to take risks, build a compelling product and start taking market share, and for Linux to tackle and win the early netbook market.
Today, Microsoft’s Windows and Office still dominate the enterprise. But even with Windows 7, I don’t think that customers are quite as willing to just do whatever Microsoft says as they used to be.
In the smartphone wars, the iPhone never became a real monoculture – there are too many BlackBerrys and other devices. However, certainly the media acts as if the iPhone is the only game in town. Apple plays into the perceptions of monoculture, offering essentially one model handset (now the iPhone 4), with the only variations being a choice of two colors and three memory configurations.
Apple’s dismissal of the well-publicized flaws in the iPhone 4’s antenna – first saying that it was a user error (you’re holding the phone wrong), and then claiming it’s a trivial software bug (displaying an incorrect signal strength) – shows incredible arrogance. And I say that as a happy iPhone 3GS owner and long-time Mac user who frequently recommends Apple products to friends and colleagues.
Any company can release a product that has a flaw. However, Apple’s behavior has been astonishingly bad. And if Apple wasn’t trying to impose a software monoculture by offering essentially one handset, it wouldn’t be a big deal. If Apple offered half-a-dozen iOS handsets and one had a bad antenna, nobody would even notice.
The upshot, of course, is that while Apple is sure to fix the problem, we may see the early demise of the perceived iPhone monoculture. Android is coming on strong with a fast-evolving operating system and a lot of innovative work from handset makers and app developers. While I have no plans to migrate from my iPhone 3GS right now, I would definitely consider an Android device for my next purchase. Monocultures are bad, and we all benefit from a rich and diverse marketplace.
Z Trek Copyright (c) Alan Zeichick

Is literally everything about the cloud? You’d think so, going by the chatter from the biggest industry players. It seems that every company that wants to talk to me is pushing something to do with cloud computing. New service offerings from hosting providers. New tools for optimizing the performance of applications, or for making it easier to migrate, or for making cloud-based development more agile.
The cloud sure is seductive. In our company, we’re considering a migration to cloud technologies within the next 12 months. BZ Media, the organization behind SD Times, is a small company, and frankly I’d rather not be maintaining servers, either in-house or dedicated hardware in a colocation center. If the economics of cloud computing work out, and if reliability and scalability deliver what we need, then it’s a good thing.
Yet I’m puzzled. How much is cloud computing a software development conversation, rather than an operations conversation? Obviously the platforms are different: Windows Azure is different than Windows Server 2008. Microsoft’s SQL Azure is different than Microsoft’s SQL Server. The Java EE that VMware is pushing into Salesforce.com’s cloud isn’t the same Java EE that’s on your current in-house app server. Working with Amazon S3 is not the same as working with an EMC storage array. So yes, there’s an undeniable learning curve ahead. But that’s what you’d encounter in any significant server platform change, whether cloud, on-premise or colocated.
Hence my confusion. How much does a software development team need to know about the cloud, beyond how to deploy to it and integrate applications with cloud-based apps? Often, I believe, not much.
Z Trek Copyright (c) Alan Zeichick

IBM Rational has written a solid white paper on software security, focusing on improving code reviews. Although I rarely (very rarely) endorse a vendor white paper, this is one that’s worth reading.
Written in December 2009 by Ryan Berg, a senior security architect at IBM, the paper focuses on best practices for examining code for security flaws, and then figuring out how to remediate those flaws. The paper, called “The Path to a Secure Application,” breaks the vulnerabilities into five specific categories, each of which is examined in detail:
• Security-related functions
• Input/Output validation and encoding errors
• Error handling and logging vulnerabilities
• Insecure components
• Coding errors
For each of those vulnerability categories, Berg describes specific instances and offers a list of suspicious code behavior that might indicate problems. For example, for insecure components, he offers that you might have unsafe Java Native Interface methods, or unsupported methods. Suspicious behavior would include raw socket accesses, which could indicate possible backdoors; timer or get-time functions, which might mean triggers; or privilege changes, which might speak to unauthorized access levels within the code.
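A checklist like Berg’s can be partially automated with even a crude lexical pass. Here’s a minimal, hypothetical Python sketch (my own illustration, not IBM’s tooling; the patterns and the sample Java lines are assumptions) that flags suspicious call sites for a human reviewer:

```python
import re

# Hypothetical, illustrative rules only -- not from Berg's paper or any
# IBM product. Each rule maps a suspicious behavior from the checklist
# to a regex that flags candidate call sites in Java source.
SUSPICIOUS_PATTERNS = {
    "possible backdoor (raw socket access)": re.compile(r"\bnew\s+Socket\s*\("),
    "possible trigger (timer/get-time call)": re.compile(r"\bSystem\.currentTimeMillis\s*\("),
    "possible privilege change": re.compile(r"\bdoPrivileged\s*\("),
}

def review_source(lines):
    """Return (line_number, finding) pairs for a human code reviewer."""
    findings = []
    for lineno, line in enumerate(lines, start=1):
        for finding, pattern in SUSPICIOUS_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, finding))
    return findings

# Made-up Java snippet to scan:
sample = [
    'Socket s = new Socket("198.51.100.7", 4444);  // undocumented host',
    'long t = System.currentTimeMillis();',
    'int x = a + b;',
]
print(review_source(sample))
```

A real scanner would parse the code rather than grep it; the point is only that each finding maps back to one of the checklist categories and still requires human judgment about whether the behavior is legitimate.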
What’s nice about this paper is that – unlike many that cross my desk – Berg isn’t setting up a paper tiger. He’s not highlighting flaws so that he can say, “Oh, look, IBM sells tools to solve this problem, call us today.” While IBM Rational does offer source-code scanning tools, this white paper ain’t peddling them. Rather, Berg is offering guidelines for making a QA checklist for reviewing source code for security vulnerabilities. Nicely done! I wish more white papers were this good, and this genuinely educational.
Z Trek Copyright (c) Alan Zeichick

Battery-powered. Built-in satellite-based Global Positioning System receiver. Accelerometer. Ambient light sensor. High-resolution camera. Powerful processor. Gigabytes of storage. Radios for communicating with Bluetooth devices, WiFi networks and cellular data systems. And now, even an embedded gyroscope.
The sensor and communications capabilities of today’s smartphones are astonishing. Each generation of device, whether from Apple or its competitors, crams more and more sophisticated electronics into a pocket-sized package, with the latest being the iPhone 4’s gyro.
All you need is a life-forms sensor and a probe for detecting buried dilithium deposits, and today’s smartphones would be right at home on the U.S.S. Enterprise. Oh, a tachyon emitter would be nice too.
We’ve been here before, of course. In early 2007, Sun Microsystems gave me a Sun SPOT development kit. A SPOT – Small Programmable Object Technology – was a battery-powered device equipped with a small ARM processor, short-distance radio, accelerometer, temperature and light sensors, some multi-colored LEDs and general-purpose analog and digital I/O ports, managed by an embedded Java virtual machine. I did some experiments with the SPOT and was impressed with its capabilities. Sadly, the Sun SPOT initiative faded out after its first limited production run.
General-purpose smartphones, whether based on Apple’s iOS (the new name for iPhone OS), Google’s Android or Microsoft’s Windows Phone 7, have the potential to revolutionize remote sensing. Not only is the array of built-in sensory apparatus impressive, but the ability to add more through hard-wired or Bluetooth connections takes that a step farther. I don’t know if there are third-party toolkits yet for adding analog and digital inputs to smartphones – but there should be.
Already there are kits for connecting smartphones to a car’s OBD-II port to pull down a vast array of real-time onboard diagnostics. Software uses that data, plus the phone’s accelerometer, to measure acceleration and performance.
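As a hypothetical illustration of that kind of software (no particular product’s code, and the numbers are made up), the performance math reduces to simple interpolation over timestamped OBD-II speed readings:

```python
# Sketch: estimate a car's 0-to-60 mph time from (time_seconds, mph)
# samples, as logged from an OBD-II vehicle-speed reading. Purely
# illustrative; real kits would also fuse accelerometer data.
def zero_to_sixty(samples):
    """Linearly interpolate the time at which speed first crosses 60 mph."""
    for (t0, v0), (t1, v1) in zip(samples, samples[1:]):
        if v0 < 60 <= v1:
            return t0 + (60 - v0) * (t1 - t0) / (v1 - v0)
    return None  # never reached 60 mph in this log

samples = [(0.0, 0), (2.0, 25), (4.0, 45), (6.0, 62)]
print(zero_to_sixty(samples))  # crosses 60 somewhere between t=4 and t=6
```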
Look at a smartphone, and forget, for a moment, that it’s a phone. Think about its sophisticated electronics, processing power, radio capabilities, and sensory functionality. Imagine how it could be used for science and engineering — both in the lab and in the real world. Think about the low price, well under $1,000 (forgetting about carrier subsidies). Amazing, isn’t it?
Z Trek Copyright (c) Alan Zeichick

Speed matters. With most agile development methodologies, the faster you can push new code out into the source-code management system, into builds and onto servers, the faster you can evaluate your progress and chart your next moves. From monthly builds came weekly builds, then daily (or nightly) builds. In some shops, those builds are used internally, with less-frequent deployments into the production environment. In other cases, the bits are actually pushed out to production servers daily.

Even the now-common daily/nightly build and deployment may not be fast enough to drive modern development, according to some proponents of ever-more-agile agile methodologies. That’s why thinkers like Kent Beck are now advocating a move from Daily Deployment to Continuous Deployment.

Continuous Deployment is the topic of the first-ever SD Times Virtual Conference, which we’re holding on Wednesday, June 30, beginning at 1:00pm Eastern (10:00am Pacific). There’s no cost to attend this three-hour educational event, which I’ll be hosting.

Our three instructors are Kent Beck, founder and director of the Three Rivers Institute, and author of “Implementation Patterns,” “Extreme Programming Explained: Embrace Change,” and much more; Timothy Fitz, tech lead at IMVU and one of the creators of the Continuous Deployment movement; and Jez Humble, build-and-release manager at ThoughtWorks Studios, who is currently writing a book called, appropriately enough, “Continuous Delivery.”

Here’s what we’re going to cover in the virtual conference – you can stay for the whole thing, or choose the parts that seem more relevant to you:

• The potential benefits of Continuous Deployment to your organizations.
• The technology required to implement Continuous Deployment.
• How to apply Continuous Deployment to your company’s existing IT systems.
• How to apply Continuous Deployment to the software you’re creating, both Web and client-installed.
• The social challenges of applying Continuous Deployment in your organization.
• The risks of doing Continuous Deployment wrong – and how you can avoid mistakes.
• The impact of Continuous Deployment on various job functions: testers, marketers, managers, programmers and other stakeholders.
• The prerequisites to Continuous Deployment.
• Practical advice and best practices to take steps toward Continuous Deployment today.

You can learn more, see the agenda and timeline, and pre-register at http://bzmedia.com/agility/ — please join us!

Z Trek Copyright (c) Alan Zeichick

Salesforce.com intrigues me, and that’s a positive thing. The company keeps reinventing itself, and shows the type of innovation that used to be more common in Silicon Valley.

If you thought that Salesforce was in the business of hosting customer relationship management software, you’re living in the past. CRM barely scratches the surface of where the company is today. Sure, the company describes itself as “Web-based Customer Relationship Management (CRM) Software-as-a-Service (SaaS),” but when was the last time you heard anyone talking about the company’s CRM systems?

With Salesforce today, it’s all cloud, cloud, cloud. And chocolate – Salesforce is very popular with my family, since the company’s public-relations team bundles bags of chocolate with its press-release packets. Full disclosure: It’s delicious. Also, our advertising sales team uses Salesforce’s CRM system.

Chocolate? By sending out press packets with tasty treats, Salesforce keeps demonstrating that it’s an old-fashioned company innovating with the latest technologies. Very few companies mail out printed materials to journalists any more. Everything is all email and Web pages, webcasts and blogs. Yet there’s something appealingly archaic – positively 1980s – about this corporation.

Salesforce was born in 1999, launched by former Oracle sales executive Marc Benioff. Some analysts (myself included) formed an initial impression that Benioff – a brash showman not unlike Larry Ellison – formed the startup to tactically exploit the hosted CRM market, focusing on small and mid-sized customers. A decade ago, that was a niche opportunity that Oracle was far too big to serve.

Benioff would gain some traction in the CRM space, we predicted, and then sell Salesforce.com within a few years. Probably back to Oracle, but if not, to SAP, IBM or another IT-industry giant. If Oracle was the buyer, Benioff would be well positioned as Ellison’s eventual successor at Oracle’s helm.

Yes, I still predict that Oracle or another large firm will acquire Salesforce. My estimate is that it’ll happen within five years. But while the company’s CRM assets are essential because subscriber fees drive substantial revenue, the real intellectual property will be in Salesforce’s cloud technology.

The revenue is growing nicely. According to Salesforce’s fiscal first-quarter results, covering January-March 2010, the quarter’s revenue was US$376.8 million, an increase of 24% over the same period in 2009. Subscription and support revenues made up nearly all of that, coming to $351 million; the rest was professional services. While that’s paltry compared to Oracle’s first-quarter 2010 revenues of $5.1 billion, it’s nothing to sneeze at.

There’s no sneezing at Salesforce’s market cap either, which was $10.74 billion in late May, with a price/earnings ratio of 134.17. By comparison, Oracle’s market cap is $112.12 billion, with a P/E of 19.96.

What’s driving the market cap? The cloud. Of course, as a hosted CRM service, Salesforce has always lived in the cloud, even before the term gained widespread currency. What distinguished Salesforce years ago from other hosted software companies – and linked it to much-larger cloud pioneers like Amazon and Google – is that the company realized that its hosting infrastructure and database engine could be leveraged by its customers for running custom software. Initially, most custom software was coupled tightly to the CRM service, but increasingly, the capability has taken on a life of its own and has attracted customers beyond Salesforce’s traditional installed base.

So, while cloud service fees are only a small part of Salesforce’s revenue stream today, they represent the leading edge of the company’s innovation and attraction. To be blunt, that’s the only reason why we cover Salesforce in SD Times – because hosted CRM isn’t an area of interest to the typical enterprise developer or ISV software engineer.

Look at what Salesforce has done with the cloud. It’s gone beyond its simple Apex and VisualForce programming languages – designed to create add-ins to the CRM system – to a richer environment, called Force.com, that’s trying to appeal to all enterprise developers. The company moved into collaboration with its new Chatter system. It created an application store, AppExchange, to let developers choose from pre-written tools and services. It supports rich Internet apps using Flash, CSS and JavaScript. Most recently, the company has partnered with VMware to host a subset of Java EE within the cloud.

Balance sheet notwithstanding, Salesforce.com is no longer about CRM. To my earlier prediction that the company will be acquired within five years, let me add two more. First, its name no longer fits; I see a name change within the next two years. And that stock ticker, NYSE:CRM – that’s gotta go. It looks like NYSE:CLWD is available.

Z Trek Copyright (c) Alan Zeichick

It’s almost time to unveil the SD Times 100 – the top 100 companies, project and movements that are demonstrating innovation and leadership in the software development industry.

This year, the SD Times 100 will be “officially” published in the June 1, 2010, issue of SD Times. It’ll also appear on that date on sdtimes.com.

However, continuing a new tradition begun last year, we will tweet out the SD Times 100, category by category, on Monday, May 31, starting around 11:00am Eastern, 8:00am Pacific. We’ll begin with the “Agile & Collaboration” category, and will tweet another category every 30 minutes or so, continuing through all 12 categories until we reach the biggest one: Influencers.

See all the action by following us on Twitter: www.twitter.com/sdtimes.

What is the SD Times 100? As we wrote in the debut 2003 awards:

The editors of SD Times identified the industry’s top leaders, innovators and influencers, and broke them out into 10 separate industry segments. Some companies lead in one category, others in more than one. In each category, one company has been spotlighted as a star deserving of special notice.

When choosing the SD Times 100, we carefully considered each company’s offerings and reputation. We also listened for the “buzz”—how much attention and conversation we’ve heard around the company and its products and technologies—as a sign of leadership within the industry.

The SD Times 100 looked for companies that have determined a direction that developers followed. Did the company set the industry agenda? Did its products and services advance the software development art? Were its competitors nervously tracking its moves? Were programmers anxiously awaiting its developments? Those qualities mark a leader.

Subjective? Of course. But leadership and innovation can’t be measured by stock valuations or analyst reports. The SD Times 100 represents what we believe to be the best of the best….

While you’re waiting for the 2010 SD Times 100 to tweet out next Monday, please feel free to peruse the debut 2003 awards and last year’s 2009 awards.

See you on Twitter!

Z Trek Copyright (c) Alan Zeichick

Computer Associates. CA. CA Technologies.

What are they thinking over there in Islandia? When Computer Associates changed its name to CA in 2006, it seemed like a lame move at the time. And indeed, while some people do refer to the company as CA, it appears to me that most people still call it Computer Associates. Or some hybrid, like “CA — you know, Computer Associates.”

Not content with that, five years later the company has renamed itself again. To CA Technologies. And it took nearly 700 people to come up with that name, too.

Improvement? Not.

Here’s what the official press release says:

CA, Inc. Has a New Name: CA Technologies

LAS VEGAS, May 16, 2010 — CA WORLD — CA Technologies (NASDAQ: CA) today unveiled its new name to demonstrate its commitment to managing and securing IT environments and to deliver more flexible IT services to its customers.

The evolution of the company brand and name change to CA Technologies and its new internet site design—ca.com—were unveiled today at CA World 2010, the Company’s annual customer conference. CA World has attracted more than 7,000 customers, partners, analysts, press and employees to the Mandalay Bay Resort, and runs today through Thursday.

“The name CA Technologies acknowledges both our past and points to our future as a leader in delivering the solutions that will revolutionize the way IT powers business agility,” said CEO Bill McCracken. “We are executing on a bold strategy to delight our customers with unprecedented levels of IT speed and flexibility.”

The brand and name change to CA Technologies was designed with insights from nearly 700 customers, partners and market thought leaders, and was developed to ensure the delivery of a consistent story in the market that reflects the full breadth and depth of what the company offers.

“Our integrated marketing campaign and name change will demonstrate a consistent brand message and image to our global customers,” said Marianne Budnik, Chief Marketing Officer, CA Technologies. “We are rolling out a global advertising campaign, website redesign, online marketing, new collateral and signage to ensure our new identity and brand promise resonates with customers and partners around the world.”

Sorry, Marianne, but you’ll always be Computer Associates to me.

Z Trek Copyright (c) Alan Zeichick

Those of us in the technology world – call us nerds, geeks or software developers – expect a 1:1 ratio between personal email addresses and people.

I’m not talking about business or workgroup addresses. I mean your home, non-business address, which might come from your cable TV company, DSL or wireless provider, or from a free email service like Google’s Gmail, Microsoft’s Hotmail, Yahoo, India’s Rediffmail and so-on.

Many people (like myself) have dozens of personal email addresses. Some current, some legacy, some active, some dormant, some used for friends and family, some used for buying things online, some used for subscribing to newsgroups, some totally forgotten. So, yes, it’s not really a 1:1 ratio; it’s 1:many, but here “many” means many addresses, not many people.

For techies, you can safely assume that a personal email address is like a personal cellular phone number. All my personal email addresses are for me. My wife has her own personal email addresses. There are no shared addresses.

I would venture that most of you receiving this blog also view email addresses and email messages as personal, not shared. Yet I’m continually astonished at how many families share one personal email address, just like they share one home phone number. Jack and Jill Smith might use a single shared address, and that’s the sole non-business address they have.

This seems to be a generational divide. My parents share one email address. I see a lot of shared personal email addresses in the directories of several non-profit organizations, and even in my son’s school parent directory. The divide seems to be somewhere north of 50 years old:

• Over 70 years old, the spouses most likely share one address. It’s probably from AOL or their Internet service provider.

• Under 50 years old, it’s almost certainly not a shared address, and it’s probably not from AOL or the ISP. But I know many exceptions.

• Between 50 and 70 years old, it could go either way, but it’s more likely to be personal than shared, and is more likely to be from the ISP.

Why is this relevant? Many of us assume that a non-business address is a one-person email address, and therefore is suitable for receiving confidential information. That can be a false assumption — even with couples under 50 years old.

Also, what happens if both spouses try to create accounts on a website that uses an email address as the registration key? Registration systems increasingly rely upon the email address as a unique identifier associated with one, and only one, person.
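The failure mode is easy to see in code. Here’s a minimal sketch of a registration system keyed on email address; the class, field names and error type are hypothetical, invented purely for illustration, but the one-account-per-address logic is the pattern most sign-up systems follow.

```python
class RegistrationError(Exception):
    """Raised when an email address is already claimed by an account."""
    pass


class UserRegistry:
    """Toy registration store that uses the email address as the unique key."""

    def __init__(self):
        self._accounts = {}  # normalized email -> display name

    def register(self, email, name):
        # Addresses are compared case-insensitively, as most sites do.
        key = email.strip().lower()
        if key in self._accounts:
            raise RegistrationError(f"{email} is already registered")
        self._accounts[key] = name


registry = UserRegistry()
registry.register("smiths@example.com", "Jack Smith")  # first spouse: fine
try:
    # Second spouse, same shared family address: rejected.
    registry.register("Smiths@example.com", "Jill Smith")
except RegistrationError:
    print("second registration rejected")
```

With a shared address, the second spouse can never create an account of their own; their only options are to share one login or to get a second address, which is exactly the 1:1 assumption the site’s designers baked in.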

Z Trek Copyright (c) Alan Zeichick