Team ST&P is celebrating that BZ Media’s Software Test & Performance magazine won three 2007 American Inhouse Design Awards, from the editors of Graphic Design USA.

The winners are:

• The cover of the October 2006 issue — the floating “root causes” plant
• The cover of the November 2006 issue — the carousel horses
• The interior layout for “How To Build Tests,” November 2006 — the cover story

Kudos to LuAnn Palazzo, art director; Lindsey Vereen, editor of the October issue; Eddie Correia, editor of the November issue; and the rest of the editorial and art staff.

The awards will be published in the July 2007 issue of Graphic Design USA.

Z Trek Copyright (c) Alan Zeichick

Bertrand Meyer, the designer of the Eiffel programming language (and founder of Eiffel Software, which sells development tools) has just been recognized by the ACM with its 2006 Software System Award.

The citation reads,

“For designing and developing the Eiffel programming language, method and environment, embodying the Design by Contract approach to software development and other features that facilitate the construction of reliable, extendible and efficient software.”

While I wouldn’t call Eiffel an overwhelming commercial success, the object-oriented language’s influence on software development has been profound. Similarly, Meyer has been at the center of interesting debates, especially around Design by Contract. Last year, the Eiffel language became ISO/IEC Standard 25436:2006.

In addition to his work at Eiffel Software, Meyer is a professor of software engineering at the ETH in Zurich.

I haven’t spoken to Meyer for several years, but have always enjoyed our conversations — perhaps this award will help us find the opportunity to chat again soon.


There will be no Visual FoxPro version 10, according to the VFP team at Microsoft.

Visual FoxPro — which started out, of course, as plain old FoxPro — has been around for more than 20 years; it was created by Fox Software, which Microsoft acquired in 1992. FoxPro came from the era of dBase II and other so-called “xBase” languages, which were extremely popular because they were fast and efficient on inexpensive PCs, and because they were fairly easy to program.

However, VFP has languished at Microsoft, rapidly falling far behind the company’s other database products, SQL Server and Access. So it’s no surprise that earlier this month, Microsoft said,

“We are announcing today that there will be no VFP 10. VFP9 will continue to be supported according to our existing policy with support through 2015. We will be releasing SP2 for Visual FoxPro 9 this summer as planned, providing fixes and additional support for Windows Vista.”

They also added that additional features under development, which include connectivity to SQL Server and partial integration with .NET, will be released at no charge within the next few months. You can download a Community Technology Preview of these bits, code-named Sedna, today.

Do you still use VFP? Let me know what you think about this — and what your migration plans are (if any).


My Hawaii-based colleague Larry O’Brien is a believer in storyboarding.

On his blog, Larry posted a short review of a Visio-based tool called stpBA Storyboarding, “… which every architect and team lead owes themselves to evaluate. I would say it’s revolutionary, but it’s better than that — it simply makes the way you probably already work vastly more efficient.”

Given that Larry (pictured) is already vastly efficient by any standard, that’s a pretty tall claim. Plus, I’m somewhat skeptical because tools like Visio are impediments to my own creativity. That may be a matter of personal style: I’m much more comfortable writing text than drawing with circles and arrows. Which is why, incidentally, I never became a UML fanatic, and why I’m arguably the worst user-interface designer on the planet. I’m just not a visual person. (If you don’t believe me, ask one of BZ Media’s art directors.)

Even so, one endorsement by Larry is worth a hundred endorsements by lesser beings. If he says that stpBA Storyboarding is worth checking out, then it’s worth checking out.


Characterization testing is one of the most important — but insufficiently discussed — areas of software testing. It’s where you use unit testing to monitor existing code to capture the current functionality of pieces of the application. (The most common use of unit testing, by contrast, is to validate that new code works correctly.)

So, you might wonder if characterization is truly part of the “testing” part of the application life cycle, or if it’s part of maintenance. Good question, and I don’t know. In any case, characterization is useful not only when doing maintenance on legacy code (when you might be trying to figure out exactly what a module does), but also during refactoring of legacy code (when you need to ensure that you didn’t break something).

On this topic, my colleague Andrew Binstock wrote a column (“Characterization: Beyond Pure Unit Testing,” SD Times, March 15) where he talked at length about the benefits of characterization during refactoring, and also where it has drawbacks.

In the column, Andrew cited the characterization features in JUnit Factory, an Eclipse plug-in and hosted test-generation service from Agitar Software. I agree that it’s a cool solution (which is currently in beta), but it’s important to note that characterization has been around for a while. Unit-test tools from Parasoft, IBM Rational and other companies support characterization as well (though they don’t necessarily use that word).

The main point that Andrew makes is an important one, and I’ll quote:

“The benefit is that if you’re refactoring legacy code, you can tell if you’ve disrupted it when any of these characterization tests fail. When you think about it, this might be about the only way of recording functionality in a faithful and actionable way. Clearly, deriving UML diagrams or flowcharts of the code is nearly pointless in this regard, because those artifacts cannot automate the process of telling you what you’ve unhinged and what its effects are.”

Want to learn more about characterization testing? Pick up a copy of “Working Effectively with Legacy Code,” by Michael Feathers, which introduces the concept and brings it to life.
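The mechanics are simple enough to sketch. The Python below uses a hypothetical legacy function (not taken from any of the tools mentioned above); it records what the code currently returns for a few representative inputs, then replays those recordings as assertions, which is exactly the safety net you want while refactoring:

```python
def legacy_discount(price, quantity):
    # Hypothetical legacy code: nobody remembers why these thresholds
    # were chosen, but callers depend on the current behavior.
    if quantity >= 100:
        return price * quantity * 0.85
    if quantity >= 10:
        return price * quantity * 0.95
    return price * quantity

def characterize(fn, cases):
    # Run the existing code on representative inputs and record its
    # current answers. The recorded pairs become the de facto spec.
    return [(args, fn(*args)) for args in cases]

golden = characterize(legacy_discount, [(10.0, 5), (10.0, 10), (10.0, 100)])

# After refactoring legacy_discount, replay the recordings. Any change
# in observable behavior, intended or not, fails immediately.
for args, expected in golden:
    assert legacy_discount(*args) == expected
```

In practice you would generate the input cases much more broadly (automating that step is precisely what the tools above sell) and store the golden results alongside the regular test suite.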


I received this pitch today from Event Management Services, a self-described “publicity firm.” Frankly, it’s too amusing not to share with everyone. This is a verbatim cut-and-paste, with phone number and e-mail addresses removed.

Note that the e-mail pitch itself was a rich HTML file with lots of colors, bolding, italicizing, centered text, larger text, underlining, etc., which I don’t feel like replicating completely.

This is clearly a company that buys mailing lists — they sent it to many addresses at our company, including our info@ and letter@ addresses. The subject line was, “Should You Talk to Women Differently About Your Product?” What do you think about this pitch? -A

* * * * *


Should You Talk to Men and Women Differently About Your Product?

You probably chat with both men and women just about every day, right? But are they hearing you in exactly the same way?

And when it comes to the selling of your products or services, would it pay to speak to them… well… differently?

That’s what marketing experts, like author Martha Barletta, believe. Owing to the way we’re made up, the way we’re raised, men and women can process information very differently. For example…

“Consistent with men’s inclination to simplify and strip away extraneous detail, they believe in starting with the main point and supplying specific detail only if the listener asks for it,” Barletta observed in her bestseller, Marketing to Women.” Conversely…“To women, the details are the good part: what he said, why she answered as she did, and what was the significance of that event. Women want the full story.”

There’s probably more truth in Barletta’s observations than we care to admit. And if your product specifically targets men or women—and you’re out there doing TV or talk radio interviews—it’s a good idea to pay attention to how you talk to them.

Consider, for example…

“Report Talk Versus Rapport Talk”

Along the lines of the above “outline versus detail-rich” way of speaking I mentioned, women place great value, according to Barletta, in personalizing conversation. Men apparently don’t.

“When male and female students in a communications class were asked to bring in an audiotape of a ‘really good conversation,’ one young man brought in a lunch conversation with a fellow classmate that included lots of animated discussion of a project they were working on together. The women students were puzzled because there wasn’t a personal word on the whole tape. You call that a conversation?”

Barletta labeled the way men speak “report talk,” while women use “rapport talk.”

Use This in Your Next Interview

Assuming that’s actually the case, how could you use this in media interviews or even your marketing? Well, if you’re targeting women, you might try telling more stories of how people respond to your product or service or how a person’s life was improved by it. You might also tell your own story, particularly if it was challenging, moving or heartwarming.

Conversely, if you’re targeting men, you might focus on the “nuts and bolts” of your product. How things work, why they work and their future usage—things like that.

And what if you’re speaking to both men and women? Just blend the two approaches. Personalize your information and give out the nuts and bolts in your own particular style.

I’m Marsha Friedman, CEO of Event Management Services, one of the country’s only Publicity and Advertising firms that offer a “media guarantee”. There’s a lot more on this subject of talking to men and women differently that I will share with you in future emails. For now, let me leave you with this: The difference between men and women extends to the way we hear things…and you should be prepared to address that.

If we can help you obtain national media exposure for your products or services, call me or Steve Friedman today. Find out why New York Times bestselling author Earl Mindell said, “Event Management is the best in the business.”


Marsha Friedman, President
Event Management Services

P.S. I mentioned the value of personalizing things for women? Barletta wrote, “To women, personal ties are a good thing—in fact the best thing.” Maybe you could use that tidbit in your next interview, too.

In this week’s InfoWorld, Andrew Binstock (a columnist for SD Times, as well as a technology analyst) wrote a powerful head-to-head review of Java integrated development environments.

Andrew looked at Borland/CodeGear’s JBuilder 2007 Enterprise Edition, IBM Rational Application Developer for WebSphere Software 7.0 (what a terrible name) and Sun’s NetBeans 5.5.

I heartily recommend this article for two reasons. First, if you’re shopping for a Java IDE, this is a definitive resource. Second, if you’re curious how real experts evaluate development tools, there’s no finer reviewer than Andrew Binstock.

Andrew and I chatted several times during the evaluation process, and I was continually impressed not only with the depth of his knowledge, but also with his genuine commitment to doing a thorough job on this product evaluation.


Today, IDG’s newsweekly, InfoWorld, confirmed rumors that surfaced last week: It’s moving to an online-only format. As Steve Fox, its Editor-in-Chief, wrote today,

“Yes, the rumors are true. As of today, March 26, 2007, InfoWorld is discontinuing its print component. No more printing on dead trees, no more glossy covers, no more supporting the US Post Office in its rush to get thousands of inky copies on subscribers’ desks by Monday morning (or thereabouts). The issue that many of you will receive in your physical mailbox this week — vol. 29, issue 13 — will be the last one in InfoWorld’s storied 29-year history.”

It’s difficult for me to assess how much impact this will have on InfoWorld’s business, but frankly, I don’t see it as a positive development for its readers.

Steve Fox wrote,

“But let me dispel any other rumors. InfoWorld is not dead. We’re not going anywhere. We are merely embracing a more efficient delivery mechanism — the Web. You can still get all the news coverage, reviews, analysis, opinion, and commentary that InfoWorld is known for. You’ll just have to access it in a browser (or RSS reader) — something more than a million of you already do every month.”

I flip through most issues of InfoWorld when they appear in my mailbox. Usually, I read one or two stories; sometimes, I read it cover to cover. Will I remember to visit the site every week? Doubtful. The Web is great for searching for something specific, not for learning about new stuff you don’t know about yet.

Will the InfoWorld news feeds be distinguished from the myriad other RSS streams in my newsreader? Maybe. Maybe not. Will I keep reading its fantastic columns? The existing ones, yes, but it’ll be hard for new columnists to build awareness. Soon, will I just read eWeek instead? Probably. (I wonder how long before eWeek joins InfoWorld in the digital-only domain.)

Publications come, and publications go: That’s just how the magazine business works. However, the passing of InfoWorld is more bitter for me, because I’ve been writing for it for two full decades.

I started writing for InfoWorld when I worked for IDG in the mid-1980s, and have maintained a presence there ever since. I’m still listed on their masthead as a Senior Contributing Editor. (In fact, at an InfoWorld editorial gathering a few years back, someone was trying to figure out who had been associated with them the longest — and there was considerable surprise when it turned out to be yours truly.)

So, despite what Steve asserts, InfoWorld is dead. While the separate Web site is not dead, it’s not the same thing, not the same at all.


My Take this week in SD Times News on Thursday discussed a fascinating presentation from Jonathan Rosenberg (pictured), senior VP for product management at Google. In the column, I made passing reference to Metcalfe’s Law and Moore’s Law.

Since I didn’t describe these two laws, and referred to them in adjacent paragraphs, some readers thought that one reference was a typo. It wasn’t. However, let’s use the opportunity to briefly describe these two laws.

Metcalfe’s Law, as proposed by Ethernet inventor Bob Metcalfe, says that the value of a telecommunications network is proportional to the square of the number of users of the system – that is, to the number of potential connections between those users.

Think about fax machines, or e-mail: The more people who use the system, the more useful it is. The same concept also applies to information sources: The more books you have in a library, or the more Web pages a search engine indexes, the more popular it is, the more likely people will be to use it (because they’ll be more likely to find what they want), and the more people will want to add more stuff to it (because it has more users).

Note that some experts agree with the principle of Metcalfe’s Law, but argue that the correct ratio is n log n, not n squared. While that intuitively seems more accurate for very large networks, I don’t have a strong opinion one way or the other.
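The difference between the two formulations is easy to see with a few lines of arithmetic. The numbers below are illustrative only, and “value” here has no particular units:

```python
import math

def potential_connections(n):
    # Each of n users can pair with n - 1 others; divide by two so each
    # pair is counted once. This pair count grows roughly as n squared.
    return n * (n - 1) // 2

def metcalfe_value(n):
    return n * n            # the classic n-squared formulation

def refined_value(n):
    return n * math.log(n)  # the proposed n log n correction

for n in (10, 1_000, 1_000_000):
    print(f"{n:>9}: {potential_connections(n):>15} pairs, "
          f"n^2 = {metcalfe_value(n)}, n log n = {refined_value(n):.0f}")
```

At a million users, the two estimates differ by a factor of roughly 70,000, which is why the choice of formula matters so much for very large networks.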

Moore’s Law, based on observations by Intel co-founder Gordon Moore, is widely quoted as saying that, for a fixed cost, the number of transistors on an integrated circuit doubles every 12 or 18 months. However, Moore himself later clarified that he meant that the number doubles every 24 months.

For my purposes here (and in my Take), the important concept is that technology growth is exponential in many areas of computing technology, including raw CPU power, memory, storage, I/O bandwidth and network bandwidth. Or, to look at it another way, the cost of CPU power, memory, storage, I/O bandwidth and network bandwidth is decreasing at an exponential rate.
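Because the growth is exponential, the disputed doubling period makes an enormous difference over time. A quick sketch (the starting count is arbitrary):

```python
def growth_factor(years, doubling_months):
    # One doubling per period; exponential growth compounds quickly.
    return 2 ** (years * 12 / doubling_months)

# How much denser does a chip get in a decade, under each reading of the law?
for months in (12, 18, 24):
    print(f"doubling every {months} months: "
          f"{growth_factor(10, months):,.0f}x in 10 years")
```

Over ten years, a 12-month doubling period yields a 1,024-fold increase, an 18-month period roughly 100-fold, and Moore’s own 24-month figure just 32-fold. All exponential, but hardly interchangeable.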

When you combine these laws, Google’s business model depends on two predictions being true for many years:

* In a Metcalfe’s Law sort of way, the amount of information that Google stores and delivers will continue to grow – and the more information Google has, the more users it will attract. The more users it has, the more advertisers it will attract. The more money advertisers can make, the more likely they’ll be to put more content there. That will attract more users, and so on.

* In a Moore’s Law sort of way, the march of technology will make Google’s server farms faster, able to perform more complex processing, and able to store more content; it will make end users’ computers more powerful; and it will widen the pipes that deliver content from Google’s server farms to end users.

That, in turn, will enable the processing, storage and delivery of yet more content, making the cycle ever more virtuous.


A friend forwarded a link to this wonderful product demonstration video for the Rockwell Automation Retro Encabulator. The video’s been floating around the Internet for ages, and I’d forgotten how funny it is.

Having just come back from watching several product demonstrations at SD Expo this week, this fictional product seems more realistic than some genuine app-dev tools on the market today.


On March 1, a blog reader responded to the news about the 2006 ACM A.M. Turing Award — which recognized Fran Allen as the first female recipient of this honor — asking a pointed question:

I guess the Lady Admiral who wrote Fortran wasn’t very important… So I won’t bother to even name her. After all, she only worked for the U.S. Government and not a large conglomerate like IBM…

I asked the Association for Computing Machinery if the Turing Award committee had a response to this question. Here’s what they told me this morning.

“Good morning Alan, and thanks for your patience. We appreciate your interest in ACM’s Turing Award, and the issue it raises about women and technology. So let me explain how the process works.

“ACM’s A.M. Turing Award recipient is selected by a committee of prominent computer scientists and engineers. The selection process is confidential, and no single person knows the history of all the deliberations over the years.

“ACM has recognized Grace Hopper with the Grace Murray Hopper Award which originated in 1971. It is presented to the outstanding young computer professional of the year. In addition, ACM is a co-sponsor of the Grace Hopper Celebration of Women in Computing which is now an annual event. It is designed to bring the research and career interests of women in computing to the forefront.

“As the demand for talented computing professionals grows, it is increasingly imperative that women and other underrepresented groups be encouraged to pursue this career path. The recognition provided by ACM’s Turing Award this year has already raised awareness of the achievements of women in the field. We hope this news will motivate girls and women to see the growing opportunities for exciting careers, and to get the recognition they have earned as critical contributors to technology and innovation.”

While I’m delighted that the ACM focuses on the issues of women and technology (which it does in a very prominent way), and that Adm. Hopper was given many other honors, it’s a shame that she was not given their highest honor.

Borland has come up with a thought-provoking list of “Top Ten Blunders” that can lead development teams to introduce unexpected defects into their applications. It’s a real-world list, albeit weighted a little too heavily to builds. While it’s obviously essential to build early and often, and to make sure that your builds are good, it’s only one of many steps in the application development life cycle. But then again, this “Top Ten” list was created by Borland’s Gauntlet build-automation product team, so I can see why it’s tilted that way.

Take a look at their list, and tell me what you think: What are the biggest app-dev blunders that you’ve seen?


I posted a brief notice of John Backus’ passing on Tuesday, but two technology journalists have written touching and moving obituaries. I urge you to read them both.

The first is from O’Reilly Media’s Kevin Farnham, who writes on his blog about Backus’ background as an artist and as a creator, not just as a computer scientist.

The other is a phenomenal story from BetaNews’ prolific pundit Scott Fulton, who dives into Backus’ credentials as an intellectual. (I borrowed this photo from Scott’s post.)

Well done, gentlemen — excellent tributes to an extraordinary man.


Back when I was studying compiler design in the late 1970s and early 1980s, the name John Backus was often foremost in my mind. He was one-half of the team that developed the Backus-Naur Form, the notation that we used to define language syntax.

Backus, who passed away last Saturday, was one of the designers of the FORTRAN programming language. The 82-year-old computer scientist, who spent most of his professional life at IBM, won many awards, including the ACM’s A.M. Turing Award in 1977 and the Charles Stark Draper Prize in 1993.

You can read a detailed obituary at the New York Times. The IBM archives (from which I appropriated the photo) talk about the development of FORTRAN under Backus’ guidance in the late 1950s:

Most people, Backus says today, “think FORTRAN’s main contribution was to enable the programmer to write programs in algebraic formulas instead of machine language. But it isn’t. What FORTRAN did primarily was to mechanize the organization of loops.”

It’s a great story about a true computer science pioneer.


My friend Andrew Binstock has posted a brief, yet fascinating, discussion about the potential power savings of dual-core processors. In “MIPS per Watt: The Progression,” he tests similar Dell workstations using a Kill-a-Watt electricity usage monitor, and shows that dual-core systems using a single AMD Opteron or Intel Pentium D processor draw less juice than a system with two single-core Intel Xeon processors, with minimal performance tradeoff.

Thus, the performance/watt ratio for single-chip dual-core systems is considerably higher than for dual-chip systems.

There aren’t broad implications of this benefit for desktop PCs, since few have dual processors. Most desktops are single-chip machines.

The big payoff is in the data center. When it comes to low-profile servers, for many IT departments a dual-processor server is the baseline deployment platform. I fall into that trap too, since a dual-processor pizza box is what I generally recommend. However, in many cases, a single dual-core processor may offer all the performance required, and the power savings over a dual-processor server can be significant.
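The metric itself is just throughput divided by measured draw. The figures below are invented for illustration, not taken from Andrew’s tests:

```python
def perf_per_watt(benchmark_score, watts):
    # Useful work per unit of power: the ratio Andrew's post is about.
    return benchmark_score / watts

# Hypothetical numbers: a single dual-core chip gives up a little raw
# performance but draws far less power than two single-core chips.
single_socket_dual_core = perf_per_watt(950, 180)
dual_socket_single_core = perf_per_watt(1000, 320)

print(f"{single_socket_dual_core:.2f} vs {dual_socket_single_core:.2f}")
```

With these invented numbers, the dual-core box delivers about 70 percent more work per watt while giving up only 5 percent of raw performance; multiplied across racks of pizza boxes, that’s real money on the power bill.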

For another thought about dual-core processors, see Andrew’s previous post, “Multicores not as productive as you expected?”


Trees are important assets – not just for forests, but also for cities. In the small San Francisco suburb where I live, the city government is adamant that if you have to cut down a tree, you have to justify it with a good reason (like, the tree is sick and dying), and you have to replace it with another one.

The City of San Francisco, a few minutes to the north, takes its arboreal assets seriously, and this month embarked on an ambitious urban forest mapping project to inventory and map all the trees within the city limits. Two organizations, the city’s Bureau of Urban Forestry and the non-profit Friends of the Urban Forest, are helping the city with this – and they’re using interesting software tools, based on open-source software.

The software that San Francisco is using is called STRATUM, or Street Tree Resource Analysis Tool for Urban Forest Managers – that’s a mouthful. STRATUM was built by the U.S.D.A. Forest Service Center at U.C. Davis. It’s based on MapGuide Open Source, an LGPL-licensed “corporate” open source program started by Autodesk. The company spun the open source project out of a commercial version of the product. The project is run by the Open Source Geospatial Foundation, which Autodesk set up and exercises a great deal of influence over (if not outright control).

San Francisco isn’t the only city to inventory its trees using STRATUM; others include Chicago, Fort Collins, Colo. and Modesto, Calif. However, it’s the first one I heard about, and because it’s local it’s more interesting to write about. San Francisco’s trees, not only in Golden Gate Park and the Presidio, but also in greenways and neighborhoods all around the City, are as much a part of its charm as the Golden Gate Bridge, Fisherman’s Wharf, cable cars and famously crazy crooked streets.

The urban forest map is available to everyone, not just to the San Francisco City government. Anyone can search for trees by species, address, neighborhood, planting date of the tree, and other factors. The Web-based map itself is very visual and interactive, and you can select different overlays that show soil conditions, the location of water sources and parks, who put the tree there (such as different non-profits or the city itself). You can also overlay satellite images or elevation lines. You’re even supposed to be able to use the STRATUM application to communicate back to the City, such as if you find that there’s a problem with a tree, though I couldn’t get that to work.

In my exploration, the application seems a bit buggy, and occasionally becomes unresponsive. Attempts to pan the map by clicking and dragging, or to learn about an object by hovering the mouse pointer over it, did not work properly. The overlays also didn’t work consistently. With luck, the bugs will be worked out soon.

Despite those “version 1.0” flaws, it’s a pleasant change to encounter open-source successes that are applications, not infrastructure or software developer tools. Normally, we see the likes of Linux, Eclipse, NetBeans, Apache Tomcat, Hibernate and so on. It’s good to see examples of how ordinary people can use open source software.


One of BZ Media’s more eccentric contributing writers is I.B. Phoolen, a retired software test/QA engineer with impeccable credentials and very strong opinions. Since 2000, he’s written a few pieces for SD Times and Software Test & Performance.

Now, I.B. has just launched a blog, on which he’s posted some of those articles (and yes, he asked reprint permission). He asked me to help spread the word, and of course, I’m delighted to oblige.


Earlier this week, I blogged about Microsoft’s big patch, the newly released Windows Server 2003 Service Pack 2 — which is not only for all versions of Windows Server 2003, but also for the 64-bit version of Windows XP Professional.

In my column in this week’s SD Times News on Thursday, “Patching Isn’t Just for Sysadmins,” the topic shifts to the role of enterprise software developers in the process of evaluating and deploying patches and service packs to operating systems and infrastructure applications.

I’d like to hear what you think. How does this work at your company?


IP over Avian Carriers. The Y10K bug. Telnet’s RANDOMLY-LOSE Option. The Null Encryption Algorithm. The Etymology of “Foo.” SONET to Sonnet Translation. The Hyper Text Coffee Pot Control Protocol. The Infinite Monkey Protocol Suite.

Network technology experts Peter Salus and Thomas Limoncelli have compiled the best of the Internet Engineering Task Force specs into one volume, “The Complete April Fools’ Day RFCs,” which I’ve just pre-ordered. You’ll probably want to order it too.

(The publication date is listed as April 28. It’s not yet available from Amazon or many other online resellers, but you can get in line now at Barnes & Noble.)


Microsoft and Apple both released service packs yesterday. The Microsoft one is more significant, and applies to nearly all data-center Windows Server users.

Windows Server 2003 Service Pack 2 is for all editions of Windows Server 2003, including Storage Server. It also applies to Windows XP Professional x64 Edition.

There’s a huge list of changes in SP2, many of which have been issued as hotfixes. I counted 61 security patches in SP2, but it’s unclear how many of those are new, and how many were already out as hotfixes.

The contents of SP2 itself range all over the map, and include dozens of changes to the .NET Framework, administration tools, applications compatibility fixes, cluster fixes, COM+, data access components, development tools and processes, drivers, distributed system services (like DNS and LDAP), Exchange services, file system fixes, graphics handling, Internet Information Services, Intellimirror, Internet Explorer, the kernel and hardware abstraction layer, message queuing and middleware, the network stack, Plug ‘n Play, printing, security infrastructure, the command shell, storage, terminal services, the installer engine, Windows Media services, and management instrumentation.

It doesn’t appear that SP2 introduces many new features, as the focus is on bug fixes and resolving compatibility issues. However, there are new functions for data access, distributed systems, file systems, message queuing, and networking. For example, there’s support for WPA2-based WiFi security, and a new XML parser called XmlLite.

As always, it is recommended that you test SP2 before installing it on production servers, or otherwise rolling out, to ensure that there aren’t unwanted side effects of all these patches and fixes. Be sure to read the release notes for caveats; there are quite a few.

By contrast, Apple’s update is pretty minor, though it also addresses a lot of security issues. The company released Mac OS X 10.4.9 yesterday. It offers changes to the company’s .Mac online service, a fix for a Bluetooth wake-up issue, bug fixes for iChat, iCal and iSync, networking and modem fixes, and some fixes for printing issues. There’s also a smattering of fixes for third-party applications and a few driver upgrades and fixes. Minor stuff. Rumor is that Mac OS X 10.5 should be out in April, so the 10.4 lineage is clearly in maintenance mode.


A colleague cheerfully pointed out that today, March 14, is Pi Day — that is, it’s 3.14. (This makes more sense in countries where you put the month first; 14.3 for 14 March isn’t very “pi like.”)

Frankly, I hadn’t heard of Pi Day before. I’d heard of Pie Day, but this is obviously different.

If today is Pi Day, when exactly is Pi Time? Since pi is 3.1415926 (which is a good approximation for our purposes), Pi Time should be .15926 of the way through the day, which my trusty HP calculator tells me would be at 3:49:20 in the morning.
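For the skeptical, the calculator’s arithmetic is easy to double-check:

```python
# Pi Day covers the 3.14; treat the remaining digits, .15926, as the
# fraction of the day already elapsed, and convert to h:m:s.
fraction = 0.15926
seconds = round(fraction * 24 * 60 * 60)   # 13,760 seconds into the day
hours, remainder = divmod(seconds, 3600)
minutes, secs = divmod(remainder, 60)
print(f"{hours}:{minutes:02d}:{secs:02d}")  # 3:49:20
```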

That’s too early for pie, but just right for pi.


I was chatting with a colleague here at BZ Media’s New York headquarters office (I’m based near San Francisco, but trek out to NY every few months) about how companies react to negative coverage… and remembered this gem from The SCO Group last summer.

The June 12, 2006, edition of SD Times News on Monday, an e-mail newsletter, carried a “Zeichick’s Take” column entitled “It’s Time for Darl to Go.” I wrote:

In its relentless drive toward ever-bigger losses, The SCO Group passed a milestone with its most recent quarterly earnings: Its losses exceeded the ongoing costs of its famous intellectual-property litigation. That means that even stopping the soap-opera legal proceedings wouldn’t return the company to profitability.

The numbers are grim. For the quarter ending April 30, 2006 (whose numbers were released this week), SCO lost US$4.69 million on sales of $7.13 million. That compares pretty badly with the same quarter in 2005, where SCO lost $1.96 million on sales of $9.25 million.

According to the company’s financial reports, “Legal and other expenses incurred in connection with the Company’s litigation with IBM were $3,762,000 for the second quarter of fiscal year 2006. Because of the unique and unpredictable nature of this litigation, the occurrence and timing of certain expenses is difficult to predict, and will be difficult to predict for the upcoming quarters.”

Further, the company says that the last year it was profitable was in the four quarters ending on Oct. 31, 2003—and that the company’s accumulated deficit has reached $236 million. The company’s piggy bank is getting low, too. SCO reports that cash, cash equivalents and available-for-sale marketable securities now stand at only $18.62 million, down over a million dollars over the past three months.

No matter how you slice it, SCO is crashing and burning.

If you read through the company’s own financial documents, the business is fraught with peril. On June 5, the company released a prospectus for the sale of about 2.1 million shares of common stock by some stockholders (SCO itself wouldn’t see any of the proceeds). The prospectus admits that “We do not have a history of profitable operation,” among other things, and provides a laundry list of reasons why SCO is, basically, a bad bet. A terrible bet.

The question remains: What are the owners of SCO going to do about this? Are they going to let Darl McBride, their president, CEO and chief champion of this ruinous legal fight, destroy the business? Maybe SCO’s lawsuit has legal merit (I don’t know—I’m not a lawyer), but despite the genuine innovations that SCO’s developers have created, the company is a pariah and a disaster.

It’s time for management changes. The current board of directors consists of McBride, R. Duff Thompson, Ralph J. Yarro III, J. Kent Millington, Omar Leeman, Edward E. Iacobucci, Darcy Mott and Daniel W. Campbell. Gentlemen, it’s time to perform your fiduciary responsibilities to your shareholders—and fire Darl McBride.

SCO’s reaction was swift and hilarious. They didn’t contact me or any of the other editors, but their then-head of corporate communications, Blake Stowell, wrote to one of SD Times’ advertising sales representatives:

I know there is supposed to be a very concrete line between the advertising staffs and editorial staffs of publications, but when SD Times puts out an article like this at the very time that we are looking to possibly do something to market to the developers that subscribe to your publication, it seems EXTREMELY counter-intuitive for us to do ANYTHING at all with SD Times. Any campaign we might do with your readership would fall on deaf ears and be a complete waste of our money after an editorial like this one.

After conferring with me, our sales rep responded to Blake,

One of the reasons SD Times is valuable to readers (and the industry) is because of its independent view and editorial, given much of our coverage is on vendors and their products. An editorial opinion in our email newsletter by no means indicates all our readers agree or disagree. I respectfully submit that your advertising message with SD Times would strengthen your position with our readers and the industry. As you know, all companies face adversity as a part of their progress, but having a continued presence and consistent message will endure in the minds of consumers.

Please do not forgo reaching a readership that can make a difference in your business. Again, I understand your position and your hesitation to proceed with an advertising campaign with us. My hope is that I can communicate to you the integrity we have as a news organization, the value and importance of our product and our audience, and most importantly, how we can help you grow as a company.

The follow-up came from another person with SCO. This person is not named because he’s still at SCO; Blake left a few months ago:

My challenge as director of SCO Marketing is that I’m ready to launch a significant campaign and had the SD Times at the top of our list for an integrated campaign. While the previous coverage is appreciated, what is seared in minds here is the latest message from your magazine and it has made it a VERY hard sell for me to convince executive management to let me use SD Times, even though it is probably the best vehicle for me to use. To have a magazine call for the removal of our CEO makes my job a whole lot harder internally and externally. At this point, I’ve been told to look at several other alternatives.

That was the end of the conversation… and you know, I haven’t seen that “significant campaign” appear anywhere. Maybe they really were planning something big, but my call for Darl’s removal scuttled their entire marketing program. Or maybe these were empty words intended to punish SD Times and attempt to ensure that future opinion essays and editorial coverage would be more positive.

Another day in the life of a publishing company… and I still think that SCO should fire Darl McBride.

Z Trek Copyright (c) Alan Zeichick

Want a free book? I’m giving away two different titles:

The first is “Software Security: Building Security In,” by Gary McGraw. Gary covers just about every aspect of software security, from risk management to code reviews, from testing to test-case development.

The other book is “The Software Vulnerability Guide,” by Herbert “Hugh” Thompson and Scott Chase. It’s a really strong book for both Java and .NET developers – very practical, stuff you can put to use right away.

How do you get free books? If you register to attend the 4th Software Security Summit, and use a special code, I’ll send you your choice of these books. This offer is for a “full event pass,” which gets you into everything: the full-day tutorials, the technical classes, the keynotes, everything.

The keynotes are, by the way, by Gary McGraw and Hugh Thompson. You’ll receive your book when you check into the conference. Track Gary and Hugh down, and ask ‘em to autograph your book.

The conference is April 16-17, in San Mateo, Calif. Here are the codes:

If you’d like Gary McGraw’s “Software Security: Building Security In,” register with the special code ALZ1.

If you’d like Hugh Thompson and Scott Chase’s “The Software Vulnerability Guide,” register with the special code ALZ2.

See you at the conference!

Z Trek Copyright (c) Alan Zeichick

As you know, Daylight Saving Time (DST) in the United States was changed by an act of Congress, so that it starts earlier this year. For me personally, it’s been a bigger nuisance than Y2K (which wasn’t a nuisance at all).

Here are three personal anecdotes:

1. On Sunday morning, I used my Garmin StreetPilot c550 to get to a friend’s house. (They live in a maze in Foster City, Calif.) I noted that the time was incorrect on the GPS. Today, on the Garmin Web site, I saw that on March 8 the company posted a firmware patch. So, in addition to the numerous software patches for my Windows and Mac computers, I also need to patch my GPS.

2. On Sunday night, I flew on the red-eye from San Francisco to New York City. All the clocks inside the United Airlines lounge at SFO were wrong. Every few minutes, an announcer came over the public address system to remind everyone that the correct time was one hour later than shown on the clocks.

3. This afternoon, I was scheduled to have a conference call with a company at 4:00 pm Eastern time. I called into their bridge line – and was told that the passcode was invalid. I emailed the guy who set up the conference, and he replied saying that the combination of their Exchange Server group calendar and the Cisco Unified MeetingPlace software that schedules their phone bridges messed up. Although Exchange correctly adjusted the meeting times for DST, the MeetingPlace software didn’t sync with those adjustments. Therefore, Exchange knew the meeting was at 4:00 pm, but MeetingPlace set up the phone bridge to activate at 3:00 pm. The solution: He had to delete all future meetings set up with the MeetingPlace phone bridge and re-schedule them, in order for the system to work correctly.
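The mismatch is easy to reproduce. As a sketch (not the actual MeetingPlace code, obviously), Python’s zoneinfo database knows about the 2007 rule change, so it reports the new UTC offset for mid-March; software still carrying the pre-2007 rules would compute an offset one hour different, which is exactly the Exchange-versus-MeetingPlace discrepancy:

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

# A 4:00 pm Eastern meeting on March 13, 2007 -- two days after the new,
# earlier DST start date (second Sunday in March) took effect.
meeting = datetime(2007, 3, 13, 16, 0, tzinfo=ZoneInfo("America/New_York"))

# Up-to-date rules: DST is already active, so the offset is UTC-4 (EDT).
print(meeting.utcoffset() == timedelta(hours=-4))  # True

# Under the old rules, DST wouldn't start until April, so stale software
# would assume UTC-5 (EST) and set up the phone bridge an hour early.
```

Any system that stores “4:00 pm Eastern” without agreeing on which rule set defines “Eastern” is vulnerable to exactly this one-hour skew.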

Think about all the productivity wasted, and all the hard and soft costs, of this change to Daylight Saving Time. It’s hard to believe that the potential benefits (in theory, reduced energy consumption) are worth the inconvenience and expense that this change has caused.

Please share your DST horror stories, large or small… and whether you see any benefit at all to the DST change, or to DST at all.

Z Trek Copyright (c) Alan Zeichick

The Screen Actors Guild is beefing up the software security embedded inside its pension and health plans, spending half a million dollars to protect its data. The county government in Anne Arundel County, Maryland, was paralyzed for more than a day after an attack last week.

Vulnerabilities were found in Google Desktop, where hackers could exploit cross-site scripting flaws to read end users’ files or even run remote applications. The Web site for the Florida Marlins was hacked before the Super Bowl to include an exploit that could install malicious software on browsers that didn’t have the latest security patches.

That’s all recent news, not ancient history. The more we expose our applications to the Internet, the more vulnerable they are to attack. Because applications are increasingly interconnected, through Web services, trust networks, single sign-on, SOA and mashups, each vulnerable application represents a significant threat to the entire enterprise data center – and our increasingly distributed IT infrastructure.

Network firewalls can’t protect you; many attacks are generated internally, or have an “inside job” agent. Intrusion detection systems can’t protect you; the network traffic is legitimate, it’s what it’s trying to do that’s malicious. Authentication systems can’t protect you; often the attacks come from publicly accessible resources, or from authorized accounts (which might have been compromised). Virtual private networks can’t protect you; they guard the pipe, not the endpoints. The only thing that can protect you is properly written software.

The solution is developer training. Too many software developers simply don’t understand the fundamentals of creating secure applications. They’re so focused on software features, platform compatibility and run-time performance that there’s literally no time left for using the right coding techniques. Similarly, architects often don’t know the security aspects of their designs, and testers are focused on requirements – which rarely spell out the security vulnerabilities that an application must guard against.
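As one small, hypothetical illustration of what “the right coding techniques” means in practice (this example is mine, not drawn from any conference session), consider the difference between splicing user input into a SQL string and using a parameterized query:

```python
# Illustrative example: the classic insecure pattern versus a
# parameterized query, using Python's built-in sqlite3 module.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "nobody' OR '1'='1"  # attacker-controlled string

# Vulnerable: splicing input into SQL lets the attacker rewrite the query.
rows = conn.execute(
    f"SELECT * FROM users WHERE name = '{user_input}'").fetchall()
print(len(rows))  # 1 -- the injected OR clause matched every row

# Safe: a parameterized query treats the input as data, not as SQL.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
print(len(rows))  # 0 -- no user is literally named that
```

The fix costs nothing at runtime; it simply has to be a habit, which is where training comes in.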

For that reason, I invite you, and your architects, developers and testers, to the 4th Software Security Summit – the only technical conference that’s 100 percent focused on helping you write more secure software, and helping you secure the software that you already own. It’s not about networking, it’s not about VPNs, it’s not about firewalls… it’s about software development. (I’m the chairman of this year’s event.)

The two-day conference, held April 16-17 in San Mateo, Calif., has a strong program for everyone on your team. There are solid keynotes from Herbert “Hugh” Thompson and Gary McGraw. There are full-day tutorials on breaking software security, creating enterprise software security standards, and creating a plan for improving your software security.

The technical sessions cover everything from cross-site request forgery to source code analysis, rootkits to SQL injection. There are specific sessions on securing Windows/.NET, Java EE and AJAX applications. New for this year are classes designed for the software development manager, addressing organizational issues that lead to software security problems.

The Software Security Summit is one conference that you and your team can’t afford to miss. There are discounts if you register by Friday, March 16. I look forward to seeing you there.

Z Trek Copyright (c) Alan Zeichick

In my blog comments about the 2006 ACM A.M. Turing Award, won by Frances E. Allen, I wrote, “It’s a shame that it’s taken 40 years to recognize the first woman for the most prestigious award in computing.”

A reader responded sardonically:

I guess the Lady Admiral who wrote Fortran wasn’t very important… So I won’t bother to even name her. After all, she only worked for the U.S. Government and not a large conglomerate like IBM…

Without detracting from Fran Allen’s justly deserved honor, the reader brings up a valid question. Why wasn’t Adm. Grace Hopper recognized as a Turing Award winner? I’ve asked the ACM, which promises that the Turing Award Committee will have a response to me shortly.

My first thought was that the Turing Award isn’t given posthumously, so perhaps there simply wasn’t time to give her the honor. No, that can’t be it. The A.M. Turing Award was first offered in 1966, and Adm. Hopper passed away in January 1992, so there was plenty of time to give her the Turing Award. In fact, the very first Turing Award went to Alan J. Perlis, who died in February 1990.

It can’t be that the ACM didn’t recognize Adm. Hopper’s contribution. Indeed, since 1971 the ACM has offered the Grace Murray Hopper Award, given out to outstanding young computer professionals. (The first recipient of this award was Donald Knuth, who won the Turing Award three years later.)

I’ll be curious what the Turing Award Committee has to say on this subject.

While we wait, I’ll relate one anecdote about Adm. Hopper, who has long been a personal hero. The expression “computer bug,” or “there’s a bug in the system,” famously dates back to Sept. 1947, when Adm. Hopper “debugged” a Mark II Aiken Relay Calculator by removing a moth stuck between two relay points.

Adm. Hopper’s notebook — with the poor moth taped to the page and the comment, “first actual case of bug being found” — is on display at the U.S. Naval Surface Weapons Center in Dahlgren, Va., where I worked in the early 1980s as a DoD contractor. Seeing that moth, and her notes, created a wonderfully tangible connection to the early days of computer science.

Z Trek Copyright (c) Alan Zeichick

I love my new Apple MacBook Pro, but I’ve been frustrated at how slow it seems compared to my first-generation Intel-based 20-inch iMac. On the face of it, the MacBook Pro should blow the iMac out of the water. However, when the machines are running lots of applications, the 15-inch MacBook Pro is a tortoise and the iMac is a hare. Starting and switching apps feels instant on the iMac, but lags on the MacBook Pro.

Why should that be? Let’s compare specs:

The iMac has a 2.0GHz Intel Core Duo
The MacBook Pro has a 2.33GHz Intel Core 2 Duo

(Note: the current iMac models use the Core 2 Duo processor. Mine is over a year old.)

Both machines have 2GB of 667MHz DDR2 RAM, in two 1GB sticks

Both machines have Gigabit Ethernet linked to a GigE switch

The iMac has an ATI Radeon X1600 with 128MB RAM
The MacBook Pro has an ATI Radeon X1600 with 256MB RAM

To make a long story short, what’s killing me is the hard drive. This dawned on me when I started keeping the Apple System Profiler open on my screen. When the machine slowed down, the amount of virtual memory was huge – 8GB, 9GB, 10GB or more. The access speed of the spinning drive was clobbering everything. (As I write this blog entry, the MacBook Pro has 65 processes running with 235 threads, and is using 12.14GB of virtual memory. That’s a lot of disk I/O.)

Hard drive interface:
Both machines use Serial ATA, supporting up to 1.5Gb/sec. In fact, they both use the same Intel ICH7-M AHCI chip.

Hard drive:
The iMac has a 3.5-inch 250GB Western Digital Caviar WD2500JS drive (pictured), running at 7200RPM with a 300MB/sec interface, 8MB buffer, and 32-step native command queue. It has an average seek time of 8.9ms.

The MacBook Pro uses a 2.5-inch 200GB Toshiba MK2035GSS drive, running at 4200RPM with a 150MB/sec interface, 8MB buffer, and a 4-step native command queue. It has an average seek time of 12ms.

Yep. There it is. My beautiful notebook is creamed by a slowly rotating hard drive with a slow interface. This swamps the benefit of the faster, more advanced microprocessor. Grrrr.
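A rough back-of-the-envelope calculation shows how big the gap is. Average random-access time is approximately the average seek time plus half a rotation; plugging in the specs quoted above:

```python
# Rough average random-access time: average seek plus half a rotation.
def avg_access_ms(rpm, seek_ms):
    ms_per_revolution = 60_000 / rpm        # one full revolution, in ms
    return seek_ms + ms_per_revolution / 2  # half a revolution on average

imac = avg_access_ms(rpm=7200, seek_ms=8.9)      # 3.5-inch WD Caviar
macbook = avg_access_ms(rpm=4200, seek_ms=12.0)  # 2.5-inch Toshiba

print(f"iMac drive:        {imac:.1f} ms")     # about 13.1 ms
print(f"MacBook Pro drive: {macbook:.1f} ms")  # about 19.1 ms
```

Per random access, the notebook drive is nearly 50 percent slower, and heavy paging multiplies that penalty across thousands of accesses.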

I’m not mad at Apple: Their online system configurator stated that the 200GB drive was a 4200RPM model. If I’d selected a 160GB drive instead, I would have had 5400RPM. I chose capacity over speed. That may have been a mistake.

In today’s multithreading, multitasking world, disk performance matters! Whether you’re running Windows, Linux, Solaris or Mac OS X, all modern operating systems make extensive use of virtual memory. So, next time you spec your server, your desktop or your laptop, get the fastest freakin’ hard drive available – look at interface, look at rotation speed, look at buffer, look at seek time.

A faster disk is going to make as much difference, in real-world performance, as a faster processor. Probably more.

Z Trek Copyright (c) Alan Zeichick

When was the last time you asked your general contractor if the company had a power-tool standard, and whether it was Makita or DeWalt? Does your auto mechanic use wrenches from Sears or Snap-On?

When I talk to carpenters, electricians, plumbers and other professionals, I’m engaging them to perform a task. I assume that they have the right tools for the job, and that they know how to use them.

It’s not that simple with software development. Yes, our industry has reached a level of maturity. We can safely assume that all the tools are, generally speaking, pretty darned good. You can write great code with Eclipse. You can write great code with NetBeans. You can write great code with Visual Studio. You can write great code with BEA Workshop and CodeGear JBuilder and Apple Xcode and Oracle JDeveloper and the IBM Rational Software Delivery Platform and you-name-it.

But having an IDE is not enough. The fact that you’ve bought a development environment that integrates many functions is not nearly enough.

You don’t expect a drill to help ensure that your house meets safety specifications. But you could, and should, expect your software development tools to play an active role in helping build better business applications.

What matters, truly, are platforms and applications. You don’t engage a plumber to use a welder and pipe-cutter; you engage a plumber to stop a leak or install a new bathtub in your home. Similarly, enterprises don’t hire programmers or engage consultants to use Eclipse or NetBeans or Visual Studio. CIOs and CTOs pay these people to develop applications that advance the business.

That’s not to say that tools aren’t important. Carpenters need saws, plumbers need torches, and programmers need IDEs. Picking the right tool for the job is essential. However, the metric shouldn’t be “does the IDE enable the creation of good code.” They all do — just like both Makita and DeWalt drills can make clean holes, and Sears and Snap-On socket wrenches can all remove spark plugs.

Writing good code and fast code is easy for a professional developer. We’ve done that, even before today’s crop of super-sophisticated IDEs. The true test for tools makers is: How does their tool help your enterprise developers solve business problems? Do they support your vertical market or vertical application? Do they enforce security? Do they nurture best practices? Do they support the full application life cycle, or smoothly interoperate with other ALM solutions?

Of course, we don’t expect our cordless drill to interoperate with our welding torch and with our screwdriver set. That’s where software development and plumbing are just a little bit different. Too many writers stretch this analogy too far. We don’t expect Makita to document New York City building codes, but we should expect development tools and platform makers to worry about Sarbanes-Oxley and buffer overflows.

As I return from EclipseCon this week, it’s clear that enterprise programming isn’t just about banging out code. Not any more. It’s about building business applications. Challenge your tools providers to talk about how their products actively facilitate the creation of business applications. Because you already know that they handle the easy stuff: coding.

Z Trek Copyright (c) Alan Zeichick

I’m not always a fan of The Onion, but they outdid themselves this week with “Apple Unveils New Product-Unveiling Product.”

Even amid fevered speculation, Apple was typically mum before the product’s launch, and Mac rumor websites failed to predict any major details about the new offering, other than the fact that it was going to “change everything” and “be huge.”

It’s almost crazy enough to be credible. Where can I buy one?

Z Trek Copyright (c) Alan Zeichick

Please, please, please, please, please. Don’t allow the use of cell phones on airplanes.

I can sympathize completely with the philosophy espoused by Fortune columnist Stanley Bing in a piece in the Mar. 5 issue called “Called to His Reward.” (Oddly, in their online version, it’s called “Great big cell phones in the sky,” and dated Feb. 23.) This essay describes how unbearable life will be if/when the FAA and FCC allow the use of cell phones during commercial flights.

Last week, I was flying to a meeting in the midwest U.S., routing through Denver. The moment the plane landed, a passenger two rows behind me whipped out her phone, and started making calls. She returned voicemails, she phoned her husband, she burbled baby-talk to a young child, she arranged for ground transportation, she gave instructions to her assistant, she rescheduled a meeting, she talked and talked and talked in a very loud voice.

And talked and talked. It turned out that our plane had landed early, and the gate at Denver wasn’t ready yet. So, we sat on the tarmac for about half an hour, while this well-dressed, obviously successful, young executive talked

and talked

and talked

and talked

and talked

and talked.

Everyone within at least six rows of her could hear every word of every call. Toward the end, many of us were chatting, quite audibly, about her and her calls; my seatmate and I had an active running commentary. The young executive was quite oblivious. We were all mostly bemused by her cluelessness and, frankly, self-centered rudeness.

This was bad enough.

But imagine the day when she — and many others — will talk and talk and talk throughout the entire flight, for hours and hours. And there’s just about nothing that you can do about it. If you can’t imagine it, read Stanley Bing’s article.

For the sake of civilized society, please, FAA and FCC, please don’t allow cell phones to be used during flights.

Z Trek Copyright (c) Alan Zeichick