The irony is rich: On the same day that I learned about a new Microsoft marketing initiative to sell its customers client, server and network security software, the company released yet another slew of patches to plug flaws in its products, including Windows Vista.

The new marketing initiative is called “Easy, Easier,” and is designed to push Microsoft’s Forefront security products — products, that is, that customers need to protect themselves against flaws in Microsoft’s operating systems and applications. According to Microsoft’s press release,

“The campaign uses humorous metaphors to illustrate how defending against security threats with Forefront is easier than defending against virtually anything else — including far-fetched threats from aliens, ninjas and zombies.

“The goal of this campaign, created by McCann Worldgroup San Francisco, is to emphasize Microsoft’s competitive differentiation in making security products easier to deploy, implement and manage.

“The Easy, Easier campaign will be appearing in IT print and online media in the U.S. and 28 markets worldwide as well as throughout Microsoft’s digital properties. More information on Microsoft Forefront, the Easy, Easier campaign and related customer stories can be found at http://www.easyeasier.com/.”

The Forefront products officially launch on May 2, at an event in Los Angeles.

What would be truly easy, easier for Microsoft’s customers would be to have more secure operating systems and applications.

• Yesterday was also Patch Tuesday, the monthly event that systems administrators dread. Just think about that: Every month, Microsoft’s customers know to expect a whole bunch of bug fixes. (Every Microsoft customer should sign up to receive advance notification of these patches.) The April 10 patches included five new fixes, four of which Microsoft itself said were critical.

• Microsoft did depart from that monthly schedule to ship an emergency update on April 3. Patch MS07-017 resolves a vulnerability in the handling of animated cursor (.ani) files. For those who didn’t catch it early, Microsoft rolled this one into the April 10 patch group.

• Meanwhile, eWeek reports that Microsoft is investigating public reports of new security flaws in Office. “Reports of several new security holes in Microsoft Office have been made public on known exploit sites. The company did not release specific information about the vulnerabilities, citing potential risk to users,” eWeek reporter Brian Prince writes.

That’s not to say that all the flawed software comes out of Microsoft. Even companies like IBM, Red Hat and Apple issue regular security bulletins and patches, and have security advisory mailing lists. However, there’s a certain hypocrisy in Microsoft charging customers for Forefront, software that exists mainly to help customers overcome flaws in Microsoft’s other products. If Microsoft truly wanted to make security easy, easier, it would create less buggy and less flawed products.

Z Trek Copyright (c) Alan Zeichick

This should be a fascinating, thought-provoking gathering: On April 30, Carnegie Mellon West’s computer science department and UC Berkeley’s Haas School of Business and SSME programs are putting on a conference with a modest goal: to forecast the future of the software industry.

According to the sponsors,

“The consensus is that software, backed by commodity hardware, will shape the future of business. However, predicting and designing its course over the next ten years is a challenge requiring deep understanding and imagination. The build-out of the Internet and mobile technology are giving rise to new service delivery models, while open source and globalization are changing how software is created. What kind of software to produce, how to produce it, and how to deliver its value will be very different tomorrow than it has been in the past.

“The New Software Industry is the brain child of Carnegie Mellon West; The Fisher IT Center at the Haas School of Business, UC Berkeley; and the Services: Science, Management and Engineering program at UC Berkeley. The universities have attracted industry executives, such as Ray Lane, and academics, such as Michael Cusumano, who will lead interactive discussions on the issues and trends that will significantly alter the way technologists do business. Software customers, investors, and developers will gain a framework for the future of the software industry and pointers on where the best opportunities will be found.”

“The New Software Industry” should be a fascinating program; I’m looking forward to attending. If you want to come too, you can register online.

Z Trek Copyright (c) Alan Zeichick

Today, Apple announced that it had sold 100 million iPods. That’s a lot of computers… and yes, that’s what the iPod is. It’s a specialized computer, but it’s a computer nonetheless. So, too, are high-end cell phones and devices like BlackBerries and Palm Treos.

Seeing Apple’s news made me realize that I have no idea how many iPods I’ve purchased. My own family has four, all of which I purchased: an initial 20GB “iPod with click wheel” for me, then a second one for my wife. Then an upgrade to a 30GB “iPod photo,” for me, with the 20GB becoming a hand-me-down. Then, a final upgrade to a black 80GB “iPod with video” for me.

Amazingly, I still haven’t watched a video on it. I got it because my music library wouldn’t fit onto the 30GB model.

But that leaves out the dozens upon dozens of iPods that I’ve purchased as premiums for sales promotions, survey respondents, door prizes, and so-on. Last December, we gave away nearly two dozen iPod shuffles for one particular give-away to business partners. At $79, a shuffle’s the perfect prize.

By far, offering an iPod as a premium or incentive is the most popular thing that I’ve done (even outpulling an equivalent-value Amazon gift certificate in one test that I did). Seemingly, everyone wants one, either for himself/herself, or to give away to a family member.

The number of different iPod models, past and present, is astonishing. See Apple’s site for pictures of ’em all. (Did you know that there was even an iPod Special Edition Harry Potter, with the Hogwarts Crest engraved on the back?)

Z Trek Copyright (c) Alan Zeichick

You’ve gotta love how companies figure ways to make you pay a premium to reduce their cost and increase their profits.

Remember back when Touch-Tone dialing was new, and the phone company charged an extra $1 or more per month to enable tone dialing on your account? Their cost was actually lower than if you kept using a rotary phone, because pulse dialing had to be switched on expensive relays, and tone-based calls could be directed over inexpensive electronic switches. But that didn’t stop Ma Bell and its offspring from charging its customers another few bucks of pure profit… for decades.

Major League Baseball does something similar. I just ordered some tickets for an upcoming San Francisco Giants game. Not only does mlb.com charge you an “order processing and delivery” fee of $4.00, but MLB charges you more if you want to print the tickets yourself on your own printer.

There are five ways to get your tickets:

1. Print tickets at home (which they say is recommended!) — $2.50 surcharge
2. Have them sent out by 3-day FedEx — $15.50 surcharge
3. Have them sent out by overnight FedEx — $19.50 surcharge
4. Have them sent out by regular mail — free
5. Pick them up at Will Call — free

Let me get this straight: Mailing them out using the U.S. Postal Service requires MLB to print the physical tickets, have someone stuff them into an envelope, and pay for postage. But that’s free to me.

But instead MLB recommends that I pay a $2.50 premium to let their servers generate a ticket image, which I’ll print off using my own paper and my own ink… while saving them human handling and postage. (Plus, tickets printed at home make lousy souvenirs.)

Needless to say, I’ll be awaiting my game tickets’ arrival through the regular mail.

Z Trek Copyright (c) Alan Zeichick

On the heels of International Data Group’s decision to discontinue the print edition of InfoWorld, Crain’s BtoB reports that the publishing company will be reducing the paper size of its two tabloid-sized newsweeklies to standard magazine trim.

According to Crain’s BtoB (which covers the business-to-business media industry),

“Given the financial pressure we’re under in mailing these things out, we thought it was time to save the money and take advantage of the new format,” said Matt Sweeney, CEO of Computerworld, in a statement.

From the outside, it’s hard to say if these three big decisions are related. But given that InfoWorld, NetworkWorld and Computerworld are managed independently at IDG, my guess is that this cost-cutting is a broad or top-down directive. Otherwise, this would be a heck of a coincidence.

As far as I know, this leaves IDG without any tabloid-sized publications (that is, around 10×13 inches) in the U.S. market. Standard magazines are trimmed to around 8×10 inches. No word yet how this affects the international editions of Computerworld and NetworkWorld.

While IDG moves away from the tabloid format, please be assured that BZ Media still believes in that trim size.

The 10×13 format that we use for SD Times is popular with readers and advertisers, and also provides our art/production staff with lots of room for creativity. We have no plans to downsize SD Times.

Z Trek Copyright (c) Alan Zeichick

I was surprised this morning to find a press release from the creators of I Love Nacho Cheese, the self-described “worldwide leader in nacho cheese related news and entertainment,” announcing that their Web site was written up in the San Jose Mercury News.

After double-checking to see that it was really April 5 and not April 1, a quick search verified that the Merc did write a story about them.

What a great way to start a day.

Z Trek Copyright (c) Alan Zeichick

It’s amazing how clever those spammers are.

On Oct. 30, 2006, I set up a new mailbox on one of the domains that I manage. It’s a hosted domain held by a major ISP. I just set up the mailbox, but didn’t do anything with it. It’s never sent a message, not even a test message. The address has never been given out. The address was an obscure one; it wasn’t something like info@ or service@.

Today, for the first time, I logged in to the Web-based interface, and found 58 pieces of spam in the inbox.

How do they do it? Brute force? Have spammers hacked into the ISP’s back-end systems? Is there someone on the inside? Absolutely incredible.

Z Trek Copyright (c) Alan Zeichick

Last month, I posted that my new MacBook Pro laptop was running more slowly than my year-old iMac – despite the fact that the MacBook Pro had a 2.33GHz Core 2 Duo processor, and the iMac had a 2.0GHz Core Duo processor.

To make a long story short, the one area where the MacBook Pro was deficient was in its hard drive. The iMac uses a 7200RPM drive with a 300MB/sec interface, 8.9ms seek time and 32-step queue. The MacBook had a 4200RPM drive with a 150MB/sec interface, 12ms seek time and 4-step queue. (For more tech details, see the previous blog entry.)

After the blog posting, someone suggested defragmenting the MacBook Pro’s 200GB hard drive. I hadn’t, in part because the machine was new, in part because Apple says that defragging isn’t necessary, and in part because I hadn’t thought of it. Still, what the heck, it’s worth a shot.

I purchased a license for iDefrag, after doing a bit of research into the various products available. The software reported that the disk wasn’t very fragmented, but did indicate that one file in particular – the virtual Windows hard drive that I use with Parallels – was split into about 600 fragments. Ouch.

After running the “Full Defrag” (a many-hour operation that requires booting from a CD), I can say that in general, the defragmenting operation did not help. The defragged MacBook Pro is still noticeably slower to launch and switch applications than the still-fragmented iMac, especially when there’s more than about 10GB of virtual memory in use.

However, there is a startling difference when starting and stopping the Windows virtual machine under Parallels. This process used to take upwards of two minutes, and now takes about 10 seconds.

Z Trek Copyright (c) Alan Zeichick

We should all celebrate: Today, the U.S. Federal Communications Commission terminated its proceedings regarding the use of cellular phones onboard aircraft during flight.

That’s not to say that they won’t bring the issue up in the future, or that the FCC agreed that cell phone usage within the inescapable confines of a commercial aircraft is simply too obnoxious a practice to be allowed. (See my earlier post, “Please, no cell phones on airplanes.”) Instead, the FCC said that it didn’t know enough about the technical ramifications to make an informed decision.

To quote from order FCC 07-47 (I’ve added bolding for emphasis):

“On December 15, 2004, the Commission adopted a Notice of Proposed Rulemaking (Notice) in the above-captioned docket proposing to replace or relax its ban under Section 22.925 of its rules on the use of 800 MHz cellular handsets on airborne aircraft. The Notice explored several different options for allowing airborne use of wireless devices, including a proposal to allow the airborne use of cell phones. The Commission also noted that the Federal Aviation Administration (FAA) prohibits the use of portable electronic devices (PEDs) on airborne aircraft. Given the lack of technical information in the record upon which we may base a decision, we have determined at this time that this proceeding should be terminated.

“In the Notice, the Commission specifically requested technical comment, emphasizing that the ban on the airborne use of cell phones would not be removed without sufficient information regarding possible technical solutions. The Notice also noted that RTCA, Inc. (RTCA), a Federal Advisory Committee, at the request of the FAA, is currently studying the effect of PEDs on aircraft navigation and safety. Phase I of the study – a short-term technology assessment – was completed in late 2004, and focused on existing PED technologies. Phase 2 – an ongoing, long-term technology assessment – is focused on emerging PED technologies, e.g., ultra-wideband devices or pico cells for telephone use onboard aircraft. RTCA published findings in December 2006, and is expected to issue recommendations regarding airplane design and certification requirements later this year.

“It is apparent that it is premature to decide the issues raised in the Notice. The comments filed in this proceeding provide insufficient technical information that would allow the Commission to assess whether the airborne use of cellular phones may occur without causing harmful interference to terrestrial networks. Similarly, although the report issued by RTCA recommends, inter alia, a process by which aircraft operators and/or manufacturers may assess the risk of interference due to a specific PED technology within an aircraft, it does not provide data that would allow us to evaluate the potential for interference between PED operations onboard airplanes and terrestrial-based wireless systems. Further, because it appears that airlines, manufacturers, and wireless providers are still researching the use of cell phones and other PEDs onboard aircraft, we do not believe that seeking further comment at this juncture will provide us with the necessary technical information in the near term. Accordingly, we conclude that this proceeding should be terminated. We may, however, reconsider this issue in the future if appropriate technical data is available for our review.

“Accordingly, IT IS ORDERED that, pursuant to sections 1, 4(i), 11, 303(r) and (y), 308, 309, and 332 of the Communications Act of 1934, as amended, 47 U.S.C. §§ 151, 154(i), 161, 303(r), (y), 308, 309, and 332, that this proceeding is TERMINATED, effective upon issuance of this Order.”

To which, we can all say, “hallelujah!” and hope that this never ever comes up again.

Z Trek Copyright (c) Alan Zeichick

The über-dotcom’s philosophy states “fast is better than slow,” but their newest beta product, GMail Paper, challenges that belief. Indeed, for some things, ink-on-dead-trees may be better than pixels-on-recycled-phosphor.

The folks at Google Labs have also been busy with their latest networking innovation, Google TiSP (code-named Project Teaspoon), a free in-home wireless broadband service. The “dark porcelain” project is truly a breakthrough product that solves many unpleasant issues of infrastructure plumbing.

Z Trek Copyright (c) Alan Zeichick

One of our industry’s most laconic yet unpredictable writers, retired test engineer I.B. Phoolen, has written three new stories for BZ Media’s SD Times and Software Test & Performance. He covers such diverse subjects as network firewalls, the Indianapolis 500 and Homeland Security.

You can find links to all these stories on I.B.’s blog. You’ll find all of his previously published stories there too.

Z Trek Copyright (c) Alan Zeichick

Team ST&P is celebrating that BZ Media’s Software Test & Performance magazine won three 2007 American Inhouse Design Awards, from the editors of Graphic Design USA.

The winners are:

• The cover of the October 2006 issue — the floating “root causes” plant
• The cover of the November 2006 issue — the carousel horses
• The interior layout for “How To Build Tests,” November 2006 — the cover story

Kudos to LuAnn Palazzo, art director; Lindsey Vereen, editor of the October issue; Eddie Correia, editor of the November issue; and the rest of the editorial and art staff.

The awards will be published in the July 2007 issue of Graphic Design USA.

Z Trek Copyright (c) Alan Zeichick

Bertrand Meyer, the designer of the Eiffel programming language (and founder of Eiffel Software, which sells development tools), has just been recognized by the ACM with its 2006 Software System Award.

The citation reads,

“For designing and developing the Eiffel programming language, method and environment, embodying the Design by Contract approach to software development and other features that facilitate the construction of reliable, extendible and efficient software.”

While I wouldn’t call Eiffel an overwhelming commercial success, the object-oriented language’s influence on software development has been profound. Similarly, Meyer has been at the center of interesting debates, especially around Design by Contract. Last year, the Eiffel language became ISO/IEC Standard 25436:2006.

In addition to his work at Eiffel Software, Meyer is a professor of software engineering at the ETH in Zurich.

I haven’t spoken to Meyer for several years, but have always enjoyed our conversations — perhaps this award will help us find the opportunity to chat again soon.

Z Trek Copyright (c) Alan Zeichick

There will be no Visual FoxPro version 10, according to the VFP team at Microsoft.

Visual FoxPro — which started out, of course, as plain old FoxPro — has been around for more than 20 years; it was created by Fox Software, which Microsoft acquired in 1992. FoxPro came from the era of dBase II and other so-called “xBase” languages, which were extremely popular because they were fast and efficient on inexpensive PCs, and because they were fairly easy to program.

However, at Microsoft VFP has languished, rapidly falling far behind the company’s other databases, SQL Server and Access. So, there’s no surprise that earlier this month, Microsoft said,

“We are announcing today that there will be no VFP 10. VFP9 will continue to be supported according to our existing policy with support through 2015. We will be releasing SP2 for Visual FoxPro 9 this summer as planned, providing fixes and additional support for Windows Vista.”

They also added that additional features under development, which include connectivity to SQL Server and partial integration with .NET, will be released at no charge within the next few months. You can download a Community Technology Preview of these bits, code-named Sedna, today.

Do you still use VFP? Let me know what you think about this — and what your migration plans are (if any).

Z Trek Copyright (c) Alan Zeichick

My Hawaii-based colleague Larry O’Brien is a believer in storyboarding.

On his blog, Larry posted a short review of a Visio-based tool called stpBA Storyboarding, “… which every architect and team lead owes themselves to evaluate. I would say it’s revolutionary, but it’s better than that — it simply makes the way you probably already work vastly more efficient.”

Given that Larry (pictured) is already vastly efficient by any standard, that’s a pretty tall claim. Plus, I’m somewhat skeptical because tools like Visio are impediments to my own creativity. That may be a matter of personal style: I’m much more comfortable writing text than drawing with circles and arrows. Which is why, incidentally, I never became a UML fanatic, and why I’m arguably the worst user-interface designer on the planet. I’m just not a visual person. (If you don’t believe me, ask one of BZ Media’s art directors.)

Even so, one endorsement by Larry is worth a hundred endorsements by lesser beings. If he says that stpBA Storyboarding is worth checking out, then it’s worth checking out.

Z Trek Copyright (c) Alan Zeichick

Characterization testing is one of the most important — but insufficiently discussed — areas of software testing. It’s where you use unit testing to monitor existing code to capture the current functionality of pieces of the application. (The most common use of unit testing, by contrast, is to validate that new code works correctly.)

So, you might wonder if characterization is truly part of the “testing” part of the application life cycle, or if it’s part of maintenance. Good question, and I don’t know. In any case, characterization is useful not only when doing maintenance on legacy code (when you might be trying to figure out exactly what a module does), but also during refactoring of legacy code (when you need to ensure that you didn’t break something).
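If you’ve never seen one, a characterization test looks just like an ordinary unit test; the only difference is that the expected values are recorded observations of what the code does today, not what a spec says it should do. Here’s a minimal JUnit 4 sketch; LegacyPricer and its discountFor method are hypothetical stand-ins for whatever legacy module you’re trying to pin down:

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

// A hypothetical bit of legacy code, standing in for whatever module you're
// trying to understand. In real life this already exists and is undocumented.
class LegacyPricer {
    double discountFor(int items) {
        return items >= 100 ? 0.15 : 0.0; // whatever it happens to do today
    }
}

public class LegacyPricerCharacterizationTest {

    @Test
    public void recordsCurrentDiscountForSmallOrders() {
        // The expected value isn't taken from a spec; it's what the code was
        // observed to return before any refactoring began.
        assertEquals(0.0, new LegacyPricer().discountFor(3), 0.0001);
    }

    @Test
    public void recordsCurrentDiscountForBulkOrders() {
        // If this assertion fails after a refactoring, behavior has changed,
        // whether or not anyone intended it to.
        assertEquals(0.15, new LegacyPricer().discountFor(250), 0.0001);
    }
}
```

The workflow is simple: run the legacy code, observe what it returns, and freeze that observation into the assertion; the test-generation tools mentioned below automate that capture step across whole modules.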

On this topic, my colleague Andrew Binstock wrote a column (“Characterization: Beyond Pure Unit Testing,” SD Times, March 15) where he talked at length about the benefits of characterization during refactoring, and also where it has drawbacks.

In the column, Andrew cited the characterization features in JUnit Factory, an Eclipse plug-in and hosted test-generation service from Agitar Software. I agree that it’s a cool solution (which is currently in beta), but it’s important to note that characterization has been around for a while. Unit-test tools from Parasoft, IBM Rational and other companies support characterization as well (though they don’t necessarily use that word).

The main point that Andrew makes is an important one, and I’ll quote:

“The benefit is that if you’re refactoring legacy code, you can tell if you’ve disrupted it when any of these characterization tests fail. When you think about it, this might be about the only way of recording functionality in a faithful and actionable way. Clearly, deriving UML diagrams or flowcharts of the code is nearly pointless in this regard, because those artifacts cannot automate the process of telling you what you’ve unhinged and what its effects are.”

Want to learn more about characterization testing? Pick up a copy of “Working Effectively with Legacy Code,” by Michael Feathers, which introduces the concept and brings it to life.

Z Trek Copyright (c) Alan Zeichick

In this week’s InfoWorld, Andrew Binstock (a columnist for SD Times, as well as a technology analyst) wrote a powerful head-to-head review of Java integrated development environments.

Andrew looked at Borland/CodeGear’s JBuilder 2007 Enterprise Edition, IBM Rational Application Developer for WebSphere Software 7.0 (what a terrible name) and Sun’s NetBeans 5.5.

I heartily recommend this article for two reasons. First, if you’re shopping for a Java IDE, this is a definitive resource. Second, if you’re curious how real experts evaluate development tools, there’s no finer reviewer than Andrew Binstock.

Andrew and I chatted several times during the evaluation process, and I was continually impressed not only with the depth of his knowledge, but also with his genuine commitment to doing a thorough job on this product evaluation.

Z Trek Copyright (c) Alan Zeichick

Today, IDG’s newsweekly, InfoWorld, confirmed rumors that surfaced last week: It’s moving to an online-only format. As Steve Fox, its Editor-in-Chief, wrote today,

“Yes, the rumors are true. As of today, March 26, 2007, InfoWorld is discontinuing its print component. No more printing on dead trees, no more glossy covers, no more supporting the US Post Office in its rush to get thousands of inky copies on subscribers’ desks by Monday morning (or thereabouts). The issue that many of you will receive in your physical mailbox this week — vol. 29, issue 13 — will be the last one in InfoWorld’s storied 29-year history.”

It’s difficult for me to assess how much impact this will have on InfoWorld’s business, but frankly, I don’t see it as a positive development for its readers.

Steve Fox wrote,

“But let me dispel any other rumors. InfoWorld is not dead. We’re not going anywhere. We are merely embracing a more efficient delivery mechanism — the Web — at InfoWorld.com. You can still get all the news coverage, reviews, analysis, opinion, and commentary that InfoWorld is known for. You’ll just have to access it in a browser (or RSS reader) — something more than a million of you already do every month.”

I flip through most issues of InfoWorld when they appear in my mailbox. Usually, I read one or two stories; sometimes, I read it cover to cover. Will I remember to browse to infoworld.com every week? Doubtful. The Web is great for searching for something specific, not for learning about new stuff you don’t know about yet.

Will the InfoWorld news feeds be distinguished from the myriad other RSS streams in my newsreader? Maybe. Maybe not. Will I keep reading its fantastic columns? The existing ones, yes, but it’ll be hard for new columnists to build awareness. Soon, will I just read eWeek instead? Probably. (I wonder how long before eWeek joins InfoWorld in the digital-only domain.)

Publications come, and publications go: That’s just how the magazine business works. However, the passing of InfoWorld is more bitter for me, because I’ve been writing for it for two full decades.

I started writing for InfoWorld when I worked for IDG in the mid-1980s, and have maintained a presence there ever since. I’m still listed on their masthead as a Senior Contributing Editor. (In fact, at an InfoWorld editorial gathering a few years back, someone was trying to figure out who had been associated with them the longest — and there was considerable surprise when it turned out to be yours truly.)

So, despite what Steve asserts, InfoWorld is dead. While the separate InfoWorld.com Web site is not dead, it’s not the same thing, not the same at all.

Z Trek Copyright (c) Alan Zeichick

My Take this week in SD Times News on Thursday discussed a fascinating presentation from Jonathan Rosenberg (pictured), senior VP for product management at Google. In the column, I made passing reference to Metcalfe’s Law and Moore’s Law.

Since I didn’t describe these two laws, and referred to them in adjacent paragraphs, some readers thought that one reference was a typo. It wasn’t. However, let’s use the opportunity to briefly describe these two laws.

Metcalfe’s Law, as proposed by Ethernet inventor Bob Metcalfe, says that the value of a telecommunications network is proportional to the square of the number of users of the system – that is, to the number of potential connections between the users.

Think about fax machines, or e-mail: The more people who use it, the more useful the system is. The same concept also applies to information sources: The more books you have in a library, or the more Web pages are indexed by a search engine, the more popular it is, the more likely people will want to use it (because they’ll be more likely to find what they want), and the more people will want to add more stuff to it (because it has more users).

Note that some experts agree with the principle of Metcalfe’s Law, but argue that the correct ratio is n log n, not n squared. While that intuitively seems more accurate for very large networks, I don’t have a strong opinion one way or the other.

Moore’s Law, based on observations by Intel co-founder Gordon Moore, is widely quoted as saying that, for a fixed cost, the number of transistors on an integrated circuit doubles every 12 or 18 months. However, Moore himself later clarified that he meant that the number doubles every 24 months.

For my purposes here (and in my Take), the important concept is that technology growth is exponential in many areas of computing technology, including raw CPU power, memory, storage, I/O bandwidth and network bandwidth. Or, to look at it another way, the cost of CPU power, memory, storage, I/O bandwidth and network bandwidth is decreasing at an exponential rate.
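If you want to play with the arithmetic behind both laws, here’s a quick back-of-the-envelope sketch in Java. The user counts and the 24-month doubling period are illustrative assumptions only, not anyone’s real measurements:

```java
// Back-of-the-envelope arithmetic for the two laws. The user counts and the
// 24-month doubling period below are illustrative assumptions only.
public class LawsOfGrowth {

    // Metcalfe's Law: value tracks the number of potential connections,
    // n(n-1)/2, which grows as the square of n.
    static double metcalfeValue(long n) {
        return n * (n - 1) / 2.0;
    }

    // The alternative some researchers argue for: n log n growth.
    static double nLogNValue(long n) {
        return n * Math.log(n);
    }

    // Moore's Law with a 24-month doubling period: how much more capacity
    // (or how much less cost) after a given number of years.
    static double mooreMultiplier(double years) {
        return Math.pow(2.0, years / 2.0);
    }

    public static void main(String[] args) {
        long[] userCounts = {1000L, 1000000L, 100000000L};
        for (long n : userCounts) {
            System.out.printf("n = %,d   n-squared value = %.3g   n log n value = %.3g%n",
                    n, metcalfeValue(n), nLogNValue(n));
        }
        System.out.printf("Capacity multiplier after 10 years of 24-month doublings: %.0fx%n",
                mooreMultiplier(10));
    }
}
```

Run it and you can see how quickly n-squared pulls away from n log n, and how a decade of 24-month doublings works out to a 32-fold increase in capacity (or, roughly, a 32-fold drop in cost).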

When you combine these laws, Google’s business model depends on two predictions being true for many years:

* In a Metcalfe’s Law sort of way, the amount of information that Google stores and delivers will continue to grow – and the more information Google has, the more users it will attract. The more users it has, the more advertisers it will attract. The more money advertisers can make, the more likely they’ll be to put more content there. That will attract more users, and so-on.

* In a Moore’s Law sort of way, the march of technology will make Google’s server farms faster, able to perform more complex processing and able to store more content; it will make end users’ computers more powerful; and it will widen the pipes that deliver content from Google’s server farms to end users.

That, in turn, will enable the processing, storage and delivery of yet more content, making the cycle ever more virtuous.

Z Trek Copyright (c) Alan Zeichick

A friend forwarded a link to this wonderful product demonstration video for the Rockwell Automation Retro Encabulator. The video’s been floating around the Internet for ages, and I’d forgotten how funny it is.

Having just come back from watching several product demonstrations at SD Expo this week, I’d say this fictional product seems more realistic than some genuine app-dev tools on the market today.

Z Trek Copyright (c) Alan Zeichick

Borland has come up with a thought-provoking list of “Top Ten Blunders” that can lead development teams to introduce unexpected defects into their applications. It’s a real-world list, albeit weighted a little too heavily to builds. While it’s obviously essential to build early and often, and to make sure that your builds are good, it’s only one of many steps in the application development life cycle. But then again, this “Top Ten” list was created by Borland’s Gauntlet build-automation product team, so I can see why it’s tilted that way.

Take a look at their list, and tell me what you think: What are the biggest app-dev blunders that you’ve seen?

Z Trek Copyright (c) Alan Zeichick

I posted a brief notice of John Backus’ passing on Tuesday, but two technology journalists have written touching and moving obituaries. I urge you to read them both.

The first is from O’Reilly Media’s Kevin Farnham, who writes on his blog about Backus’ background as an artist and as a creator, not just as a computer scientist.

The other is a phenomenal story from BetaNews’ prolific pundit Scott Fulton, who dives into Backus’ credentials as an intellectual. (I borrowed this photo from Scott’s post.)

Well done, gentlemen — excellent tributes to an extraordinary man.

Z Trek Copyright (c) Alan Zeichick

Back when I was studying compiler design in the late 1970s and early 1980s, the name John Backus was often foremost in my mind. He was one-half of the team that developed the Backus-Naur Form, the notation that we used to define language syntax.

Backus, who passed away last Saturday, was one of the designers of the FORTRAN programming language. The 82-year-old computer scientist, who spent most of his professional life at IBM, won many awards, including the ACM’s A.M. Turing Award in 1977 and the Charles Stark Draper Prize in 1993.

You can read a detailed obituary at the New York Times. The IBM archives (from which I appropriated the photo) talks about the development of FORTRAN under Backus’ guidance in the late 1950s:

Most people, Backus says today, “think FORTRAN’s main contribution was to enable the programmer to write programs in algebraic formulas instead of machine language. But it isn’t. What FORTRAN did primarily was to mechanize the organization of loops.”

It’s a great story about a true computer science pioneer.

Z Trek Copyright (c) Alan Zeichick

My friend Andrew Binstock has posted a brief, yet fascinating, discussion about the potential power savings from using dual-core processors. In “MIPS per Watt: The Progression,” he tests similar Dell workstations using a Kill-a-Watt electricity usage monitor, and shows that dual-core systems using a single AMD Opteron or Intel Pentium D processor draw less juice than a system with two single-core Intel Xeon processors, with minimal performance tradeoff.

Thus, the performance/watt ratio for single-chip dual-core systems is considerably higher than for dual-chip systems.

This benefit doesn’t have broad implications for desktop PCs, since few of them have dual processors; most desktops are single-chip machines.

The big payoff is in the data center. When it comes to low-profile servers, for many IT departments a dual-processor server is the baseline deployment platform. I fall into that trap too, since a dual-processor pizza box is what I generally recommend. However, in many cases, a single dual-core processor may offer all the performance required, and the power savings over a dual-processor server can be significant.

For another thought about dual-core processors, see Andrew’s previous post, “Multicores not as productive as you expected?”

Z Trek Copyright (c) Alan Zeichick

Trees are important assets – not just for forests, but also for cities. In the small San Francisco suburb where I live, the city government is adamant that if you have to cut down a tree, you have to justify it with a good reason (like, the tree is sick and dying), and you have to replace it with another one.

The City of San Francisco, a few minutes to the north, takes its arboreal assets seriously, and this month embarked on an ambitious urban forest mapping project to inventory and map all the trees within the city limits. Two organizations, the city’s Bureau of Urban Forestry and the non-profit Friends of the Urban Forest, are helping the city with this – and they’re using interesting software tools, based on open-source software.

The software that San Francisco is using is called STRATUM, or Street Tree Resource Analysis Tool for Urban Forest Managers – that’s a mouthful. STRATUM was built by the U.S.D.A. Forest Service Center at U.C. Davis. It’s based on MapGuide Open Source, an LGPL-licensed “corporate” open source program started by Autodesk. The company spun the open source project out of a commercial version of the product. The project is run by the Open Source Geospatial Foundation, which Autodesk set up and exercises a great deal of influence over (if not outright control).

San Francisco isn’t the only city to inventory its trees using STRATUM; others include Chicago, Fort Collins, Colo. and Modesto, Calif. However, it’s the first one I heard about, and because it’s local it’s more interesting to write about. San Francisco’s trees, not only in Golden Gate Park and the Presidio, but also in greenways and neighborhoods all around the City, are as much a part of its charm as the Golden Gate Bridge, Fisherman’s Wharf, cable cars and famously crazy crooked streets.

The urban forest map is available to everyone, not just to the San Francisco City government. Anyone can search for trees by species, address, neighborhood, planting date of the tree, and other factors. The Web-based map itself is very visual and interactive, and you can select different overlays that show soil conditions, the location of water sources and parks, who put the tree there (such as different non-profits or the city itself). You can also overlay satellite images or elevation lines. You’re even supposed to be able to use the STRATUM application to communicate back to the City, such as if you find that there’s a problem with a tree, though I couldn’t get that to work.

In my exploration, the application seems a bit buggy, and occasionally becomes unresponsive. Attempts to pan the map by click-and-drag, or to learn about an object by hovering the mouse pointer over it, did not work properly. The overlays also didn’t work consistently. With luck, the bugs will get worked out soon.

Despite those “version 1.0” flaws, it’s a pleasant change to encounter open-source successes that are applications, not infrastructure or software developer tools. Normally, we see the likes of Linux, Eclipse, NetBeans, Apache Tomcat, Hibernate and so-on. It’s good to see examples of how ordinary people can use open source software.

Z Trek Copyright (c) Alan Zeichick

One of BZ Media’s more eccentric contributing writers is I.B. Phoolen, a retired software test/QA engineer with impeccable credentials and very strong opinions. Since 2000, he’s written a few pieces for SD Times and Software Test & Performance.

Now, I.B. has just launched a blog, on which he’s posted some of those articles (and yes, he asked reprint permission). He asked me to help spread the word, and of course, I’m delighted to oblige.

Z Trek Copyright (c) Alan Zeichick

Earlier this week, I blogged about Microsoft’s big patch, the newly released Windows Server 2003 Service Pack 2 — which is not only for all versions of Windows Server 2003, but also for the 64-bit version of Windows XP Professional.

In my column in this week’s SD Times News on Thursday, “Patching Isn’t Just for Sysadmins,” the topic shifts to the role of enterprise software developers in the process of evaluating and deploying patches and service packs to operating systems and infrastructure applications.

I’d like to hear what you think. How does this work at your company?

Z Trek Copyright (c) Alan Zeichick

IP over Avian Carriers. The Y10K bug. Telnet’s RANDOMLY-LOSE Option. The Null Encryption Algorithm. The Etymology of “Foo.” SONET to Sonnet Translation. The Hyper Text Coffee Pot Control Protocol. The Infinite Monkey Protocol Suite.

Network technology experts Peter Salus and Thomas Limoncelli have compiled the best of the Internet Engineering Task Force specs into one volume, “The Complete April Fools’ Day RFCs,” which I’ve just pre-ordered. You’ll probably want to order it too.

(The publication date is listed as April 28. It’s not yet available from Amazon or many other online resellers, but you can get in line now at Barnes & Noble.)

Z Trek Copyright (c) Alan Zeichick

Microsoft and Apple both released service packs yesterday. The Microsoft one is more significant, and applies to nearly all data-center Windows Server users.

Windows Server 2003 Service Pack 2 is for all editions of Windows Server 2003, including Storage Server. It also applies to Windows XP Professional x64 Edition.

There’s a huge list of changes in SP2, many of which have been issued as hotfixes. I counted 61 security patches in SP2, but it’s unclear how many of those are new, and how many were already out as hotfixes.

The contents of SP2 itself range all over the map, and include dozens of changes to the .NET Framework, administration tools, applications compatibility fixes, cluster fixes, COM+, data access components, development tools and processes, drivers, distributed system services (like DNS and LDAP), Exchange services, file system fixes, graphics handling, Internet Information Services, Intellimirror, Internet Explorer, the kernel and hardware abstraction layer, message queuing and middleware, the network stack, Plug ‘n Play, printing, security infrastructure, the command shell, storage, terminal services, the installer engine, Windows Media services, and management instrumentation.

It doesn’t appear that SP2 introduces many new features, as the focus is on bug fixes and resolving compatibility issues. However, there are new functions for data access, distributed systems, file systems, message queuing, and networking. For example, there’s support for WPA2-based WiFi security, and a new XML parser called XmlLite.

As always, it is recommended that you test SP2 before installing it on production servers or otherwise rolling it out, to ensure that there aren’t unwanted side effects from all these patches and fixes. Be sure to read the release notes for caveats; there are quite a few.

By contrast, Apple’s update is pretty minor, though it also includes a lot of security fixes. The company released Mac OS X 10.4.9 yesterday. It offers changes to the company’s .Mac online service, a fix for a Bluetooth wake-up issue, bug fixes for iChat, iCal and iSync, networking and modem fixes, and some fixes for print issues. There’s also a smattering of fixes for third-party applications and a few driver upgrades and fixes. Minor stuff. Rumor is that Mac OS X 10.5 should be out in April, so the 10.4 lineage is clearly in maintenance mode.

Z Trek Copyright (c) Alan Zeichick

A colleague cheerfully pointed out that today, March 14, is Pi Day — that is, it’s 3.14. (This makes more sense in countries where you put the month first; 14.3 for 14 March isn’t very “pi like.”)

Frankly, I hadn’t heard of Pi Day before. I’d heard of Pie Day, but this is obviously different.

If today is Pi Day, when exactly is Pi Time? Since pi is 3.1415926 (which is a good approximation for our purposes), Pi Time should be .15926 of the way through the day, which my trusty HP calculator tells me would be at 3:49:20 in the morning.
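If you’d rather check the arithmetic in code than on an HP calculator, a few throwaway lines of Java do the same conversion:

```java
// Turn the leftover digits of pi (0.15926 of a day, after the 3.14 in the date)
// into a clock time. Should print 3:49:20 a.m.
public class PiTime {
    public static void main(String[] args) {
        double fractionOfDay = 0.15926;
        double totalSeconds = fractionOfDay * 24 * 60 * 60;   // 13,760.064 seconds
        int hours = (int) (totalSeconds / 3600);
        int minutes = (int) ((totalSeconds % 3600) / 60);
        int seconds = (int) Math.round(totalSeconds % 60);
        System.out.printf("Pi Time is %d:%02d:%02d a.m.%n", hours, minutes, seconds);
    }
}
```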

That’s too early for pie, but just right for pi.

Z Trek Copyright (c) Alan Zeichick