Today (Friday, Oct. 20) is the last day for discounted registrations to the Software Test & Performance Conference — tomorrow, the full conference passport price increases by $200. We’ve got a great conference for development and test/QA professionals, Nov. 7-9 at the Hyatt Regency Cambridge. There are lots of timely classes, including new tracks on test management, performance management and security. There’s a keynote from Rex Black. Plus, the second annual Testers Choice Awards. Don’t miss it.
We had a retirement party on Wed. night for Lindsey Vereen, now Editor Emeritus of Software Test & Performance. I’ve worked with Lindsey off-and-on since 1991, when we were both at Miller Freeman. Lindsey (pictured) edited publications like Design Automation and Embedded Systems Programming, and also chaired the Embedded Systems Conference. He came over to BZ Media at the end of 2004 to run ST&P and STPCon. Now he and his wife Jan have “better things to do.”
Taking over the helm of ST&P is Edward J. Correia, moving up from Executive Editor of SD Times. Eddie was part of the launch team of SD Times in 2000, and he’s got great things in store for ST&P. Congratulations to both of them.
After Red Hat purchased JBoss and created an integrated offering with Linux and a commercial-grade open-source Java EE app server, it was only a matter of time before Novell did the same. Novell’s answer, launched this week out of a partnership with IBM, has the singularly uncatchy name “Integrated Stack for SUSE Linux Enterprise.”
The Integrated Stack combines IBM hardware (such as blade servers or standard x86 boxes) with SUSE Linux, WebSphere App Server Community Edition, DB2 Express-C, and the Centeris Likewise management suite (which I’m not very familiar with). The system is sold by both Novell and IBM.
Red Hat, by contrast, offers several versions of its Red Hat Application Stack, with Red Hat Enterprise Linux, JBoss App Server, JBoss Hibernate, MySQL and PostgreSQL, and the Apache Web Server. No management tools.
The Novell/IBM pricing starts lower than Red Hat’s, at $349/year for the software. It’s unclear how much support Novell/IBM provide at the entry-level price point. Red Hat charges from $1,999 to $8,499, depending on the number of CPUs in the server, and the desired level of support services.
Sun has a comparable open-source stack, which can be run on Linux or Solaris. The Solaris Enterprise System, which consists mainly of Sun’s own software plus the PostgreSQL database, is a strong offering that doesn’t get as much exposure as it deserves. That’s largely because the tightly controlled Solaris operating system isn’t as popular with the “Anyone But Microsoft” crowd as Linux. Watch for the new Sun Java Composite Application Platform Suite, coming soon.
I’m very excited about a new Web seminar that SD Times is doing with the Eclipse Foundation. Called “Anatomy of an Eclipse RCP Application,” it’s a public walk-through of an Eclipse Rich Client Platform app. The best way that I learn a platform is to look at code, and Wayne Beaton, the Eclipse evangelist, has some cool stuff in mind, which I can’t wait to see.
When: October 26, 8:30am Pacific, 11:30am Eastern time. See ya there.
>> Update: This Web seminar has been postponed for a week. It’ll be on Thursday, Nov. 2, 8:30am Pacific, 11:30am Eastern time.
My column in today’s edition of SD Times News on Thursday discusses two new subsets of the Rational Unified Process — OpenUP, which is implemented in the Eclipse Process Framework, and EssUP, developed by Ivar Jacobson for use with Visual Studio Team System.
I’m anticipating that some people will ask, “Why didn’t you mention the Enterprise Unified Process?” That’s not an oversight. Scott Ambler has done a tremendous job with the EUP, but its goals are different.
OpenUP and EssUP were designed as simplifications of the heavyweight RUP: functional subsets that retained the principles of the Unified Process framework, but which were streamlined for agile development.
By contrast, Scott’s EUP is an extension to the RUP, bringing the production and retirement phases of the software development lifecycle into this well-defined process. If anything, the EUP makes the RUP even less agile — but leaves it more complete, and better suited for serious requirements-driven development.
Sunday’s earthquake in Hawaii sounded horrific. Fortunately, Larry O’Brien, Kona resident, SD Times columnist and Ultimate Frisbee player extraordinaire, was unharmed by the falling tchotchkes. Things can be replaced, but Larry, Tina and Cheyenne are priceless.
Doesn’t “Falling Tchotchkes” sound like a great name for an alternative rock-jazz fusion-klezmer band?
Evans Data, citing poor attendee registration numbers, has cancelled its first-ever Development Products Conference. The conference, scheduled to be held in San Jose this Thu. and Fri., was billed as “If your job involves planning new technology products for developers to use, or positioning and marketing those products, this is the ONE conference you can’t afford to miss.”
This is unfortunate: I’d been looking forward to attending, and BZ Media was a sponsor of the event. Ted Bahr, president of BZ Media, was a scheduled speaker on the program, along with folks like BEA’s Bill Roth, TIBCO’s Ram Menon, IBM’s Bernie Spang, Programmer’s Paradise’s Jeff Largiader, Telelogic’s Brian James and Red Hat’s Bryan Che.
You won’t find any mention of the DPC on the Evans site; the company has scrubbed it away.
According to demographers, the population of the United States reached 300 million today. (That’s a fuzzy number, plus or minus a few weeks or even a month, but as William Frey, a Brookings Institution analyst, said last night on NPR’s All Things Considered, you might as well pick a date, since we’ll never know for sure.)
The U.S. population reached 100 million in 1915, 200 million in 1967, and 250 million in 1990.
The U.S. Census Bureau reports, with tremendous precision, that at 12:00 GMT today, the U.S. population was 300,000,073, and the world population was 6,550,965,951. Every minute, it estimates, 252 people are born and 105 die.
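Those per-minute figures invite a quick back-of-the-envelope check. A minimal sketch in Python (ignoring migration, which the birth and death rates alone don’t capture):

```python
# Net natural increase implied by the Census Bureau's per-minute world rates.
births_per_min = 252
deaths_per_min = 105

net_per_min = births_per_min - deaths_per_min   # 147 more people every minute
net_per_day = net_per_min * 60 * 24             # 211,680 per day
net_per_year = net_per_day * 365                # about 77 million per year

print(net_per_min, net_per_day, net_per_year)
```

At that pace, the world adds roughly a U.S.-sized population every four years.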
Last spring, my good friend Andrew Binstock and I agreed upon a simple wager: Would we be able to purchase a terabyte hard drive, in a 3.5-inch form factor, by the end of 2006? At that time, 500GB drives were readily available at places like Best Buy and CompUSA. I believe that the 750GB drives were just coming out as well.
Forget Moore’s Law: The pace of innovation in hard drives is incredible. As I write this, there still aren’t 1TB drives commercially available, but the price of 500GB and 750GB 3.5-inch drives is falling fast. You can also get a 160GB laptop drive (in a 2.5-inch form factor) for under $200, which is equally amazing.
Twenty years ago I wouldn’t have predicted this; the future, remember, belonged to optical media. When the CD-ROM was introduced in the mid-1980s, a typical hard drive had somewhere around 30MB of storage capacity. An ISO-9660 CD-ROM, by contrast, held 650MB, about 20x the capacity of the hard disk.
Imagine if 12cm compact disc technology had kept pace with magnetic media: we’d have 15TB discs! But optical didn’t keep up, and capacity has grown very slowly. The first move, in the mid-1990s, was to the 4.7GB DVD, and then the 8.5GB dual-layer DVD. Now, finally, there are 20-50GB optical discs in the Blu-ray spec coming out in late 2006.
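The arithmetic behind those ratios is easy to check. A quick sketch, using decimal megabytes and letting the 750GB drive stand in for the top of today’s magnetic range:

```python
# Rough arithmetic behind the "20x" and "15TB" figures (decimal units).
hdd_1985_mb = 30          # typical hard drive when the CD-ROM debuted
cdrom_mb = 650            # ISO-9660 CD-ROM capacity
hdd_2006_mb = 750_000     # a current 750GB drive

ratio_1985 = cdrom_mb / hdd_1985_mb            # ~21.7, i.e. "about 20x"
magnetic_growth = hdd_2006_mb / hdd_1985_mb    # 25,000x in two decades
hypothetical_optical_tb = cdrom_mb * magnetic_growth / 1_000_000

print(ratio_1985, magnetic_growth, hypothetical_optical_tb)
# roughly 16TB, the same ballpark as the 15TB figure above
```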
Given the pace of change, the future clearly belongs to magnetic hard disks.
Apple is normally very good at branding. But what’s with their new iPod, called the iPod nano (PRODUCT)RED? (Initially, I thought the name was an HTML coding error on the Apple Web site.)
I can’t argue with the largesse behind the product. The 4GB unit costs the same US$199 as Apple’s other 4GB iPod nanos, but:
Choose the iPod nano (PRODUCT) RED Special Edition and Apple will give $10 of its purchase price to the Global Fund to fight AIDS in Africa.
That’s a worthy cause. But when it came to the naming, particularly the use of the word “product,” this may be a case where the marketers thought too different.
>> Update 10/15: Visiting the Global Fund’s Web site, I learned that there are a variety of goods and services from different companies branded (PRODUCT)RED as part of this awareness campaign. I remain unimpressed by the branding, though it’s hard to blame Apple in this case.
The OSDL has released Portland 1.0, its set of common interfaces for GNOME and KDE. Because Portland will be found in many Linux distros, such as Debian, Fedora and SUSE, it could help solve some of the forking problems that we’re seeing on the desktop. Let’s hope that the Linux community embraces Portland, and that other distros climb on board — because that, in turn, will encourage third-party app development.
It’s the trickle-down effect: In August, my wife overflowed her 20GB iPod with Click Wheel, and took my 60GB iPod (pre-video). Somehow, in that process, I ended up with a black 80GB iPod with Video.
I’m not complaining!
The new iPod holds a ton of music — most of my library. (I don’t bother exporting all my iTunes playlists, because I know there’s music that I’ll never listen to while traveling or in my car.) But there’s still a lot of room for movies and files.
Putting movies or TV shows on an iPod is a new experience. The Mac and iTunes don’t support ripping a DVD directly. However, I found an open source package, called Instant Handbrake, that will extract a DVD into the right MPEG-4 format for copying into iTunes and then playing on the iPod. For this trip, I copied over the new complete collection of Firefly, which I’m enjoying greatly.
(I have found that the Instant Handbrake’s H.264 encoder makes files that are smaller than those made with the MPEG-4 encoder. However, the H.264 files don’t always work on the iPod. So, better to stick with MPEG-4.)
With files, I have configured iTunes to allow the iPod to be mounted as a volume. Because there’s no intrinsic security on the iPod itself, I used Mac OS X’s Disk Utility to create a 20GB sparse disk image on the iPod disk volume — and encrypted it with AES-128. This lets me keep some of my data files, which normally don’t fit on my traveling PowerBook, on the iPod instead. After mounting the iPod as a read/write volume, I can then open the encrypted disk image, which also mounts as a read/write volume. Nice.
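For the curious, the same setup can be scripted with hdiutil, the command-line tool behind Disk Utility. This is a sketch only; the size, volume name and paths are illustrative, not the ones I actually used:

```shell
# Create a 20GB AES-128-encrypted sparse image on the iPod's disk volume.
# (hdiutil prompts for a passphrase when creating and when attaching.)
hdiutil create -size 20g -type SPARSE -fs HFS+ \
    -encryption AES-128 -volname "Private" \
    /Volumes/iPod/private.sparseimage

# Later, with the iPod mounted, open the encrypted image as a volume:
hdiutil attach /Volumes/iPod/private.sparseimage
```

Because the image is sparse, it only consumes as much space on the iPod as the files you actually store in it.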
I spend a lot of time on airplanes — not as much as many of my colleagues, but it’s plenty. My default carrier is United Airlines, which has a hub in San Francisco, and which also has an e-mail flight notification system, called EasyUpdate.
(A digression: At SFO, United promotes the service by boasting, “Only United Offers EasyUpdate.” Well, EasyUpdate is a trademark of United Airlines, so of course only United has a service with that name. Is that a lame slogan or what?)
EasyUpdate sends two e-mail messages to you, your loved ones and your administrative assistant about each flight segment:
* Before takeoff, it confirms the projected departure time and gate
* Before landing, it confirms the projected arrival time and gate
But, interestingly, the EasyUpdate system is not updated if you change flights, and therefore, information transmitted by the EasyUpdate service can be obsolete.
To wit: Sometimes I arrive at the airport early, and squeeze onto an earlier flight. More rarely, sometimes my flight is cancelled, and the airline places me on an alternative flight.
EasyUpdate never knows about any of this.
To use a real-world example, today I was scheduled for flight #931 to LAX, but arrived at SFO early enough to hop onto flight #1171 instead. Before that, I had already received the first EasyUpdate e-mail, saying that flight #931 was scheduled to depart on time from SFO’s gate 84.
However, EasyUpdate never knew that I changed flights. So, half an hour after my flight #1171 had landed, EasyUpdate dutifully sent me and my loved ones (alas, I have no administrative assistant) a message that flight #931 was scheduled to land on-time at LAX’s gate 69A:
** UNITED AIRLINES ARRIVAL MESSAGE **
The following flight is scheduled for arrival:
Flight Number: 931
Departing From: San Francisco California (SFO)
Traveling To: Los Angeles (LAX)
Date: October 10
Gate: 69A (Gate information is subject to change)
Estimated Arrival Time: 8:12 p.m.
Flight times are subject to change. Please check the flight information monitors at the airport.
Thank you for choosing United!
No message was ever sent about flight #1171. That’s pretty worthless for a purported travelers’ real-time information system. From an altitude of 30,000 feet, though, it sounds like an easy problem to fix. Web services, SOA, real-time database orchestration, all that sort of thing.
Let’s see how long it takes.
Ray Noorda, best known for founding Novell, inventing the local-area network industry and then losing the LAN industry to Microsoft, died today.
I only met Noorda a few times, in the last years of his tenure with Novell, and never had much personal interaction with him (the meetings were all at industry-related events), so I don’t have much sense of the man. It always struck me as a shame that he had the vision to create the LAN industry, but that his product (NetWare) lost to a product that was considerably inferior in performance and stability (Windows NT Server). However, he missed three factors which made NetWare’s demise inevitable.
First, NetWare was a fantastically complex product that required extensive administrative training. In most cases, NetWare needed to be installed and maintained by a trained Novell reseller. As LANs became more popular, small and medium-sized companies didn’t want to deal with a Novell Authorized Reseller, many of whom did a lousy job and overcharged for their services. Instead, customers wanted to set up and manage their LANs themselves, or use less-expensive consultants. Novell didn’t make that possible until it was too late. Windows NT, on the other hand, was slow and unstable, but easy to manage. It was also less expensive, and simpler to license, than the channel-centric NetWare.
Second, it was really hard for customers or ISVs to extend NetWare through NetWare Loadable Modules. It seemed that Novell did everything possible to discourage the development of new applications using NLMs. Microsoft, on the other hand, embraced all developers and welcomed third-party applications running on top of Windows NT Server. Those developers became Microsoft’s biggest evangelists.
Third, Novell stuck with IPX/SPX for far too long. When it finally adopted TCP/IP, it did a lousy job. Novell didn’t take TCP/IP seriously until 1998, by which point Microsoft had already won the war.
So, while Certified NetWare Engineers loved their NOS, new customers went to Windows instead. Novell never recovered. Noorda’s many attempts to compete against Microsoft in other areas, such as his alternative to the Microsoft Office suite, were not successful. For many years, through the late 1990s, the company exhibited a reactive, deer-in-the-headlights mentality that precluded creativity.
Bill Gates focused on growing the computer industry, and thereby enriching Microsoft. Toward the end, Noorda’s Novell fixated on attacking Microsoft instead of creating new market opportunities.
>> Update: I received an e-mail which stated:
“To criticize Mr. Noorda upon his death is outrageous! Like the political jerks that permeate our society today, your comments are thoughtless, coarse, and disrespectful. This man pioneered an industry for which your paycheck derives from. Your inability, or unwillingness, to write about the “good” Ray Noorda has done says more about you than Mr. Noorda.”
Gen. Powell had the audience in stitches during his keynote address at Dreamforce 2006, the user conference held by software-as-a-service pioneer Salesforce.com today in S.F. The general riffed on a number of themes: he mocked President Bush’s “When I looked in [Russian president] Putin’s eyes, I saw his soul” statement, pined after his government-issued Boeing 757 aircraft, laughed about his BlackBerry, and shared his wife Alma’s advice on working with the government.
The keynote address, which lasted about an hour, meandered. The general shared his views about the benefits of corporate philanthropy, the state of the world, conflicts in Europe, Asia, Africa and the Middle East. He reminisced about his training as a young lieutenant, and his first days at the State Department. He talked about looking into Mr. Putin’s eyes, and seeing the KGB. He talked about the economy, about terrorism, about security, about war, about peace.
I took a lot of notes and photographs during Gen. Powell’s talk (that’s my own picture above), but what struck me most were his comments regarding leadership.
Leaders, he said, don’t merely motivate their followers. They inspire their followers — and inspiration makes them motivate themselves, which is obviously desirable. In fact, he said (repeating advice given to him early in his career), if you’re a good enough leader your followers will follow you if for no reason other than curiosity about where you’re going to take them… with trust that the journey is worthwhile.
Gen. Powell talked about the attributes of a good leader. A leader, he says, has to have vision about what the enterprise is doing, and why it’s doing it. That doesn’t just mean the CEO’s overarching vision. A department head, a business-unit manager, a shop steward, even a team leader within a workgroup, all need vision if they’re going to lead.
But the leader needs more than vision (and the ability to communicate that vision). A leader has to be able to give his followers the tools they need to do the job, as well as the training needed to use those tools. Why? Because it’s the followers who get the job done.
A general might know that the army needs to take a certain hill, or protect a certain city, or set up camp in a certain valley. But it’s the troops who take that hill, protect that city and set up the camp.
A leader also needs to be willing and able to discipline the followers when they need it. Why? It’s important for morale and inspiration. If the good followers see that someone is being a bad follower and nothing is being done, then the good followers will say ‘why should I bother?’ and then it all collapses. “If you’re not doing that, you’re not a good leader,” the general said.
And finally, you need to do the right thing. “Everyone wants to be part of an organization that has high standards,” he said. I have no doubt that any organization that Gen. Powell is involved with will have the highest standards possible.
But would someone give this guy an airplane?
Gosh, it’s tempting. Sun Microsystems offered to send me an evaluation kit of its Sun SPOT hardware platform SDK. SPOT, in this case, stands for Small Programmable Object Technology. It’s a set of small, battery-operated wireless devices with an embedded Java Virtual Machine. (Alex Handy wrote about the kit in the July 15, 2006, issue of SD Times.)
Each device has a 32-bit ARM processor and a wireless radio (based on the 802.15.4 “ZigBee” spec), as well as USB. You can use them for sensor-based data acquisition, using an ad-hoc short-range mesh network. For sensors, there’s a built-in 3-axis accelerometer, a temperature sensor, a light sensor, some LEDs, some switches, and general-purpose analog and digital inputs. Neat.
Priced at $499 for two of the devices, a base station and developer tools, I can imagine this device being a big hit not only with developers, but also with general enthusiasts. Much will depend on the quality of the developer tools and documentation, of course.
I’ve accepted Sun’s kind offer: Although there just aren’t enough hours in the day, it seems, I’ll make the time for checking this out. (Exploring the SDK will make a great father/son project for a rainy Bay Area winter.)
Sun says that the kit will “use standard IDEs, e.g. NetBeans, to create Java code.” Although I do have NetBeans on my Sun Ultra 20 workstation (which I purchased for $1,091.16 after the 2005 JavaOne conference), I prefer working with Eclipse for Java development, and my Mac-centric son prefers Xcode. We’ll see how it works with those alternative IDEs.
Subj: Request to mailing list Ximian-mono-list rejected
Your request to the Ximian-mono-list mailing list
has been rejected by the list moderator. The moderator gave the following reason for rejecting your request:
“[No reason given]”
This is all part of Microsoft’s fight against software piracy. With Windows XP, the amount of discomfort that an illegal software user (or a legal software user who is having problems with Microsoft’s license validation service) suffers is minimal. The company said, however, that it’s going to crank it up a notch with the forthcoming Windows Vista: If Microsoft thinks that your license is invalid, you’re hosed. First, some features of the OS will turn off. But after 30 days, your applications won’t run, you won’t be able to get at your disk files, and your machine will be as good as dead dead dead. Read the company’s Oct. 4, 2006 announcement, disguised as a puff-piece interview:
To quote from that announcement:
Reduced functionality mode in Windows Vista will allow the user to use the browser after the reduced functionality mode has begun. Reduced functionality mode can occur as a result of failed product activation or of that copy being identified as counterfeit or non-genuine. In most cases customers will be able to correct this situation quickly with the options provided. With the tools in place for OEMs, and small to large customers, we expect that most customers should never be affected by having a non-genuine installation.
What if there’s a problem? According to Microsoft’s Cori Hartje, director of the “genuine software initiative,” Windows Vista will solve it for you!
Customers will be able to easily determine the status of their Windows Vista installations. In the System Properties panel of the Windows Vista Control Panel, Windows Vista will display the genuine status of the installed copy of Windows Vista. From there, and from any screen notifying users of a failed validation, a user will be able to obtain more information on why the copy of Windows is not genuine, as well as resources for getting a genuine copy.
In other words, if the System Properties report that the software is not genuine, then it’s not genuine. Period. Those “resources” will be places where you can buy, or rebuy, the software. Judge, jury and executioner, all in software that essentially tells the consumer that if Microsoft’s code says you’re guilty, then you’re guilty.
Microsoft has been working on this for some time. It’s been a disaster with Windows XP, and there’s no reason to think that it’s going to be flawless with Windows Vista. Microsoft’s support forums have been filled with posts from customers whose “validated” Windows XP installations suddenly failed the company’s occasional re-validation tests, due to a software crash, deletion of a key file by disk utilities, or who-knows-what. Read Ed Bott’s excellent series on this on ZDNet, “Busted! What happens when WGA attacks.“
The Hippocratic Oath says that one should do no harm. Yet, by telling its legitimate customers that if the software says they have pirated code, they have no choice but to rebuy it, there’s a presumption of guilt. That’s fundamentally wrong. Microsoft should abandon this project until there’s a way to ensure that there’s a presumption of innocence, not guilt. To behave otherwise is fundamentally unfair, and I believe will be a technological and PR disaster for the firm.
Remember the nightmare with Sony’s CD anti-piracy software? You know, the one that disabled device drivers and opened machines up to backdoor rootkits. This is going to be even worse.
Microsoft’s Hartje concluded, “Software piracy is not a victimless crime.” She’s right: The victims are Microsoft’s customers. Stay away from Windows Vista, until Microsoft rescinds this ill-considered and unfair policy.
On Tuesday, October 3, 2006, the Dow Jones Industrial Average set a new record high – the first since January 14, 2000, more than six and one-half years ago.
In the United States, the Dow (as this 30-stock index is popularly known) is arguably the most widely quoted stock-market index. However, many people, myself included, believe that it’s not the most accurate assessment of the condition of the U.S. economy. Other indices, such as the Nasdaq Composite, survey a much wider array of stocks and use a better algorithm – and haven’t fared as well. The Nasdaq, for example, is at less than half the value it had in January 2000.
If we take a step back, however, it’s not all as doom-and-gloom as the stock prices would indicate. Back in early 2000, we were at the peak of a technology bubble. Prices were unnaturally high in those heady days, with price/earnings ratios that exhibited the irrational exuberance that U.S. Federal Reserve chairman Alan Greenspan warned about in 1996. He was right, and the market came crashing down… to where it belonged.
So, forget about the stock market, forget about the Dow Jones record. Look at the fundamentals: The computer industry has changed, and the software development industry has matured. Software is central to every aspect of corporate life, and thanks to Web portals like Amazon.com, YouTube and MySpace, it’s central to our personal life as well.
Yes, your stock options might still be underwater, and your retirement fund may not be back at its bubble values (mine certainly isn’t). However, take solace in the reality that our industry is growing at a reasonable pace, and a healthier one.
What do Tim Berners-Lee, Kurt Gödel and Alan Turing have in common? I’m not entirely sure. But that’s the title of a newly arrived book, “Thinking on the Web: Berners-Lee, Gödel and Turing,” by Peter Alesso and Craig Smith. I’m intrigued.
The back cover says:
Tim Berners-Lee, Kurt Gödel, and Alan Turing are the pivotal pioneers who opened the door to the Information Revolution, beginning with the introduction of the computer in the 1950s and continuing today with the World Wide Web evolving into a resource with intelligent features and capabilities. Taking the main questions posed by these thinkers—”What is decidable?” by Gödel, “What is machine intelligence?” by Turing, and “What is solvable on the Web?” by Berners-Lee—as jumping-off points, Thinking on the Web offers an incisive guide to just how much “intelligence” can be projected onto the Web.
One of the benefits of being a technology journalist/analyst is that books like this show up, unannounced, courtesy of publishing companies (in this case, Wiley), who hope that I’ll review it. Dozens of titles show up on my doorstep each month; a few get kept, but most are donated to a local junior college library. This one looks interesting; I’ll read it on my next plane trip, and let you know what I think. If you’ve already read it, feel free to beat me to it and post your own comments.
My colleague Larry O’Brien has weighed in regarding Borland’s moves to rename/reposition/rejigger its Core SDP products into a new set of application life cycle suites. Once upon a time, Larry was one of the biggest and most loyal Borland supporters imaginable, but his faith has waned, and waned and waned, and now it has waned some more.
His current blog posting is “Borland Gives Up On Core SDP: I Wonder How Much That Cost ‘Em?”, and it references one of his older SD Times columns from 2004, “Only Nixon Would Go To China.” Both are worth reading.
Borland has a new application life cycle management strategy. The company, which has been undergoing a radical shift since the departure of CEO Dale Fuller last November, is moving away from its role-based Core SDP ALM solution. Instead, the company is releasing a new line of tools which are more function-based, called LQM.
This strategy makes sense. Core SDP, which the company had flogged continuously since March 2005, divided software developers into four different roles: analysts, architects, developers and testers. Different ALM tools within Borland’s product line were assembled into four suites — called Core::Analyst, Core::Architect, and so on. Companies would then license the appropriate suites for their developers, everything would interoperate, and software would be written.
Borland’s role-based approach is far from unique. The two big bananas of the software tools market, IBM and Microsoft, have similar role-based focus within their IBM Rational and Visual Studio Team System solutions.
The problem is that few companies divide out their world that way. Different people play different roles at different times. Not every company defines the roles the same way, or uses the same terminology, or even wants the same subset of tools for developers within those roles. In short, it was a good idea, but not good enough.
Thus Borland’s new strategy, which still sorts the company’s tools into four piles — but by function, not by developer role. So, there will be a suite for quality management, one for IT management and governance, one for requirements definition and management, and the fourth for change management. Developers would select the building blocks that they need. Or that’s the plan.
Note that Borland is still selling the same individual tools, like the SilkCentral test management software, or CaliberRM requirements manager, or the newly acquired Gauntlet test-automation software. But they’re being assembled in a more rational way.
Alex Handy got the scoop on all this in this week’s News on Thursday newsletter, and we’ll have a fuller report on it in the Oct. 15th issue of SD Times.
The ongoing drama at Hewlett-Packard has me rapt with attention. Beyond its involvement with Mercury (which HP is in the process of buying), the corporate-spying scandal doesn’t have much immediate relevance to my own world of software development. However, it is a fascinating tale, and it’s interesting to watch it unfold.
Certainly friends who work at HP are equally focused. Morale at the company is bad, like when Carly Fiorina was laying off people left and right in the wake of the Compaq acquisition. Not good, not good at all.
Within this broad story, there’s room to enjoy what (to me) is a perennial issue: What do you call the company?
Legally, the firm keeps changing its name. It used to be the Hewlett-Packard Co. But nowadays, it’s the Hewlett-Packard Development Company, L.P.
What about shortening the name? There we have lots of options. I use HP, as you can see. That’s also the company’s preferred usage. However, others also use H-P, H.P. or my personal favorite, simply Hewlett.
Nobody, but nobody, in the tech industry calls Hewlett-Packard “Hewlett.” That’s reserved for out-of-touch mainstream journalism, such as the New York Times headline for today’s story about the hunt for the leaker. The Times also uses the abbreviation H.P.
The Wall Street Journal likes the abbreviation H-P, even in headlines, but calls the company Hewlett-Packard Co. on first reference within the story.
Our style at SD Times and other BZ Media publications is to follow a company’s own preferred usage, whenever possible; we only make occasional exceptions, such as when companies have gratuitous punctuation as part of their name. We drop the exclamation point (which in journalism is called a “bang”) from names like Yahoo!, for example, because it’s disruptive when you’re trying to read.
But I still get a kick out of seeing the first ref to HP written as simply “Hewlett.” It always takes a moment to figure out who the NY Times is talking about. C’mon, guys. You can do better than that.
A little knowledge is a dangerous thing, and I’m at that stage of my nascent blogging career. Two friends, upon hearing about my blog, suggested that I add a link to its XML feed to the blog page.
Sounds easy, I thought. The software supports syndication feeds, and there’s a convenient CSS stylesheet. How hard can that be?
It only took a few minutes to find the right spot in the stylesheet and insert the link and descriptive text. Which I spelled wrong. (Did any of you see the site during the half-hour with the glaring error?) Then I found a nice graphic that says RSS and inserted it instead of the descriptive text. Nerdvana.
Only to be told by one of my friends, Larry O’Brien, “Yours is not an RSS feed but the competitive Atom format.” That’s embarrassing. But now we have a working syndication feed and the right graphic. I hope.
My first exposure to microprocessors came through the use of the Zilog Z80 chip. It was hard to do any work with small computers in the late 1970s and NOT use the eight-bit Z80; they were relatively cheap, easy to build circuits with, and simple to program. Many hardware and software engineers, myself included, cut our teeth on Z80 assembly. Early microcomputers, like the Radio Shack TRS-80 and numerous CP/M boxes, used the Z80 before the IBM PC came out and redefined the landscape around Intel’s x86 family.
But Zilog, what have you done for us lately? Not much, given the company’s recent financial woes — millions of dollars of losses every quarter. While the company still sells variations on its newer eight-bit Z8 microprocessor, it also offers other stuff like infrared controllers. Even so, the breadwinner is the Z8, which includes onboard flash memory, perfect for embedded microcontroller applications.
Zilog has lived in an eight-bit world for 30 years. Yes, the company has attempted to break out of the eight-bit box before, such as with the short-lived 16-bit Z280 and 32-bit Z380 processors from the early 90s. But they just didn’t go anywhere.
Flash forward (so to speak) to 2006. One month ago, the company dumped its chairman/CEO, Jim Thorburn, who had been in place for five years, appointing an interim CEO while beginning a search for a permanent replacement. And now it has released a new 16-bit platform, called ZNEO.
ZNEO looks like an interesting chip, and a possible upward migration path from the Z8 microcontroller. It has fast zero-wait-state internal flash memory in a variety of sizes, ranging from 32KB up to 128KB; that’s a lot of space in the embedded market. Plus, it has a math engine that can do 8-, 16- and 32-bit operations.
Clearly, the ZNEO project is going to be critical for Zilog as the firm struggles to survive. It’s a chicken-and-egg situation: Zilog needs new customers and design wins to get its finances in order. But will embedded developers, who perhaps rely upon the company as a provider of tried-and-tested commodity chips, want to base their future products on an unproven platform from a troubled supplier? Time alone will tell, but I have my doubts.
Tomorrow I’m heading up to San Francisco for the second day of the Intel Developer Forum. I’ve received many meeting invitations for IDF, but have been struck by the paucity of news or announcements that would apply to software developers. The bulk of the third-party announcements have focused on storage and wireless networking. The farther Intel gets from its roots in CPUs and developer tools, the less relevant much of its ecosystem becomes, at least for me.
One of today’s announcements, one of the few that I found interesting, involved the use of dual-core processors in embedded applications. Dual-core processing will have a significant impact on the embedded/device development market, which has traditionally deployed single processors with single cores. In a hard real-time environment, using a high-end RTOS from companies like Wind River or Green Hills, or one of the hardened versions of Linux, applications must be both tight and deterministic. How well will that play in a dual-core environment, where you have multiple hardware threads that won’t be synchronized? It should be less of a problem than with dual discrete CPUs, but it’s going to be an issue nonetheless. I look forward to learning more.
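For what it’s worth, on a general-purpose OS one way to claw back some single-core determinism is to pin a timing-critical task to one core. Here’s a minimal sketch, assuming a Linux target (os.sched_setaffinity is Linux-specific); an RTOS would expose core assignment through its own API:

```python
import os

def pin_to_core(core: int) -> set:
    """Restrict the calling process to a single CPU core and return the
    resulting affinity set. On a dual-core part, pinning a hard real-time
    task to one core sidesteps cross-core scheduling jitter."""
    os.sched_setaffinity(0, {core})  # 0 means the calling process
    return os.sched_getaffinity(0)
```

The point is simply that with dual cores, core assignment becomes an explicit part of the design rather than something the scheduler decides for you.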
Something that I don’t particularly want to learn more about is what’s happening with the Itanium processor, which could be fairly characterized as an increasingly niche product. Sure, the vertical scalability of a high-end Itanium 2 processor can be impressive, but the world belongs to the 32-bit and 64-bit x86 processors from Intel and AMD. While RISC will still play a role, particularly with Sun’s SPARC processors, Itanium is destined to remain the poor stepchild, relegated to specific applications, like big honkin’ databases. And there’s nothing that the Itanium Solutions Alliance — a vendor consortium set up by Intel to promote the processor — has done to change my mind about that.
My 9/21/06 “Zeichick’s Take” about automotive security brought several letters-to-the-editor, one of which made an excellent point that applies well in the physical security world, but which in my opinion falls down in cybersecurity.
Steve Brewin wrote,
“Apparently the vast majority of crime is committed by amateurs chancing on an easy opportunity. The simple lock removes the easy opportunity, amateurs will look elsewhere. Professionals play for much higher stakes and while they can easily bypass such simple security mechanisms, the probability of an attack from them is massively less. Most targets are not worth their time. For most, the cost of installing systems capable of thwarting their attacks is disproportionate to the risk.
The insurance assessor explained that while most viewed their offer as a marketing exercise, their statistics told them that the discount they were offering was small compared to what they expected to save in the cost of claims alone.”
That’s very true. Professionals can unlock cars, remove The Club, jimmy house doors open, even break a laptop security cable with a small bolt cutter. So can a determined amateur, who can pick up the right tools, or practice simple techniques. But what about the casual amateur? The kid walking through the parking lot who sees a brand-new iPod sitting in a car? For that kid, a locked door may be sufficient to make him move on.
In other words, if someone is bound and determined to steal YOUR things, locks probably won’t help. But if they’re just looking to steal SOMETHING, they’ll pick the lowest-hanging fruit. Your security system simply ensures that yours isn’t the lowest-hanging fruit.
But that falls down when it comes to cybersecurity, because of the shotgun approach. Even the most casual script kiddies use sophisticated port scans, SQL injection, worms and other automated techniques. Those are the equivalent of trying to break into every car in the parking lot simultaneously. I’m not sure that hoping that someone else’s computer is a lower-hanging target is enough. It’s unfortunate, but our networks, servers, desktops AND applications have to become fortresses. At every layer of the stack, we’re being targeted.
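To make the fortress idea concrete at the application layer: parameterized queries are the standard defense against the SQL-injection part of that shotgun blast. A minimal sketch using Python’s sqlite3 module; the table and data are invented for illustration:

```python
import sqlite3

# Hypothetical users table, standing in for any application database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user(name: str):
    # The ? placeholder makes the driver bind `name` as data, never as
    # SQL text, so input like "alice' OR '1'='1" cannot rewrite the query.
    cur = conn.execute("SELECT role FROM users WHERE name = ?", (name,))
    return cur.fetchall()
```

One layer of one fortress, but it’s the kind of habit that has to run through the whole stack.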
For those of you who don’t follow such things, a magazine or newspaper’s editorial calendar provides insight into some of the feature articles that the publication will cover during the next year. It’s traditional for them to come out in September or October. Edit calendars often also provide information for advertisers regarding the cutoff dates for reserving ad space and for delivering ad materials. Edit calendars are used by writers, advertisers, and corporate communications professionals.
It’s important to note that edit calendars are always subject to change without notice. While we do our best to predict the long-lead stories for our publications, software development is a fast-evolving industry. So, you might want to bookmark the editorial calendars, and check back every few months to see if they’ve changed.
There’s one 2007 edit calendar still to come, for Eclipse Review. We’ll post that in a few weeks.
One of the challenges for any software development project — whether enterprise or for-sale, open source or not — is what to do about all those pesky defects that nobody’s going to fix. Why aren’t they going to be fixed? It might be that they’re not show-stoppers, or that there are other priorities, or there’s no easy fix, or simply that nobody wants to do it.
Every non-trivial software project has bugs that won’t be fixed. Sometimes you know that it’s not going to be fixed, and at other times, everyone has the best of intentions, but it just never gets done.
One of the benefits of most open source projects is transparency. Take Eclipse, which uses a public bugzilla feed to let users and contributors report defects. When defects are reported, often they’re resolved, but sometimes they’re marked RESOLVE LATER. Does that mean that the issue truly will be resolved later, or as some people suppose, is that a polite euphemism for RESOLVE NEVER?
Let’s face reality: Not every bug is going to be fixed. Yes, it would be nice to have less ambiguity, and to know, for certain, that a specific bug is going to be (or not going to be) addressed. But at least with a system like Eclipse’s, you can see whether action is being taken. With non-open-source projects, or OSS projects that operate with less transparency than Eclipse, bug reports go into a black hole.
By contrast, with commercial software, a bug will only be fixed if the software owner sees the business value of fixing it. While I agree that RESOLVE LATER is suboptimal, it’s easy enough to see that a bug that’s been ignored for months or years isn’t going to be addressed. And that’s valuable information.
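That “ignored for months or years” signal is easy to extract automatically from a public bug feed. A sketch, with an invented record layout standing in for whatever the tracker actually exports:

```python
from datetime import date

STALE_AFTER_DAYS = 365  # an arbitrary threshold for "not going to happen"

def stale_later_bugs(bugs, today):
    """Return the IDs of RESOLVE LATER bugs untouched for over a year.
    The dict fields (id, status, last_touched) are hypothetical."""
    return [b["id"] for b in bugs
            if b["status"] == "RESOLVE LATER"
            and (today - b["last_touched"]).days > STALE_AFTER_DAYS]
```

Run that against a project’s feed every quarter and you have, in effect, the RESOLVE NEVER list, whatever the official label says.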
The HP spying investigation is getting stranger by the day. When the company reported that its chairwoman, Patricia Dunn, was going to step down as of January, many of us knew that wouldn’t hold — she had to go, and she had to go now. Only a few days later, after more revelations, she resigned effective immediately on Sept. 22.
But what about the new chairman, CEO Mark Hurd? He’s presided over a remarkable turnaround; HP’s fortunes and reputation have improved tremendously since he took over from the disastrous Carly Fiorina. It would be a significant blow to HP were he to be forced out due to this scandal — but that’s a real possibility, given numerous reports that Hurd was in the loop regarding Dunn’s espionage on board members and journalists.
Indeed, as reported in this Fortune story published that same day, Hurd admits to having known that HP was involved with questionable activities. Isn’t it his job to intervene? It doesn’t look good for Hurd, and it doesn’t look good for HP.