Where Linux will dominate, and not dominate
At the Linux Foundation Collaboration Summit (which I attended on June 13), Jim Zemlin, executive director of the Linux Foundation, accurately portrayed how the Linux movement has changed. He stated that, from the enterprise perspective at least, the days of having to build awareness for Linux, and for open source in general, are long since over. He’s right. Within most organizations, in my experience, Linux is seen as just as viable an option for servers as Windows and Unix (specifically, Solaris). That’s both good and bad.
We’re past the days of religious wars, and we’re also past the days when Linux was chosen merely because it’s free or because it’s open source. The costs of using and deploying Linux aren’t significantly different from those of deploying Windows or Unix on servers. To be honest, the licensing cost of software is only a small part of the long-term total cost of ownership.
However, Linux itself has challenges, and I was pleased that the Linux Foundation meeting was honest enough to admit them.
For example, because Linux is developed primarily by individuals working on things that they find interesting, Linux lacks the directed evolution of Windows, Unix, Solaris or Mac OS X. Thus, there were many people at the conference talking about the inconsistent state of power management within the Linux kernel and kernel-level device drivers. Everyone acknowledged that it is a problem, but nobody could do anything about it.
Similarly, there are many views as to the best way to prevent the fracturing of commercial Linux distributions around kernel levels, but no agreed-upon way to solve that problem. While the Linux kernel team itself is run as a benevolent dictatorship, most other decisions are left up to the individual commercial distributions, which pointedly do not coordinate with the Linux Foundation or with each other.
Of course, not all the issues facing Linux have to do with process. There’s a lot of dissent within the community regarding licensing. The Free Software Foundation’s GNU General Public License v3 is a hugely polarizing factor, and as the Linux Foundation explains, even if the bulk of the community wished to adopt the new license (which is uncertain), the process of moving code to the GPLv3 would be incredibly time-consuming. It just ain’t gonna happen, folks.
For now, and for the next several years at least, it seems clear that there will be three separate Linux worlds:
• Linux on servers: Hugely successful. Because servers typically run on a fairly limited set of hardware, because most enterprises choose an operating system when they buy server hardware, and because a particular server runs only a small number of applications at one time, Linux’s limitations in terms of device drivers and applications are not a significant factor.
• Linux on mobile devices: Hugely successful. As the representative from Motorola, Christy Watt, said during the Linux Foundation meeting, “We believe that about 60% of our base will be on Linux soon. We have shipped 6 million devices on Linux already.” The recompilable kernel, the ability to create custom drivers, the open-source licensing and the cost factors make Linux an excellent fit for phones, PDAs and other devices.
• Linux on the desktop: Not even close. There have been tremendous strides in this area, but device drivers remain a challenge, particularly for top-end graphics cards. Another challenge is the proliferation of user interfaces. Despite the amazing success of Ubuntu Linux, Linux desktop and notebook PCs will be found mainly in three places: on task-specific desktops (such as cash registers or point-of-sale systems); on machines used by True Believers; and on low-cost desktops, such as those deployed into the third world. For the mainstream home and office market, the world belongs to Windows (and the Mac, as a distant runner-up), and it’s going to stay that way for a long, long time.
“it’s going to stay that way for a long, long time”
Only if you can help it.
This analysis presumes that the human interface will remain in the limbo where MS wants its 1980s Xerox WIMP clone to stay.
It does not realise that there are many other factors affecting computer systems, not the least of which is the desire by the scientific community and MS competitors to push development of the whole architecture past the mid-1980s block imposed by MS and achieve the long-overdue interactive P2P system proposed back then.
Work by IBM, Toshiba, Sony and others on interactive speech, 3D vision, on-the-fly data solutions and grid networking, much of it using the new Cell system, should bear fruit within the next five years.
Where would the clunky, old-fashioned PC desktop, packaged applications and outdated client-server networks be then? In their rightful place: on the scrapheap of progress.
One can see GNU/Linux at the forefront of new developments on many disparate platforms, even while traditional application developers effectively clone their own truer, more productive versions of the WIMP workstation and packaged software.
MS, on the other hand, follows its old strategy of dominating existing markets that others have developed, bundling their features into its monolithic system, with each rendition requiring more power from, yet remaining tied to, the one hardware architecture. Even their innovative consumer-products business partner, Apple, has joined them in trying to marginalise development on new platforms.
The new interface being designed to be as easy to use as a video phone, the new grid P2P network as hack-proof as the PSTN, and its on-the-fly computing, gridded supercomputer will take off as if MS never existed, and all those PCs you reckon will never die will only be used to play old games from a previous technological era.