With Windows 8, one size must fit all

It is too early to praise Windows 8. It’s also too early to pan it. But it’s never too early to have an opinion. Mine is, “The one-size-fits-all UX paradigm doesn’t scale.”

I’m a fan of the mobile Metro user experience – excuse me, the Windows Store app user experience. Since its release with Windows Phone 7, the new user interface paradigm has been outstanding on phones and tablets. Live Tiles represent a genuine breakthrough. Through the original Zune music player software, the Xbox Kinect, and now Live Tiles, Microsoft has demonstrated true creativity that rivals anything from Apple or Google.

The idea behind the Metro, ahem, Windows Store app is, and let me selectively quote from Microsoft’s documentation:

Apps have one window that supports multiple views. Unlike traditional desktop apps, a Windows Store app has a single, chromeless window that fills the entire screen by default, so there are no distractions.

On a phone or a tablet, that is perfect, as the tiny amount of screen real estate lends itself to full-screen apps. Not only that, but given the environment where phone or tablet apps are being run, the user is probably focused on a specific task: I want to check my calendar. I want to send a text message. I want to update Facebook. I want to get driving directions. I want to answer a phone call. I want to play Angry Birds for a few minutes. I want to update my to-do list. I want to read Fifty Shades of Grey with a glass of wine.

This is a different use case than when a worker is sitting in front of a desktop computer for eight hours, or when a laptop is connected to a 27-inch monitor while the college student does her homework.

The Metro, err, Windows Store app design does not lend itself to using the computer as an immersive, multitasking workstation. In my (admittedly limited) experience, it forces a full context switch every time the user moves between tasks, rather than helping the user multitask efficiently.

To use a focus group of one: My environment right now consists of a 13” notebook connected to a 30” display. I have currently open Microsoft Word (in which I’m writing this essay), several browser windows using two separate browsers (Chrome and Firefox), an email client, and several chat windows – and I have switched my mouse over to each of them many times while still writing the column. I’m not swiping from side to side; the windows are all visible, all present, providing me with both information and interrupts. I almost never expand any app to full screen on either display.

One could argue that my windowing style is distracting, and that I would be more productive if the OS encouraged me to focus on a single app or task. Maybe. But when I switched many years ago from a small screen to multiple screens to a very large screen, my productivity increased significantly.

I look forward to spending more time with Windows 8, and to using it on a large touchscreen. Perhaps my view will change. For now, I believe that the new Windows 8 UX may be today’s best for mobile devices being used in a single-mode context – but that it decreases productivity in a multi-app working environment. In other words, it does not scale.

What do you think?

Z Trek Copyright (c) Alan Zeichick

Cross-platform mobile dev, tablets, Windows Phone and BlackBerry

It’s hard to get away from mobile development. Yes, not every organization is building apps for mobile devices. Yes, only a small number of developers within a typical organization are likely focused on mobility. The others are doing stuff like websites, databases, desktop apps, server apps, integration…

That said, mobile development trends are fascinating, not only because many of us use mobile devices ourselves, but because in many businesses the subject keeps coming up. Over and over again.

I’d like to share a few data points from Evans Data Corp., an analyst firm that covers mobile development. Below are some abridged quotes from recent documents from Evans:

The vast majority of mobile developers are hedging their bets in the mobile ecosphere by designing at least some of their apps to target multiple platforms according to a survey of over 400 mobile developers conducted by Evans Data Corp. 

The new survey shows 94% design at least some of their apps to run on multiple platforms, though only 13.5% target all of their apps for multiple platforms.  The largest plurality, 58%, design from 1% to 50% of their apps to run on multiple platforms.

Mobile developers are overwhelmingly embracing the tablet form factor according to Evans Data’s Mobile Development Survey, a worldwide survey of developers who target mobile devices.  Seventy-three percent said they either are currently writing apps for tablets (34.7%) or plan to within six months’ time (38.7%). Only 8% said they had no plans at all to write apps for tablets, with the rest planning to begin sometime after six months.

The independent syndicated survey of over 400 mobile software developers found significantly higher numbers of developers in North America planning to target tablets within the next six months than mobile developers in the APAC or EMEA regions.  Android tablets were cited most frequently as the type of tablet that would be targeted, with Samsung as the preferred Android device type.

In North America 35% of mobile developers are currently targeting tablets, but an additional 46% plan to within six months.  The APAC region is second in adoption with 37% currently targeting tablets and an additional 37% planning to within 6 months.  The EMEA region trails.

Regarding specific platforms: On Thursday, Oct. 18, I visited the Microsoft Store at the Stanford Shopping Center in Palo Alto, Calif. There was a big display of Windows Phone 7.5, featuring the Nokia Lumia 900. It was a sad display; the phones were discounted down to $49.95 if the buyer signed up for a two-year contract with a carrier. (Non-US readers: That’s the common deal for smartphones in the United States.) Why the heck would anyone do that, when the Windows Phone 8 devices, including the superior Nokia Lumia 920, will be out in only a few weeks?

The store manager admitted that they’re not selling many phones.

And what about the BlackBerry? The talk of the town is an article published by the New York Times on Monday, Oct. 15, “The BlackBerry as Black Sheep.” The story is light on data and heavy on anecdote, but it seems fundamentally accurate to me.

The folks at Research in Motion disagree, though. Read the rebuttal by Thorsten Heins, president of RIM.

What do you think of the smartphone and tablet market?

Z Trek Copyright (c) Alan Zeichick

Invent vs. buy: Big companies do both

When you have billions of dollars in your piggy bank, you can go on a big shopping spree and hoover up some decent technology.

According to Berkery Noyes, an investment bank, there were 4,151 mergers and acquisitions in the online/mobile market between 2010 and the first half of 2012 – and the biggest shopper was Google, which had 49 transactions.

Did you know that there’s a Wikipedia page dedicated to the list of mergers and acquisitions by Google? According to the page, which lists 119 transactions from February 2001 through October 2012, the largest was of course Motorola Mobility. This deal happened in August 2011, and cost US$12.5 billion.

That’s a lot of piggy banks. 

Google has a justly deserved reputation as a hotbed of innovation, but the company’s success is due just as much to smart shopping as pure technical prowess. The Motorola deal, of course, gave Google a huge patent portfolio. But look at who else Google has bought lately: The DealMap daily deal service, the Zagat restaurant reviews, the Meebo instant messaging platform, Apture instantaneous search, RightsFlow digital rights management, Wildfire social-media marketing, the Quickoffice productivity suite, Frommer’s travel guides… the incredibly diverse list goes on and on.

Google is not alone in pursuing checkbook innovation (as well as checkbook market share). Every big tech company makes acquisitions, some huge, some tiny. Think about Apple, CA, Facebook, IBM, Microsoft or Oracle.

In some cases, these firms are buying patents. In others, the value is in source code or customer lists to milk for upgrades or migrations. Some of these deals buy out competitors (which are then shut down); some grab companies to settle lawsuits (by buying the opposing party); sometimes it’s a way to recruit some technology talent. Think of, for example, Apple’s buying Steve Jobs’ NeXT Computer in 1996, or Microsoft buying Ray Ozzie’s Groove Networks in 2005.

At these companies, the best and brightest computer scientists refine algorithms, tune source code, conduct basic research and invent the future. Their financial success and market position certainly owe a lot to prowess with an IDE. The firms, though, deserve as much credit for their deal savvy – buying the tech just as much as inventing it.

Z Trek Copyright (c) Alan Zeichick

Secure those passwords!

Stories about hacked or stolen password files keep coming. One of the most recent is a breach at IEEE.org – where 100,000 plaintext passwords were stolen a few weeks ago. The IEEE confirmed it a couple of days ago:

IEEE Statement on Security Incident

25 September 2012 — IEEE has become aware of an incident regarding inadvertent access to unencrypted log files containing user IDs and passwords. We have conducted a thorough investigation and the issue has been addressed and resolved. We are in the process of notifying those who may have been affected.

IEEE takes safeguarding the private information of our members and customers very seriously. We regret the occurrence of this incident and any inconvenience it may have caused.

There are two underlying problems. One we can address. One we can’t.

The problem we need to address is that programmers are sloppy. The application calls for some sort of login with usernames and passwords. So what do programmers do? They store the usernames and passwords as plain text in some sort of lookup table. They store the password lookup table on a volume where it can be accessed over the Internet.

The fixes are simple.

1. No plain-text storage systems – ever! Encrypt. Hash. Rinse. Repeat.

2. Don’t store the lookup table anywhere where it can be accessed remotely.

3. Don’t record passwords in log files.

4. Forget rules 1, 2 and 3. Instead, don’t let your programmers roll their own identity management system. If one needs to be built, make it a separate project and subject it to serious design work, security auditing and penetration testing.

No matter how trivial the “at risk” data, don’t create a lame login system. Ever. If a login/password system is required, take it seriously from a design perspective. It’s an attack surface!
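To make rule 1 concrete, here is a minimal sketch in Python of salted, iterated password hashing with PBKDF2; the record format and iteration count are illustrative choices, not a prescription, and a real project should still follow rule 4 and treat identity management as its own audited component:

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 200_000) -> str:
    """Return a salted, iterated hash -- the only thing that gets stored."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    # Illustrative record format: algorithm, iterations, salt, digest
    return f"pbkdf2_sha256${iterations}${salt.hex()}${digest.hex()}"

def verify_password(password: str, stored: str) -> bool:
    """Recompute the hash from the stored salt and compare in constant time."""
    _algo, iters, salt_hex, digest_hex = stored.split("$")
    candidate = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), bytes.fromhex(salt_hex), int(iters)
    )
    return hmac.compare_digest(candidate.hex(), digest_hex)

record = hash_password("DontGuessMe123")
assert verify_password("DontGuessMe123", record)
assert not verify_password("WrongGuess456", record)
```

The point is that the plain text never touches disk: even if the lookup table leaks into a log file, the attacker gets only salted hashes that must be cracked one slow iteration at a time.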

That brings us to the second problem, the one we can’t address. Humans tend to reuse their passwords. They might have the same username and password on every e-commerce site. You’ve cracked one, you’ve cracked them all. And you know, that same login/password might also be their email access code, their remote network admin login/password, and their corporate portal login/password.

If your system uses an email address as the login, perhaps you’ve made life easier for your end users. You’ve also made it much easier for hackers to target your system, and for them to exploit a stolen login/password list from another site. If a user relies on a password of DontGuessMe123 on one site, he’s probably using it on your site too.

Practically speaking, there’s nothing we can do about password reuse. But we can, we must, make sure that our own identity management systems are secure. If the IEEE can fail, we can too.

Z Trek Copyright (c) Alan Zeichick

Reimagining the taxonomy of computing

Interactive whiteboards! Ambient intelligence! A lot can change in 14 years! That’s the conclusion you have to reach after reading the latest iteration of the Computing Classification System, maintained and published by the Association for Computing Machinery.

The ACM’s CCS has defined the computing field since 1964, and was last updated in 1998. This latest update, completed in March 2012 but unveiled this month, is a complete, standalone list of terms. According to the ACM,

The 2012 ACM Computing Classification System has been developed as a poly-hierarchical ontology that can be utilized in semantic web applications… It relies on a semantic vocabulary as the single source of categories and concepts that reflect the state of the art of the computing discipline and is receptive to structural change as it evolves in the future. 

You can see the entire CCS as a Word document, HTML page or as an XML file.

What’s new in the 2012 classification? Lots, both in terms of organization and in content.

Previously, the CCS was divided into 11 top-level hierarchies: General literature, Hardware, Computer systems organization, Software, Data, Theory of computing, Mathematics of computing, Information systems, Computing methodologies, Computer applications, Computing milieux (my favorite), and Computers and society.

The new 2012 system has 14 top-level hierarchies which better reflect today’s world: General and reference, Hardware, Computer systems organization, Networks, Software and its engineering, Theory of computation, Mathematics of computing, Information systems, Security and privacy, Human-centered computing, Computing methodologies, Applied computing, Social and professional topics, and Proper nouns: People, technologies and companies.

Alas, Computing milieux has been renamed to the clearer, but less romantic, Social and professional topics.

Here’s an entire section that didn’t exist before:

Ubiquitous and mobile computing
.Ubiquitous and mobile computing theory, concepts and paradigms
..Ubiquitous computing
..Mobile computing
..Ambient intelligence
.Ubiquitous and mobile computing systems and tools
.Ubiquitous and mobile devices
..Smartphones
..Interactive whiteboards
..Mobile phones
..Mobile devices
..Portable media players
..Personal digital assistants
..Handheld game consoles
..E-book readers
..Tablet computers
.Ubiquitous and mobile computing design and evaluation methods
.Empirical studies in ubiquitous and mobile computing
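As an illustration only, a miniature of the section above can be modeled as a simple tree. Note the caveat: the real 2012 CCS ships as XML and is poly-hierarchical (a concept may sit under several parents), which this plain nested dict deliberately ignores:

```python
# A hypothetical miniature of the CCS section above, as a nested dict.
ccs = {
    "Ubiquitous and mobile computing": {
        "Ubiquitous and mobile computing theory, concepts and paradigms": {
            "Ubiquitous computing": {},
            "Mobile computing": {},
            "Ambient intelligence": {},
        },
        "Ubiquitous and mobile devices": {
            "Smartphones": {},
            "Tablet computers": {},
        },
    },
}

def paths(tree, prefix=()):
    """Yield every concept as its full path from the root of the hierarchy."""
    for name, children in tree.items():
        yield prefix + (name,)
        yield from paths(children, prefix + (name,))

for p in paths(ccs):
    print(" / ".join(p))
```

Walking the tree this way is how an indexer would turn the taxonomy into the "giant table of contents" described below: every concept becomes a fully qualified path that a paper or article can be filed under.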

Think of the CCS taxonomy as a giant table of contents or index for our industry. When you look through 2012 CCS, you can see just how big computing is – and how fast it is changing.

Z Trek Copyright (c) Alan Zeichick

Learn how to cope with Big Data

The tangible benefits of Big Data analytics are well known. You can read about them in the IT press – and also in business journals and the daily newspaper. Many books have been published about the “why” of Big Data. Conferences devoted to exploring the trends are happening everywhere.

But what about the “how” of Big Data – how to store, search, share and analyze those gigantic data sets? That’s not what you hear, and it’s hard to learn. That’s why I’m excited to chair the new Big Data TechCon, coming to Boston Apr. 8-10, 2013.

Big Data TechCon isn’t another “why” conference. It’s the HOW-TO conference for Big Data. Practical workshops. Technical classes. Thorough examinations of the real-world choices in storage, processing, analysis and reporting of Big Data information. Strategies for rolling out Big Data projects in your organization.

Come to Big Data TechCon to learn HOW-TO accommodate the terabytes and petabytes of data from your Web logs, social media interactions, scientific research, transactions, sensors and financial records. Learn how to index, search and summarize the Big Data. Learn how to empower employees, inform managers, reach out to customers.

Big Data TechCon is technology-agnostic. The workshops and classes apply to Big Data in your data center or in the cloud, from hosted environments to your own servers. The sessions apply to relational databases, NoSQL databases, unstructured data, flat files and data feeds.

The faculty have real-world experience that you can tap into, whether you use Java, C++, .NET or JavaScript; whether you like MySQL, SQL Server, DB2 or Oracle; whether you love or hate Hadoop; and whether you are looking at dozens of terabytes or hundreds of petabytes.

Learn from the smartest, hardest-working faculty in the Big Data universe in a way you never could by reading a book or watching a webinar. Mingle with fellow attendees. Talk shop during meals and receptions. Be inspired by keynotes, be informed by general sessions, be impressed by the hottest Big Data tools in the Expo Hall. It’s all waiting for you.

The Call for Speakers is open for Big Data TechCon through Sept. 26. Stay tuned to learn more in the weeks ahead.

Z Trek Copyright (c) Alan Zeichick

Software quality assurance by the numbers

What do enterprise software developers think about software quality within their organizations? We asked SD Times subscribers, and the results may surprise you.

The research project was conducted in July 2012 by BZ Research (like SD Times, a division of BZ Media). Here’s what we learned:

Does your organization have separate development and test teams?

Some development and test/QA teams are separate, some are integrated 34.6%
All test and development teams are integrated 30.2%
All development teams and test/QA teams are separate 32.7%
Don’t know 2.4%

The net result was that 64.8% of respondents said that some or all of the test and development teams are integrated.

How many testers or test/QA professionals do you have at your company (or the largest company to whom you consult)?

5,000 or more 2.9%
1,000-4,999 3.9%
500-999 2.5%
100-499 5.9%
50-99 7.8%
20-49 11.3%
10-19 9.3%
5-9 15.2%
4 or fewer 41.2%

We found that 34.3% said they have 20 or more testers or QA professionals at their company.
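The aggregate figures quoted in this column are just sums of the published buckets; a quick sketch (bucket labels and values copied from the table above) shows the arithmetic:

```python
# Bucket labels and percentages copied from the survey table above.
testers = {
    "5,000 or more": 2.9, "1,000-4,999": 3.9, "500-999": 2.5,
    "100-499": 5.9, "50-99": 7.8, "20-49": 11.3,
    "10-19": 9.3, "5-9": 15.2, "4 or fewer": 41.2,
}

# Summing the buckets that represent 20 or more testers reproduces
# the 34.3% aggregate quoted above.
over_20 = ["5,000 or more", "1,000-4,999", "500-999",
           "100-499", "50-99", "20-49"]
share = round(sum(testers[bucket] for bucket in over_20), 1)
print(share)  # 34.3
```

The same pattern gives the other aggregates in this article, such as the 64.8% integrated-teams figure (34.6 + 30.2) and the 31.0% outsourcing figure (4.4 + 26.6).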

What background do your test/QA managers and directors typically have?

Both development and test/QA 53.9%
General IT background 38.2%
Test/QA only 23.5%
Development only 21.6%
General management background 21.1%
No particular background – we train them from scratch 14.7%

Who is responsible for internally-developed application performance testing and monitoring in your company? 

Prior to Deployment

Software/Application Developers 60.8%
Software/Application Development Management 52.8%
Testers 50.3%
Testing Management 48.7%
IT top management (development) (VP or above) 36.7%
Systems administrators 24.1%
Networking personnel 21.5%
Line-of-business management 21.1%
IT top management (non-development) (VP or above) 19.6%
Consultants 19.3%
Networking management 18.6%
Service providers 16.1%

After Deployment

Software/Application Development Management 53.8%
Software/Application Developers 47.7%
Systems administrators 45.4%
Testers 41.5%
Testing Management 38.5%
IT top management (development) (VP or above) 34.6%
Networking personnel 31.5%
IT top management (non-development) (VP or above) 30.8%
Line-of-business management 30.8%
Networking management 27.7%
Service providers 23.8%
Consultants 20.8%

Does your company outsource any of its software quality assurance or testing? 

Yes, all of it 4.4%
Yes, some of it 26.6%
No, none of it 65.0%
Don’t know 3.9%

In other words, 31.0% outsource some or all of their software testing.

Is your company developing and testing apps for mobile devices?

No, not developing/testing for mobile application development 42.1%
Yes, mobile software for iPhone/iPad 36.6%
Yes, mobile software for Android devices 33.2%
Yes, mobile software in HTML5 30.2%
Yes, mobile software for Windows Phone 22.8%
Yes, mobile software for Blackberry devices 16.3%
Don’t know 5.4%
Yes, for other devices 3.5%

In other words, 57.9% were developing or testing mobile applications.

At what stage is your company, or companies that you consult, using the cloud for software testing?

We are using the cloud for software testing on a routine basis 7.9%
We are experimenting with using the cloud for software testing 17.3%
We are studying the technology but have not started yet 26.7%
No plans to use the cloud for software testing 39.6%
Don’t know 8.4%

What is the state of software security testing at your company?

Software security is checked by the developers 48.0%
Software security is checked by the test/QA team 35.8%
Software security is checked by the IT/networking department 29.9%
Software security testing is done for Web applications 27.9%
Software security is tested by a separate security team 25.5%
Software security testing is done for public-facing applications 24.5%
Software security testing is done for in-house applications 22.1%
We don’t have a specific security testing process 18.6%
Software security is checked by contractors 12.7%
Software security testing is not our responsibility 3.4%

Those are the results. Do they match what you’ve seen at your company or within the industry?

Z Trek Copyright (c) Alan Zeichick

Riding on the Metro, or the Windows 8 Style UI

I remember searching for the perfect words
I was hoping you might change your mind
I remember a soldier sleeping next to me
Riding on the Metro

The group Berlin wrote the song The Metro in 1983. The lyrics evoke rail trips through London and Paris, walking along the Seine, and of course, a romantic breakup. It’s a great song.

Microsoft used the term Metro to describe the design language and user interface introduced for Windows Phone. Consisting of an array of different-sized tiles in bright primary colors, Metro was reminiscent of the game Tetris, and also of a tic-tac-toe board. The Metro interface is crisp, clean and fresh – and when combined with active content (aka Live Tiles), it brought Windows Phone a user experience that was both attractive and functional.

Microsoft loves Metro. After Windows Phone hit the market with the Metro UX, the design began finding its way into everything from Microsoft marketing (like for the Build 2011 conference and numerous web pages) to the forthcoming Windows 8.

According to Microsoft’s developer tutorial on Metro,

Metro is the name of the new design language created for the Windows Phone 7 interface. When given the chance for a fresh start, the Windows Phone design team drew from many sources of inspiration to determine the guiding principles for the next generation phone interface. Sources included Swiss influenced print and packaging with its emphasis on simplicity, way-finding graphics found in transportation hubs and other Microsoft software such as Zune, Office Labs and games with a strong focus on motion and content over chrome.

Not only has the new design language enabled a unique and immersive experience for users of Windows Phone 7; it has also revitalized third party applications. The standards that have been developed for Metro provide a great baseline, for designers and developers alike. Those standards help them to create successful gesture-driven Windows Phone 7 experiences built for small devices.

Alas, Microsoft doesn’t love the Metro name, not any more. The company is slowly scrubbing the Metro name from both Windows Phone and Windows 8, in favor of the less-colorful phrase “the Windows 8 style UI” for the design language. At press time, the developer tutorial quoted above still referred to “Metro.”

However, yes, you should begin referring to the Windows Phone 7.x user experience as the Windows 8 style UI. Got it?

Why the name change? According to reports, such as this one from the BBC, the German company Metro AG — which describes itself as the world’s fourth-largest retailer — has told Microsoft to cease and desist. Microsoft is ceasing and desisting. (http://www.bbc.com/news/technology-19108952)

No matter what the name, Metro is a powerful language and an excellent metaphor for a mobile device user experience, where icons represent not only actions but also information. The Metro design represents one of the most innovative differentiators of Windows Phone. While I’m less enthusiastic about it on a Windows laptop, Metro remains one of the most creative developments seen out of Redmond in many years.

Riding on the Windows 8 style UI.

Z Trek Copyright (c) Alan Zeichick

Vacuum cooking as a metaphor for agile development

Sous-vide is an interesting way of cooking. It’s not new – according to the Wikipedia, sous-vide (pronounced soo-veed, meaning “under vacuum”) was invented in 1799. Since we’re quoting from the Wikipedia, might as well keep going:

Sous-vide is a method of cooking food sealed in airtight plastic bags in a water bath for a long time—72 hours in some cases—at an accurately determined temperature much lower than normally used for cooking, typically around 60 °C (140 °F). The intention is to cook the item evenly, and to not overcook the outside while still keeping the inside at the same “doneness,” keeping the food juicier.

You don’t need special sous-vide tools or appliances to use this cooking method. You can prepare the water bath using a big soup pot, a gas or electric cooktop and a cooking thermometer. You can use any old vacuum sealer to prepare the ingredients. In fact, you can just use a zipper baggie and squeeze out the air by hand. Getting a perfect vacuum isn’t essential, not if you’re going to prepare and consume the food right away.

As long as you keep the temperature hot enough to stop the food from spoiling (you don’t want any nasty bacteria to grow), sous-vide does a great job of cooking. Go ahead, give it a try this weekend. You might want to pick up a cookbook, though, at your local store – there are dozens, ranging from inexpensive titles like “Easy Sous Vide” to Nathan Myhrvold’s magnum opus, the US$625 “Modernist Cuisine: The Art and Science of Cooking.” At a mere 2,400 pages, Myhrvold’s book is definitely not casual beach reading.

You can certainly try out sous-vide cooking using a soup pot. But if you try it, and decide to add this technique to your kitchen repertoire, you might find it easier with specialized tools. For example, there are water baths designed to circulate the water while keeping it at a consistent temperature that’s hot enough to kill bacteria. Over the past few years, a sous-vide industry has taken off, with products ranging from specialized vacuum sealers to ovens to thermometers to the VacMaster Dry Piston Pump Chamber Machine.

Agile software development is like cooking sous-vide. Agile methodologies don’t require special tools on the desktop or on the server – in fact, the Agile Manifesto explicitly states that agility means valuing individuals and interactions over processes and tools. Just like not every kitchen needs a dry-piston pump chamber machine, there’s no commandment that requires your team to choose an agile ALM tool suite with integrated project management, a Scrum countdown timer, stakeholder reports, user story repository or backlog groomer.

But you know, if you’re serious about sous-vide, you’ll want tools optimized for that purpose. And if you’re into agile, you’ll want tools that help you by removing friction and facilitating interactions. Zesty!

Z Trek Copyright (c) Alan Zeichick

Software QA focused on developers – and not the cloud

Remember the old saying, “Everyone is talking about the weather, but nobody is doing anything about it?” That’s pretty much the case when it comes to using the cloud as part of a software QA process.

In research conducted by SD Times in July, we asked, “At what stage is your company, or companies that you consult, using the cloud for software testing?” Very few respondents indicated that they use the cloud in this way:

At what stage is your company, or companies that you consult, using the cloud for software testing?

We are using the cloud for software testing on a routine basis 7.9%
We are experimenting with using the cloud for software testing 17.3%
We are studying the technology but have not started yet 26.7%
No plans to use the cloud for software testing 39.6%
Don’t know 8.4%

When it comes to software quality assurance, what happens in Vegas stays in Vegas most of the time. Or to put it more clearly, the activity usually but not always is conducted by the organization’s employees:

Does your company outsource any of its software quality assurance or testing?

Yes, all of it 4.4%
Yes, some of it 26.6%
No, none of it 65.0%
Don’t know 3.9%

There’s no single favorite model for where testing lives. Is it part of the development group? Is it separate? Is it sometimes separate and sometimes integrated? The answers were surprisingly evenly split.

Does your organization have separate development and test teams?

All test and development teams are integrated 30.2%
All development teams and test/QA teams are separate 32.7%
Some development and test/QA teams are separate, some are integrated 34.6%
Don’t know 2.4%

Enterprise developers can’t simply throw the code over the metaphorical wall once it is completed and let other IT staff take complete responsibility for quality assurance – even after deployment.

We asked, “Who is responsible for internally-developed application performance testing and monitoring in your company?” with separate answers for prior to deployment and after deployment. The answers showed that developers still have responsibility after deployment – and sysadmins were in the loop during the development process.

Who is responsible for internally-developed application performance testing and monitoring in your company?

Software/Application Developers prior to deployment 60.8%, after deployment 47.7%
Software/Application Development Management prior to deployment 52.8%, after deployment 53.8%
Testers prior to deployment 50.3%, after deployment 41.5%
Testing Management prior to deployment 48.7%, after deployment 38.5%
IT top management (development) (VP or above) prior to deployment 36.7%, after deployment 34.6%
Systems administrators prior to deployment 24.1%, after deployment 45.4%
Networking personnel prior to deployment 21.5%, after deployment 31.5%

When it comes to software quality assurance, one thing is for certain: We are all in it together.

Z Trek Copyright (c) Alan Zeichick

Don’t roll your own math

When it comes to writing code with advanced numerical functions, my advice is clear: Use libraries. Don’t roll your own algorithms.

Generally speaking, I’m a fan of modular code reuse, especially for complex functions like UI controls, database access drivers, PDF generation or managing images. Most of the time, it’s a good idea to find open-source components that will get your job done, or license commercial reusable components. Sometimes, though, it makes more sense to write your own functionality.

Numerical libraries are the exception. Unless you are in the math business, resist the temptation to write your own Fast Fourier Transform (FFT) functions, random number generators, basic linear algebra subprograms (BLAS), wavelets, eigenvalue solvers, partial differential equation solvers – you get the picture.

This came up thanks to an email from an SD Times subscriber:

I’m an IT consultant in the software arena and would like to ask you a question on buying mathematical algorithms vs. programming them yourself. Especially for complicated mathematical subroutines, is it cost-effective to subscribe to an algorithm library, or let your programmers do all the work?

Advanced numerical algorithms are very hard to get right. Simply writing the basic code is complicated – and so is the testing of that code, to make sure that each routine delivers consistently correct results in all cases – including across different processors, hardware architectures, programming languages, compilers, runtimes, standard libraries, and so on.

Incredible amounts of work have gone into designing, coding and testing most high-end numerical libraries. What’s more, the code has been reviewed by many individuals, including both practical and theoretical mathematicians. Generally speaking, you can be confident that the math is correct.

Beyond consistent correctness, there’s also efficiency. You’re not running that FFT or BLAS routine once; it’s being executed hundreds, thousands, perhaps millions of times during the execution of your program. Efficiency matters: both raw speed and the use of resources like memory and threads.

An advantage of most numerical libraries is the tuning that goes into the code – a lot of hand-crafted C or Assembler code, in some cases. In other words, it’s fast. Increasingly, those libraries are also tuned for multicore processing. You could never justify spending the resources to do this yourself.
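To see why, here is a minimal Python sketch (using NumPy, whose FFT wraps exactly the kind of tuned library code described above) comparing a naive textbook DFT with a library call. The naive version produces the same answer, but it runs in O(n²) instead of O(n log n) and carries none of that hand-tuning:

```python
import numpy as np

def naive_dft(x):
    """Textbook O(n^2) discrete Fourier transform: correct, but slow
    and untuned compared with a library FFT."""
    n = len(x)
    k = np.arange(n)
    # DFT matrix: exp(-2*pi*i*j*k/n)
    m = np.exp(-2j * np.pi * np.outer(k, k) / n)
    return m @ x

x = np.random.rand(256)
# Same answer as the tuned, battle-tested library routine...
assert np.allclose(naive_dft(x), np.fft.fft(x))
# ...but at a fraction of the speed, with none of the multicore tuning.
```

The sketch only scratches the surface; the hard part the libraries solve is not this happy path, but correctness across edge cases, platforms and precisions.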

I have had experience with many numerical libraries, ranging from Intel’s Math Kernel Library to AMD’s Core Math Library to the IMSL Numerical Libraries to the NAG Numerical Components. They are all good, all recommended.

The tradeoff is that many numerical libraries are costly. If you need math, though, licensing one of the libraries is a bargain – and you can do the ROI calculations on a four-function pocket calculator.
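To make that concrete, here is a back-of-the-envelope sketch in Python. Every figure below is hypothetical, invented purely for illustration – the point is only that the arithmetic is four-function simple:

```python
# Hypothetical figures, for illustration only.
license_cost = 10_000           # annual library license
dev_cost_per_month = 15_000     # fully loaded developer cost
months_to_build_and_test = 9    # writing AND validating numerical code

diy_cost = dev_cost_per_month * months_to_build_and_test
savings = diy_cost - license_cost
print(f"Roll-your-own: ${diy_cost:,}; license: ${license_cost:,}; "
      f"savings: ${savings:,}")
# Roll-your-own: $135,000; license: $10,000; savings: $125,000
```

Plug in your own numbers; unless your rates are wildly different, the license usually wins.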

With mobile, it’s all about the installed base

At the Google I/O 2012 conference, the Internet giant announced the availability of its Chrome browser for both Android and iOS devices – both tablets and handsets.

The next day, I was able to install the Chrome browser for iOS immediately onto an iPhone 4 (a device released in June 2010), an iPhone 3GS (released June 2009) and an iPad 2 (released March 2011).

Chrome for Android was also available immediately for the Google Nexus 7 tablet given to each attendee of Google I/O 2012. But when I went to install it onto a Samsung Galaxy Tab 10.1 (released February 2011), it didn’t work. It also didn’t work on my older HTC Evo phone.

As a teenager would say, “You can run Chrome on a three-year-old iPhone but not a one-year-old Android tablet? Epic fail.”

Take a look at the requirements. In the Apple ecosystem, Chrome requires iOS 4.3 or later. That version of the operating system was released in March 2011, but it could be user-installed onto older hardware.

By contrast, the Google Play store says, “This app is incompatible with all of your devices” and indicates that Android 4.0 (Ice Cream Sandwich) or above is needed. It’s the very rare machine that can be upgraded from Android 3.x to Android 4.x. And thus, you have the incongruity that a three-year-old Apple device can run Google’s feature-packed browser, but a one-year-old Android tablet (heralded as a showpiece by Google!) cannot.

This is going to hurt Google in the long run, if they continue to leave operating system upgrades in the hands of the device makers, and if they let hardware makers orphan handsets and tablets so quickly after they are released.

Microsoft, of course, is orphaning everyone who purchased a Windows Phone 7.x handset, because those phones won’t be upgradable to Windows Phone 8.x. However, I feel confident that this is a one-time break from the past. Also, Microsoft, like Apple – but unlike Google with Android – is keeping control of the upgrade path.

It’s bad enough that Android devices offer different user experiences depending on which hardware maker you prefer. The look and feel of an HTC is light years from that of a Motorola or Samsung phone.

When even Google’s own apps won’t work on older machines, most of us lose. Who wins? The hardware makers: the lack of backward compatibility and the customized user experiences suit their needs – not those of consumers, developers, or even enterprise customers. I’m sure they like this situation just fine.

Cisco and the undesirable consequences of automatic firmware updates

Harmless kerfuffle? Abuse of corporate power? Enablement of Big Brother? No matter what you call it, Cisco’s recent firmware updates to its Linksys home routers are troubling.

According to a story published on ExtremeTech by Joel Hruska, “Cisco’s cloud vision: Mandatory, monetized, and killed at their discretion,” Cisco pushed out a firmware update to some models of its Linksys routers for homes and small businesses. One effect of the update is to move administration of the routers from a local application to Cisco’s cloud-based Connect Cloud service.

This means that router owners must now sign up for Connect Cloud in order to manage their routers, but the Cisco terms of service for the cloud service give lots of power to Cisco.

Hruska’s story says that Cisco has changed the terms of service after a firestorm of customer complaints. As of July 5, the terms contain numerous clauses about the types of traffic allowed on your home network. They also say:

You agree that Cisco may suspend or terminate your access to the Service without notice if (a) Cisco suspects or determines that you have violated this Agreement, (b) Cisco determines that your actions cause Cisco to be in violation of any agreement or policy needed to run the Service or (c) Cisco is required to do so by any court or government authority in any country.  You agree that Cisco will not be liable to you or to any third party for any suspension or termination of your access to the Service as a result of any threatened or actual violation of this Agreement.


Cisco may, upon such termination, deactivate or delete your account and any related data, information, and files, and bar any further access to such data, information, and files through use of the Service.  Such action may include, among other things, accessing your data and/or discontinuing your use of the Service and any and all rights granted to you in connection with the Service without refund or compensation. 

Note that if Cisco kicks you off Connect Cloud, you will not be able to administer your Linksys router. You might lose control of your router, but Cisco doesn’t seem to care about that issue. How about this section of the Cisco Connect Cloud Supplement to the Cisco Privacy Statement:

Cisco Connect Cloud software is updated from time to time to provide additional features, address technical issues, and generally make your user experience better. We may add to or upgrade the Service to provide you with new features on an ongoing basis. We may also make available new services in the future. New services provided by third parties or service providers will be governed by the privacy policies of the respective third party or service provider. The Service automatically checks for updates to the firmware/software to help keep your network running at a peak performance and provides alerts as to the latest firmware/software. The auto-update feature offers the ability to download the next available version in the background. Cisco Connect Cloud offers the auto-update feature by default, but you can change your auto-update options by changing your settings within Cisco Connect Cloud. By leaving the auto-update feature as a default, however, you will avoid disruption to your home network and overall Internet connectivity. In some cases, in order to provide an optimal experience on your home network, some updates may still be automatically applied, regardless of the auto-update setting.

In other words: You purchased the router, but Cisco may decide to push new software or change its functioning at any time – including installing third-party software without your knowledge or permission, and without giving you the opportunity to review those third parties’ privacy policies. And of course, Cisco itself can change its privacy policy at any time.

Remember, we are talking about a network router here – something that sees every packet on a home or small business network. And Cisco is accused of helping the Chinese government build the “Great Firewall of China” to help it spy on its dissidents. What else might it do?

While automatic firmware updates are certainly convenient, the fact that you can’t turn them off is worrying. Personally, I wouldn’t buy a Cisco router. But who else can push firmware updates to your technology without your knowledge or permission? This, sadly, is the future.

Android and iOS advance, BlackBerry retreats, Windows Phone relaunches

The past few weeks have seen a lot of excitement in the mobile space. This past week we had Google I/O, where we got a first look at Android 4.1 “Jelly Bean,” which offers solid improvements. The previous week, Microsoft unveiled Windows Phone 8 – a near-total relaunch that will excite future customers and ISVs while disappointing existing Windows Phone 7.5 customers. This comes on the heels of Apple’s Worldwide Developers Conference, which highlighted iOS 6’s new social media integration, home-grown mapping engine, and faster browser engine.

All of these platform upgrades are due to ship in 2012.

We also heard from Research in Motion about a delay in its already-pushed-back BlackBerry 10 smartphone operating system. It’s now scheduled for the first quarter of 2013. The company also lost a ton of money in the past quarter. It’s reasonable to predict that BlackBerry is toast, or soon will be.

Let’s dig a bit deeper into the three, uh, more viable smartphone platforms – and what they mean for us.

Apple’s iOS 6 is merely an incremental upgrade. Apple has moved from Google’s mapping system to its own Maps engine, which is fully accessible via APIs. You can also integrate Facebook into apps, tap into the Reminders system, go closer to the hardware with the camera, and leverage an improved WebKit rendering engine.

The good news is that from the developer perspective, not much has changed; you get some new capabilities, but if you don’t need them, you don’t need to do anything.

Existing Apple customers won’t be standing in line at midnight to download iOS 6 – but they will appreciate its upgrades. That’s because Apple continues to win the award for best support of older hardware. All current iOS devices, including the iPhone, iPad and iPod touch, should be able to run everything in iOS 6. Most features will even run on 2011’s iPad 2 and 2010’s iPhone 4.

Microsoft’s Windows Phone 8, code-named Apollo, is a huge, huge, huge upgrade. Everything changes. It’s a real reboot, as the platform moves from the Windows CE kernel to the Windows kernel (either x86 or the new Windows RT kernel for ARM processors). For the first time, Windows Phone can run on multi-core hardware and use screens larger than 800×480. Because the WP8 kernel is the Windows 8 kernel, WP8 phones should work with any Windows 8 device driver. That should let you do some pretty interesting hardware integrations – and gives WP8 a huge advantage over iOS and Android.

Near Field Communications (NFC) is a big part of WP8, as it is with Android. iOS still doesn’t get NFC – but I expect Apple to catch up fast, perhaps with the next-generation iPhone and an iOS 6.x update.

Most importantly for app developers, you can also write apps in C/C++, or use the .NET framework with C# or Visual Basic. That truly enables code migration to the mobile platform.

The bad news is that WP8 will require all-new hardware. The only upgrade for existing WP7.x users (including those who just bought the much-hyped Nokia Lumia 900 phone) is a cosmetic update called Windows Phone 7.8, which improves the Start screen with new colors and three sizes of Live Tiles. Beyond this small enhancement, every existing Windows Phone customer is out of luck until their carrier’s contract lets them buy a new phone. Ouch.

Google’s Android 4.1 Jelly Bean is another incremental upgrade, similar in scope to what Apple is doing with iOS 6. The biggest change is called Project Butter, which improves the dreadful synchronization of the touchscreen with screen composition/refresh. This, along with improved triple buffering, should go a long way toward eliminating common complaints that the Android UX is sluggish, unresponsive or even buggy when it’s simply out of sync.

Another area of improvement is notifications – where you can provide more info to the end user, and even let them respond without switching to your app. You can more easily communicate with devices connected via USB or Bluetooth, access the NFC stack, and work with a better HTML5 renderer. A popular feature should be an upgrade to Ice Cream Sandwich’s message queuing service. Called Google Cloud Messaging, the system can now deliver up to 4K of data – and can leverage multicast.

We don’t know which existing Honeycomb (Android 3.x) and Ice Cream Sandwich (Android 4.x.x) devices will run Jelly Bean. This is determined largely by each hardware maker on a model-by-model basis. If past experience is any guide, some existing Android handsets or tablets will be upgradeable to Android 4.1, but most will be orphaned.

That’s a shame – but it’s better than what those poor Windows Phone 7.x owners can expect.

Is that really you, Dave?

Bet you never thought that AI would have tremendous applications to the field of computer security. AI’s challenge: Someone logs into your network or multi-user system using Dave’s userid and password. Can your computer be sure that it’s Dave logging in, and not someone who’s borrowed his password or cracked the system’s security measures? Can your computer be sure that Dave is not preparing to perform malicious activities?

First let’s verify that it’s really Dave who logged in. Over the past several years, computer-security researchers at SRI, Mitre, and other organizations (including the U.S. government) have learned that individuals have distinctive system-usage signatures. Data that can make up that signature include the name (or type) of programs executed, the method of changing system directories, the login time, and session length. Let’s say that Dave normally uses the mainframe during business hours to read e-mail. One Saturday night around 2:00 a.m., he logs in, scans the system read-only directories, and then attempts to rewrite the master password file. There’s a good chance your system’s been infiltrated.

That’s a simple scenario, of course. Programmers, who perform a wide variety of computer activities at all hours of the day and night, are more difficult to validate than 9-to-5 data-entry clerks. On an academic network, you’ll frequently need to recalculate your baseline models for each user as his or her expertise grows. The computer is vulnerable if hackers break into a new user’s account before there’s enough data to train the neural net properly or construct the model. Still, studies show that if the operating system is gathering the proper data, AI techniques can be applied in this area.
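As a toy illustration of the usage-signature idea, here is a minimal Python sketch. The baseline features, the user data and the scoring rule are all invented for this example; real systems use far richer statistical models or neural nets:

```python
from datetime import datetime

# A per-user baseline built from historical sessions: typical login hours
# and typically executed programs. (Invented data, for illustration only.)
baseline = {
    "dave": {
        "login_hours": range(8, 18),          # 9-to-5-ish usage
        "programs": {"mail", "calendar", "word"},
    }
}

def anomaly_score(user, login_time, programs_run):
    """Count how many ways a session deviates from the user's baseline."""
    profile = baseline[user]
    score = 0
    if login_time.hour not in profile["login_hours"]:
        score += 1                                      # unusual hour
    score += len(set(programs_run) - profile["programs"])  # unusual programs
    return score

# Saturday, 2:00 a.m., scanning directories and touching the password file:
session = datetime(2012, 6, 30, 2, 0)
print(anomaly_score("dave", session, ["ls", "rewrite_passwd"]))  # prints 3
```

A normal weekday mail session would score 0; the point is that the anomaly becomes a number a monitor can threshold on.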

Expert systems can be applied to the second problem, trying to detect if Dave (or the intruder using Dave’s account) is misbehaving. A network monitoring tool can see what commands Dave is issuing (like changes to other users’ files, or altering permission flags for various files). If the knowledge base contains data on known ways of hacking superuser privileges or crashing the system, it can watch for that type of activity. If Dave issues the first two commands in a dangerous three-command sequence, the expert system could alert the systems operator, flash a warning on Dave’s screen (“What are you doing, Dave?”), or even lock his account out of the system.
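The sequence-watching behavior can be sketched as a tiny state machine in Python; the “dangerous sequence” below is a hypothetical example, not taken from any real knowledge base:

```python
# A hypothetical known-bad sequence from the knowledge base: an attacker
# copies a shell, makes it setuid root, then runs it.
DANGEROUS = ["cp /bin/sh /tmp/x", "chmod 4755 /tmp/x", "/tmp/x"]

def watch(commands):
    """Return a warning once all but the last step of the dangerous
    sequence have been seen, in order."""
    progress = 0
    for cmd in commands:
        if progress < len(DANGEROUS) and cmd == DANGEROUS[progress]:
            progress += 1
        if progress == len(DANGEROUS) - 1:
            return "What are you doing, Dave?"
    return None

print(watch(["ls", "cp /bin/sh /tmp/x", "chmod 4755 /tmp/x"]))
# prints: What are you doing, Dave?
```

A production system would match patterns rather than literal strings, but the principle – warn before the final step completes – is the same.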

Perhaps you’re thinking that Big Brother is watching. You’re right. Instead of Orwellian thought police monitoring your private conversations, you might soon have AI software watching your every keystroke. Given today’s business realities, we might as well get used to that unpleasant idea.

I wrote the above essay in June 1994 and recently stumbled across it. Eighteen years later, it’s still relevant.

It’s SaaS for the gold, followed by PaaS and private clouds

Which cloud models are developers working with? The short answer is – many different models are in play.

In May, we enlisted subscribers to SD Times and News on Monday to help us understand how they are building and testing applications using the cloud, and how they are deploying completed applications into the cloud. We had 425 responses – you can read many of their answers in last week’s Take, “Looking for the action? You’ll find it in the cloud.”

Let’s continue walking through the study. Of those who indicated that they are or soon will be using the cloud, the top 10 responses were:

Software-as-a-Service (SaaS) 54.4%
Platform-as-a-Service (PaaS) 40.9%
Private Cloud 37.7%
Virtual Private Cloud 34.2%
Infrastructure-as-a-Service (IaaS) 32.7%
Hosted Virtual Machines 27.8%
Database-as-a-Service (DBaaS) 26.7%
Public Cloud 24.6%
Hybrid Cloud 21.4%
Community Cloud 8.2%

There’s quite a wide range of answers – which may indicate a wide variety of needs or, in my opinion, reflects confusion in the marketplace. There’s little convergence or consistency in the use of phrases like Software-as-a-Service, Platform-as-a-Service or Infrastructure-as-a-Service.

What about specific technologies being used for cloud computing? The study asked about that as well. The choices were a catch-all – some of them reflected messaging infrastructure, some were application frameworks, and others were presentation-layer technologies. Here are the top 15 responses from those who said they are using the cloud or plan to begin doing so soon:

HTML5 60.9%
.NET 57.3%
Cloud-enabled APIs 53.0%
Virtual machines 44.8%
HTML 38.8%
Java EE 37.7%
REST 35.2%
SOAP 28.1%
SOA 21.7%
JDBC 21.7%
ODBC 16.4%
Spring 10.3%
Hyper-V 10.0%
Rails 8.9%
JAX 5.7%

The high response for .NET notwithstanding, the most popular programming language for cloud-based development was reported to be Java. The top 15 responses, from those who are doing or plan to do cloud development, are:

Java 59.6%
C# 48.9%
JavaScript/ECMAScript 46.4%
C/C++ 28.9%
PHP 28.9%
Visual Basic 18.2%
PL/SQL 17.1%
Python 15.0%
Ruby 14.6%
Perl 9.6%
Groovy 4.6%
Pascal/Delphi 1.8%
Scala 1.4%
Clojure 1.1%
Erlang 1.1%

Are you clear on the meaning of terms like SaaS, PaaS or IaaS? Which technologies and languages are your organizations using?

Looking for the action? You’ll find it in the cloud

The temperature is rising, at least when it comes to developing and deploying in the cloud. In a survey of SD Times subscribers, 44.0% said that they are currently using the cloud to build or test applications, or to deploy applications. Another 24.9% said that they are not currently using the cloud, but expect to within the next year.

Breaking that down, of those who say that they are using the cloud, 26.0% say they have already built or tested several applications using the cloud and 22.8% are building their first application. Almost all of the remaining respondents say they are studying the issue; a very few plan to use the cloud only for deployment, not for development.

On the deployment side, of those who say they are using the cloud, 19.6% say they have deployed several applications into the cloud; 11.0% have deployed one application; 13.5% are developing production applications but haven’t deployed them yet, and 11.7% are creating pilots or prototypes. Most of the rest are studying the issue, but a few plan to use the cloud only for development, not for deployment.

That’s a lot of cloud – higher adoption, frankly, than I expected. Digging deeper into the data shows that cloud adoption is moving fast, driven not only by the financial benefits of cloud computing but by technical benefits as well.

The survey, conducted in May, was completed by 425 subscribers to SD Times and News on Monday. Most of the respondents are enterprise software development managers. One of the questions asked for the reasons why the respondent (or his company) is deploying applications to the cloud.

Of those who indicated that they are or will be using the cloud, the top 15 reasons are:

Scalability 58.5%
Long-term operating cost savings 48.7%
Reducing/eliminating capital costs 41.1%
Improve access to applications 35.6%
Ease of deployment 33.5%
Freedom from upgrades and hardware upgrades 26.9%
Simpler capacity planning 26.2%
All users are on the same version 24.0%
Shortened development cycle 23.3%
Improve application integration 23.3%
Short-term operating cost savings 22.9%
Better application performance 19.6%
Reduced need for power/cooling 19.3%
Spread costs out over time 18.9%

You can see the mix of technical and financial benefits – which points to the long-term viability of the cloud. It’s not simply another buzzword.

Of course, when someone talks about the cloud for developing or deploying applications, there’s some ambiguity. Some cloud service providers offer a straight-up hosting environment for virtual machines, where you’re renting little more than storage, CPU time and bandwidth. With others, you are getting a full-on development environment, customized for highly distributed applications.

Are you developing or testing in the cloud? Are you deploying in the cloud? If so, what are your top reasons?

Calling winners — and non-winners — in the 2012 SD Times 100

Every year, I look forward to the judging and unveiling of the SD Times 100. The editors of SD Times and SDTimes.com spend literally months discussing the state of the industry, talking about leaders and innovators, where things are heading, who made the most impact, and which companies and projects truly made a difference.

We tweeted out the 2012 SD Times 100 on Thursday, May 31, and posted the results online the following day. Subscribers to SD Times could also read it in their June issue.

But just like an exciting horserace is followed by picking up litter around the viewing stands, the week following each year’s SD Times 100 is filled with responding to queries from corporate marketing departments. Why, oh why, didn’t we choose them?

Here is an email from a nice, but unhappy, PR professional:

Hi Alan,

Are you in charge of the SD Times 100 awards? I’m just curious if you can give any feedback on my client (redacted) not making the list but (competitor) has made it now twice in two years. Just want to know if it’s the criteria not being met or any kind of feedback would be helpful to go back to them with.

My response was short, and to the point, but sadly wasn’t what she wanted to read:

Thanks for your email. I’m one of the team of judges of the SD Times 100.

As a matter of policy, we never comment as to why a company was not named to the SD Times 100 — any more than the Oscar judges would have an official reason why a certain movie wasn’t named as Best Picture.

To help explain why, let me share two links from my blog. They give you a much longer, fuller answer to your question.

http://ztrek.blogspot.com/2009/06/post-sd-times-100-week.html
http://ztrek.blogspot.com/2008/06/why-you-didnt-win-sd-times-100.html

I know that’s not the feedback your client is looking for, but that’s the best we can offer.

Windows Live ends without even a whimper, and won’t be missed


Remember Microsoft’s Windows Live brand? To be honest, I’d forgotten all about it. Randall Stross, a writer for the New York Times, noted its demise in “Goodbye to Windows Live (and Whatever It Meant),” and that sparked some vague memories.

Windows Live was launched by Microsoft about a million years ago — November 2005, to be precise — to consolidate its myriad Web-based properties. Here’s an excerpt from a message from the Windows Live Team blog archive in August 2007:

Windows Live is a growing group of Microsoft online services that work well together and with Windows. Some of the services, like Hotmail and Messenger, help you communicate more quickly and efficiently. Some, like Spaces, make it easy for you to post pictures and ideas and share them with the people you choose. Others, like OneCare and Family Safety, help protect you, your family, and your PC from threats on the Internet. Most Windows Live services are free, but a few have a premium version that you can pay for.


You can use one e-mail address and password, called a Windows Live ID, to sign in to all Windows Live services, and chances are, you already have one. That is, if you have a Hotmail or MSN e-mail address, if you use Messenger, or if you ever signed up for a Microsoft Passport account, you have a Windows Live ID. 

Microsoft stuffed more and more services under the Windows Live umbrella. For example, in 2010, Microsoft released Windows Live Essentials, which included Windows Live Photo Gallery, Windows Live Movie Maker, a new version of Windows Live Messenger, Windows Live Writer, Windows Live Mail and Windows Live Mesh (a backup utility which morphed into SkyDrive).

The marketplace’s response to Windows Live was a resounding “Meh.” While some of the services have been well received – Hotmail is still popular, and critics are raving about SkyDrive — the Windows Live name didn’t work. Perhaps customers grokked that Windows is a family of operating systems, and that extending the Windows brand to a set of websites simply didn’t make any sense whatsoever.

As Randall Stross reported, Microsoft now gets it. 

In his Building Windows 8 blog, Microsoft honcho Steven Sinofsky sort of announces Windows Live’s demise. While saying that over 500 million people use Windows Live services every month, Sinofsky wrote that …

…they still did not meet our expectations of a truly connected experience. Windows Live services and apps were built on versions of Windows that were simply not designed to be connected to a cloud service for anything other than updates, and as a result, they felt “bolted on” to the experience. This created some amount of customer confusion, which is noted in several reviews and editorials. The names we used to describe our products added to that complexity: we used “Windows Live” to refer to software for your PC (Windows Live Essentials), a suite of web-based services (Hotmail, SkyDrive, and Messenger), your account relationship with Microsoft (Windows Live ID), and a host of other offers.


Windows 8 provides us with an opportunity to reimagine our approach to services and software and to design them to be a seamless part of the Windows experience, accessible in Windows desktop apps, Windows Metro style apps, standard web browsers, and on mobile devices. Today the expectation is that a modern device comes with services as well as apps for communication and sharing. There is no “separate brand” to think about or a separate service to install – it is all included when you turn on your PC for the first time.  

In other words, Microsoft no longer wants the Windows Live brand, and as such, Windows Live services will be renamed to get rid of the Windows Live name. For example, Windows Live ID is now going to be called a Microsoft account; Windows Live Mesh will now be called SkyDrive Desktop; Windows Live Mail will be known as the Windows Mail app; and so on.

Goodbye, Windows Live; we hardly knew you. And while the services themselves are fine, as far as the brand is concerned, we won’t miss you.

Oracle, Google, Motorola and patents

According to the jury, Google did not infringe on two Oracle patents. That news came the same day, May 22, that Google closed its acquisition of Motorola Mobility.

The acquisition sailed through smoothly, and Google was quick to try to assure other Android handset makers – who are now both partners and competitors – that they would not become second-class citizens.

“The acquisition will enable Google to supercharge the Android ecosystem and will enhance competition in mobile computing. Motorola Mobility will remain a licensee of Android and Android will remain open. Google will run Motorola Mobility as a separate business,” said the company in a statement.

You can’t blame the likes of HTC, LG and Samsung for being nervous. Google already sells its own phones, such as the Galaxy Nexus, which are often the first with leading-edge technology. If Motorola phones now get early access to Android technologies, Google will look a lot more like Apple, and the viability of the broader marketplace will suffer.

And don’t even get me started on what Google’s ownership of Motorola’s patents might mean for innovation in the handset market.

Speaking of patents, well, according to a jury, the answer was clear in Oracle America Inc., Plaintiff, v. Google Inc., Defendant.

The jury said that no, Google did not infringe on two patents acquired by Oracle when it purchased Sun Microsystems in 2009.

Patent RE38104, called “Method and apparatus for resolving data references in generated code,” was invented by James Gosling in 2003. It covers the way that Java source code is turned into Java byte code, and then run by a Java virtual machine.

Patent US6061520, “Method and system for performing static initialization,” was invented by Frank Yellin and Richard Tuck. This patent covers an efficient means for a virtual machine to statically initialize an array.

Bottom line: Oracle is not going to get billions of dollars in damages from Google. Don’t cry for Oracle, though – it is hugely profitable and has plenty of cash. This would have been a windfall for its investors, nothing more.

What will Google do with the savings? Invest in mobile phone technology, which it can market first in Motorola-branded handsets? That’s what its competitors probably fear.

Google I/O and Apple WWDC are hot

Holy Sellouts, Batman! I received an email from Apple at 6:44am Pacific time on Wednesday, April 25:

WWDC2012. Apple Worldwide Developers Conference. June 11-15 in San Francisco. It’s the week we’ve all been waiting for. Register now!

A little more than an hour later, I clicked the link. On the WWDC page, a box said:

Sorry, tickets are sold out.

That’s par for the course for Apple’s WWDC. The same thing happened in 2011, and in previous years as well, especially since the introduction of iOS. It was the same story at Google I/O, which sold out within half an hour when tickets became available on March 26. Google’s conference is June 27-29, also in San Francisco.

Clearly, there’s something driving developers to focus on mobile. As you can see in a recent study on enterprise developers that we did at BZ Research, more than half of organizations are building mobile apps. While there are plenty of enterprise developers at conferences like Apple WWDC or Google I/O, there are also many entrepreneurial developers hoping to come up with the next Angry Birds.

As we prepare to hold our own mobile developer conferences for Android and Windows Phone app developers, it’s exciting to see this much activity in the development market.

High-PPI displays coming to a desktop near you

Get ready for an onslaught of high-resolution displays, coming to everyone from smartphones to tablets to laptops to desktops.
As I wrote last month in “In the iPad 3 era, pay attention to the pixels-per-inch,” Apple users are enjoying screens with much higher PPI (pixels per inch) than has been the industry norm. But they’re not alone.

A standard desktop computer monitor or notebook PC shows about 100 pixels per inch. An iPad 2 tablet has a sharper screen, with a PPI of 132.

Samsung ups the ante with devices like the Galaxy Tab 10.1, an Android tablet with a 1280×800 10.1-inch display. That’s a PPI of 149. Want more? Samsung’s Galaxy Tab 7.7 crams the same 1280×800 resolution into a 7.7-inch form factor. That’s a PPI of 197.

What? You want more? Apple’s iPad 3 shows 2048×1536 on a 9.7-inch screen, which computes out to a PPI of 264. And the iPhone 4/4S is 960×640 on a 3.5-inch screen, which is an amazing PPI of 326.
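
Those PPI figures follow from simple geometry: divide the diagonal pixel count by the diagonal size in inches. A quick Python sketch to check the arithmetic, using the models mentioned above:

```python
import math

def ppi(width_px, height_px, diagonal_inches):
    """Pixels per inch: diagonal pixel count divided by diagonal size."""
    return math.hypot(width_px, height_px) / diagonal_inches

print(round(ppi(1024, 768, 9.7)))    # iPad 2          -> 132
print(round(ppi(1280, 800, 10.1)))   # Galaxy Tab 10.1 -> 149
print(round(ppi(2048, 1536, 9.7)))   # iPad 3          -> 264
```

Manufacturers’ quoted figures can differ by a pixel or two, since the advertised diagonal is usually rounded.
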
Photographs, text and icons at those high resolutions are stunning. But they consume more bandwidth to transmit, more storage, more processing power and more electrical power. The iPad 3’s battery is considerably larger than the battery in the iPad 2, and the iPad 3 also has a stronger GPU. Yet battery life and apparent performance are about the same. The new model needs more horsepower simply to break even.

High resolution is about more than tablets and phones. The Liliputing website reports that we’ll be seeing these types of displays everywhere – desktops, notebooks – in only a year or two. The site’s story “Intel: Retina laptop, desktop displays coming in 2013” says this is what Intel sees happening in the computer space over the next few years:
Phones and media players with 5 inch, 1280 x 800 pixel displays
Tablets with 10 inch, 2560 x 1440 pixel displays
Ultrabooks with 11 inch, 2560 x 1440 pixel displays
Ultrabooks with 13 inch, 2800 x 1800 pixel displays
Laptops with 15 inch, 3840 x 2160 pixel displays
All-in-one desktops with 3840 x 2160 pixel displays
You should read the story – it does a good job of explaining the relationship between PPI and the viewing distance, and the limits of “retina” displays. At some point, the human eye simply can’t perceive any improvement in resolution.
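The “retina” limit the story explains can be approximated with a common rule of thumb: a 20/20 eye resolves roughly one arcminute, so the PPI beyond which extra pixels become invisible depends on viewing distance. The one-arcminute figure is a standard optics approximation, not something from the story itself; a quick sketch:

```python
import math

def retina_ppi_limit(viewing_distance_inches):
    """Approximate PPI beyond which a 20/20 eye (about one arcminute
    of acuity) can no longer resolve individual pixels."""
    one_arcminute = math.radians(1 / 60)
    smallest_visible_inches = viewing_distance_inches * math.tan(one_arcminute)
    return 1 / smallest_visible_inches

print(round(retina_ppi_limit(10)))  # phone held close   -> 344
print(round(retina_ppi_limit(15)))  # tablet in the lap  -> 229
print(round(retina_ppi_limit(24)))  # desktop monitor    -> 143
```

By this estimate, a phone needs well over 300 PPI to look “retina,” while a desktop monitor viewed from a couple of feet away saturates the eye at a much lower density.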
But as anyone who has compared an iPad 3 or Galaxy Tab to a desktop screen knows, we have a long way to go.
By the way, I don’t want to give the impression that the high-PPI domain belongs exclusively to Apple or the Android tablet makers. Everyone is jumping on this bandwagon. In fact, Microsoft’s Steven Sinofsky has published a fascinating article, “Building Windows 8: Scaling to different screens,” explaining the company’s take on high-resolution displays. Read it to learn why 1024×768 screens are the bare minimum for those that use Metro-style user experiences.

Farewell to Embedded Systems Programming magazine

I wish a fond farewell to Embedded Systems Programming magazine. ESP was launched by my friends Don Pazour (publisher), Ted Bahr (associate publisher), Regina Starr Ridley (executive editor) and J.D. Hildebrand (editor) in 1988.

ESP was renamed as Embedded Systems Design a few years ago. According to a newsletter published by embedded guru Jack Ganssle, “I have been informed that Embedded Systems Design magazine, too, is kaput. It will end with the May issue, and I’m told there are no plans for an on-line version. Instead, the focus will be on enhancing the companion web site, embedded.com.”
You can download the first issue of ESP here: www.ganssle.com/misc/firstesp.pdf

Lessons from 25 Years of IBM’s OS/2

Once upon a time, IBM’s OS/2 operating system was the future. As we commemorate the 25th anniversary of its April 1987 launch, it’s instructive to look back on OS/2’s failure in the market.

OS/2 played a large part in my own history. Ted Bahr (the other founder of BZ Media and SD Times) and I launched OS/2 Magazine together in December 1993; I edited every issue until Miller Freeman, the publishing company, finally pulled the plug in January 1997.

It’s often forgotten, but IBM and Microsoft collaborated to bring OS/2 to market as the successor to 16-bit MS-DOS and Windows 3.x. OS/2 was ahead of its time: a 32-bit operating system with preemptive multitasking. It was much more stable than the DOS-based Windows 95 and the other graphical DOS shells then on the market.

OS/2’s failure certainly can be largely attributed to Microsoft’s marketing prowess around Windows 95. However, IBM is equally at fault, because Big Blue was never committed to its own product.

Incredibly, the IBM PC Company refused to preload OS/2 onto its own desktops or servers – which were offered with Windows instead. Top management didn’t force the issue. IBM’s own software for OS/2, with the notable exception of DB2/2, was substandard for the industry, and also ridiculously overpriced on per-seat or per-server licensing.

IBM never bothered to take care of its partners. The company never demonstrated to ISVs and IHVs why they would profit by supporting OS/2 instead of (or in addition to) Windows. With few exceptions, like a short-lived catalog program, IBM didn’t help its ISVs market the third-party products that did appear.

Worse, IBM treated programmers as a lucrative revenue source to be exploited – not as vital allies necessary in building a successful platform. ISVs and enterprise developers had to pay an arm and a leg to get poor-quality tools – which were again fantastically overpriced relative to compilers, editors and libraries for other platforms.

Despite Big Blue’s not-so-benign neglect, OS/2 garnered a loyal following, including some who still believe in the platform today. Die-hard fans continue to patch and augment OS/2 to support modern networks and the Internet. (OS/2 loyalists are up there with those who still revere Novell’s NetWare 3.x and the Commodore Amiga.)

Here are some other reminiscences of OS/2:

Esther Schindler: OS/2 is 25 Years Old

Steven J. Vaughan-Nichols: OS/2 Turns 25
Steve Wildstrom: Happy Birthday OS/2

With software security, we’re outgunned

The good guys aren’t winning.
In the battle to keep our software safe, we are outgunned. To take a minor example: We set up a captcha system to filter out garbage comments on sdtimes.com stories and blog posts. It didn’t take long for hackers to find a way around it – and now our system is inundated with faux comments with links to term-paper writing services, loan sharks, pharmaceuticals and more.
The garbage comments are an annoyance, but we filter them out manually. No harm is done. Much worse are the persistent attacks by hackers – some so-called hacktivists, some independent troublemakers, some part of organized crime, and some potentially working for foreign governments.
A story in the March 30 edition of the Wall Street Journal reports, “Global Payments Inc., which processes credit cards and debit cards for banks and merchants, has been hit by a security breach that has put some 50,000 cardholders at risk, according to people with knowledge of the situation.”
“We are investigating a potential data breach & as a result, have alerted payment card issuers regarding accounts that may be at risk,” @MasterCard tweeted out, adding, “It is important to note, that MasterCard’s own systems have not been compromised in any manner.”
While we wait to see what happens, by coincidence the New York Times ran a story on the same day entitled “Case Based in China Puts a Face on Persistent Hacking.” Read the story; it’s a good one.
Let’s not kid ourselves: We are all vulnerable. Even the slightest flaw in our application design, operating systems, hardware or network security creates an opportunity for data theft, digital graffiti, the insertion of malware or backdoors, or worse.
The challenges are many. One is that our systems are complex, and the integration points are weak spots that can be exploited. Another is that our programmers are not sufficiently trained in secure coding techniques. Still another is that our security testing tools and techniques are always a step behind the bad guys.
And despite all of our end-user educational efforts, social engineering works. People click on links they shouldn’t click, visit websites they shouldn’t visit, and open documents they shouldn’t open.
The biggest problem, though, is that we are simply outgunned. Corporate security teams are generally small and work in isolation. Their budgets are limited. Companies do not, for obvious reasons, talk openly about how they do security design and testing, and they rarely collaborate with others in their industry.
The enemy, on the other hand, has a huge army of volunteers. Some are highly trained software engineers, others are simply script kiddies with an attitude, and some are college students. That doesn’t count, of course, the botnets that carry out many of these attacks. Hackers share data with each other, and in some cases are well financed by untouchable outside organizations.
Whether the hackers are targeting specific companies, or simply spraying out their attacks randomly across the Internet, they are winning.

Android and Linux do the reverse-fork maneuver

Android forked from Linux. And now, with Linux 3.3 (released on March 18), it has been sucked back into the mainline.
The description on KernelNewbies is succinct and clear:
For a long time, code from the Android project has not been merged back to the Linux repositories due to disagreement between developers from both projects. Fortunately, after several years the differences are being ironed out. Various Android subsystems and features have already been merged, and more will follow in the future. This will make things easier for everybody, including the Android mod community, or Linux distributions that want to support Android programs.
Exactly right. Android has been runaway popular, but it has been fraught with forking. First, Android itself forked from Linux. Then Android 3.0 (“Honeycomb”) became a tablet-only fork from the Android 2.3 (“Gingerbread”) code base, which remained focused on smartphones.
But that’s not all. Barnes & Noble’s Color Nook e-reader was a fork from the Android 2.2 (“Froyo”) code, while Amazon’s Kindle Fire is a forked version of Gingerbread. Confused yet?
With the B&N and Amazon forks, there’s no guarantee that changes will make it back into the Android codebase. But elsewhere we are seeing progress, as in last year’s announcement that with Android 4.0 (“Ice Cream Sandwich”), at least, Gingerbread and Honeycomb are coming back together into One Set of APIs to Rule Them All.
However, even Ice Cream Sandwich left Android split apart from embedded Linux. While that probably wasn’t a big deal for smartphone or tablet manufacturers – and certainly consumers wouldn’t care – this rift was not in the best interest of either Linux or Android.
A lot of important work is being done with Android. It’s a positive step that with Linux 3.3, Android is going back into the fold. This was announced in December 2011 by Greg Kroah-Hartman, head of the Linux Driver Project, who wrote,
So hopefully, by the 3.3 kernel release, the majority of the Android code will be merged, but more work is still left to do to better integrate the kernel and userspace portions in ways that are more palatable to the rest of the kernel community. That will take longer, but I don’t foresee any major issues involved.
Not all the work is finished – there are small parts of Android that aren’t completely integrated into Linux 3.3. And certainly the extensions created by Amazon and B&N haven’t been contributed back to Linux as you can see on the Android Mainlining Project page. But this is a move that is good for Linux and good for Android.

In the iPad 3 era, pay attention to the Pixels-Per-Inch

I love, love, love my Dell 3007WFP monitor. The 30” beast – showing 2560 x 1600 pixels – has been on my desk since January 2007, when I bought it (refurbished) from the Dell Outlet Store for $1,162.60.
Clearly, I’ve gotten my money’s worth out of this display, which has been variously connected to Windows desktops, Sun workstations, and now to a MacBook Air via a “Mini DisplayPort To Dual-Link DVI Adapter.”
The screen looks good but, to be honest, it’s not fantastic. One reason is that the pixel density of the Dell screen is typical of most desktop and notebook computers, at about 100 pixels per inch. (You get there by calculating the number of pixels on the diagonal, which is 3018, and dividing by the diagonal screen size, which is 30 inches.)
The internal display on the MacBook Air is visibly sharper, and not only because it’s newer. The main reason is that it has a higher pixel density. The screen is 1440 x 900 pixels on a 13-inch diagonal, for a PPI of 131. Thus, a graphic image of a certain size (say, 400 x 400 pixels) appears about 30% smaller on the laptop’s screen – and therefore sharper and crisper. The same is true of text. The higher the PPI, the sharper the graphics.
By comparison, the original iPad and then the iPad 2 had screens with essentially the same PPI as the MacBook Air’s 13” monitor. The tablets’ 9.7-inch screen has a resolution of 1024 x 768 pixels, which computes to a PPI of 132.
Most mainstream notebook and desktop displays are 100 PPI; a few, obviously, go higher. Variation is within a fairly narrow range – and so Web designers could basically ignore the issue and focus on the physical count of the pixels.
If your app server sniffed that the browser was, say, 1024 x 768, you knew that the end user had a small screen, and you might cut back how much you displayed. If you saw that the user had, say, 2048 x 1536, you could assume that the end user had a big 24-inch or 27-inch desktop monitor and you could show lots of information.
No more. We are entering a whole new world of high-PPI displays, which first appeared on the iPhone 4, but now are on the new iPad (which I’m going to call iPad 3, even though that’s not its official name).
The iPad 3’s display is 2048 x 1536, which computes out to a PPI of 263.9. That’s significantly larger. A 400×400 pixel graphic on my Dell external monitor will be four inches high. On the MacBook Air or on an iPad 2, it will be 3.1 inches high. On an iPad 3, a 400 x 400 graphic will be 1.5 inches high.
Or, to put it another way, if you have a Web graphic that uses a color band that’s 30 pixels high, it will be .30 inches high on a standard monitor, .23 inches on an iPad 2 or MacBook Air, and .11 inches on the iPad 3.
Say that color band contains 20-pixel-high white text. That text is a readable .20 inches on a standard monitor, but only .07 inches high on the iPad 3. Unless the user zooms to scale it up, forget about reading it.
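The arithmetic behind those shrinking graphics is just pixel count divided by pixel density. A minimal sketch, using the PPI figures cited above:

```python
def physical_inches(pixels, ppi):
    """Physical size of an on-screen element: pixels divided by pixels per inch."""
    return pixels / ppi

# The 400-pixel graphic from the example above
print(physical_inches(400, 100))            # standard desktop (100 PPI) -> 4.0
print(round(physical_inches(400, 131), 2))  # MacBook Air (131 PPI)      -> 3.05
print(round(physical_inches(400, 264), 2))  # iPad 3 (264 PPI)           -> 1.52

# The 20-pixel-high text from the example above
print(round(physical_inches(20, 264), 3))   # iPad 3                     -> 0.076
```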
On a native app running on an iPad 3, of course, the operating system will take care of dynamically scaling text or graphics, or will substitute larger graphics provided by the developer. No problem. But what about Web pages? You can’t simply sniff out a 2048 x 1536 screen and assume you’re working with a large desktop screen. Not any more.
For now, the workaround is easy: If you can detect that it’s an iPad 3, you can adapt your Web page accordingly. You just need to remember to do that. And of course, pick up an iPad 3 for testing.
What about tomorrow? High-PPI displays will spread. Other tablets will have them soon. Notebooks will adopt them. Desktops. How long before Apple releases a 27-inch iMac display that’s 263 PPI? Dell – HP – Lenovo – Samsung. We are in a new era of small-pixel devices. We can’t assume anything any more.

SQL Server 2012 is not the new iPad

Two important products were introduced this week. One was the new iPad from Apple. The other was SQL Server 2012 from Microsoft.
With all the coverage of Apple’s third-generation tablet, everything else in the tech industry ground to a halt. Not just the tech industry. Heck, even CNN and the New York Times sent out alerts:
From: CNN Breaking News
Subject: CNN Breaking News
Date: March 7, 2012 11:06:03 AM PST

Apple unveils new iPad with HD display, better camera and 4G wireless. Starting price remains $499.

One CNN Center Atlanta, GA 30303
(c) & (r) 2012 Cable News Network
That alert sums up Apple’s news, so let’s talk about SQL Server 2012. Large-scale enterprise databases – like SQL Server, DB2 or Oracle – are the least-talked-about parts of IT infrastructure. They’re big, they’re fast, and they’re essential to any data center or any n-tiered application.
Despite all the talk about clouds – and Database-as-a-Service – performance and bandwidth dictate that database servers must remain close to their application servers. For truly large projects, those are staying entirely or mainly on-premises for years to come. Yet SQL Server 2012 anticipates the move to the cloud, and makes it feasible to have applications that span both on-premises data centers and cloud-based servers. That’s important.
SQL Server 2012 isn’t really news, of course. Customers have been using it for months – March 6 only saw the official “release to manufacturing” of the bits. Most of the details came out last October, when Microsoft started its previews, and focused on Big Data and integration with Hadoop.
The list of other changes – beyond the Hadoop, Big Data and cloud features – shows an incremental upgrade. Better high-availability functions with multi-subnet failover clusters and more flexible failover policies. Programmability enhancements with statistical semantic search, property-scoped full-text search and customizable proximity search, ad-hoc query paging, circular arc segment support for spatial types, and support for sequence objects. Some needed scalability and performance enhancements for data warehouses, and support for 15,000 partitions (up from 1,000 partitions). And improvements to permissions and role-based management, as well as better auditability.
Is SQL Server 2012 a must-have upgrade? The answer is the same as with the new iPad: Only if you need the new features right now. Otherwise, no.
If you’re dying to make the move to mixed cloud/on-premises computing (or want 4G LTE networking in your tablet), you should budget to make that purchase sooner rather than later.
But if you are happy with your existing SQL Server 2008 R2 (or iPad 2), then keep your wallet in your pocket. Sure, you’ll probably go there eventually, but there’s no rational reason to be the first to make the upgrade. Give SQL Server 2012 (and the new iPad) time to settle down.
