The good guys aren’t winning.
In the battle to keep our software safe, we are outgunned. To take a minor example: We set up a captcha system to filter out garbage comments on sdtimes.com stories and blog posts. It didn’t take long for hackers to find a way around it – and now our system is inundated with faux comments with links to term-paper writing services, loan sharks, pharmaceuticals and more.
The garbage comments are an annoyance, but we filter them out manually. No harm is done. Much worse are the persistent attacks by hackers – some so-called hacktivists, some independent troublemakers, some part of organized crime, and some potentially working for foreign governments.
A story in the March 30 edition of the Wall Street Journal reports, “Global Payments Inc., which processes credit cards and debit cards for banks and merchants, has been hit by a security breach that has put some 50,000 cardholders at risk, according to people with knowledge of the situation.”
“We are investigating a potential data breach & as a result, have alerted payment card issuers regarding accounts that may be at risk,” @MasterCard tweeted out, adding, “It is important to note, that MasterCard’s own systems have not been compromised in any manner.”
While we wait to see what happens, by coincidence the New York Times ran a story on the same day entitled “Case Based in China Puts a Face on Persistent Hacking.” Read the story, it’s a good one.
Let’s not kid ourselves: We are all vulnerable. Even the slightest flaw in our application design, operating systems, hardware or network security creates an opportunity for data theft, digital graffiti, the insertion of malware or backdoors, or worse.
The challenges are many. One is that our systems are complex, and the integration points are weak spots that can be exploited. Another is that our programmers are not sufficiently trained in secure coding techniques. Still another is that our security testing tools and techniques are always a step behind the bad guys.
And despite all of our end-user educational efforts, social engineering works. People click on links they shouldn’t click, visit websites they shouldn’t visit, and open documents they shouldn’t open.
The biggest problem, though, is that we are simply outgunned. Corporate security teams are generally small and work in isolation. Their budgets are limited. Companies do not, for obvious reasons, talk openly about how they do security design and testing, and they rarely collaborate with others in their industry.
The enemy, on the other hand, has a huge army of volunteers. Some are highly trained software engineers; others are script kiddies with an attitude, or college students. That doesn’t count, of course, the botnets that carry out many of these attacks. Hackers share data with each other, and in some cases are well-financed by untouchable outside organizations.
Whether the hackers are targeting specific companies, or simply spraying out their attacks randomly across the Internet, they are winning.

Android forked off from Linux. And now, with Linux 3.3 (released on March 18), it has been sucked back into the mainline.
The description on KernelNewbies is succinct and clear:
For a long time, code from the Android project has not been merged back to the Linux repositories due to disagreement between developers from both projects. Fortunately, after several years the differences are being ironed out. Various Android subsystems and features have already been merged, and more will follow in the future. This will make things easier for everybody, including the Android mod community, or Linux distributions that want to support Android programs.
Exactly right. Android has been runaway popular, but it has been fraught with forking. First, Android itself forked from Linux. Then Android 3.0 (“Honeycomb”) became a tablet-only fork from the Android 2.3 (“Gingerbread”) code base, which remained focused on smartphones.
But that’s not all. Barnes & Noble’s Color Nook e-reader was a fork from the Android 2.2 (“Froyo”) code, while Amazon’s Kindle Fire is a forked version of Gingerbread. Confused yet?
With the B&N and Amazon forks, there’s no guarantee that changes will make it back into the Android codebase. But elsewhere we are seeing progress, as in last year’s announcement that with Android 4.0 (“Ice Cream Sandwich”), at least the Gingerbread and Honeycomb lines are coming back together into One Set of APIs to Rule Them All.
However, even Ice Cream Sandwich left Android split apart from embedded Linux. While that probably wasn’t a big deal for smartphone or tablet manufacturers – and certainly consumers wouldn’t care – this rift was not in the best interest of either Linux or Android.
A lot of important work is being done with Android. It’s a positive step that with Linux 3.3, Android is going back into the fold. This was announced in December 2011 by Greg Kroah-Hartman, head of the Linux Driver Project, who wrote,
So hopefully, by the 3.3 kernel release, the majority of the Android code will be merged, but more work is still left to do to better integrate the kernel and userspace portions in ways that are more palatable to the rest of the kernel community. That will take longer, but I don’t foresee any major issues involved.
Not all the work is finished – there are small parts of Android that aren’t completely integrated into Linux 3.3. And certainly the extensions created by Amazon and B&N haven’t been contributed back to Linux, as you can see on the Android Mainlining Project page. But this is a move that is good for Linux and good for Android.
I love, love, love my Dell 3007WFP monitor. The 30” beast – showing 2560 x 1600 pixels – has been on my desk since January 2007, when I bought it (refurbished) from the Dell Outlet Store for $1,162.60.
Clearly, I’ve gotten my money’s worth out of this display, which has been variously connected to Windows desktops, Sun workstations, and now to a MacBook Air via a “Mini DisplayPort To Dual-Link DVI Adapter.”
The screen looks good, but to be honest, it’s not fantastic. One reason is that the pixel density of the Dell screen is typical of desktop and notebook computers: about 100 pixels per inch. (You get there by calculating the number of pixels along the diagonal, which is 3018, and dividing by the diagonal screen size, which is 30 inches.)
The internal display on the MacBook Air is visibly sharper, and not only because it’s newer. The main reason is that it has a higher pixel density. The screen is 1440 x 900 pixels on a 13-inch diagonal, for a PPI of 131. Thus, a graphic image of a certain size (say, 400 x 400 pixels) appears about a quarter smaller on the laptop’s screen – and therefore sharper and crisper. The same is true of text. The higher the PPI, the sharper the graphics.
By comparison, the original iPad and then the iPad 2 had screens with essentially the same PPI as the MacBook Air’s 13” monitor. The tablets’ 9.7-inch screen has a resolution of 1024 x 768 pixels, which computes to a PPI of 132.
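To make the arithmetic concrete, here is a minimal sketch in TypeScript; the only inputs are the display measurements quoted above, and the function is just the diagonal-pixels-divided-by-diagonal-inches rule.

```typescript
// Pixels per inch: divide the pixel count along the diagonal
// by the physical diagonal measurement in inches.
function ppi(widthPx: number, heightPx: number, diagonalInches: number): number {
  const diagonalPx = Math.sqrt(widthPx ** 2 + heightPx ** 2);
  return diagonalPx / diagonalInches;
}

console.log(ppi(2560, 1600, 30).toFixed(1));   // Dell 3007WFP: ~100.6
console.log(ppi(1440, 900, 13).toFixed(1));    // 13" MacBook Air: ~130.6
console.log(ppi(1024, 768, 9.7).toFixed(1));   // iPad / iPad 2: ~132.0
console.log(ppi(2048, 1536, 9.7).toFixed(1));  // new iPad: ~263.9
```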
Most mainstream notebook and desktop displays are around 100 PPI; a few obviously go higher. Variation is within a fairly narrow range – and so Web designers could basically ignore the issue and focus on the physical count of the pixels.
If your app server sniffed that the browser was, say, 1024 x 768, you knew that the end user had a small screen, and you might cut back how much you displayed. If you saw that the user had, say, 2048 x 1536, you could assume that the end user had a big 24-inch or 27-inch desktop monitor and you could show lots of information.
No more. We are entering a whole new world of high-PPI displays, which first appeared on the iPhone 4 and now arrive on the new iPad (which I’m going to call iPad 3, even though that’s not its official name).
The iPad 3’s display is 2048 x 1536, which computes out to a PPI of 263.9. That’s significantly denser. A 400 x 400 pixel graphic on my Dell external monitor will be four inches high. On the MacBook Air or on an iPad 2, it will be 3.1 inches high. On an iPad 3, it will be 1.5 inches high.
Or, to put it another way, if you have a Web graphic that uses a color band that’s 30 pixels high, it will be .30 inches high on a standard monitor, .23 inches on an iPad 2 or MacBook Air, and .11 inches on the iPad 3.
Say that color band contains 20-pixel-high white text. That text is a readable .20 inches on a standard monitor, but only .07 inches high on the iPad 3. Unless the user zooms to scale it up, forget about reading it.
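Run the same arithmetic in reverse (pixels divided by PPI) and you get the physical size of any fixed-pixel element. A quick sketch, using the rounded PPI figures from above:

```typescript
// Physical size of an on-screen element: pixel dimension divided by PPI.
function inchesOnScreen(px: number, ppi: number): number {
  return px / ppi;
}

console.log(inchesOnScreen(400, 100).toFixed(1)); // Dell 3007WFP: 4.0"
console.log(inchesOnScreen(400, 131).toFixed(1)); // MacBook Air / iPad 2: 3.1"
console.log(inchesOnScreen(400, 264).toFixed(1)); // new iPad: 1.5"
console.log(inchesOnScreen(30, 264).toFixed(2));  // 30px color band: 0.11"
```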
On a native app running on an iPad 3, of course, the operating system will take care of dynamically scaling text or graphics, or will substitute larger graphics provided by the developer. No problem. But what about Web pages? You can’t simply sniff out a 2048 x 1536 screen and assume you’re working with a large desktop screen. Not any more.
For now, the workaround is easy: If you can detect that it’s an iPad 3, you can adapt your Web page accordingly. You just need to remember to do that. And of course, pick up an iPad 3 for testing.
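One reasonable approach, sketched below, is to skip user-agent sniffing entirely and check the pixel ratio the browser reports. Here window.devicePixelRatio is a real browser property (it reports 2 on the iPhone 4 and the new iPad), but the swapInHighResAssets helper is purely hypothetical:

```typescript
// Hypothetical helper that rewrites <img> sources to 2x assets.
declare function swapInHighResAssets(): void;

// devicePixelRatio is 2 on high-PPI devices, and 1 (or undefined
// in older browsers) on standard displays.
const pixelRatio = window.devicePixelRatio || 1;

if (pixelRatio >= 2) {
  // High-PPI display: serve double-resolution graphics so a 400 x 400
  // image still looks crisp at its (smaller) physical size.
  swapInHighResAssets();
}
```

CSS can do much the same thing without script, via a media query such as -webkit-min-device-pixel-ratio: 2 that swaps in double-resolution background images.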
What about tomorrow? High-PPI displays will spread. Other tablets will have them soon. Notebooks will adopt them; desktops too. How long before Apple releases a 27-inch iMac display that’s 263 PPI? Dell, HP, Lenovo and Samsung won’t be far behind. We are in a new era of small-pixel devices. We can’t assume anything any more.

[Photo: Apple CEO Tim Cook introduces the new iPad during an event in San Francisco, Wednesday, March 7, 2012. The new iPad features a sharper screen and a faster processor. (AP Photo/Paul Sakuma)]

Two important products were introduced this week. One was the new iPad from Apple. The other was SQL Server 2012 from Microsoft.

With all the coverage of Apple’s third-generation tablet, everything else in the tech industry ground to a halt. Not just the tech industry. Heck, even general interest media sent out alerts:

From: CNN Breaking News

Subject: CNN Breaking News
Date: March 7, 2012 11:06:03 AM PST
 
Apple unveils new iPad with HD display, better camera and 4G wireless. Starting price remains $499.
 
One CNN Center Atlanta, GA 30303
(c) & (r) 2012 Cable News Network

That alert sums up Apple’s news, so let’s talk about SQL Server 2012. Large-scale enterprise databases – like SQL Server, DB2 or Oracle – are the least-talked-about parts of IT infrastructure. They’re big, they’re fast, and they’re essential to any data center and any n-tier application.

Despite all the talk about clouds – and Database-as-a-Service – performance and bandwidth dictate that database servers must remain close to their application servers. For truly large projects, those servers are staying entirely or mainly on-premises for years to come. Yet SQL Server 2012 anticipates the move to the cloud, and makes it feasible to have applications that span both on-premises data centers and cloud-based servers. That’s important.

SQL Server 2012 isn’t really news, of course. Customers have been using it for months – March 6 only saw the official “release to manufacturing” of the bits. Most of the details came out last October, when Microsoft started its previews, which focused on Big Data and integration with Hadoop.

The list of other changes – beyond the Hadoop, Big Data and cloud features – shows an incremental upgrade. Better high-availability functions, with multi-subnet failover clusters and more flexible failover policies. Programmability enhancements, with statistical semantic search, property-scoped full-text search and customizable proximity search, ad-hoc query paging, circular arc segment support for spatial types, and support for sequence objects. Some needed scalability and performance enhancements for data warehouses, including support for 15,000 partitions (up from 1,000). And improvements to permissions and role-based management, as well as better auditability.
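Of those programmability items, ad-hoc query paging is the easiest to picture: T-SQL finally gets OFFSET and FETCH clauses on ORDER BY. A minimal sketch (the dbo.Orders table and its columns are hypothetical names; the query is wrapped in a TypeScript helper purely for illustration):

```typescript
// Build a paged query using SQL Server 2012's OFFSET/FETCH syntax.
// dbo.Orders and its columns are hypothetical example names.
function pagedOrdersQuery(pageNumber: number, pageSize: number): string {
  return `
    SELECT OrderID, CustomerID, OrderDate
    FROM dbo.Orders
    ORDER BY OrderDate DESC
    OFFSET ${pageNumber * pageSize} ROWS
    FETCH NEXT ${pageSize} ROWS ONLY;`;
}

console.log(pagedOrdersQuery(2, 50)); // third page of 50 rows
```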

Is SQL Server 2012 a must-have upgrade? The answer is the same as with the new iPad: Only if you need the new features right now:

  • If you’re dying to make the move to mixed cloud/on-premises computing (or want 4G LTE networking in your tablet), you should budget to make that purchase sooner rather than later.
  • If you are happy with your existing SQL Server 2008 R2 (or iPad 2), then keep your wallet in your pocket. Sure, you’ll probably go there eventually, but there’s no rational reason to be the first to make the upgrade. Give SQL Server 2012 (and the new iPad) time to settle down.

Would you believe that 18.0% of developers that SD Times surveyed said that their organizations are running J2EE 1.4 in production environments? That’s the version of the Java server-side platform that was officially released in November 2003. That shows the persistence of deployed platforms. If it ain’t broke, don’t upgrade it.
What about new versions? 44.8% said that they are running Java EE 5 (which came out in May 2006), and 54.3% have some servers running Java EE 6 (which was released in December 2009).
It shouldn’t be a surprise that so many systems are running out-of-date versions of Java. I’ve run into shops that still have old versions of NetWare, and are running quite out-of-date versions of Windows Server, DB2, Oracle – you name it. Given the costs and risks of upgrading, unless there’s a clear reason to do so, developers and IT administrators are going to be conservative. Can’t blame them.
Those numbers are from brand-new research conducted by SD Times – our Java & SOA Study, completed in December 2011.
I’ll share another data point: the most popular Java IDEs in use in developers’ organizations. These add up to more than 100% because some organizations run multiple IDEs:
Eclipse JDT: 65.3%
Oracle NetBeans: 25.8%
Oracle JDeveloper: 16.9%
IBM WSAD: 13.2%
IBM RAD: 12.4%
Apple Xcode: 10.8%
JetBrains IntelliJ: 9.9%
Genuitec MyEclipse: 9.4%
The rest all scored below 9% utilization. Interestingly, 14.3% of respondents said that vi and vim are used with Java, and 9.3% said that their organization uses emacs.
Is your organization using obsolescent versions of server platforms? Leave a comment.

The message popped up in my email last week: Canon was offering a firmware upgrade for my DSLR. Why was I in such a hurry to download and install it? Why do I get such pleasure from updating firmware, such delight in seeing new versions of mobile apps for my Android, iOS and Windows Phone devices? New maps for my GPS? Updated drivers for my printers? No idea.
The camera’s firmware update wasn’t even important, and wouldn’t affect anything relevant to me:
Firmware Version 2.1.2 incorporates the following change: Optimizes the camera’s performance when using certain UDMA 7-compatible CF cards released in February 2012 or later.
Yes, the brand-new UDMA 7 spec allows for faster data throughput – but my trusty 32GB Compact Flash card is UDMA 6. I have no plans to buy a new memory card any time soon. But still, my psyche wouldn’t let me do anything else until the camera’s firmware was safely updated.
I’m also one of the first kids on the block to install updates to my notebook operating system, virus updates, application service packs – you name it.
Am I crazy? Perhaps. But it’s a dangerous game that I play.
While updates can fix bugs and add features, they can also introduce new problems and even remove functionality. For example, anyone who installed Security Update 2012-001 on Mac OS X 10.6 “Snow Leopard” suddenly couldn’t run PowerPC-based applications. Oops.
In another case, several years ago, I bricked a perfectly good Garmin Quest GPS with a firmware update. Sadly, it was out of warranty. And now it’s dead.
In a casual or personal environment, it’s fine for individuals to choose to be early adopters. In a business environment, of course, even the most innocuous-seeming software updates should be tested before deployment. That’s true whether the update is for a server operating system, critical software, a mobile app or an embedded platform. While employees chafe under IT’s apparent paranoia, it’s vital to realize that firmware and software updates can’t always be backed out – and IT must remain in control.
A word of caution for developers: Realize that many of your customers will want to download upgrades – but many may not want to upgrade. Perhaps they are worried. Perhaps they have policies. Perhaps they have other dependencies that preclude your upgrade.
For example, Apple is pushing everyone to migrate from its MobileMe service to iCloud. Just one problem: iCloud requires that all Apple devices run iOS 5 or Mac OS X 10.7.2 or later. However, one of my Macs is on Mac OS X 10.6 because it runs a business-critical PowerPC application under Rosetta. Yet MobileMe will be turned off on June 30, 2012. Apple is putting me in an impossible position. This does not make me happy.
Be a good software provider. Don’t nag your customers to install your latest bits. Don’t force them. Don’t bully them. Don’t deprecate older versions of your platform gratuitously, or set arbitrary deadlines to force customers to migrate. It’s not nice, whether you’re Apple or a small ISV. Don’t do it.
Oh, look – a new driver just came through for my Brother printer. Excuse me, I have to go install it.