We need all the technical talent we can get. Whether we are talking developers, architects, network staff, IT admins, managers, hardware, software or firmware, the more women in technology, the better. For everyone – for companies, for customers, for women and for men.

I have recently started working with an organization called WITI – that’s Women in Technology International. (My nickname is now “WITI Alan.”)

WITI is a membership organization. Women who join get access to amazing resources for networking, professional development, career opportunities and more. Companies who join as corporate sponsors can take advantage of WITI’s incredible solutions for empowering women’s networks, including employee training and retention services, live events and more.

My role with WITI is going to be to help women in technology tell their stories. We kicked this off at January’s International Consumer Electronics Show in Las Vegas, and we’ll be continuing this at numerous events in 2014 – including the WITI Annual Summit, coming to Santa Clara from June 1-3, 2014. (You can see me here at the WITI booth at CES with Michele Weisblatt.)

If you are a woman in technology, or if you know women in technology, or you understand the value of increasing the number of women in technology, please support the super-important work being done by WITI.

LAS VEGAS — If you are a geek, there are few events geekier than the huge Consumer Electronics Show, held here each January. Here is where you’ll find the latest toys, toys, toys, toys, toys and toys. Such as smart glasses, smart cars, shape-recognizing SDKs, robots with intelligent programmable faces, and so much more.

Most of the 150,000+ people who attended CES were mesmerized by the show-stopping curved UHD (ultra high definition) televisions that are at either 4K (2160p) or 8K (4320p) resolution. The 105-inch model from LG blows my home 60-inch Samsung 1080p television out of the water, and yes, I’ll buy one in a few years.

Samsung UHD TVs
Curved ultra high definition televisions, like these 4K models from Samsung, stole the show at the 2014 Consumer Electronics Show.

Beyond TVs, there are lots of 3D printers from startups like MakerBot, feature-packed cameras from giants like Canon, self-driving cars from BMW, wearables like the LG Lifeband, and phone cases. Hundreds of booths with phone cases. It’s amazing how many phone case manufacturers and distributors are here in Las Vegas.

(Who the heck needs all those phone cases? The mind boggles.)

One thing I learned at CES—though I’m sure it’s common knowledge in the robotics community—is that it’s easier to build a robot that has a large LCD screen with an animated face instead of constructing a real humanoid robotic face. The cartoon face is more expressive and less intimidating than a realistic simulacrum. Plus, software is a lot less expensive to create and update than animatronic bones, skin, motors and servos.

Beyond TVs, cameras and phone cases, here are two introductions from smaller companies that caught my eye at CES as being very interesting for software developers:
• The new M100 smart glasses from Vuzix are similar to Google Glass, only you can buy it today, it’s less expensive than Google Glass (US$999), it’s a full Android implementation, and you don’t have to jump through Google’s restrictive hoops to build apps. If you are willing to forgo the snob value of genuine Google Glass, and don’t need quite the high-end hardware, you can have an Ice Cream Sandwich head-mount display with 24-bit color, a 400×240 display, 1GB of RAM, 4GB of storage, a MicroSD slot, a speaker, a noise-cancelling microphone, a 5MP camera, 1080p video, a six-hour battery, WiFi and Bluetooth. Most recently, Vuzix announced that a software update will add voice recognition based on Nuance speech technology. Yes, you won’t get Google Glass’ 640×360 display, but did I mention the M100 is available now?

Vuzix M100 smart glasses
It’s here now, it’s less expensive than Google Glass, and its Android stack is wide open to developers: the M100 Smart Glasses from Vuzix. The kit includes the glasses, and you can wear it on either the left or right side.

• The Asus Transformer Book Duet is a head-scratcher. It’s an Intel-based laptop that looks like an Apple MacBook Air, but runs both Windows 8.1 and Android 4.2.2. The review from Ars Technica says it best: It’s clunky. Let’s assume that it gets less clunky. What would you (or your customers or employees) do with a single device that combines the best of Windows and the best of Android? I’m not sure, but if Intel’s “Dual OS” concept catches on, there could be interesting developer opportunities.

By the way, my favorite tech event that’s even geekier than CES is ACM SIGGRAPH. Catch it in Vancouver this August. Gosh, I hope there aren’t any phone cases there.

I’ve had the opportunity to meet and listen to Steve Wozniak several times over the years. He’s always funny and engaging, and his scriptless riffs get better all the time. With this one, he had me rolling in the aisle.

The Woz’s hour-long talk (and Q&A session) covered familiar ground: His hacking the phone system with blue boxes (and meeting Captain Crunch), working his way through college, meeting Steve Jobs, designing the Apple I and Apple II computers, the dispute about the Apple Macintosh vs. Apple Lisa, his amnesia after a plane crash, his dedication to elementary school teaching, his appearance on the TV competition Dancing with the Stars in 2009, and so on.

Many of us have heard and read these stories before — and love them.

Read all about his talk here, in my story on the SmartBear blog….

It looks like the tech industry is hiring more women. Maybe. Maybe not. The statistics are hard to interpret. Also, it’s unclear if the newly hired women are performing technical or other jobs.

I’m looking at a blog post from the New York Times, “An Uptick in the Hiring of Women for Tech Jobs,” which correctly says that:

There are signs that tech companies are hiring more women, but women still appear to make up far less than half of all new hires in the industry.

In the year ending in September, according to the Bureau of Labor Statistics, the net change in the number of employees in the computer industry was 60,000. The net change in the number of female employees was 36,000 — or 60 percent of the net change, according to the bureau’s data.

Yet it does not necessarily mean that the tech industry hired more women than men. The bureau’s figure is a net change, meaning the numbers reflect new employees and those who left. More men than women probably left their jobs — because there are so many more men working in the tech industry. For example, it is possible that 100,000 men left their jobs, and 124,000 men were hired, while 10,000 women left their jobs and 46,000 were hired.
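
To see how that works, here’s a quick back-of-the-envelope sketch in Python using the Times’ hypothetical numbers (they are illustrative, not actual Bureau of Labor Statistics figures):

    # Hypothetical figures from the Times' example, not real BLS data
    men_left, men_hired = 100_000, 124_000
    women_left, women_hired = 10_000, 46_000

    net_men = men_hired - men_left        # 24,000
    net_women = women_hired - women_left  # 36,000
    net_total = net_men + net_women       # 60,000

    print(f"Women's share of net change: {net_women / net_total:.0%}")   # 60%
    print(f"Women's share of gross hires: {women_hired / (men_hired + women_hired):.0%}")  # 27%

In other words, women can account for 60% of the net change while making up barely a quarter of the actual new hires.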

If we want more women software engineers, let’s encourage girls to think like engineers. That means encouraging play that incorporates both design and construction — and in a tactile way, not only through tablet apps.

When I was young, my father chopped 2x4s and dowels into blocks of various sizes: some square, some round, some rectangular, some triangular. No paint, no stain, just some sanding — and the imagination is unlocked.

Today, in addition to wood blocks, construction blocks like Lego, Duplo or Mega Bloks are what I’d give a young child, girl or boy. Bright colors are great. If there are “boy colors” like red or blue, and “girl colors” like pink and purple, I’d buy equal quantities of both and mix them together, to provide the widest possible color palette. I would buy lots of big buckets of plain old blocks, not kits where you follow directions and assemble a specific toy. That’s great for buying furniture at Ikea – but not for inspiring creativity and hands-on imagination.

There’s a video floating around YouTube, “GoldieBlox, Rube Goldberg, & Beastie Boys “Princess Machine” (a concert for little girls).”

What do you think: Step in the right direction, a marketing ploy, or both?

Microsoft’s woes are too big to ignore.

Problem area number one: The high-profile Surface tablet/notebook device is flopping. While the 64-bit Intel-based Surface Pro hasn’t sold well, the 32-bit ARM-based Surface RT tanked. Big time. Microsoft just slashed its price — maybe that will help. Too little, too late?

To quote from Nathan Ingraham’s recent story in The Verge,

Microsoft just announced earnings for its fiscal Q4 2013, and while the company posted strong results it also revealed some details on how the Surface RT project is costing the business money. Microsoft’s results showed a $900 million loss due to Surface RT “inventory adjustments,” a charge that comes just a few days after the company officially cut Surface RT prices significantly. This $900 million loss comes out of the company’s total Windows revenue, though it’s worth noting that Windows revenue still increased year-over-year. Unfortunately, Microsoft still doesn’t give specific Windows 8 sales or revenue numbers, but it probably performed well this quarter to make up for the big Surface RT loss.

At the end of the day, though, it looks like Microsoft just made too many Surface RT tablets — we heard late last year that Microsoft was building three to five million Surface RT tablets in the fourth quarter, and we also heard that Microsoft had only sold about one million of those tablets in March. We’ll be listening to Microsoft’s earnings call this afternoon to see if they further address Surface RT sales or future plans.

Microsoft has spent heavily, and invested a lot of its prestige, in the Surface. It needs to fix Windows 8 and make this platform work.

Problem area number two: A dysfunctional structure. A recent story in the New York Times reminded me of this 2011 cartoon describing six tech companies’ org charts. Look at Microsoft. Yup.

Steve Ballmer, who has been CEO since 2000, is finally trying to do something about the battling business units. The new structure, announced on July 11, is called “One Microsoft,” and in a public memo by Ballmer, the goal is described as:

Going forward, our strategy will focus on creating a family of devices and services for individuals and businesses that empower people around the globe at home, at work and on the go, for the activities they value most. 

Editing and restructuring the info in that memo somewhat, here’s what the six key non-administrative groups will look like:

Operating Systems Engineering Group will span all OS work, from console to mobile device to PC to back-end systems. The core cloud services for the operating system will be in this group.

Devices and Studios Engineering Group will have all hardware development and supply chain from the smallest to the largest devices, and studios experiences including all games, music, video and other entertainment.

Applications and Services Engineering Group will have broad applications and services core technologies in productivity, communication, search and other information categories.

Cloud and Enterprise Engineering Group will lead development of back-end technologies like datacenter, database and specific technologies for enterprise IT scenarios and development tools, plus datacenter development, construction and operation.

Advanced Strategy and Research Group will be focused on the intersection of technology and policy, and will drive the cross-company looks at key new technology trends.

Business Development and Evangelism Group will focus on key partnerships especially with innovation partners (OEMs, silicon vendors, key developers, Yahoo, Nokia, etc.) and broad work on evangelism and developer outreach. 

If implemented as described, this new organization should certainly eliminate waste, including redundant research and product developments. It might improve compatibility between different platforms and cut down on mixed messages.

However, it may also constrain the freedom to innovate, and promote the unhealthy “Windows everywhere” philosophy that has hamstrung Microsoft for years. It’s bad to spend time creating multiple operating systems, multiple APIs, multiple dev tool chains, multiple support channels. It’s equally bad to make one operating system, API set, dev tool chain and support channel fit all platforms and markets.

Another concern is the movement of developer outreach into a separate group that’s organizationally distinct from the product groups. Will that distance Microsoft’s product developers from customers and ISVs? Maybe. Will the most lucrative products get better developer support? Maybe.

Microsoft has excelled in developer support, and I’d hate to see that suffer as part of the new strategy. 

Read Steve Ballmer’s memo. What do you think?


“You should double your top line revenue by making your products more awesome, not by doubling the size of your sales department.”

That was one of the insights shared during a technology roundtable held last July 16 in San Francisco. Called “The Developer is King,” the discussion was moderated by Dan Dodge of Google Ventures, formerly a startup evangelist at Microsoft and an engineer at such diverse firms as AltaVista, Napster and Groove Networks. Also on the panel: John Collison, founder of online payment site Stripe; Tom Preston-Werner, founder of GitHub; Suhail Doshi, co-founder of Web analytics firm MixPanel; and Lew Cirne, founder of app monitoring firm New Relic.

The atmosphere around the panel was filled with pithy aphorisms about why so many developers are succeeding as entrepreneurs. For example, “developers aren’t just techies, they are artists who create things,” and “a good startup founder is someone who doesn’t live only to write code, but who likes to solve problems.”

What made this conversation particularly interesting is that not only are these founders all developers, but their customers are also developers. The panelists offered some true words of wisdom for anyone targeting developers:

• Developers are hard to please. You have to build products that just work — you can’t create success through whiz-bang marketing.

• Developers will see your product and think they can build it themselves. It’s often not hard to duplicate your product. So you have to focus on the customers, ecosystem and simplicity.

• If you are building a commercial offering atop open source software, show that you help developers get their work done more easily than the open source version.

• Tools are quite viral; developers are great at telling their friends what works for them — and what doesn’t work for them.

• Focus on the initial user experience, and make customers more productive immediately. Contrast your offering with big platforms that require a lot of work to install, configure, train and use before the customer sees any benefit.

• The way to innovate is to try lots of things – and create a culture that tolerates failure.

• When hiring, a cultural fit beats anything on the resume. You can teach skills – you can’t teach character.

• Don’t set out to build a company; instead, start out creating a solution to a real problem, and then grow that into a business.

• Don’t get hung up on analyst estimates of market size. Create markets, don’t pursue them.

and my favorite,

• You shouldn’t build a company by focusing on a current fad or gold rush. Rather, figure out where people are frustrated or having problems. Make something that people want. Figure out how to make people happy.

Dr. Douglas Engelbart, who passed away on July 2, was best known as the inventor of the computer mouse. While Dr. Engelbart was the brains behind many revolutionary ideas, his demonstration of a word processor using a mouse in 1968 paved the way for the graphical user interfaces in Xerox’s Alto (1973), Apple’s Lisa (1983) and Macintosh (1984), Microsoft’s Windows (1985) and IBM’s OS/2 Presentation Manager (1988).

Future generations may regard the mouse as a transitional technology. Certainly the touch interface, popularized by the iPad, Android tablets and Windows 8, is making a dent in the need for the mouse — though my Microsoft Surface Pro is far easier to use with a mouse, in addition to the touch screen.

Voice recognition is also making powerful strides. When voice is combined with a touch screen, it’s possible to envision the post-WIMP (Windows, Icons, Menus and Pointing Devices) mobile-style user experience surpassing mouse-driven systems.

Dr. Engelbart, who was recently fêted in Silicon Valley, was 88. Here are some links to help us gain more insight into his vision:

Obituary in the New York Times, by John Markoff.

“The Mother of All Demos” from 1968. Specifically, see clips 3 and 12, where Dr. Engelbart edits documents with a mouse.

A thoughtful essay about Dr. Engelbart’s career, by Tom Foremski.

I never had the honor of meeting Dr. Engelbart. There was a special event commemorating his accomplishments at Stanford Research Institute in 2008, but unfortunately I was traveling.

It’s remarkable for one person to change the world in such a significant way – and so fast. Dr. Engelbart and his team invented not only the mouse, but also personal computing as we know it today. It is striking how that 1968 demo resembles desktop and notebook computing circa 2013. Not bad. Not bad at all. May his memory be a blessing.

Web sites developed for desktop browsers look, quite frankly, terrible on a mobile device. The look and feel is often wrong, very wrong. Text is the wrong size. Gratuitous clip art on the home page chews up bandwidth. Features like animations won’t behave as expected. Don’t get me started on menus — or on the use-cases for how a mobile user would want to use and navigate the site.

Too often, some higher-up says, “Golly, we must make our website more friendly,” and what results is a half-thought-out patch job. Not good. Not the right information, not the right workflow, not the right anything.

One organization, UserTesting.com, says that there are four big pitfalls that developers (and designers) encounter when creating mobile versions of their websites. The company, which focuses on usability testing, says that the biggest issues are:

Trap #1 – Clinging to Legacy: ‘Porting’ a Computer App or Website to Mobile
Trap #2 – Creating Fear: Feeding Mobile Anxiety
Trap #3 – Creating Confusion: Cryptic Interfaces and Crooked Success Paths
Trap #4 – Creating Boredom: Failure to Quickly Engage the User

Makes sense, right? UserTesting.com offers a quite detailed report, “The Four Mobile Traps,” that goes into more detail.

The report says,

Companies creating mobile apps and websites often underestimate how different the mobile world is. They assume incorrectly that they can create for mobile using the same design and business practices they learned in the computing world. As a result, they frequently struggle to succeed in mobile.

These companies can waste large amounts of time and money as they try to understand why their mobile apps and websites don’t meet expectations. What’s worse, their awkward transition to mobile leaves them vulnerable to upstart competitors who design first for mobile and don’t have the same computing baggage holding them back. From giants like Facebook to the smallest web startup, companies are learning that the transition to mobile isn’t just difficult, it’s also risky.

Look at your website. Is it mobile friendly? I mean, truly designed for the needs, devices, software and connectivity of your mobile users?

If not — do something about it.

Data can be abused. The rights of individuals can be violated. Bits of disparate information can be tracked without a customer’s knowledge, and used to piece together identities or other profile information that a customer did not intend to divulge. Thanks to Big Data and other analytics, patterns can be tracked that would dismay customers or partners.

What is the responsibility of the software development team to make sure that a company does the right thing – both morally and legally? The straight-up answer from most developers, and most IT managers outside the executive suite, is probably, “That’s not our problem.” That is not a very good answer.

Corporations and other organizations have senior managers, such as owners, presidents, CEOs and boards of directors. There is no doubt that those individuals have the power to say yes – and the power to say no.

Top bosses might consult with legal authorities, such as in-house counsel or outside experts. The ultimate responsibility for making the right decision rests with the ultimate decision-makers. I am not a lawyer, but I expect that in a lawsuit, any potential liability belongs with managers who misuse data. Programmers who coded an analytics solution would not be named or harmed.

This topic has been on my mind for some time, as I ponder both the ethics and the legalities implicit in large-scale data mining. Certainly this has been a major subject of discussion by pundits and elected officials, at least in the United States, when it comes to customer info and social-media posts being captured and utilized by marketers.

Some recent articles on this subject:

Era of Online Sharing Offers Benefits of ‘Big Data,’ Privacy Trade-Offs

The Challenge of Big Data for Data Protection

Big Data Is Opening Doors, but Maybe Too Many

What are we going to do in the face of questionable software development requirements? Whether we are data scientists, computer scientists or other IT professionals, it is quite unclear. A few developers might prefer to resign rather than write software they believe crosses a moral line. Frankly, I doubt that many would do so.

Some developers might say, “I didn’t understand the implications.” Or they might say, “If I don’t code this application, management will fire me and get someone else to do it.” Or they might even say, “I was just following orders.”

Perhaps I’m an old fogey, but I can’t help but smile when I see press releases like this: “IBM Unveils New Software to Enable Mainframe Applications on Cloud, Mobile Devices.” 

Everything old is new again, as the late Australian musician Peter Allen famously sang in his song of that name.

Mainframes were all the rage in the 1960s and 1970s. Though large organizations still used mainframes as the basis of their business-critical transaction systems in the 1990s and 2000s, the excitement was around client/server and n-tier architectures built up from racks of low-cost commodity hardware.

Over the past 15 years or so, it’s become clear that distributed processing for Web applications fits itself into that clustered model. Assemble a few racks of servers and add a load-balancing appliance, and you’ve got all the scalability and reliability anyone needs.

But you know, from the client perspective, the cloud looks like, well, a thundering huge mainframe.

Yes, I am an old fogey, who cut his teeth on FORTRAN, COBOL, PL/I and CICS on Big Blue’s big iron (that is to say, IBM System/370). Yes, I can’t help but think, “Hmm, that’s just like a mainframe” far too often. And yes, the mainframe is very much alive.

IBM’s press release says that,

Today, nearly 15 percent of all new enterprise application functionality is written in COBOL. The programming language also powers many everyday services such as ATM transactions, check processing, travel booking and insurance claims. With more than 200 billion lines of COBOL code being used across industries such as banking, insurance, retail and human resources, it is crucial for businesses to have the appropriate framework to improve performance, modernize key applications and increase productivity.

I believe that. Sure, there are lots of applications written in Java, C++, C# and JavaScript. Those are on the front end, where a failed database read or write, or a non-responsive screen, is an annoyance, nothing more. On the back end, if you want the fastest possible response time, without playing games with load balancers, and without failures, you’re still looking at a small number of big boxes, not a large number of small boxes.

This fogey is happy that the mainframe is alive and well.

According to IDG Research, 80% of business leaders say that Big Data should enable more informed business decisions – and 37% say that the insights provided by Big Data should prove critical to those decisions.

A February 2013 survey on Big Data was designed and executed jointly by IDG Research and Kapow Software, which sells an integration platform for Big Data. As with all vendor surveys, bear in mind that Kapow wants to make Big Data sound exciting, and to align the questions with its own products and services.

That said, the results of the survey of 200 business leaders are interesting:

• 71% said that Big Data should help increase their competitive advantage by keeping them ahead of market trends

• 68% said that Big Data should improve customer satisfaction

• 62% believe Big Data should increase end-user productivity by providing real-time access to business information

• 60% said Big Data should improve information security and/or compliance

• 55% said Big Data should help create compelling new products and services

• 33% said Big Data should help them monitor and respond to social media in real time

Those are big expectations for Big Data! The results to date… not so much. The study revealed that only one-third of organizations surveyed have implemented any sort of Big Data initiative – but another third expect to do so over the next year.

What are the barriers to Big Data success? The study’s answers:

• 53% say a lack of awareness of Big Data’s potential

• 49% say concerns about the time-to-value of the data

• 45% say having the right employee skills and training

• 43% say ability to extract data from the correct sources

The software development world keeps on changing. Just when we think we get a handle on something as simple as application lifecycle management, or cloud computing, or mobile apps, we get new models, new innovations, new technologies.

Forget plugging pair programming or continuous delivery or automated testing before checking code into the repository. The industry has moved on. Today, the innovation is around DevOps and Big Data and HTML5 and app stores and… well… it keeps changing.

Tracking and documenting those changes – that’s what we do at SD Times. Each year, the editors stop, catch our breath, and make a big list of the top innovators of the software development industry. We identify the leaders – the companies, the open-source projects, the organizations who ride the cutting edge.

To quote from Rex Kramer in the movie Airplane!, the SD Times 100 are the boss, the head man, the top dog, the big cheese, the head honcho, number one…

Who are the SD Times 100? This week, all will be revealed. We will begin tweeting out the SD Times 100 on Thursday. Follow the action by watching @sdtimes on Twitter, or look for hashtag #SDTimes100.

After all the tweeting is complete, the full list will be published to www.sdtimes.com. Be part of the conversation!

The classic database engines – like the big relational behemoths from IBM, Microsoft and Oracle – store the data on disk. So do many of the open-source databases, like MySQL and PostgreSQL, as well as the vast array of emerging NoSQL databases. While such database engines keep all the rows and columns on relatively slow disks, they can boost performance by putting some elements, including indices and sophisticated predictive caches, on faster solid-state storage or in even faster main memory.

From a performance perspective, it would be great to store everything in main memory. It’s fast, fast, fast. It’s also expensive, expensive, expensive, and in traditional servers, it is not persistent. That’s why database designers and administrators leverage a hierarchy: A few key elements in the fastest, most costly main memory; more data in fast, costly solid-state storage; the bulk in super-cheap rotating disks. In some cases, of course, some of the data goes into a fourth layer in the hierarchy, off-line optical or tape storage.

In-memory databases challenge those assumptions for applications where database response time is the bottleneck to application performance. Sure, main memory is still fabulously expensive, but it’s not as costly as it used to be. New non-volatile RAM technologies can make main memory somewhat persistent without dramatically harming read/write times. (To the best of my knowledge, NVRAM remains slower than standard RAM – but not enough to matter.)

That’s not to say that your customer database, your server logs, or your music library should be stored within an in-memory database. Nope. Not even close. But as you examine your application architecture, think about database contents that dramatically affect raw performance, user experience or API response time. If you can isolate those elements, and store them within an in-memory database, you might realize several orders of magnitude improvement at minimal cost — and with potentially less complex code than you’d need to manage a multi-tier storage hierarchy.
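
Here’s a minimal sketch of the idea in Python, using SQLite from the standard library, which can host the same table either on disk or entirely in memory via the special “:memory:” target. It’s a toy comparison with invented data, not a production-grade in-memory database, and the measured gap will vary since SQLite caches disk pages in RAM anyway:

    import sqlite3
    import time

    def build(conn, n=100_000):
        # A hypothetical "hot" lookup table: the small, performance-critical
        # slice isolated from the bulk of the data
        conn.execute("DROP TABLE IF EXISTS hot")
        conn.execute("CREATE TABLE hot (id INTEGER PRIMARY KEY, score REAL)")
        conn.executemany("INSERT INTO hot VALUES (?, ?)",
                         ((i, i * 0.5) for i in range(n)))
        conn.commit()

    def probe(conn, n=100_000, reps=50_000):
        # Time a burst of point lookups against the table
        cur = conn.cursor()
        start = time.perf_counter()
        for i in range(reps):
            cur.execute("SELECT score FROM hot WHERE id = ?", (i * 7919 % n,))
            cur.fetchone()
        return time.perf_counter() - start

    for target in ("hot_on_disk.db", ":memory:"):
        conn = sqlite3.connect(target)
        build(conn)
        print(f"{target}: {probe(conn):.2f} seconds")
        conn.close()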

Not long ago, in-memory databases were a well-kept secret. The secret is out, according to research from Evans Data. Their new Global Development survey says that the percentage of developers using in-memory databases has increased 40% worldwide, from 18% to 26%, during the last six months. An additional 39% globally say they plan to incorporate in-memory databases into their development work within the next 12 months.


Like many of you, I travel with a vast array of personal electronic devices – so much that my briefcase bulges with screens, batteries, cables and charging bricks. Some devices are turned off when I’m on an airplane – and some aren’t, often because I forget.

Take this week, for example. I am working out of SD Times’ New York headquarters, instead of my usual office near San Francisco. What did I bring? A 13-inch mid-2011 MacBook Air notebook, an iPad Mini with Logitech Ultrathin Keyboard, a Google Nexus 7 tablet, a Galaxy Nexus phone, a Virgin Mobile MiFi access point, Bose QuietComfort 15 noise-cancelling headphones, RocketFish RF-MAB2 Bluetooth stereo headset, a Microsoft Notebook Optical Mouse 3000, a USB hub, and an HP-15C calculator. Oh, let’s not forget the Canon PowerShot S100 digital camera. And my Pebble watch.

All that for a five-day trip. A bit excessive? Maybe.

I can guarantee that not every device is powered down during a flight. Yes, the flight attendants ask passengers to turn devices all the way off, and I have good intentions. But there’s a good chance that the laptop is sleeping, that some tablets and the phone might be in airplane mode instead of off, I might have forgotten to slide the switch on the Logitech keyboard, and so on.

Think about all the electronic noise from those electronics. Think about all the potential interference from the WiFi, cellular and Bluetooth radios, the GPSes in the phone and Google tablet… yet it doesn’t seem to make a tangible difference.

I’m not alone in failing to turn off every personal electronic device. According to a new study by the Consumer Electronics Association,

Almost one-third (30 percent) of passengers report they have accidentally left a PED turned on during a flight. The study found that when asked to turn off their electronic devices, 59 percent of passengers say they always turn their devices completely off, 21 percent of passengers say they switch their devices to “airplane mode,” and five percent say they sometimes turn their devices completely off. Of those passengers who accidentally left their PED turned on in-flight, 61 percent said the device was a smartphone.

At least I have good intentions. Many travelers intentionally keep playing games with their phones, hiding them when the flight attendant walks by, taking them out as soon as the uniformed crewmember stops looking.

That doesn’t change the reality that devices are left turned on — and the flights appear to be perfectly safe. It’s time for the U.S. Federal Aviation Administration, and the U.S. Federal Communications Commission, to stop the ban on using electronic devices during takeoff, landing, and flying at altitudes under 10,000 feet.

Not long ago, if the corporate brass wanted to change major functionality in a big piece of software, the IT delivery time might be six to 12 months, maybe longer. Once upon a time, that was acceptable. Not today.

Thanks to agile, many software changes can be delivered in, say, six to 12 weeks. That’s a huge improvement — but not huge enough. Business imperatives might require that IT deploy new application functionality in six to 12 days.

Sounds impossible, right? Maybe. Maybe not. I had dinner a few days ago with S. “Soma” Somasegar (pictured), the corporate vice president of Microsoft’s Developer Division. He laughed – and nodded – when I mentioned the need for a 30x shift in software delivery from months to days.

After all, as Soma pointed out, Microsoft is deploying new versions of its cloud-based Team Foundation Service every three weeks. The company has also realized that revving Visual Studio itself every two or three years isn’t serving the needs of developers. That’s why his team has begun rolling out regular updates that include not only bug fixes but also new features. The latest is Update 2 to Visual Studio 2012, released in late April, which added new features for agile planning, quality assurance and line-of-business app development, plus improvements to the developer experience.

I like what I’m hearing from Soma and Microsoft about their developer tools, and about their direction. For example, the company appears sincere in its engagement of the open source community through Microsoft Open Technologies — but I’ll confess to still being a skeptic, based on Microsoft’s historical hostility toward open source.

Soma said that it’s vital not only for Microsoft to contribute to open source, but also to let open source communities engage with Microsoft. It’s about time!

Soma also cited the company’s new-found dedication to DevOps. He said that future versions of both on-premises and cloud-based tools will help tear down the walls between development and deployment. That’s where the 30x velocity improvement might come from.

Another positive shift is that Microsoft appears to truly accept that other platforms are important to developers and customers. He acknowledges that the answer to every problem cannot be to use Microsoft technologies exclusively.

Case in point: Soma said that fully 60% of Microsoft developers are building applications that touch at least three different platforms. He acknowledged that Microsoft still believes that it has the best platforms and tools, but said, “We now know that developers make other choices for valid reasons. We want to meet developers where they are” – that is, engaging with other platforms.

Soma’s words may seem like a modest and obvious statement, but it’s a huge step forward for Microsoft.

Tickets for the Apple Worldwide Developer Conference went on sale on Thursday, April 25. They sold out in two minutes.

Who says that the iPhone has lost its allure? Not developers. Sure, Apple’s stock price is down, but at least Apple Maps on iOS doesn’t show the bridge over Hoover Dam dropping into Black Canyon any more.

Two minutes.

To quote from a story on TechCrunch,

Tickets for the developer-focused event at San Francisco’s Moscone West, which features presentations and one-on-one time with Apple’s own in-house engineers, sold out in just two hours in 2012, in under 12 hours in 2011, and in eight days in 2010.

Who attends the Apple WWDC? Independent software developers, enterprise developers and partners. Thousands of them. Many are building for iOS, but there are also developers creating software or services for other aspects of Apple’s huge ecosystem, from e-books to Mac applications.

Two minutes.

Mobile developers love tech conferences. Take Google’s I/O developer conference, scheduled for May 15-17. Tickets sold out super-fast there as well.

The audience for Google I/O is potentially more diverse, mainly because Google offers a wider array of platforms. You’ve got Android, of course, but also Chrome, Maps, Play, App Engine, Google+, Glass and others besides. My suspicion, though, is that enterprise and entrepreneurial interest in Android is filling the seats.

Mobile. That’s where the money is. I’m looking forward to seeing exactly what Apple will introduce at WWDC, and Google at Google I/O.

Meanwhile, if you are an Android developer and didn’t get into Google I/O before it sold out – or if you are looking for a technical conference 100% dedicated to Android development – let me invite you to register for AnDevCon Boston, May 28-31. We still have a few seats left. Hope to see you there.

Last week, we held the debut Big Data TechCon in Cambridge, Mass. It was a huge success – more attendees than we expected, which is great. (With a debut event, you never really know.)

We had lots of sessions, many of which were like trying to drink from a fire hose. That’s a good thing.

A commonality is that there is no single thing called Big Data. There are oodles of problems that have to do with capturing, processing and storing large quantities of structured and unstructured data. Some of those problems are called Big Data today, but some have evolved out of diverse disciplines like data management, data warehousing, business intelligence and matrix-based statistics.

Problems that seemed simple to solve when you were talking about megabytes or terabytes are not simple when you’re talking about petabytes.

You may have heard about the “Four V’s of Big Data” – Volume, Velocity, Variety and Veracity. Some Big Data problems are impacted by some of these V’s. Other Big Data problems are impacted by other V’s.

Think about problem domains where you have very large multidimensional data sets to be analyzed, like insurance or protein folding. Those petabytes are static or updated somewhat slowly. However, you’d like to be able to run a broad range of queries. That’s an intersection of data warehousing and business intelligence. You’ve got volume and veracity. Not much variety. Velocity is important on reporting, not on data management.

Or you might have a huge mass of real-time data. Imagine a wide variety of people, as in a social network, constantly creating all different types of data, from text to links to audio to video to photos to chats to comments. You not only have to store this, but also quickly decide what to present to whom, through relationships, permissions and filters, and implement a behind-the-scenes recommendation engine to prioritize the flow. Oh, and you have to do it all sub-second. There, all four V’s come into play.

Much in Big Data has to do with how you model the data or how you visualize it. In non-trivial cases, there are many ways of implementing a solution. Some run faster, some are slower; some scale more, others scale less; some can be done by coding into your existing data infrastructure, and others require drastic actions that bolt on new systems or invite rip-and-replace.

Big Data is fascinating. Please join us for the second Big Data TechCon, coming to the San Francisco Bay Area in October. See www.bigdatatechcon.com.

While in Cambridge wrapping up the conference, I received a press release from IDC: “PC Shipments Post the Steepest Decline Ever in a Single Quarter, According to IDC.”

To selectively quote:

Worldwide PC shipments totaled 76.3 million units in the first quarter of 2013 (1Q13), down -13.9% compared to the same quarter in 2012 and worse than the forecast decline of -7.7%.

Despite some mild improvement in the economic environment and some new PC models offering Windows 8, PC shipments were down significantly across all regions compared to a year ago. Fading Mini Notebook shipments have taken a big chunk out of the low-end market while tablets and smartphones continue to divert consumer spending. PC industry efforts to offer touch capabilities and ultraslim systems have been hampered by traditional barriers of price and component supply, as well as a weak reception for Windows 8. The PC industry is struggling to identify innovations that differentiate PCs from other products and inspire consumers to buy, and instead is meeting significant resistance to changes perceived as cumbersome or costly.

The industry is going through a critical crossroads, and strategic choices will have to be made as to how to compete with the proliferation of alternative devices and remain relevant to the consumer. 

It’s all about the tablets, folks. That’s right: iPads and Android-based devices like the Samsung Galaxy, Kindle Fire, Barnes & Noble Nook and Google Nexus. Attempts to make standard PCs more tablet-like (such as the Microsoft Surface devices) just aren’t cutting it. Just as we moved from minicomputers to desktops, and from desktops to notebooks, we are moving from notebooks to tablets.

(I spent most of the time at the Big Data TechCon working on a 7-inch tablet with a Bluetooth keyboard. I barely used my notebook at all. The tablet/keyboard had a screen big enough to write stories with, a real keyboard with keys, and best of all, would fit into my pocket.)

Just as desktops/notebooks have different operating systems, applications, data storage models and user experiences than minicomputers (and minicomputer terminals), so too the successful tablet devices aren’t going to look like a notebook with a touchscreen. Apps, not applications; cloud-based storage; massively interconnected networks; inherently social. We are at an inflection point. There’s no going back.

I know many female IT professionals. In some parts of the tech field, there are lots of women. In others — including software development — females are fairly rare.

Is this a problem? If so, why? Those are legitimate questions. Do companies have compelling reasons to recruit more female developers? Do universities have compelling reasons to seek more female computer science students – or more female computer science faculty and researchers? Do open source projects and other peer-driven collaborative ventures have compelling reasons to welcome female contributors?

I say yes to all the above. The reasons are difficult to articulate, but it’s clear to me that a programming culture that pushes women away is cutting off access to half the pool of available talent. I also believe (at a gut level) that gender-balanced departments and teams are more collaborative, more creative, and more welcoming to those females who work there – and to many men as well.

This is a problem of culture, not one of intelligence, talent, drive or initiative. The macho attitude pervading many coding shops creates a hostile environment for many women. Not just hostile. Sometimes the project teams are quite literally abusive in ways both subtle and overt.

In that sort of toxic environment, everyone, men and women alike, is justified in finding someplace more welcoming to work or study or contribute. When women choose a different department, a different company, a different career, a different academic major, or a different online community, everyone loses.

What are the solutions? I truly don’t know. I don’t believe that books like Facebook COO Sheryl Sandberg’s “Lean In” have the answer. Similarly, I don’t believe that Yahoo CEO Marissa Mayer can serve as a reasonable role model for female rank-and-file programmers.

The life of a huge company’s CEO or top executive is worlds away, no matter the gender, from the workers in the cubicles. Yes, it’s fun and informative to learn from standout performers like Sandberg, Mayer, Carol Bartz, Meg Whitman, Ursula Burns or Virginia Rometty. However, their example does not clearly illustrate a career path that other women can follow, any more than the typical male programmer can advance by copying Steve Jobs, Bill Gates, Larry Ellison or Mark Zuckerberg.

Let me point out a few resources.

 

Packing lists – check.  Supplies ordered – check. Show bags on schedule – check. Speakers all confirmed – check. Missing laptop power cord located – check. Airline tickets verified – check. Candy purchased for reservation desk – check.

Our team is getting excited for the debut Big Data TechCon. It’s coming up very shortly: April 8-10 in Boston.

What drove us to launch this technical conference? Frustration, really, that there were mainly two types of face-to-face conferences surrounding Big Data.

The first were executive-level meetings that could be summarized as “Here’s WHY you should be jumping on the Big Data bandwagon.” Thought leadership, perhaps, but little that someone could walk away with.

The second were training sessions or user meetings focused on specific technologies or products. Those are great if you are already using those products and need to train your staff on specific tools.

What was missing? A practical, technical conference focused on HOW TO do Big Data. How to choose between a wide variety of tools and technologies, without bias toward a particular platform. How to kick off a Big Data project, or scale existing projects. How to avoid pitfalls. How to define and measure success. How to leverage emerging best practices.

All that with dozens of tutorials and technical classes, plus inspiring keynotes and lots and lots of networking opportunities with the expert speakers and fellow attendees. After all, folks learn in both the formal classroom and the informal hallway and lunch table.

The result – Big Data TechCon, April 8-10 in Boston. If you are thinking about attending, now’s the time to sign up. Learn more at www.bigdatatechcon.com.

See you in Boston!

What is going on at Google? I’m not sure, and neither are the usual pundits.

Last week, Google announced that Andy Rubin, the long-time head of the Android team, is moving to another role within the company, and will be replaced by Sundar Pichai — the current head of the company’s Chrome efforts.

To quote from Larry Page’s post,

Having exceeded even the crazy ambitious goals we dreamed of for Android—and with a really strong leadership team in place—Andy’s decided it’s time to hand over the reins and start a new chapter at Google. Andy, more moonshots please!

Going forward, Sundar Pichai will lead Android, in addition to his existing work with Chrome and Apps. Sundar has a talent for creating products that are technically excellent yet easy to use—and he loves a big bet. Take Chrome, for example. In 2008, people asked whether the world really needed another browser. Today Chrome has hundreds of millions of happy users and is growing fast thanks to its speed, simplicity and security. So while Andy’s a really hard act to follow, I know Sundar will do a tremendous job doubling down on Android as we work to push the ecosystem forward. 

What is the real story? The obvious speculation is that Google may have too many mobile platforms, and may look to merge the Android and Chrome OS operating systems.

Ryan Tate of Wired wrote, in “Andy Rubin and the Great Narrowing of Google,”

The two operating system chiefs have long clashed as part of a political struggle between Rubin’s Android and Pichai’s Chrome OS, and the very different views of the future each man espouses. The two operating systems, both based on Linux, are converging, with Android growing into tablets and Chrome OS shrinking into smaller and smaller laptops, including some powered by chips using the ARM architecture popular in smartphones.

Tate continues,

There’s a certain logic to consolidating the two operating systems, but it does seem odd that the man in charge of Android – far and away the more successful and promising of the two systems – did not end up on top. And there are hints that the move came as something of a surprise even inside the company; Rubin’s name was dropped from a SXSW keynote just a few days before the Austin, Texas conference began.

Other pundits seem equally confused. Hopefully, we’ll know what’s going on soon. Registration for Google’s I/O conference opened – and closed – on March 13. If you blinked, you missed it. We’ll obviously be covering the Android side of this at our own AnDevCon conference, coming to Boston on May 28-31.

What do companies use Big Data technologies to analyze? Sales transactions. Social media trends. Scientific data. Social media trends. Weather readings. Social media trends. Prices for raw materials. Social media trends. Stock values. Social media trends. Web logs. And social media trends.

Sometimes I wonder if the entire point of Big Data is to sort through tweets. And Pinterest, Facebook and Tumblr – as well as closed social media networks like Salesforce.com’s Chatter and Microsoft’s recently acquired Yammer.

Perhaps this is a reflection that “social” is more than a way for businesses to disintermediate and reach customers directly. (Remember “disintermediation”? It was the go-to word during the early dot-com era of B-to-B and B-to-C e-commerce, and implied unlimited profits.)

Social media – nowadays referred to simply as “social” – is proving to be very effective in helping organizations improve communications. Document repositories and databases are essential, of course. Portal systems are vital. But traditional ways of communication, namely e-mail and standard one-to-one instant messaging, aren’t getting the job done, not in big organizations. Employees drown in their overflowing inboxes, and don’t know whom to message for information or input or workflow.

Enter a new Big Data angle on social. It’s one that goes beyond sifting through public messages to identifying what’s trending so you can sell more products or get on top of customer dissatisfaction before it goes viral. (Not to say those aren’t important, but that’s only the tip of the iceberg.)

What Big Data analysis can show you is not just what is going on and what the trends are, but who is driving them, or who is at least on top of the curve.

Use analytics to find out which of your customers are tastemakers – and cultivate them. Find out which of your partners are generating the most traction – and deepen those ties. And find out which of your employees, through in-house social tools like instant messaging, blogs, wikis and forums, are posting the best information, are attracting followers and comments, and are otherwise leading the pack.

Treasure those people, especially those who are in your IT and development departments.
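
As a toy illustration of that idea, here’s a short Python sketch that ranks authors in a message log by a simple engagement score. The records, field names and weights are all invented for illustration; a real system would pull this data from your social platform’s API:

    from collections import Counter

    # Hypothetical export from an in-house social tool
    posts = [
        {"author": "dana",  "comments": 14, "followers_gained": 6},
        {"author": "jorge", "comments": 3,  "followers_gained": 1},
        {"author": "dana",  "comments": 9,  "followers_gained": 2},
        {"author": "priya", "comments": 21, "followers_gained": 11},
    ]

    # Invented weighting: a comment is worth 1 point, a new follower 3
    engagement = Counter()
    for post in posts:
        engagement[post["author"]] += post["comments"] + 3 * post["followers_gained"]

    # The people at the top of this list are your in-house tastemakers
    for author, score in engagement.most_common():
        print(author, score)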

Big Social is the key to your organization’s future. Big Data helps you find and turn that key. We’ll cover both those trends at Big Data TechCon, coming to Boston from April 8-10. Hope to see you there.

Everything, it seems, is a game. When I use the Waze navigation app on my smartphone, I earn status for reporting red-light cameras. What’s next: If I check in code early to the version-control system, do I win a prize? Get points? Become a Code Warrior Level IV?

Turning software development into a game is certainly not entirely new. Some people live for “winning,” and like getting points – or status – by committing code to open-source projects or by reporting bugs as a beta tester. For the most part, however, that was minor. The main reason to commit the code or document the defect was to make the product better. Gaining status should be a secondary consideration – a reward, if you will, not a motivator.

For some enterprise workers, however, gamification of the job can be more than a perk or added bonus. It may be the primary motivator for a generation reared on computer games. Yes, you’ll get paid if you get your job done (and fired if you don’t). But you’ll work harder if you are encouraged to compete against other colleagues, against other teams, against your own previous high score.
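
To make that concrete, here’s a toy Python sketch of the sort of scoring a team might bolt onto its commit history. Every rule, point value and record here is invented for illustration; it’s a thought experiment, not a recommendation:

    from datetime import datetime

    # Hypothetical commit records; a real version might parse git log output
    commits = [
        {"author": "lee", "when": datetime(2013, 5, 6, 9, 15),  "has_tests": True},
        {"author": "lee", "when": datetime(2013, 5, 6, 22, 40), "has_tests": False},
        {"author": "kim", "when": datetime(2013, 5, 7, 10, 5),  "has_tests": True},
    ]

    scores = {}
    for c in commits:
        points = 10                      # base points per commit
        if c["has_tests"]:
            points += 5                  # bonus for committing tests
        if c["when"].hour < 17:
            points += 2                  # "checked in early" bonus
        scores[c["author"]] = scores.get(c["author"], 0) + points

    # Leaderboard, highest score first
    for author, points in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(author, points)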

Would gamification work with, say, me? I don’t think so. But from what I gather, it’s truly a generational divide. I’m a Baby Boomer; when I was a programmer, Back in the Day, I put in my hours for a paycheck and promotions. What I cared about most: What my boss thought about my work.

For Generation Y / Millennials (in the U.S., generally considered to be those born between 1982 and 2000), it’s a different game.

Here are some resources that I’ve found about gamification in the software development profession. What do you think about them? Do you use gamification techniques in your organization to motivate your workers?

Gamification in Software Development and Agile

Gamifying Software Engineering and Maintenance

Gamifying software still in its infancy, but useful for some

Some Thoughts on Gamification and Software

TED Talk: Gaming can make a better world 

Just about everyone is talking about Big Data, and I’m not only saying that because I’m conference chair for Big Data TechCon, coming up in April in Boston.

Take Microsoft, for example. On Feb. 13, the company released survey results that talked about their big customers’ biggest data challenges, and how those relate to Big Data.

In its “Big Data Trends: 2013” study, Microsoft talked to 282 U.S. IT decision-makers who are responsible for business intelligence, and presumably, other data-related issues. To quote some findings from Microsoft’s summary of that study:

• 32% expect the amount of data they store to double in the next two to three years.

• 62% of respondents currently store at least 100 TB of data. 

• Respondents reported an average of 38% of their current data as unstructured.

• 89% already have a dedicated budget for a Big Data solution.

• 51% of companies surveyed are in the middle stages of planning a Big Data solution.

• 13% have fully deployed a Big Data solution.

• 72% have begun the planning process but have not yet tested or deployed a solution; of those currently planning, 76% expect to have a solution implemented in less than one year.

• 62% said developing near-real-time predictive analytics or data-mining capabilities during the next 24 months is extremely important.

• 58% rated expanding data storage infrastructure and resources as extremely important.

• 53% rated increased amounts of unstructured data to analyze as extremely important.

• Respondents expect an average of 37% growth in data during the next two to three years.

I can’t help but be delighted by the final bullet point from Microsoft’s study. “Most respondents (54 percent) listed industry conferences as one of the two most strategic and reliable sources of information on big data.”

Hope to see you at Big Data TechCon.

If there’s no news… well, let’s make some up. That’s my thought upon reading all the stories about Apple’s forthcoming iWatch – a product that, as far as anyone knows, doesn’t exist.

That hasn’t stopped everyone from Forbes to CNN to the New York Times from jumping in with breathless analysis of the rumor.

Turn the page.

More breathless analysis focused on why Microsoft’s stores and retail partners didn’t have enough stock of the Surface Pro tablet. Was this intentional, some wondered, part of a scheme to make the device appear more popular?

My friend John P. Mello Jr. had solid analysis in his article for PC World, “Microsoft Surface Pro sell-out flap: Is the tablet really that popular?”

I think the real reason is that Microsoft isn’t very good at sales estimation or manufacturing logistics. Companies like Apple and HP have dominated, in large part, because of their mastery of the supply chain. Despite its success with the Xbox consoles, Microsoft is a hardware newbie. I think the inventory shortfall was a screw-up, but an honest one.

After all, when Apple or Samsung run out of hot items, nobody says “It’s a trick.”

Can’t leave the conversation about rumors without mentioning the kerfuffle with the New York Times’s story, “Stalled Out on Tesla’s Electric Highway.” In short: Times columnist John M. Broder claims that the Tesla Model S electric car doesn’t live up to its claimed 265-mile estimated range. Tesla founder Elon Musk tweeted “NYTimes article about Tesla range in cold is fake.”

Everyone loves a good twitter-fight. Dozens of pundits, and gazillions of clicks, are keeping this story in the news.

Cloud computing is seductive. Incredibly so. Reduced capital costs. No more power and cooling of a server closet or data center. High-speed Internet backbones. Outsourced disaster recovery. Advanced edge caching. Deployments are lightning fast, with capacity ramp-ups only a mouse-click away – making the cloud a panacea for Big Data applications.

Cloud computing is scary. Vendors come and vendors go. Failures happen, and they are out of your control. Software is updated, sometimes with your knowledge, sometimes not. You have to take their word for security. And the costs aren’t always lower.

An interesting new study from KPMG, “The Cloud Takes Shape,” digs into the expectations of cloud deployment – and the realities.

According to the study, cloud migration was generally a success, though not a painless one. It showed that 33% of senior executives using the cloud said that the implementation, transition and integration costs were too high; 30% cited challenges with data loss and privacy risks; and 30% were worried about the loss of control. Also, 26% were worried about the lack of visibility into future demand and associated costs; 26% fretted about the lack of interoperability standards between cloud providers; and 21% were challenged by the risk of intellectual property theft.

There’s a lot more depth in the study, and I encourage you to download and browse through it. (Given that KPMG is a big financial and tax consulting firm, there’s a lot in the report about the tax challenges and opportunities in cloud computing.)

The study concludes,

Our survey finds that the majority of organizations around the world have already begun to adopt some form of cloud (or ‘as-a-service’) technology within their enterprise, and all signs indicate that this is just the beginning; respondents expect to move more business processes to the cloud in the next 18 months, gain more budget for cloud implementation and spend less time building and defending the cloud business case to their leadership. Clearly, the business is becoming more comfortable with the benefits and associated risks that cloud brings.

With experience comes insight. It is not surprising, therefore, that the top cloud-related challenges facing business and IT leaders has evolved from concerns about security and performance capability to instead focus on some of the ‘nuts and bolts’ of cloud implementation. Tactical challenges such as higher than expected implementation costs, integration challenges and loss of control now loom large on the cloud business agenda, demonstrating that – as organizations expand their usage and gain more experience in the cloud – focus tends to turn towards implementation, operational and governance challenges.

Big Data can sometimes mean Big Obstacles. And often those obstacles are simply that the Big Data isn’t there.

That’s what more than 1,400 CIOs told Robert Half Technology, a staffing agency. According to the study, whose data was released in January, only 23% of CIOs said their companies collected customer data about demographics or buying habits. Of those that did collect this type of data, 53% of the CIOs said they had insufficient staff to access or analyze that data.

Ouch. 

The report was part of Robert Half Technology’s 2013 Salary Guide. There is a page about Big Data, which says,

When you consider that more than 2.7 billion likes and comments are generated on Facebook every day — and that 15 out of 17 U.S. business sectors have more data stored per company than the U.S. Library of Congress — it’s easy to understand why companies are seeking technology professionals who can crack the big data “code.”

Until recently, information collected and stored by companies was a mishmash waiting to be synthesized. This was because most companies didn’t have an effective way to aggregate it.

Now, more powerful and cost-effective computing solutions are allowing companies of all sizes to extract the value of their data quickly and efficiently. And when companies have the ability to tap a gold mine of knowledge locked in data warehouses, or quickly uncover relevant patterns in data coming from dynamic sources such as the Web, it helps them create more personalized online experiences for customers, develop highly targeted marketing campaigns, optimize business processes and more.

“In contrast to classical logical systems, fuzzy logic is aimed at a formalization of modes of reasoning that are approximate rather than exact. Basically, a fuzzy logical system may be viewed as a result of fuzzifying a standard logical system. Thus, one may speak of fuzzy predicate logic, fuzzy modal logic, fuzzy default logic, fuzzy multivalued logic, fuzzy epistemic logic, and so on. In this perspective, fuzzy logic is essentially a union of fuzzified logical systems in which precise reasoning is viewed as a limiting case of approximate reasoning.”

So began one of the most important technical articles published by AI Expert Magazine during my tenure as its editor: “The Calculus of Fuzzy If/Then Rules,” by Lotfi A. Zadeh, in March 1992.

Even then, more than 20 years ago, Dr. Zadeh was revered as the father of fuzzy logic. I recall my interactions with him on that article very fondly.

I was delighted to learn that Fundacion BBVA, the philanthropic foundation of the Spanish bank BBVA, has recognized Dr. Zadeh with their 2012 Frontiers of Knowledge Award.

To quote from the Web page for the award,

The BBVA Foundation Frontiers of Knowledge Award in the Information and Communication Technologies (ICT) category has been granted in this fifth edition to the electrical engineer Lotfi A. Zadeh, “for the invention and development of fuzzy logic.” This “revolutionary” breakthrough, affirms the jury in its citation, has enabled machines to work with imprecise concepts, in the same way humans do, and thus secure more efficient results more aligned with reality. In the last fifty years, this methodology has generated over 50,000 patents in Japan and the U.S. alone. 

The key paper, the one that started it all, was “Fuzzy Sets,” published by Dr. Zadeh in June 1965 in the journal “Information and Control.” You can read the paper here as a PDF. I would not call it light reading.
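
If you’ve never played with fuzzy logic, a toy example may make the idea concrete. Here is a minimal sketch in Java (my own illustration, not Zadeh’s formulation) of a single fuzzy if/then rule, “if the temperature is hot, run the fan fast,” using a triangular membership function. The set boundaries and the crude one-rule defuzzification are arbitrary choices for the demo.

// A toy fuzzy if/then rule (an illustration, not Zadeh's notation):
// IF temperature IS hot THEN fan speed IS high.
public class FuzzyDemo {

    // Membership in a triangular fuzzy set (a, b, c): zero outside
    // [a, c], rising to full membership (1.0) at the peak b.
    static double triangle(double x, double a, double b, double c) {
        if (x <= a || x >= c) return 0.0;
        return x < b ? (x - a) / (b - a) : (c - x) / (c - b);
    }

    public static void main(String[] args) {
        double temperature = 28.0; // degrees Celsius

        // Fuzzify: 28 degrees belongs to the arbitrary set
        // hot = (20, 35, 50) to degree roughly 0.53.
        double hot = triangle(temperature, 20.0, 35.0, 50.0);

        // The rule's firing strength scales the output. Real systems
        // clip or scale an output fuzzy set and then defuzzify (by
        // centroid, for example); this one-rule demo just scales.
        double fanSpeedPct = hot * 100.0;

        System.out.printf("hot = %.2f, fan speed = %.0f%%%n", hot, fanSpeedPct);
    }
}

Notice there is no hard threshold: the fan speed rises smoothly as the temperature does, which is exactly the “approximate rather than exact” reasoning the quote above describes.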

Congratulations, Dr. Zadeh, on your many contributions to computer science and software engineering – and to the modern world.

Modern companies thrive by harnessing and interpreting data. The more data we have, and the more we focus on analyzing it, the better we can make decisions. Data about our customers, data about purchasing patterns, data about network throughput, data in server logs, data in sales receipts. When we crunch our internal data, and cross-reference it against external data sources, we get goodness. That’s what Big Data is all about.

Data crunching and data correlation aren’t new, of course. That’s what business intelligence is all about. Spotting trends and making predictions is what business analysts have been doing for 40 years or more. From weather forecasters to the World Bank, from particle physicists to political pollsters, all that’s new is that our technology has gotten better. Our hardware, our software and our algorithms are a lot better.

Admittedly, some political pollsters in the recent United States presidential election didn’t seem to have better data analytics. That’s another story for another day.

Is “Big Data” the best term for talking about data acquisition and predictive analytics using Hadoop, Map/Reduce, Cassandra, Avro, HBase, NoSQL databases and so on? Maybe. Folks like Strata conference chair Edd Dumbill and TechCrunch editor Leena Rao think not.

Indeed, Rao suggests, “Let’s banish the term ‘big data’ with pivot, cloud and all the other meaningless buzzwords we have grown to hate.” She continues, “the term itself is outdated, and consists of an overly general set of words that don’t reflect what is actually happening now with data. It’s no longer about big data, it’s about what you can do with the data.”

Yes, “Big Data” is a fairly generic phrase, and our focus should rightfully be on benefits, not on the 1s and 0s themselves. However, the phrase neatly labels a broad concept that plenty of people seem to understand very well, thank you very much. Language is a tool; if the phrase Big Data gets the job done, we’ll stick with it, both as a term in SD Times and as the name of our technical training conference on data acquisition, predictive analytics and more: Big Data TechCon.

The name doesn’t matter. Big Data. Business Intelligence. Predictive Analytics. Decision Support. Whatever. What matters is that we’re doing it.

Today’s word is “open.” What does open mean in terms of open platforms and open standards? It’s a tricky concept. Is Windows more open than Mac OS X? Is Linux more open than Solaris? Is Android more open than iOS? Is the Java language more open than C#? Is Firefox more open than Chrome? Is SQL Server more open than DB2?

The answer in all these cases can be summarized in two more words: “That depends.” To some purists, anything that is owned by a non-commercial project or standards body is open. By contrast, anything that is owned by a company, or controlled by a company, is by definition not open.

There are infinite shades of gray. Openness isn’t a line or a spectrum, and it’s not a two-dimensional matrix either. There are countless dimensions.

Take iOS. The language used to program iPhone/iPad apps is Objective-C. It’s pretty open – certainly, some would say that Objective-C is more open than Java, which is owned and controlled by Oracle. Since iOS uses Objective-C, and Android uses Java, doesn’t that make iOS open, and Android not open?

But wait – perhaps when people talk about the openness of the mobile platforms, they mean whether there is a walled garden around a platform’s primary app store. If you want to distribute native apps through Apple’s store, you must meet Apple’s criteria in lots of ways, from the use of APIs to revenue sharing for in-app purchases. That’s not very open. If you want to distribute native apps to Android devices, you can choose Google Play, where the standards for app acceptance are fairly low, or another app store (like Amazon’s), or even set up your own. That’s more open.

If you want to build apps that are distributed and use Microsoft’s new tiled user experience, you have to put them into the Windows Store. In fact, such applications are called Windows Store Apps. Microsoft keeps a 30% cut of sales, and reserves the right not only to kick your app out of the Windows Store, but also to remove your app from customers’ devices. That’s not very open.

The trend these days is for everyone to set up their own app store – whether it’s the Windows Store, Google Play, the Raspberry Pi Store, Salesforce.com AppExchange, Firefox Marketplace, Chrome Web Store, BlackBerry App World, Facebook Apps Center or the Apple App Store. There are lots more. Dozens. Hundreds perhaps.

Every one of these stores affects the openness of the platform – whether the platform is a mobile or desktop device, browser, operating system or cloud-based app. Forget programming language. Forget APIs. The true test of openness is becoming the character of the app store: whether consumers are locked into using only “approved” stores, what restrictions are placed on what may be placed in that app store, and whether developers have the freedom to fully utilize everything the platform can offer. (If the platform vendor’s own apps, or those from preferred partners, can access APIs that are not allowed in the app store, that’s not a good sign.)

Nearly every platform is a walled garden. The walls aren’t simple; they make Calabi-Yau manifolds look like child’s play. The walls twist. They turn. They move.

Forget standards bodies. Today’s openness is the openness of the walled garden.

In 1996, according to Wikipedia, Sun Microsystems promised:

Java’s write-once-run-everywhere capability along with its easy accessibility have propelled the software and Internet communities to embrace it as the de facto standard for writing applications for complex networks

That was version 1.0. Version 2.0 of the write-once-run-everywhere promise goes to HTML5. There are four real challenges with pure HTML5 apps, though, especially on mobile devices:

  • The specification isn’t finished, and devices and browsers don’t always support the full draft spec.
  • Run-time performance can be slow, especially on older mobile devices – and HTML5 app developers can’t always manage or predict client performance.
  • Network latency can adversely affect the user experience, especially compared to native apps.
  • HTML5 apps can’t always access native device features – and what they can access may depend on the client operating system, browser design and sandbox constraints.

What should you do about it? According to Ethan Evans, Director of App Developer Services at Amazon.com, the answer is to build hybrid apps that combine HTML5 with native code.

In his keynote address at AnDevCon earlier this month, Evans said that there are three essential elements to building hybrid apps. First, architect the correct division between native code and HTML5 code. Second, make sure the native code is blindingly fast. Third, make sure the HTML5/JavaScript is blindingly fast.

Performance is the key to giving a good user experience, he said, with the goal that a native app and a hybrid app should be indistinguishable. That’s not easy, especially on older devices with underpowered CPUs and GPUs, small amounts of memory and, of course, poor support for HTML5 in the stack.

“Old versions of Android live forever,” Evans said, along with old versions of WebKit. Hardware acceleration varies wildly, as does the browser’s use of hardware acceleration. A real problem is flinging – that is, rapidly trying to scroll data that’s being fed from the Internet. Native code can handle that well; HTML5 can fall flat.

Thus, Evans said, you need to go native. His heuristic is:

  • HTML5 is good for parts of the user experience that involve relatively low interactivity. For example, text and static display, video playback, showing basic online content, handling basic actions like payment portals.
  • HTML5 is less good when there is more user interactivity. For example, scrolling, complex physics that use native APIs, multiple concurrent sounds, sustained high frame rates, multi-touch or gesture recognition.
  • HTML5 is also a challenge when you need access to hardware features or other applications on the device, such as the camera, calendar or contacts.
  • Cross-platform HTML5 is difficult to optimize for different CPUs, GPUs and operating system versions, or even to accommodate single-core vs. multi-core devices.
  • Native code, by contrast, is good at handling the performance issues, assuming that you can build and test on all the key platforms. That means that you’ll have to port.
  • With HTML5, code updates are handled on the server. When building native apps, code updates will require app upgrades. That’s fast and easy on Android, but slow and hard on iOS due to Apple’s review process.
  • Building a good user interface is relatively easy using HTML5 and CSS, but is harder using native code. Testing that user interface is much harder with native code due to the variations you will encounter.

Bottom line, says Amazon’s Ethan Evans: HTML5 + CSS + JavaScript + Native = Good.
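
Evans didn’t show code in his keynote, but here is a minimal sketch of what that hybrid pattern can look like on Android (my own illustration, with placeholder names like HybridActivity, NativeBridge and index.html, not anything from Amazon): a WebView hosts the HTML5 half of the app, and a small Java bridge object exposes a native capability to the page’s JavaScript.

// A minimal Android hybrid-app sketch: the HTML5 UI lives in a
// WebView; native Java code is exposed to JavaScript via a bridge.
import android.app.Activity;
import android.os.Bundle;
import android.webkit.JavascriptInterface;
import android.webkit.WebView;

public class HybridActivity extends Activity {

    // Methods annotated @JavascriptInterface become callable from the
    // page's JavaScript through the global name registered below.
    public static class NativeBridge {
        @JavascriptInterface
        public String getDeviceModel() {
            // Native code reaches device details the HTML5 side can't.
            return android.os.Build.MODEL;
        }
    }

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        WebView webView = new WebView(this);
        webView.getSettings().setJavaScriptEnabled(true);
        // Expose the bridge to the page's JavaScript as window.Native.
        webView.addJavascriptInterface(new NativeBridge(), "Native");
        // Load the HTML5 half; index.html is a placeholder asset name.
        webView.loadUrl("file:///android_asset/index.html");
        setContentView(webView);
    }
}

Inside index.html, a script would call Native.getDeviceModel() to cross the bridge. Where you draw the line between the two halves is exactly what Evans’ heuristic above is for: low-interactivity content stays in HTML5, and anything performance-critical or hardware-dependent goes native.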