As I write this on Friday, Apr. 19, it’s been a rough week. A tragic week. Boston is on lockdown, as the hunt for the suspected Boston Marathon bombers continues. Explosion at a fertilizer plant in Texas. Killings in Syria. Suicide bombings in Iraq. And much more besides.

The Boston incident struck me hard. Not only as a native New Englander who loves that city, and not only because I have so many friends and family there, but also because I was near Copley Square only a week earlier. My heart goes out to all of the past week’s victims, in Boston and worldwide.

Changing the subject entirely: I’d like to share some data compiled by Black Duck Software and North Bridge Venture Partners. This is their seventh annual report about open source software (OSS) adoption. The notes are analysis from Black Duck and North Bridge.

How important will the following trends be for open source over the next 2-3 years?

#1 Innovation (88.6%)
#2 Knowledge and Culture in Academia (86.4%)
#3 Adoption of OSS into non-technical segments (86.3%)
#4 OSS Development methods adopted inside businesses (79.3%)
#5 Increased awareness of OSS by consumers (71.9%)
#6 Growth of industry specific communities (63.3%)

Note: Over 86% of respondents ranked Innovation and Knowledge and Culture of OSS in Academia as important/very important.

How important are the following factors to the adoption and use of open source? Ranked in response order:

#1 – Better Quality
#2 – Freedom from vendor lock-in
#3 – Flexibility, access to libraries of software, extensions, add-ons
#4 – Elasticity, ability to scale at little cost or penalty
#5 – Superior security
#6 – Pace of innovation
#7 – Lower costs
#8 – Access to source code

Note: Quality jumped to #1 this year, from third place in 2012.

How important are the following factors when choosing between using open source and proprietary alternatives? Ranked in response order:

#1 – Competitive features/technical capabilities
#2 – Security concerns
#3 – Cost of ownership
#4 – Internal technical skills
#5 – Familiarity with OSS Solutions
#6 – Deployment complexity
#7 – Legal concerns about licensing

Note: A surprising result was that “Formal Commercial Vendor Support” was ranked as the least important factor – 12% of respondents ranked it as unimportant. Support has traditionally been held as an important requirement by large IT organizations, but with awareness of OSS rising, that requirement is rapidly diminishing.

When hiring new software developers, how important are the following aspects of open source experience? Ranked in response order:

2012
#1 – Variety of projects
#2 – Code contributions
#3 – Experience with major projects
#4 – Experience as a committer
#5 – Community management experience

2013
#1 – Experience with relevant/specific projects
#2 – Code contributions
#3 – Experience with a variety of projects
#4 – Experience as a committer
#5 – Community management experience

Note: The 2013 results signal a shift to “deep vs. broad experience” where respondents are most interested in specific OSS project experience vs. a variety of projects, which was #1 in 2012.

There is a lot more data in the Future of Open Source 2013 survey. Go check it out. 

Last week, we held the debut Big Data TechCon in Cambridge, Mass. It was a huge success – more attendees than we expected, which is great. (With a debut event, you never really know.)

We had lots of sessions, many of which were like trying to drink from a fire hose. That’s a good thing.

A common theme was that there is no single thing called Big Data. There are oodles of problems that have to do with capturing, processing and storing large quantities of structured and unstructured data. Some of those problems are called Big Data today, but some have evolved out of diverse disciplines like data management, data warehousing, business intelligence and matrix-based statistics.

Problems that seemed simple to solve when you were talking about megabytes or terabytes are not simple when you’re talking about petabytes.

You may have heard about the “Four V’s of Big Data” – Volume, Velocity, Variety and Veracity. Some Big Data problems are impacted by some of these V’s. Other Big Data problems are impacted by other V’s.

Think about problem domains where you have very large multidimensional data sets to be analyzed, like insurance or protein folding. Those petabytes are static or updated somewhat slowly. However, you’d like to be able to run a broad range of queries. That’s an intersection of data warehousing and business intelligence. You’ve got volume and veracity. Not much variety. Velocity is important on reporting, not on data management.

Or you might have a huge mass of real-time data. Imagine a wide variety of people, like in a social network, constantly creating all different types of data, from text to links to audio to video to photos to chats to comments. You not only have to store all of this, but also quickly decide what to present to whom, through relationships, permissions and filters, and also implement a behind-the-scenes recommendation engine to prioritize the flow. Oh, and you have to do it all sub-second. That’s all four V’s coming into play.

Much in Big Data has to do with how you model the data or how you visualize it. In non-trivial cases, there are many ways of implementing a solution. Some run faster, some are slower; some scale more, others scale less; some can be done by coding into your existing data infrastructure, and others require drastic actions that bolt on new systems or invite rip-and-replace.

Big Data is fascinating. Please join us for the second Big Data TechCon, coming to the San Francisco Bay Area in October. See www.bigdatatechcon.com.

While in Cambridge wrapping up the conference, I received a press release from IDC: “PC Shipments Post the Steepest Decline Ever in a Single Quarter, According to IDC.”

To selectively quote:

Worldwide PC shipments totaled 76.3 million units in the first quarter of 2013 (1Q13), down -13.9% compared to the same quarter in 2012 and worse than the forecast decline of -7.7%.

Despite some mild improvement in the economic environment and some new PC models offering Windows 8, PC shipments were down significantly across all regions compared to a year ago. Fading Mini Notebook shipments have taken a big chunk out of the low-end market while tablets and smartphones continue to divert consumer spending. PC industry efforts to offer touch capabilities and ultraslim systems have been hampered by traditional barriers of price and component supply, as well as a weak reception for Windows 8. The PC industry is struggling to identify innovations that differentiate PCs from other products and inspire consumers to buy, and instead is meeting significant resistance to changes perceived as cumbersome or costly.

The industry is going through a critical crossroads, and strategic choices will have to be made as to how to compete with the proliferation of alternative devices and remain relevant to the consumer. 

It’s all about the tablets, folks. That’s right: iPads and Android-based devices like the Samsung Galaxy, Kindle Fire, Barnes & Noble Nook and Google Nexus. Attempts to make standard PCs more tablet-like (such as the Microsoft Surface devices) just aren’t cutting it. Just as we moved from minicomputers to desktops, and from desktops to notebooks, we are moving from notebooks to tablets.

(I spent most of the time at the Big Data TechCon working on a 7-inch tablet with a Bluetooth keyboard. I barely used my notebook at all. The tablet/keyboard had a screen big enough to write stories with, a real keyboard with keys, and best of all, would fit into my pocket.)

Just as desktops/notebooks have different operating systems, applications, data storage models and user experiences than minicomputers (and minicomputer terminals), so too the successful tablet devices aren’t going to look like a notebook with a touchscreen. Apps, not applications; cloud-based storage; massively interconnected networks; inherently social. We are at an inflection point. There’s no going back.

Git, the open-source version control system, is becoming popular with enterprise developers. Or so it appears not only from anecdotal evidence I hear from developers all the time, but also from a new marketing study from CollabNet.

The study, called “The State of Git in the Enterprise,” was conducted by InformationWeek, but was paid for by CollabNet, which coincidentally sells tools and services to help development teams use Git. You should bear that in mind when interpreting the study,  which you can only receive by giving CollabNet your contact information.

That said, there are five interesting findings in the January 2013 study, which surveyed 248 development and business technology professionals at companies with 100 or more employees who use source code management tools:

First: Most developers are not using or planning to use Git. But of those that do, usage is split between on-premises deployments and public/private clouds.

How do you deploy (or intend to deploy by 2013) Git?

On premises: 30%
Private cloud/virtualized: 23%
Public cloud: 10%
Don’t use/do not intend to use: 54%

Second: What best describes your use of Git today?

Git is our corporate standard: 5%
Git is one of several SCMs we use: 20%
Still kicking the tires on Git: 18%
Not currently using Git: 57%

Third: What do you like about Git?

Power branching/merging: 61%
Network performance: 53%
Everyone seems to be using it: 35%
It’s our corporate standard: 13%

Fourth: How do you conduct code reviews?

Automated and manual: 46%
Manual only: 24%
Manual, but only occasionally: 17%
Automated only: 7%
Not at all: 6%

Fifth: By the end of 2013, which SCM tools do you plan to use?

Microsoft TFS/VSS: 33%
Subversion: 32%
Git: 27%
IBM ClearCase: 22%
CVS: 21%
Perforce: 11%
Mercurial: 7%
None: 4%

Some of these technologies have been around for a long time. For example, CVS first appeared in 1986. CollabNet started Subversion in 2000, and it’s now a top-level Apache project. By contrast, Git’s initial release was only in 2005, and it flew under the radar for years before getting traction. Git’s rise to the third position in this study is impressive.

Packing lists – check.  Supplies ordered – check. Show bags on schedule – check. Speakers all confirmed – check. Missing laptop power cord located – check. Airline tickets verified – check. Candy purchased for reservation desk – check.

Our team is getting excited for the debut Big Data TechCon. It’s coming up very shortly: April 8-10 in Boston.

What drove us to launch this technical conference? Frustration, really, that there were mainly two types of face-to-face conferences surrounding Big Data.

The first were executive-level meetings that could be summarized as “Here’s WHY you should be jumping on the Big Data bandwagon.” Thought leadership, perhaps, but little that someone could walk away with.

The second were training sessions or user meetings focused on specific technologies or products. Those are great if you are already using those products and need to train your staff on specific tools.

What was missing? A practical, technical conference focused on HOW TO do Big Data. How to choose between a wide variety of tools and technologies, without bias toward a particular platform. How to kick off a Big Data project, or scale existing projects. How to avoid pitfalls. How to define and measure success. How to leverage emerging best practices.

All that with dozens of tutorials and technical classes, plus inspiring keynotes and lots and lots of networking opportunities with the expert speakers and fellow attendees. After all, folks learn in both the formal classroom and the informal hallway and lunch table.

The result – Big Data TechCon, April 8-10 in Boston. If you are thinking about attending, now’s the time to sign up. Learn more at www.bigdatatechcon.com.

See you in Boston!

What is going on at Google? I’m not sure, and neither are the usual pundits.

Last week, Google announced that Andy Rubin, the long-time head of the Android team, is moving to another role within the company, and will be replaced by Sundar Pichai — the current head of the company’s Chrome efforts.

To quote from Larry Page’s post:

Having exceeded even the crazy ambitious goals we dreamed of for Android—and with a really strong leadership team in place—Andy’s decided it’s time to hand over the reins and start a new chapter at Google. Andy, more moonshots please!

Going forward, Sundar Pichai will lead Android, in addition to his existing work with Chrome and Apps. Sundar has a talent for creating products that are technically excellent yet easy to use—and he loves a big bet. Take Chrome, for example. In 2008, people asked whether the world really needed another browser. Today Chrome has hundreds of millions of happy users and is growing fast thanks to its speed, simplicity and security. So while Andy’s a really hard act to follow, I know Sundar will do a tremendous job doubling down on Android as we work to push the ecosystem forward. 

What is the real story? The obvious speculation is that Google may have too many mobile platforms, and may look to merge the Android and Chrome OS operating systems.

Ryan Tate of Wired wrote, in “Andy Rubin and the Great Narrowing of Google,”

The two operating system chiefs have long clashed as part of a political struggle between Rubin’s Android and Pichai’s Chrome OS, and the very different views of the future each man espouses. The two operating systems, both based on Linux, are converging, with Android growing into tablets and Chrome OS shrinking into smaller and smaller laptops, including some powered by chips using the ARM architecture popular in smartphones.

Tate continues,

There’s a certain logic to consolidating the two operating systems, but it does seem odd that the man in charge of Android – far and away the more successful and promising of the two systems – did not end up on top. And there are hints that the move came as something of a surprise even inside the company; Rubin’s name was dropped from a SXSW keynote just a few days before the Austin, Texas conference began.

Other pundits seem equally confused. Hopefully, we’ll know what’s going on soon. Registration for Google’s I/O conference opened – and closed – on March 13. If you blinked, you missed it. We’ll obviously be covering the Android side of this at our own AnDevCon conference, coming to Boston on May 28-31.

What do companies use Big Data technologies to analyze? Sales transactions. Social media trends. Scientific data. Social media trends. Weather readings. Social media trends. Prices for raw materials. Social media trends. Stock values. Social media trends. Web logs. And social media trends.

Sometimes I wonder if the entire point of Big Data is to sort through tweets. And Pinterest, Facebook and Tumblr – as well as closed social media networks like Salesforce.com’s Chatter and Microsoft’s recently acquired Yammer.

Perhaps this is a reflection that “social” is more than a way for businesses to disintermediate and reach customers directly. (Remember “disintermediation”? It was the go-to word during the early dot-com era of B-to-B and B-to-C e-commerce, and implied unlimited profits.)

Social media – nowadays referred to simply as “social” – is proving to be very effective in helping organizations improve communications. Document repositories and databases are essential, of course. Portal systems are vital. But traditional ways of communication, namely e-mail and standard one-to-one instant messaging, aren’t getting the job done, not in big organizations. Employees drown in their overflowing inboxes, and don’t know whom to message for information or input or workflow.

Enter a new Big Data angle on social. It’s one that goes beyond sifting through public messages to identifying what’s trending so you can sell more products or get on top of customer dissatisfaction before it goes viral. (Not to say those aren’t important, but that’s only the tip of the iceberg.)

What Big Data analysis can show you is not just what is going on and what the trends are, but who is driving them, or who, at least, is on top of the curve.

Use analytics to find out which of your customers are tastemakers – and cultivate them. Find out which of your partners are generating the most traction – and deepen those ties. And find out which of your employees, through in-house social tools like instant messaging, blogs, wikis and forums, are posting the best information, are attracting followers and comments, and are otherwise leading the pack.

Treasure those people, especially those who are in your IT and development departments.

Big Social is the key to your organization’s future. Big Data helps you find and turn that key. We’ll cover both those trends at Big Data TechCon, coming to Boston from April 8-10. Hope to see you there.

Everything, it seems, is a game. When I use the Waze navigation app on my smartphone, I earn status for reporting red-light cameras. What’s next: If I check code into the version-control system early, do I win a prize? Get points? Become a Code Warrior Level IV?

Turning software development into a game is certainly not entirely new. Some people live for “winning,” and like getting points – or status – by committing code to open-source projects or by reporting bugs as a beta tester. For the most part, however, that was minor. The main reason to commit the code or document the defect was to make the product better. Gaining status should be a secondary consideration – a reward, if you will, not a motivator.

For some enterprise workers, however, gamification of the job can be more than a perk or added bonus. It may be the primary motivator for a generation reared on computer games. Yes, you’ll get paid if you get your job done (and fired if you don’t). But you’ll work harder if you are encouraged to compete against other colleagues, against other teams, against your own previous high score.

Would gamification work with, say, me? I don’t think so. But from what I gather, it’s truly a generational divide. I’m a Baby Boomer; when I was a programmer, Back in the Day, I put in my hours for a paycheck and promotions. What I cared about most: What my boss thought about my work.

For Generation Y / Millennials (in the U.S., generally considered to be those born between 1982 and 2000), it’s a different game.

Here are some resources that I’ve found about gamification in the software development profession. What do you think about them? Do you use gamification techniques in your organization to motivate your workers?

Gamification in Software Development and Agile

Gamifying Software Engineering and Maintenance

Gamifying software still in its infancy, but useful for some

Some Thoughts on Gamification and Software

TED Talk: Gaming can make a better world 

Just about everyone is talking about Big Data, and I’m not only saying that because I’m conference chair for Big Data TechCon, coming up in April in Boston.

Take Microsoft, for example. On Feb. 13, the company released survey results that talked about their big customers’ biggest data challenges, and how those relate to Big Data.

In its “Big Data Trends: 2013” study, Microsoft talked to 282 U.S. IT decision-makers who are responsible for business intelligence, and presumably, other data-related issues. To quote some findings from Microsoft’s summary of that study:

• 32% expect the amount of data they store to double in the next two to three years.

• 62% of respondents currently store at least 100 TB of data. 

• Respondents reported an average of 38% of their current data as unstructured.

• 89% already have a dedicated budget for a Big Data solution.

• 51% of companies surveyed are in the middle stages of planning a Big Data solution.

• 13% have fully deployed a Big Data solution.

• 72% have begun the planning process but have not  yet tested or deployed a solution; of those currently planning, 76% expect to have a solution implemented in less than one year.

• 62% said developing near-real-time predictive analytics or data-mining capabilities during the next 24 months is extremely important.

• 58% rated expanding data storage infrastructure and resources as extremely important.

• 53% rated increased amounts of unstructured data to analyze as extremely important.

• Respondents expect an average of 37% growth in data during the next two to three years.

I can’t help but be delighted by the final bullet point from Microsoft’s study. “Most respondents (54 percent) listed industry conferences as one of the two most strategic and reliable sources of information on big data.”

Hope to see you at Big Data TechCon.

Cloud computing is seductive. Incredibly so. Reduced capital costs. No more power and cooling of a server closet or data center. High-speed Internet backbones. Outsourced disaster recovery. Advanced edge caching. Deployments are lightning fast, with capacity ramp-ups only a mouse-click away – making the cloud a panacea for Big Data applications.

Cloud computing is scary. Vendors come and vendors go. Failures happen, and they are out of your control. Software is updated, sometimes with your knowledge, sometimes not. You have to take their word for security. And the costs aren’t always lower.

An interesting new study from KPMG, “The Cloud Takes Shape,” digs into the expectations of cloud deployment – and the realities.

According to the study, cloud migration was generally a success, but not without complications. It showed that 33% of senior executives using the cloud said that the implementation, transition and integration costs were too high; 30% cited challenges with data loss and privacy risks; and 30% were worried about the loss of control. Also, 26% were worried about the lack of visibility into future demand and associated costs; 26% fretted about the lack of interoperability standards between cloud providers; and 21% were challenged by the risk of intellectual property theft.

There’s a lot more depth in the study, and I encourage you to download and browse through it. (Given that KPMG is a big financial and tax consulting firm, there’s a lot in the report about the tax challenges and opportunities in cloud computing.)

The study concludes,

Our survey finds that the majority of organizations around the world have already begun to adopt some form of cloud (or ‘as-a-service’) technology within their enterprise, and all signs indicate that this is just the beginning; respondents expect to move more business processes to the cloud in the next 18 months, gain more budget for cloud implementation and spend less time building and defending the cloud business case to their leadership. Clearly, the business is becoming more comfortable with the benefits and associated risks that cloud brings.

With experience comes insight. It is not surprising, therefore, that the top cloud-related challenges facing business and IT leaders has evolved from concerns about security and performance capability to instead focus on some of the ‘nuts and bolts’ of cloud implementation. Tactical challenges such as higher than expected implementation costs, integration challenges and loss of control now loom large on the cloud business agenda, demonstrating that – as organizations expand their usage and gain more experience in the cloud – focus tends to turn towards implementation, operational and governance challenges.

Big Data can sometimes mean Big Obstacles. And often those obstacles are simply that the Big Data isn’t there.

That’s what more than 1400 CIOs told Robert Half Technology, a staffing agency. According to the study, whose data was released in January, only 23% of CIOs said their companies collected customer data about demographics or buying habits. Of those that did collect this type of data, 53% of the CIOs said they had insufficient staff to access or analyze that data.

Ouch. 

The report was part of Robert Half Technology’s 2013 Salary Guide. There is a page about Big Data, which says,

When you consider that more than 2.7 billion likes and comments are generated on Facebook every day — and that 15 out of 17 U.S. business sectors have more data stored per company than the U.S. Library of Congress — it’s easy to understand why companies are seeking technology professionals who can crack the big data “code.”

Until recently, information collected and stored by companies was a mishmash waiting to be synthesized. This was because most companies didn’t have an effective way to aggregate it.

Now, more powerful and cost-effective computing solutions are allowing companies of all sizes to extract the value of their data quickly and efficiently. And when companies have the ability to tap a gold mine of knowledge locked in data warehouses, or quickly uncover relevant patterns in data coming from dynamic sources such as the Web, it helps them create more personalized online experiences for customers, develop highly targeted marketing campaigns, optimize business processes and more.

“In contrast to classical logical systems, fuzzy logic is aimed at a formalization of modes of reasoning that are approximate rather than exact. Basically, a fuzzy logical system may be viewed as a result of fuzzifying a standard logical system. Thus, one may speak of fuzzy predicate logic, fuzzy modal logic, fuzzy default logic, fuzzy multivalued logic, fuzzy epistemic logic, and so on. In this perspective, fuzzy logic is essentially a union of fuzzified logical systems in which precise reasoning is viewed as a limiting case of approximate reasoning.”

So began one of the most important technical articles published by AI Expert Magazine during my tenure as its editor: “The Calculus of Fuzzy If/Then Rules,” by Lotfi A. Zadeh, in March 1992.
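
To make the quoted contrast concrete, here is a toy sketch (my own illustration, not drawn from Zadeh’s article) of fuzzy membership: truth becomes a degree between 0 and 1, and a crisp true/false test is just the limiting case of that ramp.

```java
// Toy illustration of fuzzy vs. classical membership (not from Zadeh's article):
// "hot" is a matter of degree between 0 and 1, while a crisp threshold is the
// limiting case in which the transition band shrinks to nothing.
public class FuzzyDemo {
    // Degree to which a temperature is "hot": 0 below 20 C, 1 above 35 C,
    // and a linear ramp in between.
    static double fuzzyHot(double celsius) {
        if (celsius <= 20) return 0.0;
        if (celsius >= 35) return 1.0;
        return (celsius - 20) / 15.0;
    }

    // Classical (crisp) membership: strictly true or false at a 30 C cutoff.
    static boolean crispHot(double celsius) {
        return celsius >= 30;
    }

    public static void main(String[] args) {
        for (double t : new double[] {18, 25, 30, 36}) {
            System.out.printf("%.0f C  fuzzy=%.2f  crisp=%b%n",
                    t, fuzzyHot(t), crispHot(t));
        }
    }
}
```

Under the fuzzy view, 25 degrees is “hot” to degree 0.33; the crisp test simply says false.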

Even then, more than 20 years ago, Dr. Zadeh was revered as the father of fuzzy logic. I recall my interactions with him on that article very fondly.

I was delighted to learn that Fundacion BBVA, the philanthropic foundation of the Spanish bank BBVA, has recognized Dr. Zadeh with their 2012 Frontiers of Knowledge Award.

To quote from the Web page for the award,

The BBVA Foundation Frontiers of Knowledge Award in the Information and Communication Technologies (ICT) category has been granted in this fifth edition to the electrical engineer Lotfi A. Zadeh, “for the invention and development of fuzzy logic.” This “revolutionary” breakthrough, affirms the jury in its citation, has enabled machines to work with imprecise concepts, in the same way humans do, and thus secure more efficient results more aligned with reality. In the last fifty years, this methodology has generated over 50,000 patents in Japan and the U.S. alone. 

The key paper, the one that started it all, was “Fuzzy Sets,” published by Dr. Zadeh in June 1965 in the journal “Information and Control.” You can read the paper here as a PDF. I would not call it light reading.

Congratulations, Dr. Zadeh, for your many contributions to computer science and software engineering – and to the modern world.

Modern companies thrive by harnessing and interpreting data. The more data we have, and the more we focus on analyzing it, the better we can make decisions. Data about our customers, data about purchasing patterns, data about network throughput, data in server logs, data in sales receipts. When we crunch our internal data, and cross-reference it against external data sources, we get goodness. That’s what Big Data is all about.

Data crunching and data correlation aren’t new, of course. That’s what business intelligence is all about. Spotting trends and making predictions is what business analysts have been doing for 40 years or more. From weather forecasters to the World Bank, from particle physicists to political pollsters, all that’s new is that our technology has gotten better. Our hardware, our software and our algorithms are a lot better.

Admittedly, some political pollsters in the recent United States presidential election didn’t seem to have better data analytics. That’s another story for another day.

Is “Big Data” the best term for talking about data acquisition and predictive analytics using Hadoop, Map/Reduce, Cassandra, Avro, HBase, NoSQL databases and so on? Maybe. Folks like Strata conference chair Edd Dumbill and TechCrunch editor Leena Rao think not.

Indeed, Rao suggests, “Let’s banish the term ‘big data’ with pivot, cloud and all the other meaningless buzzwords we have grown to hate.” She continues, “the term itself is outdated, and consists of an overly general set of words that don’t reflect what is actually happening now with data. It’s no longer about big data, it’s about what you can do with the data.”

Yes, “Big Data” is a fairly generic phrase, and our focus should rightfully be on benefits, not on the 1s and 0s themselves. However, the phrase neatly fronts a broad concept that plenty of people seem to understand very well, thank you very much. Language is a tool; if the phrase Big Data gets the job done, we’ll stick with it, both as a term to use in SD Times and as the name of our technical training conference focused on data acquisition, predictive analytics, etc., Big Data TechCon.

The name doesn’t matter. Big Data. Business Intelligence. Predictive Analytics. Decision Support. Whatever. What matters is that we’re doing it.

Today’s word is “open.” What does open mean in terms of open platforms and open standards? It’s a tricky concept. Is Windows more open than Mac OS X? Is Linux more open than Solaris? Is Android more open than iOS? Is the Java language more open than C#? Is Firefox more open than Chrome? Is SQL Server more open than DB2?

The answer in all these cases can be summarized in two more words: “That depends.” To some purists, anything that is owned by a non-commercial project or standards body is open. By contrast, anything that is owned by a company, or controlled by a company, is by definition not open.

There are infinite shades of gray. Openness isn’t a line or a spectrum, and it’s not a two-dimensional matrix either. There are countless dimensions.

Take iOS. The language used to program iPhone/iPad apps is Objective-C. It’s pretty open – certainly, some would say that Objective-C is more open than Java, which is owned and controlled by Oracle. Since iOS uses Objective-C, and Android uses Java, doesn’t that make iOS open, and Android not open?

But wait – perhaps when people talk about the openness of the mobile platforms, they mean whether there is a walled garden around the platform’s primary app store. If you want to distribute native apps through Apple’s store, you must meet Apple’s criteria in lots of ways, from the use of APIs to revenue sharing for in-app purchases. That’s not very open. If you want to distribute native apps to Android devices, you can choose Google Play, where the standards for app acceptance are fairly low, or another app store (like Amazon’s), or even set up your own. That’s more open.

If you want to build apps that are distributed and use Microsoft’s new tiled user experience, you have to put them into the Windows Store. In fact, such applications are called Windows Store Apps. Microsoft keeps a 30% cut of sales, and reserves the right to not only kick your app out of the Windows Store, but also remove your app from customers’ devices. That’s not very open.

The trend these days is for everyone to set up their own app store – whether it’s the Windows Store, Google Play, the Raspberry Pi Store, Salesforce.com AppExchange, Firefox Marketplace, Chrome Web Store, BlackBerry App World, Facebook Apps Center or the Apple App Store. There are lots more. Dozens. Hundreds perhaps.

Every one of these stores affects the openness of the platform – whether the platform is a mobile or desktop device, browser, operating system or cloud-based app. Forget programming language. Forget APIs. The true test of openness is becoming the character of the app store: whether consumers are locked into using only “approved” stores, what restrictions are placed on what may be placed in that app store, and whether developers have the freedom to fully utilize everything the platform can offer. (If the platform vendor’s own apps, or those from preferred partners, can access APIs that are not allowed in the app store, that’s not a good sign.)

Nearly every platform is a walled garden. The walls aren’t simple; they make Calabi-Yau manifolds look like child’s play. The walls twist. They turn. They move.

Forget standards bodies. Today’s openness is the openness of the walled garden.

In 1996, according to Wikipedia, Sun Microsystems promised:

Java’s write-once-run-everywhere capability along with its easy accessibility have propelled the software and Internet communities to embrace it as the de facto standard for writing applications for complex networks

That was version 1.0. Version 2.0 of the write-once-run-everywhere promise goes to HTML5. There are four real challenges with pure HTML5 apps, though, especially on mobile devices:

  • The specification isn’t finished, and devices and browsers don’t always support the full draft spec.
  • Run-time performance can be slow, especially on older mobile devices – and HTML5 app developers can’t always manage or predict client performance.
  • Network latency can adversely affect the user experience, especially compared to native apps.
  • HTML5 apps can’t always access native device features – and what they can access may depend on the client operating system, browser design and sandbox constraints.

What should you do about it? According to Ethan Evans, Director of App Developer Services at Amazon.com, the answer is to build hybrid apps that combine HTML5 with native code.

In his keynote address at AnDevCon earlier this month, Evans said that there are three essential elements to building hybrid apps. First, architect the correct division between native code and HTML5 code. Second, make sure the native code is blindingly fast. Third, make sure the HTML5/JavaScript is blindingly fast.

Performance is the key to giving a good user experience, he said, with the goal that a native app and a hybrid app should be indistinguishable. That’s not easy, especially on older devices with underpowered CPUs and GPUs, small amounts of memory, and of course, poor support for HTML5 in the stack.

“Old versions of Android live forever,” Evans said, along with old versions of Webkit. Hardware acceleration varies wildly, as does the browser’s use of hardware acceleration. A real problem is flinging – that is, rapidly trying to scroll data that’s being fed from the Internet. Native code can handle that well; HTML5 can fall flat.

Thus, Evans said, you need to go native. His heuristic is:

  • HTML5 is good for parts of the user experience that involve relatively low interactivity. For example, text and static display, video playback, showing basic online content, handling basic actions like payment portals.
  • HTML5 is less good when there is more user interactivity. For example, scrolling, complex physics that use native APIs, multiple concurrent sounds, sustained high frame rates, multi-touch or gesture recognition.
  • HTML5 is also a challenge when you need access to hardware features or other applications on the device, such as the camera, calendar or contacts.
  • Cross-platform HTML5 is difficult to optimize for different CPUs, GPUs and operating system versions, or even to accommodate single-core vs. multi-core devices.
  • Native code, by contrast, is good at handling the performance issues, assuming that you can build and test on all the key platforms. That means that you’ll have to port.
  • With HTML5, code updates are handled on the server. When building native apps, code updates will require app upgrades. That’s fast and easy on Android, but slow and hard on iOS due to Apple’s review process.
  • Building a good user interface is relatively easy using HTML5 and CSS, but is harder using native code. Testing that user interface is much harder with native code due to the variations you will encounter.

Bottom line, says Amazon’s Ethan Evans: HTML5 + CSS + JavaScript + Native = Good.
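
To ground those guidelines, here is a minimal sketch of the hybrid pattern on Android, assuming a WebView-based front end. The class, method and asset names are my own illustration, not code from Evans’ talk: the HTML5/JavaScript layer handles the low-interactivity UI, while anything performance- or hardware-sensitive crosses a JavaScript bridge into native code.

```java
import android.content.Context;
import android.os.Handler;
import android.os.Looper;
import android.webkit.JavascriptInterface;
import android.webkit.WebView;
import android.widget.Toast;

// Minimal hybrid-app sketch: the HTML5/JavaScript layer draws the UI, and native
// code is exposed to the page through a JavaScript bridge. Names are illustrative.
public class HybridBridge {
    private final Context context;
    private final Handler mainThread = new Handler(Looper.getMainLooper());

    public HybridBridge(Context context) {
        this.context = context;
    }

    // Callable from page JavaScript as AndroidNative.showToast("...").
    // Bridge calls arrive on a background thread, so hop to the UI thread.
    @JavascriptInterface
    public void showToast(final String message) {
        mainThread.post(new Runnable() {
            @Override
            public void run() {
                Toast.makeText(context, message, Toast.LENGTH_SHORT).show();
            }
        });
    }

    // Wiring: enable JavaScript, register the bridge, and load the bundled HTML5 UI.
    public static void attach(WebView webView, Context context) {
        webView.getSettings().setJavaScriptEnabled(true);
        webView.addJavascriptInterface(new HybridBridge(context), "AndroidNative");
        webView.loadUrl("file:///android_asset/index.html");
    }
}
```

From the page side, a button handler can simply call AndroidNative.showToast("Saved"), keeping markup and styling in HTML5/CSS while the platform-specific behavior stays in native code.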

The subject line in today’s email from United Airlines was friendly. “Alan, it’s been a while since your last trip from Austin.”

Friendly, yes. Effective? Not even close.

Alan, you see, lives in northern California, not in central Texas. Alan rarely goes to Austin. Alan has never originated a round trip from Austin.

My most recent trip to Austin was from SFO to AUS on Feb. 13, 2011, returning on Feb. 15, 2011. The trip before that? In 2007.

Technically, United is correct. It indeed has been a while since my last trip from Austin. But who cares? Why in the world would United News & Deals — the “from” name on that marketing email — think that I would be looking for discounted round-trip flights from Austin?

It is Big Data gone bad.

We see examples of this all the time. A friend loves to post snarky screen shots of totally off-base Facebook ads, like the one that offered him ways to “meet big and beautiful women now,” or non-stop ads for luxury vehicles. For some reason, Lexus finds his demographic irresistible. However: My friend and his wife live in Manhattan. They don’t own or want a car.

Behavioral ad targeting relies upon Big Data techniques. Clearly, those techniques are not always effective, as the dating, car-sales and air travel messages demonstrate. There is both art and science to Big Data – gathering the vast quantities of data, processing it quickly and intelligently, and of course, using the information effectively to drive a business purpose like behavioral marketing.

Sometimes it works. Oops, sometimes it doesn’t. Being accurate isn’t the same as being useful.

Where to learn that art and science? Let me suggest Big Data TechCon. Three days, dozens of practical how-to classes that will teach you and your team how to get Big Data right. No, it’s not in Austin— it’s near Boston, from April 8-10, 2013. Hope to see you there— especially if you work for United Airlines or Lexus.

Once upon a time, application programming interfaces were hooks that applications used to tap into operating system services. Want to open a port? Call an API. Need to find a printer? Call an API. Open a window? Call an API. Write to a file? Call an API.

Developers still use classic APIs, of course. They are necessary for both native and managed code. Windows, iOS, Android, Unix, Linux – all are stuffed to the brim with hundreds and thousands of APIs. In fact, one of the most useful features of an integrated development environment like Visual Studio, Eclipse or Xcode is to provide a handy reference to APIs, check their syntax and arguments, and help fill them out with autocomplete.

Classic APIs are fundamental. Cloud-based APIs, which provide loosely coupled function calls to services over the Internet, are more sexy and more dangerous.

The December issue of SD Times contains a feature by Alexa Weber Morales, “Connecting the World with APIs.” She explains that the variety of cloud-based APIs far exceeds the biggest, most visible examples, such as those from Amazon and Google. APIs are everywhere, from social media players like Facebook and Twitter, to business services like MailChimp and Salesforce.com.

Like electricity from the wall socket, or water from the kitchen faucet, it is easy to take cloud-based APIs for granted. Too easy. We outsource core functionality of our applications to cloud-based services, some free, some paid for by subscription. We expect them to work consistently. We expect them to be monolithic and unchanging. We expect them to be fast. We expect them to be secure.

We must not make any of those assumptions. Our software must be able to detect if a cloud-based API is offline or is running slowly, and should be able to handle such a situation gracefully. (I.e., not hang or crash.) We should never assume that APIs are secure and will keep our data safe or our customers’ data safe. We should not expect the API vendor to proactively notify us if they change some of the functionality within the APIs. It’s our job to be on top of any changes.
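
As a concrete illustration of that defensive posture, here is a minimal sketch in plain Java; the endpoint URL and the cached fallback value are placeholders, not a real service or a prescribed design.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Defensive call to a cloud-based API: set explicit timeouts, check the status
// code, and fall back to a cached value rather than hanging or crashing.
public class ResilientApiClient {
    private static final int CONNECT_TIMEOUT_MS = 2000;
    private static final int READ_TIMEOUT_MS = 3000;

    public static String fetchOrFallback(String endpoint, String cachedFallback) {
        HttpURLConnection conn = null;
        try {
            conn = (HttpURLConnection) new URL(endpoint).openConnection();
            conn.setConnectTimeout(CONNECT_TIMEOUT_MS); // fail fast if the service is unreachable
            conn.setReadTimeout(READ_TIMEOUT_MS);       // fail fast if the service is slow
            if (conn.getResponseCode() != HttpURLConnection.HTTP_OK) {
                return cachedFallback;                  // degrade gracefully on server errors
            }
            BufferedReader reader = new BufferedReader(
                    new InputStreamReader(conn.getInputStream(), "UTF-8"));
            StringBuilder body = new StringBuilder();
            String line;
            while ((line = reader.readLine()) != null) {
                body.append(line).append('\n');
            }
            reader.close();
            return body.toString();
        } catch (IOException e) {
            return cachedFallback;                      // timeouts and network failures land here
        } finally {
            if (conn != null) {
                conn.disconnect();
            }
        }
    }
}
```

The same idea extends to monitoring: log slow responses, alert when failure rates climb, and never let one flaky service call block the rest of the application.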

The availability of cloud-based APIs – unlike operating system APIs – is out of our hands. Our decision to upgrade a server’s OS is on our schedule, and we have time to read the documentation. When a mobile platform maker, like Apple, Google or Microsoft, releases a new operating system, we get plenty of notice and have plenty of time to learn about the newest APIs, the changed APIs and the deprecated APIs.

Not true with cloud-based APIs. While the three-letter acronym may be the same, our applications’ calls to RESTful cloud-based APIs are not at all the same as our applications’ calls to native operating system services. While convenient, cloud-based APIs are ephemeral, distant and fundamentally unreliable. Never forget it.

Tomorrow Americans will celebrate Thanksgiving. This is an odd holiday. It’s partly religious, but also partly secular, dating back to the English colonization of eastern North America. A recent tradition is for people to share what they are thankful for. In a lighthearted way, let me share some of my tech-related joys.

• I am thankful for PDF files. Websites that share documents in other formats (such as Microsoft Word) are kludgy, and the document never looks quite right.

• I am thankful for native non-PDF files. Extracting content from PDF files to use in other applications is a time-consuming process that often requires significant post-processing.

• I am thankful that Hewlett-Packard is still in business – for now at least. It’s astonishing how HP bungles acquisition after acquisition after acquisition.

• I am thankful for consistent language specifications, such as C++, Java, HTML4 and JavaScript, which give us a fighting chance at cross-platform compatibility. A world with only proprietary languages would be horrible.

• I am thankful for HTML5 and CSS3, which solve many important problems for application development and deployment.

• I am thankful that most modern operating systems and applications can be updated via the Internet. No more floppies, CDs or DVDs.

• I am thankful that floppies are dead, dead, dead, dead, dead.

• I am thankful that Apple and Microsoft don’t force consumers to purchase applications for their latest desktop operating systems from their app stores. It’s my computer, and I should be able to run any bits that I want.

• I am thankful for Hadoop and its companion Apache projects like Avro, Cassandra, HBase and Pig, which in only a couple of years became the de facto platform for Big Data and a must-know technology for developers.

• I am thankful that Linux exists as a compelling server operating system, as the foundation of Android, and as a driver of innovation.

• I am thankful for RAW photo image files and for Adobe Lightroom to process those RAW files.

• I am thankful for the Microsoft Surface, which is the most exciting new hardware platform since Apple’s iPad and MacBook Air.

• I am thankful to still get a laugh by making the comment, “There’s an app for that!” in random non-tech-related conversations.

• I am thankful for the agile software movement, which has refocused our attention to efficiently creating excellent software, and which has created a new vocabulary for sharing best practices.

• I am thankful for RFID technology, especially as implemented in the East Coast’s E-Zpass and California’s FasTrak toll readers.

• I am thankful that despite the proliferation of e-book readers, technology books are still published on paper. E-books are great for novels and documents meant to be read linearly, but are not so great for learning a new language or studying a platform.

• I am thankful that nobody has figured out how to remotely hack into my car’s telematics systems yet – as far as I know.

• I am thankful for XKCD.

• I am thankful that Oracle seems to be committed to evolving Java and keeping it open.

• I am thankful for the wonderful work done by open-source communities like Apache, Eclipse and Mozilla.

• I am thankful that my Android phone uses an industry-standard Micro-USB connector.

• I am thankful for readers like you, who have made SD Times the leading news source in the software development community.

Happy Thanksgiving to you and yours.

Echosystem. What a marvelous typo! An email from an analyst firm referred several times to a particular software development ecosystem, but in one of the instances, she misspelled “ecosystem” as “echosystem.” As a technology writer and analyst myself, that misspelling immediately set my mind racing. Echosystem. I love it.

An echosystem would be a type of meme. Not the silly graphics that show up on Twitter and Facebook, but more the type of meme envisioned by Richard Dawkins in his book, The Selfish Gene, where an idea or concept takes on a life of its own. In this case, the echosystem is where a meme is simply echoed, and is believed to be true simply because it is repeated so often. In particular, the echosystem would apply to ideas that are repeated by analysts, technology writers and journalists, influential bloggers, and so on.

In another time and place, what I’m now calling the echosystem would be called the bandwagon. I like the idea of a mashup between the bandwagon and the echo chamber being the echosystem.

We have lots of memes in the software development echosystem. For example, that the RIM BlackBerry is toast. Is the platform doomed? Maybe. But it’s become so casual, so matter-of-fact, for writers and analysts to refer to the BlackBerry as toast that repetition is creating its own truthiness (as Stephen Colbert would say).

Another is the echosystem chatter that skeuomorphs are bad, and that Apple is behind the times (and falling behind Android and Windows 8) because its applications have fake leather textures and fake wooden bookshelves. Heck, I only learned the term recently but, repeating the chatter, wrote my own column about it last month, “Fake leather textures on your mobile apps: Good or bad?” True analysis? Maybe. Echoing the echosystem? Definitely.

The echosystem anoints technologies or approaches, and then tears them down again. 

HTML5? The echosystem decided that this draft protocol was the ultimate portable platform, but then pounced when Facebook’s Mark Zuckerberg dissed his company’s efforts.

SOAP? The echosystem loved, loved, loved, loved, loved Simple Object Access Protocol and the WS* methods of implementing Web services, until the new narrative became that RESTful Web services were better. The SOAP bubble popped almost instantly when the meme “WS* is too complicated” spread everywhere.

Echoes in the echosystem pronounced judgment on Windows 8 long before it came out. Echoes weighed in on the future of Java before Oracle’s acquisition of Sun even closed and have chosen JavaScript as the ultimate programming language.

There is a lot of intelligence in the echosystem. Smart people hear what’s being said and repeat it and amplify it and repeat it some more. Sometimes pundits put a lot of thought into their echoes of popular memes. Sometimes pundits are merely hopping onto the bandwagon. The trick is to tell the difference.

It takes a lot to push the U.S. elections off the television screen, but Hurricane Sandy managed the trick. We would like to express our sympathies to those affected by the storm – too many lives were lost, homes and property destroyed, businesses closed.

Microsoft and Google had scheduled tech events for the week of Oct. 29. Build took place as scheduled on the Microsoft campus in Redmond, Wash. Google cancelled its New York City launch event and offered its product rollouts via blog.

The big Microsoft news was the release of Windows Phone 8, with handsets from HTC, Nokia and Samsung set to go on sale starting in November. This follows, of course, the rollout of Windows 8 and the Surface with Windows RT ARM-based notebook/tablet device on Oct. 26.

Everyone I’ve talked to who has used a prerelease Windows Phone 8 has been impressed. (I have a Windows Phone 7.5 device and find the Live Tile apps to be quite usable and exciting. I look forward to installing Windows Phone 7.8 on that device.) Through a strong program of incentives for app developers, there are many flagship apps for the phone already.

There are three compelling messages for Windows Phone developers:

  • You can use Visual Studio and familiar tools to build apps for Windows Phone 8.
  • Windows Phone 8 is almost identical to Windows 8, so there’s minimal learning curve.
  • Windows Phone 8 is a reboot of the platform, which means you’ll face few competitors in the app store, called Windows Phone Store.

Of course, the downside is:

  • The installed base of Windows Phone 8 is nonexistent, compared to the gazillions of iOS, Android and even BlackBerry OS devices.

If I were an entrepreneurial mobile app developer, I’d give Windows Phone 8 a try.

Google’s news was much more incremental: More hardware and a minor rev of Android.

The new hardware, announced in the Google Official Blog, is a new phone called the Nexus 4 and a 10-inch tablet called the Nexus 10. The big tablet has a 2560×1600 display – that’s the same resolution as many 27-inch desktop monitors, and I’d love to see one.

Google’s seven-inch tablet announced during the summer, the Nexus 7, came only with 16GB of storage and WiFi. Now you can get it with 32GB of storage or GSM-based cellular connections using the HSPA+ mobile standard. These are good hardware upgrades, but they aren’t “stop the presses” material in the weeks surrounding the launches of Windows 8, Windows Phone 8, Surface and Apple’s iPad Mini. Heck, the tablet doesn’t even have 4G.

The operating system update is Android 4.2, which is still called Jelly Bean. There are plenty of consumer features, such as a spherical panoramic camera mode and a smarter predictive keyboard. The ability to support multiple users is a good feature, and one that, frankly, is long overdue for these expensive tablets.

Expect to see more about Android 4.2 at AnDevCon IV, coming up Dec. 4-7, 2012. Maybe someone will bring one of those 10-inch tablets so we can see the screen.

Skeuomorph. I learned this word a few weeks ago, after a flurry of stories broke on various mass-media websites about an apparent kerfuffle within Apple about user interface design.

A skeuomorph is a design element that looks functional, but is actually purely ornamental. The automotive world is rife with skeuomorphs. Fake hood scoops on sports cars, plastic tire covers that imitate wire wheels, plastic that’s textured and painted to look like wood.

Check out the Wikipedia page and you’ll see several examples, including the program that sparked a number of articles. That’s Apple’s iCal calendaring application on the company’s iPhone and iPad devices, or Calendar on a Mac.

Look at the calendar on an iPad. See how the app is designed to resemble an old printed calendar, and the top of the app looks like embossed leather, complete with stitching? See how there’s even a little graphic detail that makes it look like pages have been torn out?

Some find that kitschy or distracting. Some find it cute. Some people, like me, never particularly noticed those elements. Some people, apparently like the late Steve Jobs, believe that faux-reality designs like the leather calendar, or like the wooden bookshelves in iBooks, enhance the experience. Some people, apparently, are infuriated by the notion of foisting an outdated analog user-interface model on a digital device.

A number of those infuriated people are quoted in a story in Fast Company, “Will Apple’s Tacky Software-Design Philosophy Cause a Revolt?”

Some of these designs may be nostalgic to older customers, but may be increasingly meaningless to most consumers of digital products. I’ve seen phone-dialer apps that look like the old rotary telephone dial – and they’re stupid, in my humble opinion. So are address-book apps that look like an old Rolodex, or calendar programs that resemble the Pocket Day-Timer I carried around in the 1980s and 1990s.

If you (or your young coworkers) never used a rotary phone, or owned a Rolodex, or carried a Day-Timer, those user interface metaphors make little sense. They don’t enhance productivity, they detract from it.

Worse, the strictures of the old UI metaphors may constrain the creativity of both developers and end users. If you want to innovate and reinvent productivity tools or business applications, you may not want to force your visual design or workflow to conform to old analog models. Microsoft’s Windows 8, in fact, is being held up as the new paradigm – simple colorful squares, no drop shadows or eye candy, and no skeuomorphs. See another article from Fast Company, “Windows 8: The Boldest, Biggest Redesign in Microsoft’s History.”

The jury is in: Samsung was found to have infringed upon numerous Apple mobile patents. The jury’s verdict form, handed down in the United States District Court in San Jose, Calif., found in many cases that the “Samsung entity has diluted any Apple trade dress(es).” What’s more, Apple proved “by a preponderance of the evidence that the Samsung entity’s direction was willful.”

Ouch. This is the worst case scenario for Samsung. Forget about the US$1.049 billion in damages that Samsung is supposed to pay Apple. What this means is that the jury agreed with what everyone knew simply by looking at the hardware and playing with the software: the Samsung Galaxy Tab 10.1 is just like the iPad.

In the short term, this ruling is going to have a chilling effect not only on Samsung, but on every maker of Android devices. The more similar the devices are to Apple’s iOS phones and tablets, the more scared the hardware manufacturers are going to be. (That is, if the verdict stands and isn’t overturned on appeal.)

We can expect to see a lot of introspection within the Android ecosystem. Google, Samsung and the other device manufacturers will look closely, really closely, to make sure they stay away from the specific patents cited in this case.

We can expect to see software updates and hardware guidelines that will take Android devices farther from Apple’s devices.

In the short term, this will depress sales of Android devices. In the longer term, we will see a ton of innovation that will truly differentiate Android from iOS.

For too long, Android handset- and tablet-makers have been trying to get as close to the iPhone and iPad design as possible. It’s not laziness or a lack of technical savvy, in my opinion. It’s just that Apple has done such a good job of defining the smartphone and tablet that consumers expect that, well, that’s just how the platforms should work.

Salespeople want to sell Android devices that are identical to Apple devices, only less expensive.

Consumers who choose Android are sometimes making those selections based on technical merit, but are sometimes looking for something that’s just like an iPhone/iPad, only different. Perhaps they want more memory, perhaps a bigger phone screen, perhaps a smaller tablet screen, perhaps a slide-out keyboard, sometimes a removable battery, sometimes simply a brand that isn’t spelled “Apple.”

Of course, with rumors that Apple is about to release a 7-inch iPad, the job of Android tablet companies is only going to get harder. In my own informal polling, folks who have purchased 7-inch tablets have done so mainly because Apple doesn’t sell one.

For the next year or so, Samsung and the whole Android community will fall back and retrench. That will involve unleashing innovation that may have been stifled, as they preferred to imitate the iOS designs instead of pushing their own ideas.

Imitation may be the most sincere form of flattery – but in the smartphone and tablet markets, imitation is off the table. For good.

This past week, I’ve started receiving messages from eFax telling me that I’ve received a fax, and to click on a link to download my document. As a heavy eFax user, this seemed perfectly normal… until I clicked one of the links. It took me to a malware site. Fortunately, the site was designed to target Windows computers, and simply froze my Mac’s browser.
The faux eFax messages were well designed. They had clean headers and made it through my email service provider’s malware filters.
Since then, six of those malicious messages have appeared. I have to look carefully at the embedded link to distinguish those from genuine eFax messages with links to genuine faxes.
The cybercrime wars continue unabated, with no end in sight. I’ve also received fake emails from UPS, asking me to print out a shipping label… which of course leads me to a phishing site.
Malicious email – whether it’s phishing, a “419”-style confidence scam, or an attempt to add your computers to someone’s botnet – is only one type of cybercrime. Most of the time, as software developers, we’re not focusing on bad emails, unless we’re trying to protect our own email account, or worrying about the design of emails sent into automated systems. SQL Injection delivered by email? That’s nothing I want to see.
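
Still, if email does feed an automated system, the usual defense applies: treat every field as untrusted input and never build SQL by gluing strings together. A minimal sketch of the parameterized-query approach – the table and field names are made up for illustration:

    # Store fields parsed from an inbound email without string-building SQL.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE inbound (sender TEXT, subject TEXT)")

    def store_message(sender, subject):
        # Placeholders keep hostile input (e.g. "x'); DROP TABLE inbound;--")
        # from ever being interpreted as SQL.
        conn.execute("INSERT INTO inbound (sender, subject) VALUES (?, ?)",
                     (sender, subject))
        conn.commit()

    store_message("someone@example.com", "x'); DROP TABLE inbound;--")
    print(conn.execute("SELECT * FROM inbound").fetchall())
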
Most of the attacks that we have to contend with are aimed more directly at our software – or the platforms it is built upon. Some of those attacks come from outside; some from inside.
Some attacks are successful because of our carelessness in coding, testing, installing or configuring our systems. Other attacks succeed despite everything we try to do, because there are vulnerabilities we don’t know about, or don’t know how to defend against. And sometimes we don’t even know that a successful attack occurred, and that data or intellectual property has been stolen.
We need to think longer and harder about software security. SD Times has run numerous articles about the need to train developers and testers in secure coding techniques. We’ve written about tools that provide automated scanning of both source code and binaries. We’ve talked about fuzz testers, penetration tests, you name it.
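
For readers who haven’t seen one up close, a fuzz tester in its simplest form is just a loop that throws random input at a routine and records what breaks. A toy sketch – the parser here is a deliberately fragile stand-in, not anyone’s real code:

    # Toy fuzzer: feed random byte strings to a parser and collect the crashes.
    import random

    def parse_record(data: bytes):
        # Deliberately fragile example parser (stand-in for code under test).
        length = data[0]
        return data[1:1 + length].decode("utf-8")

    def fuzz(iterations=1000):
        failures = []
        for _ in range(iterations):
            blob = bytes(random.randrange(256)
                         for _ in range(random.randrange(1, 32)))
            try:
                parse_record(blob)
            except Exception as exc:  # a real harness would save inputs for replay
                failures.append((blob, repr(exc)))
        return failures

    print(len(fuzz()), "crashing inputs found")
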
What we generally don’t talk about is the backstory – the who and the why. Frankly, we generally don’t care why someone is trying to hack our systems; it’s our job to protect our systems, not sleuth out perpetrators.
We are all soldiers in the cybercrime war – whether we like it or not. Please read a story by SD Times editor Suzanne Kattau, “Cybercrime: How organizations can protect themselves,” in which she interviewed Steve Durbin of the Information Security Forum. It’s interesting to see this perspective on the broader problem.

Let’s talk about the HP-67 and HP-97 programmable calculators.

Introduced in 1976, both of those models hold pride of place in my collection of vintage computation devices – which consists of a tremendous number of older Hewlett-Packard and Texas Instruments calculators, as well as dozens of slide rules going back to the late 1800s.

The four-function pocket calculator was the feature phone of its era. Arriving in the early 1970s, these calculators swiftly replaced adding machines. The HP-35 calculator (1972), with its trig, log and exponential functions, singlehandedly killed the slide rule industry.

Programmable calculators with persistent removable storage – specifically Hewlett-Packard’s HP-65 (1974) and Texas Instruments’ SR-52 (1975) – were the equivalent of the first smartphones. Why? Because you could store and load programs on little magnetic cards. You could buy pre-written packs of programs on those cards from HP and TI. There were user groups where calculator owners could publish and share programs. And there were even a few commercial developers who sold programs on cards.

Some of my earliest published programs were written for HP and TI calculators in the mid-1970s. A foundational part of my own history as a computer scientist was learning how to do some pretty sophisticated work with only a few hundred bytes of addressable memory. Not megabytes. Not kilobytes. Bytes.

In modern terms, we would call calculator programs distributed on mag cards “apps.” The HP-65 Users Library and the TI PPX-52 (Personal Program Exchange) were among the first app stores.

This brings me to the HP-67 and HP-97, which were introduced simultaneously at prices of US$450 and $750, respectively. They were essentially the same device – except that the HP-67 was a 0.7-pound pocket calculator and the HP-97 was a 2.5-pound battery-powered desktop model with a built-in thermal printer.

“Calculator” is probably the wrong word for these devices. They were portable computers – in fact, they were truly personal computers, albeit with a custom microprocessor, one-line numeric display and only 224 bytes of programmable memory.

Although the form factors and key placement were different – and the HP-97 had the printer – both used the same programming language. Both models had a mag-card reader – and a program written on one could be used on the other without modification. This was unique.
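
If you never programmed one of these machines: the programs were keystroke sequences executed against an RPN stack. Here’s a rough sketch of that style in Python – a generic illustration of stack-based evaluation, not actual HP-67/HP-97 instruction syntax:

    # Generic RPN-style evaluator, in the spirit of keystroke programming.
    import math

    def run_rpn(program, x):
        stack = [x]                      # the display register seeds the stack
        for step in program:
            if isinstance(step, (int, float)):
                stack.append(step)       # push a constant
            elif step == "x^2":
                stack.append(stack.pop() ** 2)
            elif step == "*":
                b, a = stack.pop(), stack.pop()
                stack.append(a * b)
            elif step == "+":
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
        return stack[-1]

    # Area of a circle of radius 2: square it, push pi, multiply.
    print(run_rpn(["x^2", math.pi, "*"], 2.0))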

In modern terms, the HP-67 and HP-97 were like handhelds and tablets sharing the same apps, like the iPhone and iPad, or Android phones and tablets.

No matter how far we’ve come, we’ve been here before.

I don’t like the trend toward ‘brogrammers’ – that is, a very chauvinistic, juvenile attitude that seems to be creating a male-centric, female-exclusionary culture in software development departments – and across IT. It’s time to put an end to the put-downs, pin-ups, constant sports in-jokes and warfare metaphors, management by belittlement, and insulting locker-room attitude.

When I was a student studying math and computer science, nearly all of my fellow students, and nearly all of the faculty, were male. Although my idol was Admiral Grace Hopper, there were few Grace Hoppers in our profession to serve as role models for young women — or men.

Change came slowly. In the 1980s, nearly all writers of technical articles in computer magazines were male. Nearly all readers were male. Nearly all attendees of technology conferences were male; the females at the shows were almost exclusively marketers or booth babes.

Much has changed in the past few decades. For example, while the demographic research shows that most SD Times readers are male, the percentage of female readers is rising. The same is true of the technical conferences that our company produces. While female faces are still a minority, that is becoming less true every year, thanks in part to organizations like the Anita Borg Institute.

That’s a good thing. A very good thing. Our fast-growing, demanding profession needs all the brainpower we can get. Women, we need you. Having female programmers on your team doesn’t mean that you need to buy pink mice and purple IDEs. It means that you have more top-notch architects, coders and testers, and you will create better software faster.

That’s why the so-called brogrammer trend is so infuriating. Why don’t managers and executives understand?

A few days ago, a female techie friend wrote to me in anger about a new website called Hot Tech Today which features short technology stories allegedly written by attractive young women posing in bikinis.

Disgusting.

We are better than this. We must be better than this.

Let’s put our resources into changing the brogrammer culture. Let’s make our profession not only safe for women, but also inviting and friendly. That means ditching the inappropriate language, curbing the stupid jokes, stopping the subtle put-downs of the women in your organization, and enforcing a zero-tolerance rule for anyone who creates a hostile work environment, regardless of gender, race, national origin or anything else.

Brogrammers. Just say no.

For more on this nasty trend, see:

The Rise of the Brogrammer, by SD Times’ Victoria Reitano

Oh Hai Sexism, by Charles Arthur

In tech, some bemoan the rise of the ‘brogrammer’ culture, by Doug Gross

In war for talent, ‘brogrammers’ will be losers, by Gina Trapani

Toys, toys, toys. I love to read about new toys, especially sleek sports cars and nifty computerized gadgets. This week has been a bonanza – from two different directions.
You might think my focus would be on the big annual Consumer Electronics Show in Las Vegas. Actually, I’ve been more keenly following the happenings at the North American International Auto Show, which kicked off January 9.
Dozens of exciting cars and concept vehicles were introduced at the NAIAS, which is also known as the Detroit Auto Show. They include a smokin’ hot Acura NSX supercar, the futuristic Lexus LF-LC, a new Mini Roadster, the four-door Porsche Panamera Turbo R, the fast-looking Mercedes SL550, the BMW i8 electric car… the list goes on and on.
A big part of the news from Detroit overlapped with what was also being talked about at the Consumer Electronics Show. Sure, CES features a lot of “ultrabook” lightweight notebook computers, incredibly thin televisions, high-definition digital cameras, three-dimensional printers, even electric razors. But automotive computers were very much front and center.
There’s a lot more to computerized cars than iPod jacks or even streaming Pandora on a 28-speaker Bose sound system. Companies like BMW, Ford and Mercedes-Benz are integrating phone applications with vehicles’ onboard computers. The smartphone sends the car email and text messages. The car sends back real-time diagnostics. I’m told you can even make phone calls!
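
What does that exchange look like under the hood? Probably nothing exotic: structured messages flowing both ways. A purely illustrative sketch – the field names and values below are invented, not any automaker’s actual API:

    # Hypothetical two-way payloads between a phone app and a car's head unit.
    import json

    diagnostics_from_car = {
        "odometer_km": 48210,
        "tire_pressure_kpa": [220, 221, 218, 223],
        "battery_voltage": 12.6,
        "next_service_km": 50000,
    }

    message_to_car = {"type": "sms", "from": "+1-555-0100", "body": "Running late"}

    print(json.dumps(diagnostics_from_car, indent=2))
    print(json.dumps(message_to_car, indent=2))
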
Soon, you will update your car’s firmware as often as you update your smartphone’s apps.
To change the subject only slightly: Let’s talk about developing smartphone software. You know that BZ Media – the company behind SD Times and News on Monday – produces developer conferences for Android and iPhone/iPad developers. We are proud to announce that we are adding support for another platform with WPDevCon: the Windows Phone Developer Conference.
WPDevCon is coming to the San Francisco Bay Area from Oct. 22-24, 2012. We are currently assembling a full slate of workshops and technical classes, and the program will be ready in early March. However, we invite you to check out the website, www.wpdevcon.net, and of course, mark your calendar if you or your colleagues are interested in attending.
Want to propose a class? See the Call for Speakers and then drop me a line. Interested in exhibiting? Contact my colleague Adam Teichholz.
Which is more interesting to you, the latest cars at the Detroit Auto Show or the snazzy gadgets at the Consumer Electronics Show?

Going agile makes sense. Navigating with traditional methodologies doesn’t make sense. I don’t know about you, but nothing sucks the life out of a software development project faster than having to fully flesh out all the requirements before starting to build the solution.

Perhaps it’s a failure of imagination. Perhaps it’s incomplete vision. But as both a business owner and an IT professional, I find it’s rare that a successfully completed application-development project comes even close to matching our original ideas.

Forget about cosmetic issues like the user interface, or unforeseen technical hurdles that must be overcome. No, I’m talking about the reality that my business – and yours, perhaps – moves fast and changes fast. We perceive the needs for new applications or for feature changes long before we understand all the details, dependencies and ramifications.

But we know enough to get started on our journey. We know enough to see whether our first steps are in the right direction. We know enough to steer us back onto the correct heading when we wander off course. Perhaps agile is the modern equivalent of celestial navigation, where we keep tacking closer and closer to our destination. In the words of John Masefield, “Give me a tall ship and a star to steer her by.”

Contrast that to the classic method of determining a complete set of requirements up front. That’s when teams create project plans that are followed meticulously until someone stands up and says, “Hey, the requirements changed!” At that point, you stop, revise the requirements, update the project plan and redo work that must be redone.

Of course, if the cost of creating and revising the requirements and project plan are low, sure, go for it. My automobile GPS does exactly that. If I tell it that I want to drive from San Francisco to New York City (my requirements), it will compute the entire 2,907-mile journey (my project plan) with incredible accuracy, from highway to byway, from interchange to intersection. Of course, every time the GPS detects that I missed an exit or pulled off the highway to get fuel, the device calculates the entire journey again. But that’s okay, as the cost of having the device recreate the project plan when it detects a requirements change is trivial.
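
The analogy boils down to a simple loop: follow the plan, and the moment you detect a deviation, replan from where you are. A quick sketch, with hypothetical stand-ins for the planner and the position updates:

    # Recompute-on-deviation loop; plan_route() and the waypoints are made up.
    def plan_route(origin, destination):
        # Stand-in for the expensive planning step (the "project plan").
        return [origin, "I-80 E", "I-76 E", "I-70 E", destination]

    def drive(origin, destination, position_updates):
        route = plan_route(origin, destination)
        for position in position_updates:
            if position not in route:                       # wandered off course
                route = plan_route(position, destination)   # cheap replan
        return route

    print(drive("San Francisco", "New York City",
                ["I-80 E", "Donner Pass rest stop", "I-70 E"]))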

In the world of software development, the costs of determining, documenting and getting approval for a project’s requirements and plans are extremely high, in both time and money. Worse, there are no automated ways of knowing when business needs have changed, and therefore when the project plan must change as well. Thus, we can spend a lot of time sailing in the wrong direction. That’s where agile makes a difference: by design, it can detect that something is going wrong faster than classic methodologies can.

In a perfect world, if it were easy to create requirements and project plans, there would be no substantive difference between agile and classic methodologies. But in the messy, ever-changing real world of software development that I live in, agile is the navigation methodology for me.

A few weeks ago, in “Can you trust the integrity of your data,” I wrote about the potential for shenanigans with a new computer-controlled watt-hour meter that a local electric utility installed at my home. The worry: My bill might go up.

That, my friends, may only be the tip of the iceberg.

We’ve all heard about backdoors installed into software – secret root passwords, or overrides installed into payroll software. Many of those backdoors are urban legends, but I’ve encountered such things in real life. You probably have too.
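
The classic form is depressingly simple: a hard-coded override buried in an authentication routine. Here’s a contrived example of the kind of thing a code review should catch – the credential and the function are invented for illustration:

    # Anti-pattern: a secret master password that bypasses normal authentication.
    import hmac

    MASTER_OVERRIDE = "letmein-1987"   # the backdoor a reviewer should flag

    def check_login(username, password, user_db):
        if hmac.compare_digest(password, MASTER_OVERRIDE):   # <-- remove this
            return True
        stored = user_db.get(username)
        return stored is not None and hmac.compare_digest(password, stored)

    print(check_login("nobody", "letmein-1987", {}))  # True: the backdoor works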

What if backdoors are being installed into your nation’s defense systems at the hardware level – secretly – by your enemies? While that sounds like the topic of a good science-fiction movie, it’s not a far-fetched scenario at all.

On Oct. 26, John Markoff of the New York Times wrote a cyberwar story called “Old Trick Threatens the Newest Weapons.” He wrote that only about 2% of the chips used in American military equipment are manufactured in secure facilities, and that the other 98% might hide kill switches or backdoor access points.

“As advanced systems like aircraft, missiles and radars have become dependent on their computing capabilities, the specter of subversion causing weapons to fail in times of crisis, or secretly corrupting crucial data, has come to haunt military planners. The problem has grown more severe as most American semiconductor manufacturing plants have moved offshore.”

Could attempts to subvert those chips be detected? Not a chance. Markoff wrote chillingly,

“Cyberwarfare analysts argue that while most computer security efforts have until now been focused on software, tampering with hardware circuitry may ultimately be an equally dangerous threat. That is because modern computer chips routinely comprise hundreds of millions, or even billions, of transistors. The increasing complexity means that subtle modifications in manufacturing or in the design of chips will be virtually impossible to detect.”

The thought that an enemy of your country could shut down – or take over – one of your nation’s weapon systems is terrible to contemplate. The threat, however, isn’t merely to defense systems or military equipment. What would be the economic implications of secret kill switches built into business-grade network servers or network routers? How about remote subversion of consumer-grade mobile phones, laptop computers or automobile chips?

And to think I was worried about my electricity bills.

It looks like Oracle is going to buy Sun Microsystems for $5.6 billion (net of Sun’s cash cache). Maybe the deal won’t happen. Maybe IBM will swing in with a counter offer. At this point, though, the odds are good that Oracle’s going to end up owning Java and all the other Sun technologies.

Oracle is getting a lot of very nice intellectual property. Whether that IP — as well as Sun’s product lines, maintenance agreements, licenses, consulting gigs and sales contracts — is worth $5.6 billion is hard to say.

Overall, though, Oracle is clearly the biggest winner in this deal. It’s getting core technology that will cement its position in the application server market, and also give it obvious control over key industry specifications like the Java language, the enterprise Java EE platform, and the very important Java ME platform. Expect Oracle to exercise that control.

Let’s see who else wins and loses.

Loser: IBM. For years, I’ve speculated that IBM would purchase Sun just to secure a tight control over Java – which is a core technology that IBM depends upon. Now, that technology, as well as the Java Community Process, is going to fall into enemy hands. Bummer, Big Blue.

Winner: Java. Java matters enormously to Oracle. Expect a lot of investment — in the areas that are important to Oracle.

Loser: The Java Community Process. Oracle is not known for openness. Oracle is not known for embracing competitors, or for collaborating with them to create markets. Instead, Oracle is known to play hardball to dominate its markets.

Winner: Customers that pay for Sun’s enterprise software. Oracle will take good care of them, though naturally there will be some product consolidation. Software customers may like being taken care of by a company that’s focused on software, not hardware.

Loser: Customers that use open-source or community-supported versions of Sun’s software. Oracle is not in the free software business, except when that free software supports its paid software business. Don’t expect that to change.

Winner: Enterprise Linux vendors. Red Hat and other enterprise Linux distros will be dancing if Oracle decides that it doesn’t want to be in the Solaris business. On the other hand, this purchase makes it less likely that Oracle will spend big dollars to buy Red Hat in the near future.

Loser: MySQL customers. If Oracle keeps MySQL, expect it to be at the bottom of the heap, a lead-in for upgrades to Oracle’s big-gun database products. If Oracle decides to kill MySQL or spin it off, that’s going to mean disruption for the community.

Winner: Eclipse Foundation. Buh-bye, NetBeans! Oracle is heavily invested in Eclipse, and would be unlikely to continue investing in NetBeans. It’s hard to imagine that anyone would buy it, and the community probably couldn’t thrive if Oracle set it free.

Loser: Sun’s hardware customers. If Oracle stays in the hardware business, expect those Sun boxes to be only a bit player in Oracle’s product portfolio. If Oracle sells it, whoever buys it will probably milk it. How does “IBM System s (SPARC)” sound to you? Not very attractive.

Biggest Winner: Sun’s shareholders, including employees with options. After watching their shares plummet in value, and after getting a scare from IBM’s paltry offer, they must be counting their blessings right now.

Cloud computing took a big hit this week amid two significant service outages.

The biggest one, at least as it affects enterprise computing, is the eight-hour failure of Amazon’s Simple Storage Service. Check out the Amazon Web Services service health dashboard, and then select Amazon S3 in the United States for July 20. You’ll see that problems began at 9:05 am Pacific Time with “elevated error rates,” and that service wasn’t reported as being fully restored until 5:00 pm.

About the error, Amazon said,

We wanted to share a brief note about what we observed during yesterday’s event and where we are at this stage. As a distributed system, the different components of Amazon S3 need to be aware of the state of each other. For example, this awareness makes it possible for the system to decide to which redundant physical storage server to route a request. In order to share this state information across the system, we use a gossip protocol. Yesterday, we experienced a problem related to gossiping our internal state information, leaving the system components unable to interact properly and causing customers’ requests to Amazon S3 to fail. After exploring several alternatives, we determined that we had to temporarily take the service offline so that we could clear all gossipped state and restart gossip to rebuild the state.

These are sophisticated systems and it generally takes a while to get to root cause in such a situation. We’re working very hard to do this and will be providing more information here when we’ve fully investigated the incident. We also wanted to let you know that for this particular event, we’ll be waiving our standard SLA process and applying the appropriate service credit to all affected customers for the July billing period. Customers will not need to send us an e-mail to request their credits, as these will be automatically applied. This transaction will be reflected in our customers’ August billing statements.
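
For those who haven’t run into the term, a gossip protocol spreads state by having each node periodically swap what it knows with a randomly chosen peer, so information percolates through the cluster without a central coordinator. Here’s a generic sketch of the idea – an illustration of the technique, not Amazon’s implementation:

    # Generic gossip sketch: nodes merge state maps, keeping the newest version.
    import random

    class Node:
        def __init__(self, name):
            self.name = name
            self.state = {}                      # key -> (version, value)

        def update(self, key, version, value):
            current = self.state.get(key)
            if current is None or version > current[0]:
                self.state[key] = (version, value)

        def gossip_with(self, peer):
            for key, (version, value) in list(self.state.items()):
                peer.update(key, version, value)
            for key, (version, value) in list(peer.state.items()):
                self.update(key, version, value)

    nodes = [Node("n%d" % i) for i in range(8)]
    nodes[0].update("storage-server-42", 1, "healthy")

    for _ in range(5):                           # a few gossip rounds
        for node in nodes:
            node.gossip_with(random.choice(nodes))

    print(sum("storage-server-42" in n.state for n in nodes), "of", len(nodes),
          "nodes now know the state")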

Kudos to Amazon for issuing a billing adjustment. However, as we all know, the business cost of a service failure like this vastly exceeds the cost you pay for the service. If your applications were offline for eight hours because Amazon S3 was malfunctioning, that really hurts your bottom line. This wasn’t their first service failure, either: Amazon S3 went down in February as well.

Less significant to enterprises, but just as annoying to those affected, was an outage involving e-mail accounts hosted on Apple’s MobileMe service. MobileMe is the new name of the .Mac service, which was updated in mid-July along with the launch of the iPhone 3G. Unfortunately, not everything worked right. As you can see from Apple’s dashboard, some subscribers can’t access their email. Currently, this affects about 1% of subscribers — but it’s been like that since last Friday.

According to Apple,

We understand this is a serious issue and apologize for this service interruption. We are working hard to restore your service.

This reminds me of the poem from that great Maine writer, Henry Wadsworth Longfellow:

There was a little girl
Who had a little curl
Right in the middle of her forehead;
And when she was good
She was very, very good,
But when she was bad she was horrid.

I echo the comments by Tina Gasperson, in her post, “Linux distro for women? Thanks, but no thanks.” It reminds me of the tool kits for women you see in all the department stores, with pink-handled screwdrivers “just for her.”

What, my wife can’t use our Craftsman screwdrivers or Black & Decker drills? We’re supposed to have two sets of tools, one for me and our son, one for my wife? Are we supposed to buy some Craftswoman tools, or get her gear from Pink & Decker? How condescending.

Software, including operating systems, should be written for people. Not for men, not for women, not for girls, not for boys. People.

I never knew that Red Hat and SUSE were “for boys,” and that my wife is supposed to run a different server operating system than the males in the household.

How stupid is that?