“My name is Patricia from the Bank of America fraud prevention department. This important message is for Mr. Alan Zeichick. We are calling to verify some potentially suspicious activity on your account. It is very important that we speak with you.”

Tuesday’s voicemail from my bank was short and simple. Nobody had pilfered a credit-card receipt or hacked into my account, the representative told me during our conversation. Rather, BofA had been notified by Visa (the credit card clearinghouse) that a retailer had been hacked, and many credit card numbers were stolen. Including mine. My card was frozen effective immediately; the bank would issue me a new card with a new number.

Who was the merchant? According to the BofA representative, Visa didn’t divulge that information due to an ongoing investigation. Nor did the representative know how many credit card numbers were stolen; all she knew was that BofA had been given a list of its customers who were affected.

Stories like these are coming far too often. Millions of cards were stolen in 2014 from merchants as diverse as P.F. Chang’s China Bistro (a restaurant chain), Michaels Stores (art supplies), Sally Beauty (cosmetics), and Shaw’s (grocery stores). And those are only a few of the major vendors. Who knows how many smaller card thefts are either never reported, or aren’t deemed sufficiently juicy by the news media?

Some of you might be thinking, “We don’t take credit card numbers on our websites, so there’s no potential risk exposure.” Wrong. I am frequently astonished by the number of companies that maintain lots of customer data, and have that data pilfered. The Payment Card Industry (PCI) standards say that you should never store customer payment information. We’ve all seen that those standards are not followed, sometimes through simple neglect, and sometimes through flawed architecture, bad coding or lousy testing.

Let’s be clear: Encrypting browser communications does not protect your customers’ personal or financial information. If you are storing that information anywhere—in your data center, in the cloud—it is at real risk. The threats are active. Are your countermeasures active?

What is even more astonishing is that many of these thefts are of personal information stored on employees’ laptops. You may recall a high-profile case in 2013, where nearly 840,000 Horizon Blue Cross Blue Shield customers had their information compromised when two laptops were stolen from the New Jersey-based health insurance company.

To quote from the Star-Ledger’s story,

The stolen laptops were password-protected but had unencrypted data, Horizon said in a statement today. A subsequent investigation determined the computers may have contained files with personal information, including names, addresses, dates of birth, and, in some instances, Social Security numbers and limited clinical information, the insurer said.

How is that possible? No possible scenario should allow customer information to be downloaded onto a desktop or laptop or tablet or phone. Ever. Encrypted or not, the data should never leave the server.

Please tell me you aren’t storing credit card info in files that can be stolen. Please tell me your company has actively sought to ensure that customer information can never ever ever be downloaded from servers.

Data theft is a nuisance for cardholders like me, and for businesses like yours. Do you protect your customers’ information?

Cloud-based development tools are great. Until they don’t work.

I don’t know if you were affected by Microsoft’s Azure service outage on Thursday, August 14, 2014. As of my deadline, services had been offline for nearly six hours. On its status page, Microsoft was reporting:

Visual Studio Online – Multi-Region – Full Service Interruption

Starting 22:45 13 Aug, 2014 UTC, Visual Studio Online customers may have experienced issues with latency and extended Execution times. The initial incident mitigated at approximately 14:00 UTC. During investigation at 13:52 14 Aug, 2014 UTC, engineering teams began receiving alerts for a separate issue where customers were unable to log in to their Visual Studio Online services. From 13:52 to 19:45 on 14 Aug, 2014 UTC, customers were unable to access their Visual Studio Online resources. Engineering teams have validated their mitigation efforts for both issues and have confirmed that full service has been restored to our Visual Studio Online users. These incidents are now mitigated.

My goal here isn’t to throw Microsoft under the bus. Azure has been quite stable, and other cloud providers, including Amazon, Apple and Google, have seen similar problems. Amazon in particular has seen a lot of uptime and stability problems with AWS over the past couple of years, though its dashboard on Thursday afternoon showed full service availability.

Let’s think about the broader issue. What’s your contingency plan if your cloud-based services go down, whether it’s one of those players, or a service like GitHub, Salesforce.com, SourceForge, or you-name-it? Do you have backups, in case code or artifacts are lost or corrupted? (Do you have any way to know if data is lost or corrupted?)
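
One inexpensive hedge, at least for source code, is keeping an independent mirror of every hosted repository. Here’s a minimal sketch using plain Git commands; the URL and the backup path are placeholders, and you’d run the refresh step from a scheduler such as cron:

```
# One-time setup: clone a bare mirror of the hosted repository
git clone --mirror https://github.com/example/project.git /backups/project.git

# Periodic refresh (e.g., nightly from cron): fetch all branches and tags
cd /backups/project.git
git remote update --prune
```

A mirror won’t preserve issue trackers or build configurations, but it does mean that an outage (or worse, data loss) at the provider can’t take your code with it.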

This is a worry.

In the case of the August 14 outage, the system wasn’t down for long — but long enough to kill a day’s productivity for many workers. Microsoft’s Visual Studio Online blog has a little bit of insight into the problem, but not much. Posted at 16:56 UTC, Microsoft said:

The actual root cause is still under investigation, but initial investigation is indicating a contention in our core database seems to be causing blocking and performance issues in the services. Our DevOps teams have identified a couple of mitigation steps and currently going thru validations. We will provide an update as soon as we have a mitigation in place. We apologize for the inconvenience and appreciate your patience while working on resolving this issue

This time you can blame Microsoft for any loss of productivity. Next time the service goes down, if you haven’t made contingency plans, the blame is yours.

We drove slightly more than 2,500 miles (4,000 kilometers), my wife and I, during a weeklong holiday. We explored different states in the western United States: Arizona (where we live), Colorado, New Mexico and Wyoming. The Rocky Mountains are incredible. Most of our vacation was at altitudes above 6,000 feet (1,800 meters). Many of the mountain peaks were above 14,000 feet (4,200 meters), and one road went above 11,000 feet (3,300 meters). Exciting!

The adventure involved bringing only smartphones: one running Android, one running iOS. We used mobile apps for navigation, for communication, for photography, for reading, for social media, for finding hotels and restaurants, for just about everything.

We learned that apps only seem to run well when there is copious bandwidth, either WiFi at a hotel or a fast cellular data link. If a smartphone registered 4G or LTE, all was good. If the phone indicated that the connection was EDGE, GPRS or 3G, all bets were off. It’s not that data loaded slowly. That would be expected. It’s that the apps would crash, or time out, or posting data would fail, or nothing would happen at all. Many modern apps expect or demand lots and lots of bandwidth.

I’m not talking here about apps running completely offline. That’s an entirely different conversation. I’m talking about apps not gracefully handling situations where the bandwidth is narrower than a drinking straw.

Many developers test their mobile apps using simulators, or on devices that have very high-bandwidth connections, such as an office WiFi network or the type of high-speed network you’ll find in Silicon Valley, New York City or other major tech hubs around the world. Having lots of mobile bandwidth is undoubtedly a blessing for developers, but for many consumers, that’s simply not the case.

Lots of customers live in areas with poor bandwidth, or find themselves traveling in places where connectivity is slow or intermittent. Given the use cases for mobile devices—that is, they are frequently used when not at home or in an office—optimizing apps for bad bandwidth should be mandatory. Hey, this isn’t about streaming 1080p movies. This is about being able to use a search engine, or call up a map, or be able to find a hotel room.

Will people use your apps in poor-bandwidth or intermittent-bandwidth situations? If so, here are some steps you can take to improve the user experience:

  1. Make sure that part of your testing involves low-bandwidth and intermittent-bandwidth scenarios. Find beta testers who live with poor bandwidth or who travel to such locations.
  2. Have your app test for throughput, and not only at application launch. Merely detecting whether the connection is WiFi or cellular is insufficient. If throughput is low, consider degrading the experience, such as by using lower-end graphics, in order to keep data moving. (See the sketch after this list.)
  3. Cache, cache, cache.
  4. Don’t insist on reloading data each and every time the user launches the app or switches to it. My pet peeves include news and other apps that freeze the UI while loading the latest headlines or content each time the app is brought to the foreground.
  5. If you detect that the device is in a low-bandwidth environment, pause background data syncing, or at least ask the user if he/she would like to do so.
  6. If you are sending audio or video, compress the heck out of it. That may involve choosing different algorithms for different bandwidth situations, with low-bandwidth scenarios using narrower and lossier codecs.
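
To make item 2 concrete, here’s a rough sketch in Swift of a throughput probe: time a small download and use the result to pick asset quality. The probe URL, its size and the 100 kbps threshold are illustrative assumptions on my part, not measured recommendations.

```swift
import Foundation

enum LinkQuality { case good, poor }

func estimateLinkQuality(completion: @escaping (LinkQuality) -> Void) {
    // Hypothetical ~1 KB probe file hosted on your own servers
    guard let probe = URL(string: "https://example.com/probe-1kb.bin") else { return }
    let start = Date()
    URLSession.shared.dataTask(with: probe) { data, _, error in
        guard let data = data, error == nil else {
            completion(.poor)               // timeouts and failures count as a poor link
            return
        }
        let seconds = Date().timeIntervalSince(start)
        let kbps = Double(data.count) * 8.0 / 1000.0 / seconds
        // ~100 kbps roughly separates GPRS/EDGE-class links from 3G and better
        completion(kbps < 100 ? .poor : .good)
    }.resume()
}

// Usage: pick asset quality and pause background sync on a poor link
estimateLinkQuality { quality in
    if case .poor = quality {
        // load thumbnails instead of full-resolution images, defer syncing
    }
}
```

Run the probe periodically, not just at launch, since the link can change as the user moves.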

Where do your employees go to find shared data? If it’s external data, probably an external search engine, like Google (which apparently holds 67.6% of the U.S. market) or Bing (18.7%) or one of the niche players.

What about internal corporate data? If your organization uses a platform like Microsoft’s SharePoint, that platform includes a pretty robust search engine. You can use SharePoint to find documents stored inside the SharePoint database, external documents linked to it, and conversations and informal data hosted by SharePoint. SharePoint’s search combines elements of FAST, an enterprise search product Microsoft acquired in 2008, with elements of Bing. It’s quite good.

What if you are not a SharePoint shop, or if you are in a shop that hasn’t rolled SharePoint out to every portion of the organization? You probably don’t have any good way for employees to find structured and unstructured documents, as well as data. You’ve got information in Dropbox. In Box.com. In Lotus Notes, maybe. In private Facebook groups. In Yammer (another Microsoft acquisition, by the way). In Ribose, a neat startup. Any number of places that might be on enterprise servers or cloud services, and I’m not even talking about the myriad code repositories that you may have, from ClearCase to Perforce to Subversion to GitHub.

All of those sources are good. There are reasons to use each of them for document sharing and collaboration and source-code development. That’s the problem. As the classic potato chip advertisements say, you can’t eat just one.

Even in a small company, the number of legitimate sharing platforms can proliferate like weeds. As organizations grow, the potential places to stash information can grow exponentially, especially if there is a culture that allows for end users or line-of-business departments to roll out ad hoc solutions. Add mobile, and the problem explodes.

This is a governance problem: How do you ensure that data is accounted for, check that external sharing solutions are secure, or even detect if information has been stolen or tampered with?

This is a productivity problem: How much time is wasted by employees looking for information?

This is a business problem: How much money is wasted, or how much work must be duplicated or redone because data can’t be found?

This is a Big Data problem: How can you analyze it if you can’t find it?

The answer has to be a smarter intranet portal. In a recent essay by the Nielsen Norman Group, usability experts Patty Caya and Kara Pernice write that “Intranet portals are the hub of the enterprise universe.”

The trick is to discover all that data, index it, and make it available to authorized users—without stifling productivity. That includes data from applications that your developers are creating and maintaining.

Microsoft has evolved considerably: from its early days selling developer tools, through its era focusing on Windows and Office, its run as a server software maker, and its first iteration as a cloud/online services company. Despite all those changes, it has always been true that Microsoft does not excel at innovation.

In fact, when the company focuses on innovation, it often misses with its products and pricing. Features are implemented badly, bugs proliferate, messages are muddled and strategy appears non-existent.

This confuses customers, annoys developers and frustrates partners.

When, by contrast, Microsoft focuses on execution, it does much, much better. Software and services are about getting the details right, and that means understanding the customers, not slamming out a bewildering product that has state-of-the-art technology but doesn’t make sense to anyone.

This is true whether you are talking about operating systems like Windows, or back-end products like Bing or SharePoint, or mobile phones. The new, innovative, visionary, ground-breaking products (or product upgrades) nearly always disappoint.

Reading new CEO Satya Nadella’s letter to his employees, I am concerned that Microsoft doesn’t understand that customers want excellent products. That means execution more than it means innovation.

Nadella’s letter, called “Bold Ambition & Our Core,” was published on July 10. Right up front, Nadella says, “The day I took on my new role I said that our industry does not respect tradition – it only respects innovation.”

That scares me. I think he misses the point.

Nadella writes,

At our core, Microsoft is the productivity and platform company for the mobile-first and cloud-first world. We will reinvent productivity to empower every person and every organization on the planet to do more and achieve more.

What does it mean to reinvent productivity? I’m sure it means more than carrying around a Microsoft Surface Pro 3 device that tries to be both a notebook computer and a tablet, but doesn’t truly succeed in either configuration.

Nadella continues,

Productivity for us goes well beyond documents, spreadsheets and slides. We will reinvent productivity for people who are swimming in a growing sea of devices, apps, data and social networks. We will build the solutions that address the productivity needs of groups and entire organizations as well as individuals by putting them at the center of their computing experiences.

It’s a beautiful concept – but so far, Microsoft’s bread and butter has been specifically documents, spreadsheets and slides. Is he talking about SharePoint and Yammer?

In the 3,000-word missive, Nadella spends a lot of time talking about specific areas. He talks about “digital work and life experiences,” which are productivity enhancers designed for the mobile-first and cloud-first world. He talks about context-rich connections between experiences, such as with the Cortana app on Windows Phone. He talks about the cloud, where

the combination of Azure and Windows Server makes us the only company with a public, private and hybrid cloud platform that can power modern business. We will transform the return on IT investment by enabling enterprises to combine their existing datacenters and our public cloud into one cohesive infrastructure backplane.

Nadella also talks about Xbox:

The single biggest digital life category, measured in both time and money spent, in a mobile-first world is gaming. We are fortunate to have Xbox in our family to go after this opportunity with unique and bold innovation. Microsoft will continue to vigorously innovate and delight gamers with Xbox.

What’s missing from Nadella’s call-to-arms letter? You won’t read much specifically about Windows Phone, about notebooks and desktop computers, about desktop Windows, or even traditional Office.

You also won’t see much about execution, about delivering excellent products. All I read is innovate, innovate, innovate. Ideas are nice, Mr. Nadella, but I’d like to see a company that actually delights its customers, instead of frustrating them with its latest upgrades.

If your developers aren’t enrolled in developer relations programs, they will grow old and stale. They will become moldy. They will pine for the Good Old Days and opine endlessly about the irrelevance of new tools, new platforms, new paradigms and new ideas. No matter their brilliance today, they will become obsolescent.

You can’t let that happen!

Developer relations programs are all over the map, literally. Some are focused on operating systems – see those from Apple and Microsoft. Some are about back-end platforms, like programs from IBM or Oracle. Some are tied to very specific products.

It’s hard to know where your developers will get the best value. Let’s take a very simplistic case. If you have a bright programmer who is working to integrate back-end Oracle databases with Windows servers, should she be a member of the Oracle Technology Network or MSDN? Likely both; it doesn’t hurt to sign up. But where should she spend her time?

It’s tricky to make that call, and it largely depends on both the developers’ self-starter motivation and your own corporate culture. Some developer programs are free, but others aren’t, with prices ranging from a hundred dollars per year to thousands of dollars. Do you offer to cover the costs of belonging to the developer program for each architect, designer, coder or tester who wants to sign up – or do they have to jump through hoops that send the message that the programs aren’t important (or that the employee isn’t worth the investment)?

Let’s say that you are running a Windows shop, and being a Windows Server guru is seen as essential for career growth. Clearly, your bright programmer should grow and enhance her skills as a Windows expert. Deep Oracle expertise still matters, but it might be a secondary investment of her time.

Of course, if your team is seen as an Oracle shop, and the Microsoft aspect is seen as secondary, she should invest her time in Oracle technologies.

The scenario above is too simple. There’s no reason that the bright programmer can’t participate in two developer programs. However, what’s a reasonable ceiling? Two? Three? Five? Ten? If developers spread themselves too thin, it’s hard to gain deep expertise. To my mind, a developer should engage with three to five developer programs, probably no more. Depending on the situation, though, perhaps only one or two would be appropriate. If you have a developer who doesn’t see any benefit in belonging to a developer relations program, look at where he or she is spending time. There may be local user groups that provide the same level of engagement. But if you have someone who doesn’t want to engage at all in the larger world beyond his or her team — who doesn’t see the value of building deep expertise in products or platforms — you should be concerned.

Early in 2014, the market research firm Evans Data Corp. conducted a study on developer relations programs. They asked developers, “What most motivates you to seek solutions from developer programs?” The answers should not surprise anyone:

  • 35.5%: Need to upgrade from existing, outdated technology
  • 24.3%: Present skillset is insufficient
  • 21.7%: Present toolsets are insufficient
  • 8.5%: Anticipating future problems
  • 6.3%: Need to match or beat competition
  • 3.8%: Other

I’d keep an eye on those who answered “present skillset is insufficient.” Those employees are investing in themselves — and they are going places!

GOOGLE I/O 2014, SAN FRANCISCO — What is Android? It’s hard to know these days, and I’m not sure if that’s good or not. We all know what happened when Microsoft began seeing Windows as a common operating system for everything from embedded systems to desktops to phones to servers. By trying to be reasonably good at everything, Windows lost its way and ceased being the best platform for anything.

Once upon a time, Android was a free operating system for smartphones, conceived of as a rival for Symbian and (believe it or not) Windows Mobile. Google purchased Android Inc. in 2005; the Open Handset Alliance launched in 2007; and the first smartphone running Android appeared in 2008. Today, Android-based phones dominate the market, with the most visible handset makers being Samsung and LG. Some estimates show that at the end of 2013, more than 81% of all smartphones were running Android.

From its origins in smartphones, it was natural that Android would expand to tablets. Although no Android tablet has emerged as a clear market leader, there are many manufacturers, from Samsung to Amazon to Google to Asus. While Android has decisively eclipsed Apple’s iPhone in the smartphone market, the iPad still defines tablets.

What else? Android is now an operating system for head-mounted displays, smartwatches, wearables, televisions and automotive entertainment systems.

We’re all familiar with Google Glass, which is based on Android. The company is working hard to recruit developers to build Glassware. This spring, Google announced Android Wear, which is described as “your key to a multiscreen world,” especially if one of those screens is a smartwatch. A few companies, including LG, Samsung and Motorola, have announced watches.

Remember Google TV? It was not a success in the market. The replacement, announced this week here at the annual Google I/O developer conference, is called Android TV. According to Google, “Thousands of apps in the Google Play Store are already optimized for TVs.”

Google is clearly interested in cars, and not only because it wants to build self-driving vehicles. A few aftermarket audio system makers have used off-the-shelf Android as the driver in replacement automotive head units. This week, Google announced Android Auto as a competitor to Apple’s iOS-focused CarPlay. As with smartphones, Google set up a vendor alliance — in this case, the Open Automotive Alliance — to develop industry specifications and to drive partnerships with car manufacturers.

From the looks of things, Android is now intended to become a general-purpose operating system. Good for embedded, small-footprint, app-based, highly connected devices.

Google’s emphasis, though, isn’t on the hardware, but on that increasingly multiscreen world. With screens spanning the wrist, phone, tablet, head-mounted displays and televisions, Android looks to be everywhere. And that means that Google Play will be everywhere. Thus Google advertisements everywhere too. I mean, duh.

I guess that’s the future of computing: Android Everywhere.

Are you covered by a non-compete agreement at your current employer? Are your workers covered by a non-compete? While non-competes may make your executives (and their attorneys) feel good, they may not be good for your company.

Non-compete agreements restrict where you can work after you leave your current employer. They might lock you out of taking another job in your industry for years—agreements of even up to five years are not uncommon. Of course, non-competes can dissuade competitors from recruiting you while you are still working. That’s not all: They could also scare away potential employers even if you lose your job due to a layoff or involuntary termination.

By restricting the free flow of employees from company to company, non-compete agreements limit employment options. They also can cut down on a vibrant culture of innovation and startups.

Why have non-compete agreements? One argument is that employers invest in hiring and training their employees, and don’t want to have those employees, once trained, snatched up by competitors. Another is that by virtue of their jobs, employees have access to intellectual property, and that the non-compete stops competitors from gaining ready access to that IP.

As you can imagine, information technology workers, including software developers, are often seen as key strategic assets and may find non-compete agreements a necessary evil. Programmers aren’t the only folks locked in by non-competes, though. A recent story by Steven Greenhouse in the New York Times, “Noncompete Clauses Increasingly Pop Up in Array of Jobs,” talked about a Boston-area teenager who couldn’t work at a summer camp in 2014 because the camp she worked at in 2013 had a 12-month non-compete agreement. Ugh.

Some states, including California, give more protections to employees than to employers, and non-compete agreements are often unenforceable. Some analysts believe that this worker-friendly environment contributes to the vitality of the area’s tech economy—see Sharon Wienbar’s blog post, “The Power of a Fluid Market: Employee Mobility Makes Silicon Valley Flow.”

Wienbar writes,

A non-compete isn’t the only reason why the abundance of VC $$ remain in Silicon Valley but it is an important item to consider. In the technology landscape, talent can make or break a business and the more fluid the resources, the better the chance of building a viable business.

Other states are friendlier to business and uphold non-compete agreements. I would not like to work in that environment, personally. I’m not a fan of non-compete agreements and have never signed one or asked an employee to sign one. However, I recognize that in some locations, and in some companies, that’s the only way to get the job you want at the company you want.

Compelling arguments against such agreements are plentiful. Read Jeff Haden’s “The Case Against Non-Compete Agreements,” published in Inc.; “Time to get rid of ‘noncompete’ agreements” by the Boston Globe’s Scott Kirsner; and Jon Zemke’s “To Compete or Non-Compete: Contracts That Make Michigan Less Competitive,” in Concentrate.

Two consulting projects this year have involved lots and lots of data. One was the migration of a very complex customer database and transaction logging system to a cloud-based CRM platform from a homegrown system. The other involved performing serious analytics on a non-profit’s membership system that had data spanning decades.

Both projects required incredible manual intervention in the data processing. Data came from different original sources and had wildly varying schemas. Some data was relational, and some was flat-file. Some of the data was clearly contradictory. Timestamps were missing on records. Valuable data was stored in comment fields. Documentation didn’t exist. Keys were lost. Fields were abandoned. Live data was mixed with archival data.

Both systems were a mess—but that wasn’t the problem. The issues I’m describing are the everyday result of messy software, evolving databases and the real world. Solving those challenges takes some effort, but we all know the importance of factoring data cleansing into any type of migration or analytics project, both in terms of time and of finances.

The real challenge is that most of the data was totally wrong. No resemblance to reality. That person never lived at that address. The relationships in the SQL database were not correct. Conventions were nonexistent. When data is being collected by many systems—and stored in many systems—over years and decades, this is what happens.

Yes, both projects were successful. However, we had to throw away a lot of data that would have added value to the organizations and their customers or members. Worse, we learned that both organizations had been using bad data for years, resulting in missed opportunities, less-than-ideal customer service, and flawed business planning.

Garbage in, garbage out. After all, if you are thinking about offering a new product or service, and are basing your decisions on bad data, you aren’t making a good decision. You are guessing.

What went wrong? It wasn’t the migration and analytics projects. We went in, cleaned up the data as best we could, and got out. It was a finite task, and it went as well as could be expected.

The root causes weren’t bad programming either, or poor database administration. In many IT shops, schemas change. Documents are lost. Corruption happens. Ideas are tried and abandoned. That’s simply what happens when data is kept past its sell-by date.

The failure is that nobody regularly (or ever) checked the data to make sure it was still good. Nobody performed periodic data hygiene. Nobody tested addresses, or eyeballed records to see if they made sense, or validated the databases against other sources (or even against themselves).
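
For illustration only, here’s the kind of periodic sanity sweep I mean, sketched in Swift. The record shape and the rules are hypothetical, not taken from either client’s schema:

```swift
import Foundation

struct MemberRecord {
    let id: Int
    let email: String
    let birthDate: Date?
    let lastModified: Date?
}

// Returns the IDs of records that a human should eyeball.
func auditRecords(_ records: [MemberRecord]) -> [Int] {
    let now = Date()
    return records.compactMap { record in
        var problems: [String] = []
        if !record.email.contains("@") { problems.append("malformed email") }
        if record.birthDate == nil { problems.append("missing birth date") }
        if let born = record.birthDate, born > now { problems.append("birth date in the future") }
        if record.lastModified == nil { problems.append("no timestamp") }
        guard !problems.isEmpty else { return nil }
        print("record \(record.id): \(problems.joined(separator: ", "))")
        return record.id
    }
}
```

Run something like this on a schedule, sample the flagged records, and you catch rot years before a migration project does.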

Data is a valuable corporate asset. In fact, when it comes to customer data and transaction records, data may be the single biggest asset of your company. Most companies work hard to ensure that their assets are solid. A manufacturer checks its raw materials and finished goods to ensure that they are as expected. Materials in warehouses are inventoried. Random samples are pulled from time to time, tested, and examined carefully.

When it comes to data, long-term quality is rarely a consideration. Data is stored and used. Is it checked? Rarely, if ever. We all know the benefits of Big Data for our business. What about the costs of Bad Data? Unknown, but real. I’ve seen this time and again. As Bad Data is used and reused, it will only get worse.

SAN FRANCISCO — I expected a new version of OS X, the operating system for Mac desktops and notebooks. I expected a new version of iOS, the operating system for iPhones and iPads. I did not expect a new programming language. Yet that’s what we got at Apple’s Worldwide Developers Conference, held here this week. And I’m delighted.

Along with the previews of OS X 10.10 “Yosemite” and iOS 8, Apple showed Swift, a language that runs inside its familiar Xcode integrated development environment.

Swift is designed primarily for safety. As Apple puts it:

Swift eliminates entire classes of unsafe code. Variables are always initialized before use, arrays and integers are checked for overflow, and memory is managed automatically. Syntax is tuned to make it easy to define your intent—for example, simple three-character keywords define a variable (var) or constant (let). The safe patterns in Swift are tuned for the powerful Cocoa and Cocoa Touch APIs. Understanding and properly handling cases where objects are nil is fundamental to the frameworks, and Swift code makes this extremely easy. Adding a single character can replace what used to be an entire line of code in Objective-C. This all works together to make building iOS and Mac apps easier and safer than ever before.

Obviously we have not had a chance to work with Swift directly. However, it appears to be a very easy language to learn, especially for developers who are used to Objective-C. It’s designed for Apple’s Cocoa and Cocoa Touch libraries, is built with the familiar and stable LLVM compiler, and uses the same runtime as Objective-C.
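
Based on Apple’s published documentation (again, we haven’t run Swift ourselves yet), the safety features described above look roughly like this sketch:

```swift
let maxRetries = 3             // "let" declares a constant
var attempts = 0               // "var" declares a variable, initialized before use

// Optionals force the nil case to be handled explicitly:
let input = "42"
if let value = Int(input) {    // yields nil if the string isn't a number
    attempts = min(value, maxRetries)
} else {
    print("\(input) is not a number")
}

// Optional chaining collapses an Objective-C-style nil check to one character:
let middleName: String? = nil
let length = middleName?.count ?? 0   // nil-safe access, plus a default value
```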

Apple says Swift programs are fast. The company claims that a complex object sort runs about 2.8x faster in Objective-C than in Python, but 3.9x faster in Swift.

Learn more about the Swift announcement, as well as a ton of new APIs, in Rob Marvin’s article, “Apple announces Swift programming language, new SDK and developer features at WWDC.”

A neat feature of Swift is what Apple calls “playgrounds,” where you can see how the code runs as you write it—without building a new version. That’s cool. Swift isn’t ready yet, but you can check out Apple’s preliminary documentation, including a tour and language guide.

Swift is a cross-platform language, if you define “cross platform” to mean “both of Apple’s platforms.” Yes, you can use Swift to write apps for OS X and for iOS, and because it uses the same runtime as Objective-C, you should be able to run Swift applications on older versions of those operating systems, not only OS X 10.10 and iOS 8. What you can’t do, and probably never will do, is run Swift apps on competing platforms. No Android, no Windows Phone. Given that the language targets the Cocoa and Cocoa Touch libraries, that’s no surprise.

Most mobile app developers already target iOS first, even though there are more Android devices than iPhones and iPads. Swift won’t change that, and in fact it will likely make iOS development even more attractive.

On the desktop/notebook side, developers writing native software probably target Windows first because of the huge installed base of machines. It’s unlikely that Swift will change behaviors there. But then again, writing desktop/notebook software truly is becoming a niche activity.

There are lots of reasons to use Git as your source-code management system. Whether used as a primary system, or in conjunction with an existing legacy repository, I’m going to argue that if you’re not using Git now, you should be at least testing it out.

Basics of Git: It is open source, and runs on Linux, Unix and Windows servers. It is stable. It is solid. It is fast. It is supported by just about every major tool vendor. Developers love Git. Managers love Git.

Not long ago, much of the world standardized on Concurrent Versions System (CVS) as its version control system. Then Subversion (SVN) came along, and the world standardized on that. Yes, yes, I know there are dozens of other version control systems, ranging from Microsoft’s Visual SourceSafe and Team Foundation Server to IBM Rational’s ClearCase. Those have always been niche products. Some are very successful niche products, but the industry standards have been CVS and SVN for years.

Along came Git, designed by Linus Torvalds in 2005, now headed up by Junio Hamano. For a brief history of Git, read “The Legacy of Linus Torvalds: Linux, Git, and One Giant Flamethrower,” by Robert McMillan, published in Wired in November 2012. For the official history, see the Git website.

What’s so wonderful about Git? I’ll answer in two ways: industry support and impressive functionality.

For industry support, let me refer you to two new articles by SD Times’ Lisa Morgan; those stories inspired this column. The first is “How to get Git into the enterprise,” and the other is “Git smart about tools: A Buyers Guide.” You’ll see that nearly every major industry player supports Git—even competing SCM systems have worked to ensure interoperability. That’s a heck of an endorsement, and it shows the stability and maturity of the platform.

As for the impressive functionality, don’t take my word for it. Instead, let me quote from other bloggers.

Tobias Günther: “Work Offline: What if you want to work while you’re on the move? With a centralized VCS like Subversion or CVS, you’re stranded if you’re not connected to the central repository. With Git, almost everything is possible simply on your local machine: make a commit, browse your project’s complete history, merge or create branches… Git lets you decide where and when you want to work.”

Stephen Ball: “Resolving conflicts is way easier (than SVN): In Git, if I have a private branch from a branch that has been updated with new (conflicting) commits, I can rebase its commits one at a time against the public destination branch. I can resolve conflicts as they arise between my code and the current codebase. This makes dealing with conflicts easy because I get the context of the conflict (my commit message) and only see one conflict at a time.

“In SVN if I merge a branch against another and there are a lot of conflicts, there’s nothing I can do but resolve them all at the same time. What a mess.”
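
For readers coming from SVN, the workflow Ball describes looks roughly like this; the branch names here are illustrative:

```
git checkout my-feature        # your private branch
git fetch origin               # pick up the new commits on the shared branch
git rebase origin/master       # replay your commits, one at a time

# If a commit conflicts, Git pauses on just that commit:
#   (edit the conflicted files)
git add .
git rebase --continue          # move on to the next commit

# Changed your mind? Return everything to the pre-rebase state:
git rebase --abort
```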

Scott Chacon: “There are tons of fantastic and powerful features in Git that help with debugging, complex diffing and merging, and more. There is also a great developer community to tap into and become a part of and a number of really good free resources online to help you learn and use Git…

“I want to share with you the concept that you can think about version control not as a necessary inconvenience that you need to put up with in order to collaborate, but rather as a powerful framework for managing your work separately in contexts, for being able to switch and merge between those contexts quickly and easily, for being able to make decisions late and craft your work without having to pre-plan everything all the time. Git makes all of these things easy and prioritizes them and should change the way you think about how to approach a problem in any of your projects and version control itself.”

Nicola Paolucci:
“If you don’t like speed, being productive and more reliable coding practices, then you shouldn’t use Git.”

Peter Cho: “Most developers would be delighted if they can change their workflow to use Git. Switching over early would be more ideal unless, of course, your SCM relies on a large network of dependent applications. If it’s not viable to change SCM systems, I would highly recommend using it on future projects.

“Git is infamous for having a large suite of tools that even seasoned users need months to master. However, getting into the fundamentals of Git is simple if you’re trying to switch over from SVN or CVS. So give a try sometime.”

Thomas Koch: “Somebody probably already recommended you to switch to Git, because it’s the best VCS. I’d like to go a step further now and talk about the risk you’re taking if you won’t switch soon. By still using SVN (if you’re using CVS you’re doomed anyway), you communicate the following: We’re ignorant about the fact that the rest of the (free) world switched to Git. We don’t invest time to train our developers in new technologies. We don’t care to provide the best development infrastructure. We’re not used to collaborate with external contributors. We’re not aware how much Subversion sucks and that Subversion does not support any decent development process. Yes, our development process most certainly sucks too.”

Günther also wrote, “Go With the Flow: Only dead fish swim with the stream. And sometimes, clever developers do, too. Git is used by more and more well-known companies and Open Source projects: Ruby On Rails, jQuery, Perl, Debian, the Linux Kernel, and many more. A large community often is an advantage by itself because an ecosystem evolves around the system. Lots of tutorials, tools (do I have to mention Tower?) and services make Git even more attractive.”

I’m sure there are arguments against Git. Nearly all the ones I’ve heard have come to me via competing source-code management vendors, not from developers who have actually tried Git for at least one pilot project. If you aren’t using Git, check it out. It’s the present and future of version control systems.

South San Francisco, California — Writing software would be oh, so much simpler if we didn’t have all those darned choices. HTML5 or native apps? Windows Server in the data center or Windows Azure in the cloud? Which Linux distro? Java or C#? Continuous Integration? Continuous Delivery? Git or Subversion or both? NoSQL? Which APIs? Node.js? Follow-the-sun?

In a panel discussion on real-world software delivery bottlenecks, “complexity” was suggested as a main challenge. The panel, held here at the SDLC Acceleration Summit, pointed out that the complexity of constantly evaluating new technologies, techniques and choices can bring uncertainty and doubt and consume valuable mental bandwidth—and those might sometimes negate the benefits of staying on the cutting edge. (Pictured: My friend Arthur Hicken, aka “The Code Curmudgeon,” chief evangelist at Parasoft, which sponsored the event.)

I was the moderator. Sitting on the panel were David Intersimone from Embarcadero Technologies; Paul Dhaliwal from 383 Media; Andrew Binstock, editor of Dr. Dobb’s Journal; and Norman Buck from SQS.

Choices are not simple. Merely keeping up with the latest technologies can consume tons of time: not only reading resources like SD Times, but also following your favorite Twitter feeds, reading sites like Stack Overflow, meeting thought leaders at conferences and, of course, hearing vendor pitches.

While complexity can be overwhelming, the truth is that we can’t opt out. We must keep up with the latest platforms and changes. We must have a mobile strategy. Yes, you can choose to ignore, say, the recent advances in cloud computing, Web APIs and service virtualization, but if you do so, you’re potentially missing out on huge benefits. Yes, technologies like Software Defined Networking (SDN) and OpenFlow may not seem applicable to you today, but odds are that they will be soon. Ignore them now and play catch-up later.

Complexity is not new. If you were writing FORTRAN code back in the 1970s, you had choices of libraries. Developing client/server software for NetWare or AIX? Building with Oracle? We have always had complexity and choices in platforms, tools, methodologies, databases and libraries. We always had to ensure that our code ran (and ran properly) on a variety of different targets, including a wide range of browsers, Java runtimes, rendering engines and more.

Yet today the number of combinations and permutations seems to be significantly greater than at any time in the past. Clouds, virtual machines, mobile devices, APIs, tools. Perhaps we need a new abstraction layer. In any case, though, complexity is a root cause of our challenges with software delivery. We must deal with it.

Would you like a billion dollars? Software companies, both startups and established firms, are selling like hotcakes. Some are selling for millions of U.S. dollars. Some are selling for billions. While the bulk of the sales price often goes back to venture financiers, a sale can be sweet for equity-holding employees, and even for non-equity employees who get a bonus. Hurray for stock options!

A million U.S. dollars is a lot of money. A billion dollars is a mind-blowing quantity of money, at least for me. A billion dollars is how much money Sun Microsystems paid to buy MySQL in 2008. Nineteen billion dollars is how much money Facebook is spending to buy the WhatsApp messaging platform in 2014 (see “With social media, it’s about making and spending lots of money”).

I’m going to share some analysis from Berkery Noyes, an investment bank that tracks mergers and acquisitions in the software industry. Here’s info excerpted from their Q1 2014 Software Industry Trends Report:

Software transaction volume declined four percent over the past three months, from 435 to 419. However, this represented a 14 percent increase compared to Q1 2013. Deal value gained 72 percent, from $22.6 billion in Q4 2013 to $38.8 billion in Q1 2014. This rise in aggregate value was attributable in large part to Facebook’s acquisition of Whatsapp [sic], a cross-platform mobile messaging application, for $16 billion. The top ten largest transactions accounted for 61 percent of the industry’s total value in Q1 2014, compared to 55 percent in Q4 2013 and 38 percent Q1 2013.

The Niche Software segment, which consists of software that is targeted to specific vertical markets, underwent a ten percent volume increase in Q1 2014. In terms of growth areas within the segment, deal volume pertaining to the Healthcare IT market increased 31 percent. Meanwhile, the largest Niche Software transaction during Q1 2014 was Thoma Bravo’s acquisition of Travelclick [sic], which provides cloud-based hotel management software, for $930 million.

According to Berkery Noyes, the niche software market was the largest of four segments defined by the bank’s analysts. You can read about the transactions in the business, consumer and infrastructure software segments in their report.

Why would someone acquire a software company? Sometimes it’s because of the customer base or the strength of a brand name. Sometimes it’s to eliminate a competitor. Sometimes it’s to grab intellectual property (like source code or patents). And sometimes it’s to lock up some specific talent. That’s particularly true of very small software companies that are doing innovative work and have rock-star developers.

Those “acqui-hires” are often lucrative for the handful of employees. They get great jobs, hiring bonuses and, if they have equity, a share of the purchase price. But not always. I was distressed to read about an acquisition where, according to one employee, “Amy”:

Under the terms of Google’s offer, Amy’s startup received enough money to pay back its original investors, plus about $10,000 in cash for each employee. Amy’s CEO was hired as a mid-level manager, and her engineering colleagues were given offers from Google that came with $250,000 salaries and significant signing bonuses. She was left jobless, with only $10,000 and a bunch of worthless stock.

The implication in the story, “The Secret Shame of an Unacquired Tech Worker,” is that this is sexist: The four male employees were hired by Google, and the one female employee was not.

We don’t know the real story here, and frankly, we probably never will. Still, stories like this ring true because of the brogrammer culture in Silicon Valley, and in the tech industry.

Let’s end with a bit of good news in that regard. According to market research firm Evans Data Corp.:

The number of females in software development has increased by 87% since first being measured in 2001, according to Evans Data’s recently released Developer Marketing 2014 survey.  In 2014, 19.3% of software developers are women, or approximately three and a half million female software developers worldwide.  While today’s number is strong compared to 2001, it is even stronger compared to the years of 2003 to 2009 when the percent of female developers dipped into the single digit range.

We are making progress.

“I tried working for some tech companies like Microsoft, Tektronix, IBM, and Intel. What a fiasco. I can’t count how many young men with way less experience and skills than me snagged the good fun hands-on tech jobs, while I got stuck doing some kind of crap customer service job. I still remember this guy who got hired as a desktop technician. He was in his 30s, but in bad health, always red and sweaty and breathing hard. It took him forever to do the simplest task, like connecting a monitor or printer. He didn’t know much and was usually wrong, but he kept his job. I busted my butt to show I was serious and already had a good skill set, and would work my tail off to excel, and they couldn’t see past that I wasn’t male. So I got the message, mentally told them to eff off and stuck with freelancing.”

So writes Carla Schroder in her blog post, “My Nerd Life: Too Loud, Too Funny, Too Smart, Too Fat” on linux.com. Her story is an important one for female techies – and all techies. Read it.

We need all the technical talent we can get. Whether we are talking developers, architects, network staff, IT admins, managers, hardware, software or firmware, the more women in technology, the better. For everyone – for companies, for customers, for women and for men.

I have recently started working with an organization called WITI – that’s Women in Technology International. (My nickname is now “WITI Alan.”)

WITI is a membership organization. Women who join get access to amazing resources for networking, professional development, career opportunities and more. Companies who join as corporate sponsors can take advantage of WITI’s incredible solutions for empowering women’s networks, including employee training and retention services, live events and more.

My role with WITI is going to be to help women in technology tell their stories. We kicked this off at January’s International Consumer Electronics Show in Las Vegas, and we’ll be continuing at numerous events in 2014 – including the WITI Annual Summit, coming to Santa Clara on June 1-3, 2014. (You can see me here at the WITI booth at CES with Michele Weisblatt.)

If you are a woman in technology, or if you know women in technology, or you understand the value of increasing the number of women in technology, please support the super-important work being done by WITI.

I’ve had the opportunity to meet and listen to Steve Wozniak several times over the years. He’s always funny and engaging, and his scriptless riffs get better all the time. With this one, he had me rolling in the aisle.

The Woz’s hour-long talk (and Q&A session) covered familiar ground: his hacking the phone system with blue boxes (and meeting Captain Crunch), working his way through college, meeting Steve Jobs, designing the Apple I and Apple II computers, the dispute about the Apple Macintosh vs. the Apple Lisa, his amnesia after a plane crash, his dedication to elementary school teaching, his appearance on the TV competition Dancing with the Stars in 2009, and so on.

Many of us have heard and read these stories before — and love them.

Read all about his talk here, in my story on the SmartBear blog….

Microsoft’s woes are too big to ignore.

Problem area number one: The high-profile Surface tablet/notebook device is flopping. While the 64-bit Intel-based Surface Pro hasn’t sold well, the 32-bit ARM-based Surface RT tanked. Big time. Microsoft just slashed its price — maybe that will help. Too little too late?

To quote from Nathan Ingraham’s recent story in The Verge,

Microsoft just announced earnings for its fiscal Q4 2013, and while the company posted strong results it also revealed some details on how the Surface RT project is costing the business money. Microsoft’s results showed a $900 million loss due to Surface RT “inventory adjustments,” a charge that comes just a few days after the company officially cut Surface RT prices significantly. This $900 million loss comes out of the company’s total Windows revenue, though its worth noting that Windows revenue still increased year-over-year. Unfortunately, Microsoft still doesn’t give specific Windows 8 sales or revenue numbers, but it probably performed well this quarter to make up for the big Surface RT loss.

At the end of the day, though, it looks like Microsoft just made too many Surface RT tablets — we heard late last year that Microsoft was building three to five million Surface RT tablets in the fourth quarter, and we also heard that Microsoft had only sold about one million of those tablets in March. We’ll be listening to Microsoft’s earnings call this afternoon to see if they further address Surface RT sales or future plans.

Microsoft has spent heavily, and invested a lot of its prestige, in the Surface. It needs to fix Windows 8 and make this platform work.

Problem area number two: A dysfunctional structure. A recent story in the New York Times reminded me of this 2011 cartoon describing six tech companies’ org charts. Look at Microsoft. Yup.

Steve Ballmer, who has been CEO since 2000, is finally trying to do something about the battling business units. The new structure, announced on July 11, is called “One Microsoft,” and in a public memo by Ballmer, the goal is described as:

Going forward, our strategy will focus on creating a family of devices and services for individuals and businesses that empower people around the globe at home, at work and on the go, for the activities they value most. 

Editing and restructuring the info in that memo somewhat, here’s what the six key non-administrative groups will look like:

Operating Systems Engineering Group will span all OS work, from consoles to mobile devices to PCs to back-end systems. The core cloud services for the operating system will be in this group.

Devices and Studios Engineering Group will have all hardware development and supply chain from the smallest to the largest devices, and studios experiences including all games, music, video and other entertainment.

Applications and Services Engineering Group will have broad applications and services core technologies in productivity, communication, search and other information categories.

Cloud and Enterprise Engineering Group will lead development of back-end technologies like datacenter, database and specific technologies for enterprise IT scenarios and development tools, plus datacenter development, construction and operation.

Advanced Strategy and Research Group will be focused on the intersection of technology and policy, and will drive the cross-company looks at key new technology trends.

Business Development and Evangelism Group will focus on key partnerships especially with innovation partners (OEMs, silicon vendors, key developers, Yahoo, Nokia, etc.) and broad work on evangelism and developer outreach. 

If implemented as described, this new organization should certainly eliminate waste, including redundant research and product development. It might improve compatibility between different platforms and cut down on mixed messages.

However, it may also constrain the freedom to innovate, and promote the unhealthy “Windows everywhere” philosophy that has hamstrung Microsoft for years. It’s bad to spend time creating multiple operating systems, multiple APIs, multiple dev tool chains, multiple support channels. It’s equally bad to make one operating system, API set, dev tool chain and support channel fit all platforms and markets.

Another concern is the movement of developer outreach into a separate group that’s organizationally distinct from the product groups. Will that distance Microsoft’s product developers from customers and ISVs? Maybe. Will the most lucrative products get better developer support? Maybe.

Microsoft has excelled in developer support, and I’d hate to see that suffer as part of the new strategy. 

Read Steve Ballmer’s memo. What do you think?


Apple is sporting a nasty black eye, and the shiner isn’t only because iPad sales are slipping – with a 14% year-on-year decline reported. This time, it’s because QoS on the company’s cloud servers is ugly, ugly, ugly.

As of my writing (on Thursday, July 25), Apple’s developer portal has been offline for days. As you can see on the dashboard, just about everything is down. If you go to a dev center, you see this message:

We apologize for the significant inconvenience caused by our developer website downtime. We’ve been working around the clock to overhaul our developer systems, update our server software, and rebuild our entire database. While we complete the work to bring our systems back online, we want to share the latest with you.

We plan to roll out our updated systems, starting with Certificates, Identifiers & Profiles, Apple Developer Forums, Bug Reporter, pre-release developer libraries, and videos first. Next, we will restore software downloads, so that the latest betas of iOS 7, Xcode 5, and OS X Mavericks will once again be available to program members. We’ll then bring the remaining systems online. To keep you up to date on our progress, we’ve created a status page to display the availability of our systems.

As you may have read elsewhere, the reason for the outage is apparently that a researcher found a massive security hole in the app dev center system. To prevent the flaw from being exploited, Apple took the entire system down – on July 18. That’s right, it’s been over a week.

Ouch.

And then, today, July 25, there are reports that the authentication server needed to set up new iPhone accounts is offline. Apple’s IT department certainly isn’t looking too savvy right now – and perhaps this points to bigger challenges within the company’s spending priorities.

However, before anyone piles onto Apple, bear in mind that service outages are not uncommon, especially in the cloud. Certainly, they are not new; I’ve written about them before, such as in 2008’s “When the cloud was good, it was very very good, but when it was bad, it was horrid” and 2011’s “Skynet didn’t take down Amazon Web Services.”

Cloud failure is not a matter of if. It’s a matter of when. When huge corporations like Amazon and Apple can suffer these sorts of outages, anyone can, no matter how big.

What’s the game plan? Do you have a fail-over strategy to spool up a backup provider? Do you have messaging ready for your customers and partners? Alternatives to suggest?

I have no idea how much money Apple is losing due to these outages – or how much its developer partners and customers are affected. Apple, however, is big enough to handle the hit. How about you?

“You should double your top line revenue by making your products more awesome, not by doubling the size of your sales department.”

That was one of the insights shared during a technology roundtable held July 16 in San Francisco. Called “The Developer is King,” the discussion was moderated by Don Dodge of Google Ventures, formerly a startup evangelist at Microsoft and an engineer at such diverse firms as AltaVista, Napster and Groove Networks. Also on the panel: John Collison, founder of online payment site Stripe; Tom Preston-Werner, founder of GitHub; Suhail Doshi, co-founder of Web analytics firm Mixpanel; and Lew Cirne, founder of app monitoring firm New Relic.

The atmosphere around the panel was filled with pithy aphorisms about why so many developers are succeeding as entrepreneurs. For example, “developers aren’t just techies, they are artists who create things,” and “a good startup founder is someone who doesn’t live only to write code, but who likes to solve problems.”

What made this conversation particularly interesting is that not only are these founders all developers, but their customers are also developers. The panelists offered some true words of wisdom for anyone targeting developers:

• Developers are hard to please. You have to build products that just work — you can’t create success through whiz-bang marketing.

• Developers will see your product and think they can build it themselves. It’s often not hard to duplicate your product. So you have to focus on the customers, ecosystem and simplicity.

• If you are building a commercial offering atop open source software, show that you help developers get their work done more easily than the open source version.

• Tools are quite viral; developers are great at telling their friends what works for them — and what doesn’t work for them.

• Focus on the initial user experience, and make customers more productive immediately. Contrast your offering with big platforms that require a lot of work to install, configure, train and use before the customer sees any benefit.

• The way to innovate is to try lots of things – and create a culture that tolerates failure.

• When hiring, a cultural fit beats anything on the resume. You can teach skills – you can’t teach character.

• Don’t set out to build a company; instead, start out creating a solution to a real problem, and then grow that into a business.

• Don’t get hung up on analyst estimates of market size. Create markets, don’t pursue them.

and my favorite,

• You shouldn’t build a company by focusing on a current fad or gold rush. Rather, figure out where people are frustrated or having problems. Make something that people want. Figure out how to make people happy.

Dr. Douglas Engelbart, who passed away on July 2, was best known as the inventor of the computer mouse. While Dr. Engelbart was the brains behind many revolutionary ideas, his demonstration of a word processor using a mouse in 1968 paved the way for the graphical user interfaces in Xerox’s Alto (1973), Apple’s Lisa (1983) and Macintosh (1984), Microsoft’s Windows (1985) and IBM’s OS/2 Presentation Manager (1988).

Future generations may regard the mouse as a transitional technology. Certainly the touch interface, popularized in the iPad, Android tablets and Windows 8, is making a dent in the need for the mouse — though my Microsoft Surface Pro is far easier to use with a mouse, in addition to the touch screen.

Voice recognition is also making powerful strides. When voice is combined with a touch screen, it’s possible to envision the post-WIMP (Windows, Icons, Menus and Pointing Devices) mobile-style user experience surpassing mouse-driven systems.

Dr. Engelbart, who was recently fêted in Silicon Valley, was 88. Here are some links to help us gain more insight into his vision:

Obituary in the New York Times, by John Markoff.

“The Mother of All Demos” from 1968. Specifically, see clips 3 and 12, where Dr. Engelbart edits documents with a mouse.

A thoughtful essay about Dr. Engelbart’s career, by Tom Foremski.

I never had the honor of meeting Dr. Engelbart. There was a special event commemorating his accomplishments at Stanford Research Institute in 2008, but unfortunately I was traveling.

It’s remarkable for one person to change the world in such a significant way – and so fast. Dr. Engelbart and his team invented not only the mouse, but also personal computing as we know it today. It is striking how that 1968 demo resembles desktop and notebook computing circa 2013. Not bad. Not bad at all. May his memory be a blessing.

Quality Assurance. Testing. No matter what you call it – and of course, there are subtle distinctions between testing and QA – the discipline is essential for successfully creating professional-grade software.

Sure, a one-person shop or a small consultancy might get away without having formal test teams or serious QA policies. Most of us can’t afford to work that way. The cost of software failure, to us and to our customers, can be huge in so many ways.

SD Times and sdtimes.com recently asked readers about test and QA in a research study. Here are some of the results; how well do the answers match your organization’s profile?

Does your organization have separate development and test teams? (Please check one only)

Yes, all development teams and test/QA teams are separate 35.9%
Some development and test/QA teams are separate, some are integrated 33.4%
All test and development teams are integrated 27.4%
Don’t know 3.3%

If any of the test/QA teams in your organization are separate, where do those test teams report? (Please check all that apply)

To the development team 16.2%
To a development manager, director, or VP of development 33.8%
To an IT manager not managing development 22.2%
To a software architect or project leader on a particular project 19.7%
To the CIO/CTO 9.2%
To line of business managers 14.8%
Don’t know 8.1%

What background do your test/QA managers and directors typically have? (Please check all that apply)

Development 20.3%
Test/QA only 28.9%
Development and test/QA 48.9%
General IT background 31.7%
General management background 18.5%
No particular background – we train them from scratch 14.2%

Does your company outsource any of its software quality assurance or testing? (Please check one only)

Yes, all of it 3.7%
Yes, some of it 32.3%
No, none of it 58.1%
Don’t know 5.9%

Who is responsible for internally-developed application performance testing and monitoring in your company? (Please check all that apply)

Software/Application Developers 68.2%
Software/Application Development Management 54.2%
Testers 52.3%
Testing Management 43.9%
Systems administrators 34.9%
IT top management (development) (VP or above) 29.3%
Networking personnel 25.2%
IT top management (non-development) (VP or above) 24.6%
Line-of-business management 21.5%
Consultants 20.2%
Service providers 19.0%
Networking management 18.1%

What is the state of software security testing at your company? (Please check all that apply)

Software security is checked by the developers 41.2%
Software security is checked by the test/QA team 31.6%
Software security is tested by a separate security team 26.9%
Software security testing is done for Web applications 25.7%
Software security is checked by the IT/networking department 25.4%
Software security testing is done for in-house applications 24.1%
Software security testing is done for public-facing applications 21.7%
We don’t have a specific security testing process 20.4%
Software security is checked by contractors 9.3%
Software security testing is not our responsibility 3.1%

At what stage is your company, or companies that you consult, using the cloud for software testing? (Please check one only)

No plans to use the cloud for software testing 42.3%
We are studying the technology but have not started yet 21.2%
We are experimenting with using the cloud for software testing 16.0%
We are using the cloud for software testing on a routine basis 10.7%
Don’t know 9.8%

Lots of good data here!

If you were at Microsoft Build this week in San Francisco, you hung out with six thousand of your closest friends. At least, your closest friends who are enterprise .NET developers, or who are building apps for some version of Windows 8.

Those aren’t necessarily the same people. The Microsoft world is more bifurcated than ever before.

There’s the solid yet slow-moving world of the Microsoft enterprise stack. Windows Server, SQL Server, Exchange, SharePoint, Azure and all that jazz. This part of Microsoft thinks that it’s Oracle or IBM.

And then there’s the quixotic set of consumer-facing products. Xbox, Windows Phone, the desktop version of Windows 8, and of course, snazzy new hardware like the Surface tablet. This part of Microsoft thinks that it’s Apple or Google – or maybe Samsung.

While the company’s most important (and most loyal) customer base is the enterprise, there’s no doubt that Microsoft wants to be seen as Apple, not IBM. Hip. Creative. Innovative. In touch with consumers.

#Microsoft wants to trend on Twitter.

To thrive in the consumer world, the company must dig deeper and do better. The highlight of Build was the preview of Windows 8.1, with user experience improvements that undo some of the damage done by Windows 8.0.

It’s great that you can now boot into the “desktop,” or traditional Windows. That is important for both desktop and tablet users. Yet the platform remains frenetic, inconsistent and missing key apps in the Tile motif.

While the Tile experience is compelling, it’s incomplete. You can’t live in it 100%. Yet Windows 8.0 locked you away from living in the old “desktop” environment. Windows 8.1 helps, but it’s not enough.

In his keynote address (focused on consumer tech), Microsoft CEO Steve Ballmer pushed two themes.

One was that the company is moving to ship software faster. He cited the one-year timeline between Windows 8.0 and Windows 8.1 — instead of the traditional three-year cycle — and the unstated message is that Microsoft is emulating Apple’s annual platform releases. “Rapid Release is the new norm,” Ballmer said.

A second theme is that Microsoft’s story is still Windows, Windows, Windows. This is no change from the past. Yes, Microsoft plays better with other platforms than ever before. Even so, Redmond wants to control every screen — and can’t understand why you might use anything other than Windows.

The more things change, the more they stay the same.

Web sites developed for desktop browsers look, quite frankly, terrible on a mobile device. The look and feel is often wrong, very wrong. Text is the wrong size. Gratuitous clip art on the home page chews up bandwidth. Features like animations won’t behave as expected. Don’t get me started on menus — or on the use-cases for how a mobile user would want to use and navigate the site.

Too often, some higher-up says, “Golly, we must make our website more mobile-friendly,” and the result is a half-thought-out patch job. Not good. Not the right information, not the right workflow, not the right anything.

One organization, UserTesting.com, says that there are four big pitfalls that developers (and designers) encounter when creating mobile versions of their websites. The company, which focuses on usability testing, says that the biggest issues are:

Trap #1 – Clinging to Legacy: ‘Porting’ a Computer App or Website to Mobile
Trap #2 – Creating Fear: Feeding Mobile Anxiety
Trap #3 – Creating Confusion: Cryptic Interfaces and Crooked Success Paths
Trap #4 – Creating Boredom: Failure to Quickly Engage the User

Makes sense, right? UserTesting.com explores each trap in a detailed report, “The Four Mobile Traps.”

The report says,

Companies creating mobile apps and websites often underestimate how different the mobile world is. They assume incorrectly that they can create for mobile using the same design and business practices they learned in the computing world. As a result, they frequently struggle to succeed in mobile.

These companies can waste large amounts of time and money as they try to understand why their mobile apps and websites don’t meet expectations. What’s worse, their awkward transition to mobile leaves them vulnerable to upstart competitors who design first for mobile and don’t have the same computing baggage holding them back. From giants like Facebook to the smallest web startup, companies are learning that the transition to mobile isn’t just difficult, it’s also risky.

Look at your website. Is it mobile friendly? I mean, truly designed for the needs, devices, software and connectivity of your mobile users?

If not — do something about it.

Data can be abused. The rights of individuals can be violated. Bits of disparate information can be tracked without a customer’s knowledge, and used to piece together identities or other profile information that a customer did not intend to divulge. Thanks to Big Data and other analytics, patterns can be tracked that would dismay customers or partners.

What is the responsibility of the software development team to make sure that a company does the right thing – both morally and legally? The straight-up answer from most developers, and most IT managers outside the executive suite, is probably, “That’s not our problem.” That is not a very good answer.

Corporations and other organizations have senior managers, such as owners, presidents, CEOs and boards of directors. There is no doubt that those individuals have the power to say yes – and the power to say no.

Top bosses might consult with legal authorities, such as in-house counsel or outside experts. The ultimate responsibility for making the right decision rests with the ultimate decision-makers. I am not a lawyer, but I expect that in a lawsuit, any potential liability would rest with the managers who misused the data; the programmers who coded the analytics solution would likely not be named or harmed.

This topic has been on my mind for some time, as I ponder both the ethics and the legalities implicit in large-scale data mining. Certainly this has been a major subject of discussion by pundits and elected officials, at least in the United States, when it comes to customer info and social-media posts being captured and utilized by marketers.

Some recent articles on this subject:

Era of Online Sharing Offers Benefits of ‘Big Data,’ Privacy Trade-Offs

The Challenge of Big Data for Data Protection

Big Data Is Opening Doors, but Maybe Too Many

What are we going to do in the face of questionable software development requirements? Whether we are data scientists, computer scientists or other IT professionals, the answer is quite unclear. A few developers might prefer to resign rather than write software they believe crosses a moral line. Frankly, I doubt that many would do so.

Some developers might say, “I didn’t understand the implications.” Or they might say, “If I don’t code this application, management will fire me and get someone else to do it.” Or they might even say, “I was just following orders.”

Perhaps I’m an old fogey, but I can’t help but smile when I see press releases like this: “IBM Unveils New Software to Enable Mainframe Applications on Cloud, Mobile Devices.” 

Everything old is new again, as the late Australian musician Peter Allen famously sang in the song of that name.

Mainframes were all the rage in the 1960s and 1970s. Though large organizations still used mainframes as the basis of their business-critical transaction systems in the 1990s and 2000s, the excitement was around client/server and n-tier architectures built up from racks of low-cost commodity hardware.

Over the past 15 years or so, it’s become clear that distributed processing for Web applications fits neatly into that clustered model. Assemble a few racks of servers and add a load-balancing appliance, and you’ve got all the scalability and reliability anyone needs.
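
In schematic form, that clustered model is almost trivially simple. Here is a hypothetical sketch in Python of the round-robin rotation a load-balancing appliance performs over its back-end pool; the server addresses are made up.

```python
import itertools

# Hypothetical pool of commodity web servers behind the balancer.
SERVERS = ["10.0.0.11:8080", "10.0.0.12:8080", "10.0.0.13:8080"]

# itertools.cycle yields the pool endlessly in order, which is the
# essence of round-robin load balancing.
rotation = itertools.cycle(SERVERS)

# Hand each incoming request to the next server in the pool.
# Scaling up means appending to SERVERS; losing a box means
# removing it from the pool.
for request_id in range(6):
    print(f"request {request_id} -> {next(rotation)}")
```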

But you know, from the client perspective, the cloud looks like, well, a thundering huge mainframe.

Yes, I am an old fogey, who cut his teeth on FORTRAN, COBOL, PL/1 and CICS on Big Blue’s big iron (that is to say, IBM System/370). Yes, I can’t help but think, “Hmm, that’s just like a mainframe” far too often. And yes, the mainframe is very much alive.

IBM’s press release says that,

Today, nearly 15 percent of all new enterprise application functionality is written in COBOL. The programming language also powers many everyday services such as ATM transactions, check processing, travel booking and insurance claims. With more than 200 billion lines of COBOL code being used across industries such as banking, insurance, retail and human resources, it is crucial for businesses to have the appropriate framework to improve performance, modernize key applications and increase productivity.

I believe that. Sure, there are lots of applications written in Java, C++, C# and JavaScript. Those are on the front end, where a failed database read or write, or a non-responsive screen, is an annoyance, nothing more. On the back end, if you want the fastest possible response time, without playing games with load balancers, and without failures, you’re still looking at a small number of big boxes, not a large number of small boxes.

This fogey is happy that the mainframe is alive and well.

According to IDG Research, 80% of business leaders say that Big Data should enable more informed business decisions – and 37% say that the insights provided by Big Data should prove critical to those decisions.

A February 2013 survey on Big Data was designed and executed jointly by IDG Research and Kapow Software, which sells an integration platform for Big Data. As with all vendor surveys, bear in mind that Kapow wants to make Big Data sound exciting, and to align the questions with its own products and services.

That said, the results of the survey of 200 business leaders are interesting:

• 71% said that Big Data should help increase their competitive advantage by keeping them ahead of market trends

• 68% said that Big Data should improve customer satisfaction

• 62% believe Big Data should increase end-user productivity by providing real-time access to business information

• 60% said Big Data should improve information security and/or compliance

• 55% said Big Data should help create compelling new products and services

• 33% said Big Data should help them monitor and respond to social media in real time

Those are big expectations for Big Data! The results to date… not so much. The study revealed that only one-third of organizations surveyed have implemented any sort of Big Data initiative – but another third expect to do so over the next year.

What are the barriers to Big Data success? The study’s answers:

• 53% say a lack of awareness of Big Data’s potential

• 49% say concerns about the time-to-value of the data

• 45% say having the right employee skills and training

• 43% say ability to extract data from the correct sources

The software development world keeps on changing. Just when we think we get a handle on something as simple as application lifecycle management, or cloud computing, or mobile apps, we get new models, new innovations, new technologies.

Forget plugging pair programming or continuous delivery or automated testing before checking code into the repository. The industry has moved on. Today, the innovation is around DevOps and Big Data and HTML5 and app stores and… well… it keeps changing.

Tracking and documenting those changes – that’s what we do at SD Times. Each year, the editors stop, catch our breath, and make a big list of the top innovators of the software development industry. We identify the leaders – the companies, the open-source projects, the organizations who ride the cutting edge.

To quote from Rex Kramer in the movie Airplane!, the SD Times 100 are the boss, the head man, the top dog, the big cheese, the head honcho, number one…

Who are the SD Times 100? This week, all will be revealed. We will begin tweeting out the SD Times 100 on Thursday. Follow the action by watching @sdtimes on Twitter, or look for hashtag #SDTimes100.

Once the tweeting is complete, the full list will be published to www.sdtimes.com. Be part of the conversation!

The classic database engines – like the big relational behemoths from IBM, Microsoft and Oracle – store the data on disk. So do many of the open-source databases, like MySQL and PostgreSQL, as well as the vast array of emerging NoSQL databases. While such database engines keep all the rows and columns on relatively slow disks, they can boost performance by putting some elements, including indices and sophisticated predictive caches, on faster solid-state storage, or in even faster main memory.

From a performance perspective, it would be great to store everything in main memory. It’s fast, fast, fast. It’s also expensive, expensive, expensive, and in traditional servers it is not persistent. That’s why database designers and administrators leverage a hierarchy: a few key elements in the fastest, most costly main memory; more data in fast, costly solid-state storage; the bulk on super-cheap rotating disks. In some cases, of course, some of the data goes into a fourth layer in the hierarchy: off-line optical or tape storage.

In-memory databases challenge those assumptions for applications where database response time is the bottleneck to application performance. Sure, main memory is still fabulously expensive, but it’s not as costly as it used to be. New non-volatile RAM technologies can make main memory somewhat persistent without dramatically harming read/write times. (To the best of my knowledge, NVRAM remains slower than standard RAM – but not enough to matter.)

That’s not to say that your customer database, your server logs, or your music library, should be stored within an in-memory database. Nope. Not even close. But as you examine your application architecture, think about database contents that dramatically affect either raw performance, user experience or API response time. If you can isolate those elements, and store them within an in-memory database, you might realize several orders of magnitude improvement at minimal cost — and with potentially less complex code than you’d need to manage a hierarchical database system.
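
If you want to experiment cheaply, Python’s built-in sqlite3 module can run a whole database in RAM via the special “:memory:” path. This is only a minimal sketch, and the session-cache table and its contents are hypothetical, but it shows the pattern: keep the hot, latency-sensitive slice in memory while the database of record stays on disk.

```python
import sqlite3

# ":memory:" keeps the entire database in RAM; nothing touches disk.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE session_cache (token TEXT PRIMARY KEY, user_id INTEGER)"
)

# Load only the hot, latency-sensitive slice of data (made-up values).
conn.executemany(
    "INSERT INTO session_cache VALUES (?, ?)",
    [("abc123", 1), ("def456", 2)],
)

# Lookups are served at RAM speed; the authoritative copy of
# customer data remains in the disk-backed database of record.
row = conn.execute(
    "SELECT user_id FROM session_cache WHERE token = ?", ("abc123",)
).fetchone()
print(row)  # (1,)
```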

Not long ago, in-memory databases were a well-kept secret. The secret is out, according to research from Evans Data. Its new Global Development survey says that the percentage of developers using in-memory databases has increased 40% worldwide, from 18% to 26%, during the last six months. An additional 39% globally say they plan to incorporate in-memory databases into their development work within the next 12 months.

Not long ago, if the corporate brass wanted to change major functionality in a big piece of software, the IT delivery time might be six to 12 months, maybe longer. Once upon a time, that was acceptable. Not today.

Thanks to agile, many software changes can be delivered in, say, six to 12 weeks. That’s a huge improvement — but not huge enough. Business imperatives might require that IT deploy new application functionality in six to 12 days.

Sounds impossible, right? Maybe. Maybe not. I had dinner a few days ago with S. “Soma” Somasegar, the corporate vice president of Microsoft’s Developer Division. He laughed – and nodded – when I mentioned the need for a 30x shift in software delivery from months to days.

After all, as Soma pointed out, Microsoft is deploying new versions of its cloud-based Team Foundation Service every three weeks. The company has also realized that revving Visual Studio itself every two or three years isn’t serving the needs of developers. That’s why his team has begun rolling out regular updates that include not only bug fixes but also new features. The latest is Update 2 to Visual Studio 2012, released in late April, which added new features for agile planning, quality assurance and line-of-business app development, as well as improvements to the developer experience.

I like what I’m hearing from Soma and Microsoft about their developer tools, and about their direction. For example, the company appears sincere in its engagement of the open source community through Microsoft Open Technologies — but I’ll confess to still being a skeptic, based on Microsoft’s historical hostility toward open source.

Soma said that it’s vital not only for Microsoft to contribute to open source, but also to let open source communities engage with Microsoft. It’s about time!

Soma also cited the company’s new-found dedication to DevOps. He said that future versions of both on-premises and cloud-based tools will help tear down the walls between development and deployment. That’s where the 30x velocity improvement might come from.

Another positive shift is that Microsoft appears to truly accept that other platforms are important to developers and customers. Soma acknowledged that the answer to every problem cannot be to use Microsoft technologies exclusively.

Case in point: Soma said that fully 60% of Microsoft developers are building applications that touch at least three different platforms. He acknowledged that Microsoft still believes that it has the best platforms and tools, but said, “We now know that developers make other choices for valid reasons. We want to meet developers where they are” – that is, engaging with other platforms.

Soma’s words may seem like a modest and obvious statement, but it’s a huge step forward for Microsoft.

Tickets for the Apple Worldwide Developer Conference went on sale on Thursday, April 25. They sold out in two minutes.

Who says that the iPhone has lost its allure? Not developers. Sure, Apple’s stock price is down, but at least Apple Maps on iOS doesn’t show the bridge over Hoover Dam dropping into Black Canyon any more.

Two minutes.

To quote from a story on TechCrunch,

Tickets for the developer-focused event at San Francisco’s Moscone West, which features presentations and one-on-one time with Apple’s own in-house engineers, sold out in just two hours in 2012, in under 12 hours in 2011, and in eight days in 2010.

Who attends the Apple WWDC? Independent software developers, enterprise developers and partners. Thousands of them. Many are building for iOS, but there are also developers creating software or services for other aspects of Apple’s huge ecosystem, from e-books to Mac applications.

Two minutes.

Mobile developers love tech conferences. Take Google’s I/O developer conference, scheduled for May 15-17. Tickets sold out super-fast there as well.

The audience for Google I/O is potentially more diverse, mainly because Google offers a wider array of platforms. You’ve got Android, of course, but also Chrome, Maps, Play, App Engine, Google+, Glass and others besides. My suspicion, though, is that enterprise and entrepreneurial interest in Android is filling the seats.

Mobile. That’s where the money is. I’m looking forward to seeing exactly what Apple will introduce at WWDC, and Google at Google I/O.

Meanwhile, if you are an Android developer and didn’t get into Google I/O before it sold out – or if you are looking for a technical conference 100% dedicated to Android development – let me invite you to register for AnDevCon Boston, May 28-31. We still have a few seats left. Hope to see you there.