
Big Security, Big Cloud and the Big Goodbye

Software-defined networks and Network Functions Virtualization will redefine enterprise computing and change the dynamics of the cloud. Data thefts and professional hacks will grow, and development teams will shift their focus from adding new features to hardening against attacks. Those are two of my predictions for 2015.

Big Security: As 2014 came to a close, huge credit-card breaches from retailers like Target faded into the background. Why? The Sony Pictures hack, and the release of an incredible amount of corporate data, made us ask a bigger question: “What is all that information doing on the network anyway?” Attackers took off with Sony Pictures’ spreadsheets about executive salaries, confidential e-mails about actors and actresses, and much, much more.

What information could determined, professional hackers make off with from your own company? If it’s on the network, if it’s on a server, then it could be stolen. And if hackers can gain access to your cloud systems (perhaps through social engineering, perhaps by exploiting bugs), then it’s game over. From pre-released movies and music albums by artists like Madonna, to sensitive healthcare data and credit-card numbers, if it’s on a network, it’s fair game.

No matter where you turn, vulnerabilities are everywhere. Apple patched a hole in its Network Time Protocol implementation. Who’d have thought attackers would use NTP? GitHub has new security flaws. ICANN has scary security flaws. Microsoft released flawed updates. Inexpensive Android phones and tablets are found to have backdoor malware baked right into the devices. I believe that 2015 will demonstrate that attackers can go anywhere and steal anything.

That’s why I think that savvy development organizations will focus on reviewing their new code and existing applications, prioritizing security over adding new functionality. It’s not fun, but it’s 100% necessary.

Big Cloud: Software-defined networking and Network Functions Virtualization are reinventing the network. The fuzzy line between intranet and Internet is getting fuzzier. Cloud Ethernet is linking the data center directly to the cloud. The network edge and core are indistinguishable. SDN and NFV are pushing functions like caching, encryption, load balancing and firewalls into the cloud, improving efficiency and enhancing the user experience.

In the next year, mainstream enterprise developers will begin writing (and rewriting) back-end applications to specifically target and leverage SDN/NFV-based networks. The question of whether the application is going to run on-premises or in the cloud will cease to be relevant. In addition, as cloud providers become more standards-based and interoperable, enterprises will gain more confidence in that model of computing. Get used to cloud APIs; they are the future.

Looking to boost your job skills? Learn about SDN and NFV. Want to bolster your development team’s efforts? Study your corporate networking infrastructure, and tailor your efforts to matching the long-term IT plans. And put security first—both of your development environments and your deployed applications.

Big Goodbye: The tech media world is constantly changing, and not always for the better. The biggest one is the sunsetting of Dr. Dobb’s Journal, a website for serious programmers, and an enthusiastic bridge between the worlds of computer science and enterprise computing. After 38 years in print and online, the website will continue, but no new articles or content will be commissioned or published.

DDJ was the greatest programming magazine ever. There’s a lot that can be said about its sad demise, and I will refer you to two people who are quite eloquent on the subject: Andrew Binstock, the editor of DDJ, and Larry O’Brien, SD Times columnist and former editor of Software Development Magazine, which was folded into DDJ a long time ago.

Speaking as a long-time reader—and as one of the founding judges of DDJ’s Jolt Awards—I can assure you that Dr. Dobb’s will be missed.


Innovate in the cloud, cheaply and securely

For development teams, cloud computing is enthralling. Where’s the best place for distributed developers, telecommuters and contractors to reach the code repository? In the cloud. Where do you want the high-performance build servers? At a cloud host, where you can commandeer CPU resources as needed. Storing artifacts? Use cheap cloud storage. Hosting the test harness? The cloud has tremendous resources. Load testing? The cloud scales. Management of beta sites? Cloud. Distribution of finished builds? Cloud. Access to libraries and other tools? Other than the primary IDE itself, cloud. (I’m not a fan of working in a browser, sorry.)

Sure, a one-person dev team can store an entire software development environment on a huge workstation or a convenient laptop. Sure, a corporation or government that has exceptional concerns or extraordinary requirements may choose to host its own servers and tools. In most cases, however, there are undeniable benefits to cloud-oriented development, and if developers aren’t there today, they will be soon. My expectation is that new projects and teams will launch in the cloud. Existing projects and teams will remain on their current dev platforms (and on-prem) until there’s a good reason to make the switch.

The economics are unassailable, the convenience is unparalleled, and both performance and scalability can’t be matched by in-house code repositories. Security in the cloud may also outmatch most organizations’ internal software development servers.

We have read horror stories about the theft of millions of credit cards and other personal data, medical data, business documents, government diplomatic files, e-mails and so on. It’s all terrible and unlikely to stop, as the recent hacking of Sony Pictures demonstrates.

What we haven’t heard about, through all these hacks, is the broad theft of source code, and certainly not thefts from hosted development environments. Such hacks would be bad, not only because proprietary source code contains trade secrets, but also because the source can be reverse-engineered to reveal attack vulnerabilities. (Open-source projects also can be reverse-engineered, of course, but that is expected and in fact encouraged.)

Even worse than reverse-engineering of stolen source code would be unauthorized and undetected modifications to a codebase. Can you imagine if hackers could infiltrate an e-commerce system’s hosted code and inject a back door or keylogger? You get the idea.

I am not implying that cloud-based software development systems are more secure than on-premises systems. I am also not implying the inverse. My instinct is to suggest that hosted cloud dev systems are as safe, or safer, than internal data center systems. However, there’s truly no way to know.

A recent report from the analyst firm Technology Business Research (TBR) takes this stance, arguing that security for cloud-based services will end up being better than security at local servers and data centers. While not speaking specifically to software development, the report concluded, “Security remains the driving force behind cloud vendor adoption, while the emerging trends of hybrid IT and analytics, and the associated security complications they bring to the table, foreshadow steady and growing demand for cloud professional services over the next few years.”

Let me close by drawing your attention to a competition geared at startups innovating in the cloud. The Clouded Leopard’s Den is for young companies looking for A-series or B/C-series funding, and offers tools and resources to help them grow, attract publicity, and possibly even find new funding. If you work at a cloud startup, check it out!

, , , ,

Is the best place for data in your data center or in the cloud? Ask your lawyer


Cloud-based storage is amazing. Simply amazing. That’s especially true when you are talking about data from end users who access your applications via the public Internet.

If you store data in your local data center, you have the best control over it. You can place it close to your application servers. You can amortize it as a long-term asset. You can see it, touch it and secure it—or at least, have full control over security.

There are downsides, of course, to maintaining your own on-site data storage. You have to back it up. You have to plan for disasters. You have to anticipate future capacity requirements through budgeting and advance purchases. You have to pay for the data center itself, including real estate, electricity, heating, cooling, racks and other infrastructure. Operationally you have to pipe that data to and from your remote end users through your own connections to the Internet or to cloud application servers.

By contrast, cloud storage is very appealing. You pay only for what you use. You can hold service providers to service-level guarantees. You can pay the cloud provider to replicate the storage in various locations, so customers and end-users are closer to their data. You can pay for security, for backups, for disaster recovery provisions. And if you find that performance isn’t sufficient, you can migrate to another provider or order up a faster pipe. That’s a lot easier, cheaper and faster than ripping-and-replacing outdated storage racks in your own data center.

Gotta say, if I were setting up a new application for use by off-site users (whether customers or employees), I’d lean toward cloud storage. In most cases, the costs are comparable, and the operational convenience can’t be beat.

Plus, if you are at a startup, a monthly storage bill is easier to work with than a large initial outlay for on-site storage infrastructure.
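
To make that tradeoff concrete, here is a minimal back-of-the-envelope sketch in Python. The prices, capacities and growth rate below are made-up placeholder assumptions, not quotes from any vendor; plug in your own numbers.

# Rough cost comparison: on-site storage vs. cloud storage over three years.
# All figures below are hypothetical placeholders -- substitute real quotes.

ONPREM_CAPEX = 120_000.0        # up-front hardware, racks, installation (USD)
ONPREM_OPEX_MONTHLY = 2_500.0   # power, cooling, space, admin time (USD/month)
AMORTIZATION_MONTHS = 36        # straight-line amortization period

CLOUD_PRICE_PER_TB_MONTH = 25.0 # storage fee (USD per TB per month)
START_TB = 40.0                 # initial data set size (TB)
MONTHLY_GROWTH_TB = 2.0         # how fast the data set grows (TB/month)

def onprem_cost(months: int) -> float:
    """Amortized capital cost plus operating cost for the period."""
    capex = ONPREM_CAPEX * min(months, AMORTIZATION_MONTHS) / AMORTIZATION_MONTHS
    return capex + ONPREM_OPEX_MONTHLY * months

def cloud_cost(months: int) -> float:
    """Pay-as-you-go: you are billed only for the capacity you actually use."""
    return sum(
        (START_TB + MONTHLY_GROWTH_TB * m) * CLOUD_PRICE_PER_TB_MONTH
        for m in range(months)
    )

for months in (12, 24, 36):
    print(f"{months:2d} months  on-prem ${onprem_cost(months):>10,.0f}   "
          f"cloud ${cloud_cost(months):>10,.0f}")

Under these made-up numbers the cloud wins early on; change the growth rate or the capital outlay and the answer can flip, which is exactly why it’s worth running the arithmetic for your own workload.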

Case closed? No, not exactly. On-site still has some tricks up its sleeve. If your application servers are on-site, local storage is faster to access. If your users are within your own building or campus, you can keep everything within your local area network.

There also may be legal advantages to maintaining and using on-site storage. For compliance purposes, you know exactly where the data is at all times. You can set up your own intrusion detection systems and access logs, rather than relying upon the access controls offered by the cloud provider. (If your firm isn’t good at security, of course, you may want to trust the cloud provider over your own IT department.)

On that subject: Lawsuits. In her story, “Eek! Lawyers are Coming After Your Fitbit!,” Sharon Fisher writes about insurance attorneys issuing subpoenas for a client’s Fitbit data to show that she wasn’t truly as injured as she claimed. The issue here isn’t only about wearables or healthcare. It’s also about access. “Will legal firms be able to subpoena your cloud provider if that’s where your fitness data is stored? How much are they going to fight to protect you?” Fisher asks.

Say a hostile attorney wants to subpoena some of your data. If the storage is in your own data center, the subpoena comes to your company, where your own legal staff can advise whether to respond by complying or fighting the subpoena.

Yet: If the data is stored in the cloud, attorneys or government officials could come after you, or try to get access by serving a subpoena on the cloud service provider. Of course, encryption might prevent the cloud provider from complying. Still, this is a new concern, especially given the broad subpoena powers granted to prosecutors, litigating attorneys and government agencies.
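
How does encryption keep the provider out of the loop? Here is a minimal sketch in Python, assuming the widely used cryptography package: if you encrypt on your own servers and keep the key out of the cloud, the provider holds only ciphertext and has nothing meaningful to hand over. This illustrates the general technique, not any particular provider’s controls, and the sample record is hypothetical.

# Client-side encryption before uploading to cloud storage (sketch).
# Requires: pip install cryptography
from cryptography.fernet import Fernet

# Generate once and store in your own key-management system -- never in the
# cloud bucket alongside the data.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b'{"patient_id": 1234, "injury_claim": "lower back"}'  # hypothetical data

ciphertext = cipher.encrypt(record)   # this opaque blob is all the provider ever sees
# ...hand `ciphertext` to whatever storage API you use for the actual upload...

# Later, back on premises (or wherever the key legitimately lives):
assert cipher.decrypt(ciphertext) == record

The tradeoff is that provider-side conveniences such as server-side search and deduplication no longer work on encrypted blobs, and key management becomes your problem.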

It’s something to talk to your corporate counsel about. Bring your legal eagles into the conversation.


The wisdom, innovation, and net neutrality of Bob Metcalfe

Washington, D.C. — “It’s not time to regulate and control and tax the Internet.” Those are words of wisdom about Net Neutrality from Dr. Robert Metcalfe, inventor of Ethernet, delivered here at MEF GEN14, the annual conference of the Metro Ethernet Forum.

Bob Metcalfe is a legend. Not only for his role in inventing Ethernet and founding 3Com, but also now for his role as a professor of innovation at the University of Texas at Austin. (Disclosure: Bob is also a personal friend and former colleague.)

At MEF GEN14, Bob gave a keynote, chaired a panel on innovation, and was behind the microphone on several other occasions. I’m going to share some of his comments and observations.

  • Why didn’t WiFi appear earlier? According to Bob, radio links were part of the original work on Ethernet, but the radios themselves were too slow, too large, and required too much electricity. “It was Moore’s Law,” he explained, saying that chips and circuits needed to evolve in order to make radio-based Ethernet viable.
  • Interoperability is key for innovation. Bob believes that in order to have strong competitive markets, you need to have frameworks for compatibility, such as standards organizations and common protocols. This helps startups and established players compete by creating faster, better and cheaper implementations, and also creating new differentiated value-added features on top of those standards. “The context must be interoperability,” he insisted.
  • Implicit with interoperability is that innovation must respect backward compatibility. Whether in consumer or enterprise computing, customers and markets do not like to throw away their prior investments. “I have learned about efficacy of FOCACA: Freedom of Choice Among Competing Alternatives. That’s the lesson,” Bob said, citing Ethernet protocols but also pointing at all layers of the protocol stack.
  • There is a new Internet coming: the Gigabit Internet. “We started with the Kilobit Internet, where the killer apps were remote login and tty,” Bob explained. Technology and carriers then moved to today’s ubiquitous Megabit Internet, “where we got the World Wide Web and social media.” The next step is the Gigabit Internet. “What will the killer app be for the Gigabit Internet? Nobody knows.”
  • With the Internet of Things, is Moore’s Law going to continue? Bob sees the IoT being constrained by hardware, especially microprocessors. He pointed out that as semiconductor feature sizes have shrunk to the 14nm scale, the cost of building fabrication plants has grown to billions of dollars. While chip features shrink, the industry has also moved to consolidation, larger wafers, 3D packaging, and much lower power consumption—all of which are needed to make cheap chips for IoT devices. There is a lot of innovation in the semiconductor market, Bob said, “but with devices counted in the trillions, the bottleneck is how long it takes to design and build the chips!”
  • With Net Neutrality, the U.S. Federal Communications Commission should keep out. “The FCC is being asked to invade this party,” Bob said. “The FCC used to run the Internet. Do you remember that everyone had to use acoustic couplers because it was too dangerous to connect customer equipment to the phone network directly?” He insists that big players—he named Google—are playing with fire by lobbying for Net Neutrality. “Inviting the government to come in and regulate the Internet. Where could it go? Not in the way of innovation!” he insisted.
, , , ,

Tomorrow’s forecast: Distributed Denial of Service

Malicious agents can crash a website by implementing a DDoS—a Distributed Denial of Service attack—against a server. So can sloppy programmers.

Take, for example, the National Weather Service’s website, operated by the United States National Oceanic and Atmospheric Administration, or NOAA. On August 29, the service went down, hard, as a single rogue Android app overwhelmed NOAA’s servers.

As far as anyone knows, there was nothing deliberately malicious about the Android app, and of course there is nothing specific to Android in this situation. However, the app in question was making service requests of the NOAA server’s public APIs every few milliseconds. With hundreds, thousands or tens of thousands of instances of that app running simultaneously, the NOAA system collapsed.

There is plenty of blame to go around. Let’s start with the app developer.

Certainly the app developer was sloppy, sloppy, sloppy. I can imagine that the app worked great in testing, when only one or two instances of the app were running at any one time on a simulator or on actual devices. Scale it up—boom! This is a case where manual code reviews may have found the problem. Maybe not.

Alternatively, the app developer could have checked to see if the public APIs it required (such as NOAA’s weather API) could handle the anticipated load. However, if the coders didn’t write the software correctly, load testing may not have sufficed. For example, say that the design of the app was to pull data every 10 seconds. If the programmers accidentally set up the data retrieval to pull the data every 10 milliseconds, the load would be 1,000x greater than anticipated. Every 10 seconds, no problem. Every 10 milliseconds, big problem. Boom!
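
Here is a hedged sketch of how easy that bug is to write. The endpoint and helper names below are hypothetical and the polling loop is simplified; the point is only that a seconds-versus-milliseconds mix-up survives every compiler and unit test, because the code works, it just works 1,000 times too often.

# Polling a public weather API (sketch; URL and parameters are hypothetical).
# Requires: pip install requests
import time
import requests

FORECAST_URL = "https://forecast.example.gov/api/point"   # placeholder endpoint
POLL_INTERVAL = 10   # design spec: poll every 10 SECONDS

def handle_update(payload: dict) -> None:
    ...  # render the forecast in the app

def poll_forecast(lat: float, lon: float) -> None:
    while True:
        resp = requests.get(FORECAST_URL, params={"lat": lat, "lon": lon}, timeout=5)
        resp.raise_for_status()
        handle_update(resp.json())
        # BUG: time.sleep() takes seconds. A developer who assumed milliseconds
        # might divide by 1000, sleeping 0.01 s instead of 10 s -- a 1,000x
        # heavier load on the server, invisible on a single test device.
        time.sleep(POLL_INTERVAL / 1000)   # should be: time.sleep(POLL_INTERVAL)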

This is a nasty bug, to be sure. Compilers, libraries, test systems, all would verify that the software ran correctly, because it did run correctly. In the scenario I’ve painted, it simply wasn’t coded to meet the design. The bug might have been spotted if someone noticed a very high number of external API calls, or again, perhaps during a manual code review. Otherwise, it’s not hard to see how it would slip through the cracks.

Let’s talk about NOAA now. In 2004, the weather service beefed up its Internet capacity in anticipation of Hurricane Charley, contracting with Akamai to host some of its busiest Web pages, using distributed edge caching to reduce the load. This worked well, and Akamai continued to work with NOAA. It’s unclear if Akamai also fronted public API calls; my guess is that those were passed straight through to the National Weather Service servers.

NOAA’s biggest problem is that it has little control over external applications that use its public APIs. Even so, Akamai was still in the circuit and, fortunately, was able to help with the response to the Aug. 29 accidental DDoS situation. At that time, the National Weather Service put out a bulletin on its NIDS messaging service that said:

TO – ALL CUSTOMERS SUBJECT – POINT FORECAST ISSUES. WE ARE PROVIDING NOTICE TO ALL THAT NIDS HAS IDENTIFIED AN ABUSING ANDROID APP THAT IS IMPACTING FORECAST.WEATHER.GOV. WE HAVE FORCED ALL SITES TO ZONES WHILE WE WORK WITH THE DEVELOPER. AKAMAI IS BEING ENGAGED TO BLOCK THE APPLICATION. WE CONTINUE TO WORK ON THIS ISSUE AND APPRECIATE YOUR PATIENCE AS WE WORK TO RESOLVE THIS ISSUE.

Kudos to NOAA for responding quickly and transparently to this issue. Still, this appalling situation—that a single misbehaving app could cripple such a vital service—is unacceptable. Imagine if this had been a malicious attack, rather than an accidental coding error, and if the attacker had been able to modify the attack in real time to get around Akamai’s attempts to block the traffic.

What could NOAA have done differently? For best results, DDoS attacks must be blocked within the network before they reach (and overwhelm) the server. Therefore, DDoS detection and blocking systems should already have been in place.

Such systems can, for example, detect potential attacks by spotting abnormally high volumes of requests from a specific app, raise alarms, and drop the offending requests (which is fast and takes few resources) instead of servicing them (which is slow and takes more resources). Perfect? No. DDoS scenarios are nasty and messy. No matter how you slice it, though, a single misbehaving app should never be able to crash your server.
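
What might “drop instead of service” look like? Below is a minimal token-bucket rate limiter in Python, offered as a sketch rather than production code; the bucket size and refill rate are arbitrary assumptions, and a real deployment would enforce this at the network edge or API gateway, keyed on an API key or client IP, not inside the application server.

# Minimal per-client token-bucket rate limiter (illustrative sketch).
import time
from collections import defaultdict

RATE = 1.0      # tokens added per second per client (assumed policy)
BURST = 10.0    # maximum bucket size: allows short bursts of up to 10 requests

_buckets: dict[str, tuple[float, float]] = defaultdict(lambda: (BURST, time.monotonic()))

def allow_request(client_id: str) -> bool:
    """Return True to service the request, False to drop it cheaply."""
    tokens, last = _buckets[client_id]
    now = time.monotonic()
    tokens = min(BURST, tokens + (now - last) * RATE)   # refill since last call
    if tokens >= 1.0:
        _buckets[client_id] = (tokens - 1.0, now)
        return True
    _buckets[client_id] = (tokens, now)
    return False

# Usage inside a request handler:
# if not allow_request(api_key):
#     return 429  # Too Many Requests -- rejected before any expensive work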


Capriza’s clever mobility via HTML screen scraping

HTML browser virtualization, not APIs, may be the best way to mobilize existing enterprise applications like SAP ERP, Oracle E-Business Suite or Microsoft Dynamics.

At least, that’s the perspective of Capriza, a company offering a SaaS-based mobility platform that uses a cloud-based secure virtualized browser to screen-scrape data and context from the enterprise application’s Web interface. That data is then sent to a mobile device (like a phone or tablet), where it’s rendered and presented through Capriza’s app.

The process is bidirectional: New transactional data can be entered into the phone’s Capriza app, which transmits it to the cloud-based platform. The Capriza cloud, in turn, opens up a secure virtual browser session with the enterprise software and performs the transaction.

The Capriza platform, which I saw demonstrated last week, is designed for employees to access enterprise applications from their Android or Apple phones, or from tablets.

The platform isn’t cheap – it’s licensed on a per-seat, per-enterprise-application basis, and you can expect a five-digit or six-digit annual cost, at the least. However, Capriza is solving a pesky problem.

Think about the mainstream way to deploy a mobile application that accesses big enterprise back-end platforms. Of course, if the enterprise software vendor offers a mobile app, and if that app meets your needs, that’s the way to go. What if the enterprise software’s vendor doesn’t have a mobile app – or if the software is homegrown? The traditional approach would be to open up some APIs allowing custom mobile apps to access the back-end systems.

That approach is fraught with peril. It takes a long time. It’s expensive. It could destabilize the platform. It’s hard to ensure security, and often it’s a challenge to synchronize API access policies with client/server or browser-based access policies and ACLs. Even if you can license the APIs from an enterprise software vendor, how comfortable are you exposing them over the public Internet — or even through a VPN?

That’s why I like the Capriza approach of using a virtual browser to access the existing Web-based interface. In theory (and probably in practice), the enterprise software doesn’t have to be touched at all. Since the Capriza SaaS platform has each mobile user log into the enterprise software using the user’s existing Web interface credentials, there should be no security policies and ACLs to replicate or synchronize.

In fact, you can think of Capriza as an intentional man-in-the-middle for mobile users, translating mobile transactions to and from Web transactions on the fly, in real time.
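
To make the idea of an intermediary that speaks “Web” on behalf of a mobile client concrete, here is a heavily simplified Python sketch. It has nothing to do with Capriza’s actual implementation; the URLs, form fields and session handling are hypothetical. It simply shows the general screen-scraping pattern: log in with the user’s existing credentials, read data out of the HTML the enterprise app already serves, and post a transaction back through the same Web interface.

# Generic "virtual browser" screen-scraping sketch (not Capriza's API).
# Requires: pip install requests beautifulsoup4
import requests
from bs4 import BeautifulSoup

BASE = "https://erp.example.internal"          # hypothetical enterprise web app

def fetch_open_orders(session: requests.Session, username: str, password: str) -> list[dict]:
    # Reuse the user's existing web credentials -- no new ACLs to replicate.
    session.post(f"{BASE}/login", data={"user": username, "pass": password})

    html = session.get(f"{BASE}/orders?status=open").text
    soup = BeautifulSoup(html, "html.parser")

    orders = []
    for row in soup.select("table#orders tr")[1:]:          # skip the header row
        cells = [td.get_text(strip=True) for td in row.find_all("td")]
        orders.append({"order_id": cells[0], "customer": cells[1], "total": cells[2]})
    return orders   # this compact structure is what gets rendered on the phone

def approve_order(session: requests.Session, order_id: str) -> None:
    # The mobile tap becomes an ordinary form POST against the existing web UI.
    session.post(f"{BASE}/orders/{order_id}/approve", data={"confirm": "yes"})

The fragility mentioned below is visible here too: if the enterprise app’s HTML changes (say, the orders table gets a new column), the selectors break, which is why these integrations need retesting whenever the underlying application is updated.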

As the company explains it, “Capriza helps companies leverage their multi-million dollar investments in existing enterprise software and leapfrog into the modern mobile era. Rather than recreate the wheel trying to make each enterprise application run on a mobile device, Capriza breaks complex, über business processes into mini ones. Its approach bypasses the myriad of tools, SDKs, coding, integration and APIs required in traditional mobile app development approaches, avoiding the perpetual cost and time requirements, risk and questionable ROI.”

It certainly looks like Capriza wins this week’s game of Buzzword Bingo. Despite the marketing jargon, however, the technology is sound, and Capriza has real customers—and has recently landed a US$27 million investment. That means we’re going to see a lot more of this solution.

Can Capriza do it all? Well, no. It works best on plain vanilla Web sites; no Flash, no Java, no embedded apps. While it’s somewhat resilient, changes to an internal Web site can break the screen-scraping technology. And while the design process for new mobile integrations doesn’t require a real programmer, the designer must be very proficient with the enterprise application, and model all the pathways through the software. This can be tricky to design and test.

Plus, of course, you have to be comfortable letting a third-party SaaS platform act as the man-in-the-middle to your business’s most sensitive applications.

Bottom line: If you are mobilizing enterprise software — either commercial or home-grown — that allows browser access, Capriza offers a solution worth considering.


Next steps for Hewlett-Packard post-split

Neil Sedaka insists that breakin’ up is hard to do. Will that apply to the planned split of Hewlett-Packard into two companies? Let’s be clear: This split is a wonderful idea, and it’s long overdue.

Once upon a time, HP was in three businesses: Electronics test equipment (like gas spectrometers); expensive, high-margin data center products and services (like minicomputers and consulting); and cheap, low-margin commodity tech products (like laptops, small business routers and ink-jet printers).

HP spun off the legacy test-equipment business in 1999 (forming Agilent Technologies) and that was a win-win for both Agilent and for the somewhat-more-focused remainder of HP. Now it’s time to do it again.

There are precious few synergies between the enterprise side of HP and the commodity side. The enterprise side has everything that a big business would want, from high-end hyperscale servers to Big Data, Software Defined Networks, massive storage arrays, e-commerce security, and oh, lots of consulting services.

Over the past few years, HP has been on an acquisitions binge to support its enterprise portfolio, helping make it more competitive against arch-rival IBM. The company has snapped up ArcSight and Fortify Software (software security); Electronic Data Systems (IT services and consulting); 3PAR (storage); Vertica Systems (database analytics); Shunra (network virtualization); Eucalyptus (private and hybrid cloud); Stratavia/ExtraQuest (data center automation); and of course, the absurdly overpriced Autonomy (data management).

Those high-touch, high-cost, high-margin enterprise products and services have little synergy with, say, the HP Deskjet 1010 Color Printer, available for US$29.99 at Staples. Sure, there’s money in printers, toner and ink, monitors, laptops and so on. But that’s a very different market, with a race-to-the-bottom drive for market share, horrible margins, crazy supply chain and little to differentiate one Windows-based product from another.

Analysts and investors have been calling for the breakup of HP for years; the company refused, saying that the unified company benefitted from an economy of scale. It’s good that CEO Meg Whitman has acknowledged what everyone knew: HP is sick, and this breakup into Hewlett-Packard Enterprise and HP Inc. is absolutely necessary.

Is breaking up hard to do? For most companies it’s a challenge at the best of times, but this one should be relatively painless. First of all, HP has split up before, so at least there’s some practice. Second, these businesses are so different that it should be obvious where most of HP’s employees, products, customer relationships, partner relationships and intellectual property will end up.

That’s not to say it’s going to be easy. However, it’s at least feasible.

Both organizations will be attractive takeover targets, that’s for sure. I give it a 50/50 chance that within five years, IBM or Oracle will make a play for Hewlett-Packard Enterprise, or it will combine with a mid-tier player like VMware or EMC.

The high-volume, low-margin HP Inc. will have trouble surviving on its own, because that is an area where scale helps drive down costs and helps manage the supply chain and retail channels. I could see HP Inc. being acquired by Dell or Lenovo, or even by a deep-pocket Internet retailer like Amazon.com.

This breakup is necessary and may be the salvation of Hewlett-Packard’s enterprise business. It may also be the beginning of the end for the most storied company in Silicon Valley.


Data Divination – Your business partner’s book about Big Data

You’ve gotta read “Data Divination: Big Data Strategies,” Pam Baker’s new book about Big Data.

Actually, let me change my recommendation. If you are a techie looking for suggestions on how to configure your Hadoop installation or optimize the storage throughput in your NAS array, this isn’t the book for you. Rather, this is the book for your business-side manager or partner, who is looking to understand not only what Big Data is, but how to really apply data analysis to business problems.

One of the challenges with Big Data is simply understanding it. The phrase is extremely broad and quite nebulous. Yet behind the overhyping of Big Data, there are genuine use cases that demonstrate that looking at your business’ data in a new way can transform your business. It is real, and it is true.

Baker is the editor of the “Fierce Big Data” website. She deconstructs the concept by dispensing with the jargon and the, well, overly smug Big Data worship that one finds in a lot of the literature and that vendors push out. With a breezy style that reflects her background as a technology journalist, Baker uses clear examples and lots of interviews to make her points.

What will you learn? To start with, “Data Divination” teaches you how to ask good questions. After all, if you don’t ask, you won’t learn anything from all that data and all those reports. Whether it’s predictive analytics or trend spotting or real-time analysis, she helps you understand which data is valuable and which isn’t. That’s why this book is best for the executive and business-side managers, who are the ultimate beneficiaries of your enterprise’s Big Data investments.

This book goes beyond other books on the subject, which could generally be summarized either as too fluffy and cheerleading, or as myopically focused on implementation details of specific Big Data architectures. For example, there is a lengthy chapter on the privacy implications of data gathering and data analysis, the sort of chapter that a journalist would write, but an engineer wouldn’t even think about.

Once you’ve finished with the basics, Baker jumps into several fascinating use cases: in healthcare, in the security industry, in government and law enforcement, in small business, in agriculture, in transportation, in energy, in retail, in manufacturing, and so on. Those are the most interesting parts of the book, and each use case has takeaways that could apply to any industry. Baker is to be commended for digging into the noteworthy challenges that Big Data attempts to help businesses overcome.

It’s a good book. Read it. And tell your business partner, CIO or even CEO to read it too.


What to do when, not if, your cloud goes down

Cloud-based development tools are great. Until they don’t work.

I don’t know if you were affected by Microsoft’s Azure service outage on Thursday, August 14, 2014. As of my deadline, services had been offline for nearly six hours. On its status page, Microsoft was reporting:

Visual Studio Online – Multi-Region – Full Service Interruption

Starting 22:45 13 Aug, 2014 UTC, Visual Studio Online customers may have experienced issues with latency and extended Execution times. The initial incident mitigated at approximately 14:00 UTC. During investigation at 13:52 14 Aug, 2014 UTC, engineering teams began receiving alerts for a separate issue where customers were unable to log in to their Visual Studio Online services. From 13:52 to 19:45 on 14 Aug, 2014 UTC, customers were unable to access their Visual Studio Online resources. Engineering teams have validated their mitigation efforts for both issues and have confirmed that full service has been restored to our Visual Studio Online users. These incidents are now mitigated.

My goal here isn’t to throw Microsoft under the bus. Azure has been quite stable, and other cloud providers, including Amazon, Apple and Google, have seen similar problems. Actually, Amazon in particular has seen a lot of uptime and stability problems with AWS over the past couple of years, though its dashboard on Thursday afternoon showed full service availability.

Let’s think about the broader issue. What’s your contingency plan if your cloud-based services go down, whether it’s one of those players, or a service like GitHub, Salesforce.com, SourceForge, or you-name-it? Do you have backups, in case code or artifacts are lost or corrupted? (Do you have any way to know if data is lost or corrupted?)
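
One piece of that contingency plan can be as simple as an off-cloud mirror of your hosted repositories, refreshed on a schedule you control. Here is a minimal sketch in Python that shells out to Git; the repository URLs and backup path are placeholders, and a real setup would add alerting, retention and a periodic restore test (a backup you have never restored is only a hope).

# Nightly off-cloud mirror of hosted Git repositories (illustrative sketch).
import subprocess
from pathlib import Path

# Hypothetical repository URLs -- substitute your own hosted projects.
REPOS = [
    "https://github.com/example-org/payments-service.git",
    "https://github.com/example-org/mobile-app.git",
]
BACKUP_ROOT = Path("/var/backups/git-mirrors")

def mirror(repo_url: str) -> None:
    name = repo_url.rstrip("/").split("/")[-1]
    target = BACKUP_ROOT / name
    if target.exists():
        # Refresh an existing mirror: fetch all refs, prune deleted branches.
        subprocess.run(["git", "--git-dir", str(target), "remote", "update",
                        "--prune"], check=True)
    else:
        # First run: create a bare mirror clone with every branch and tag.
        subprocess.run(["git", "clone", "--mirror", repo_url, str(target)],
                       check=True)

if __name__ == "__main__":
    BACKUP_ROOT.mkdir(parents=True, exist_ok=True)
    for url in REPOS:
        mirror(url)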

This is a worry.

In the case of the August 14 outage, the system wasn’t down for long — but long enough to kill a day’s productivity for many workers. Microsoft’s Visual Studio Online blog has a little bit of insight into the problem, but not much. Posted at 16:56 UTC, Microsoft said:

The actual root cause is still under investigation, but initial investigation is indicating a contention in our core database seems to be causing blocking and performance issues in the services. Our DevOps teams have identified a couple of mitigation steps and currently going thru validations. We will provide an update as soon as we have a mitigation in place. We apologize for the inconvenience and appreciate your patience while working on resolving this issue

This time you can blame Microsoft for any loss of productivity. Next time the service goes down, if you haven’t made contingency plans, the blame is yours.


Look to the intranet for shared corporate data — it’s a Big Data problem

Where do your employees go to find shared data? If it’s external data, probably an external search engine, like Google (which apparently holds 67.6% of the U.S. market) or Bing (18.7%) or one of the niche players.

What about internal corporate data? If your organization uses a platform like Microsoft’s SharePoint, that platform includes a pretty robust search engine. You can use SharePoint to find documents stored inside the SharePoint database, external documents linked to it, and conversations and informal data hosted by SharePoint. SharePoint’s search combines elements of FAST, a search product Microsoft acquired in 2008, with some elements of Bing. It’s quite good.

What if you are not a SharePoint shop, or if you are in a shop that hasn’t rolled SharePoint out to every portion of the organization?  You probably don’t have any good way for employees to find structured and unstructured documents, as well as data. You’ve got information in Dropbox. In Box.com. In Lotus Notes, maybe. In private Facebook groups. In Yammer (another Microsoft acquisition, by the way). In Ribose, a neat startup. Any number of places that might be on enterprise servers or cloud services, and I’m not even talking about the myriad code repositories that you may have, from ClearCase to Perforce to Subversion to GitHub.

All of those sources are good. There are reasons to use each of them for document sharing and collaboration and source-code development. That’s the problem. Like the classic potato chip advertisements say, you can’t eat just one.

Even in a small company, the number of legitimate sharing platforms can proliferate like weeds. As organizations grow, the potential places to stash information can grow exponentially, especially if there is a culture that allows for end users or line-of-business departments to roll out ad hoc solutions. Add mobile, and the problem explodes.

This is a governance problem: How do you ensure that data is accounted for, check that external sharing solutions are secure, or even detect if information has been stolen or tampered with?

This is a productivity problem: How much time is wasted by employees looking for information?

This is a business problem: How much money is wasted, or how much work must be duplicated or redone because data can’t be found?

This is a Big Data problem: How can you analyze it if you can’t find it?

The answer has to be a smarter intranet portal. In a recent essay by the Nielsen Norman Group, usability experts Patty Caya and Kara Pernice write that “Intranet portals are the hub of the enterprise universe.”

The trick is to discover it, index it, and make it available to authorized users—without stifling productivity. That includes data from applications that your developers are creating and maintaining.
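
What would “discover it, index it, and make it available to authorized users” look like at its very smallest? The sketch below builds a toy inverted index over documents pulled from several hypothetical sources and filters results by a simple ACL. A real intranet portal would use a proper search engine (SharePoint/FAST, Elasticsearch or similar) with connectors and permission trimming; this only shows the shape of the problem.

# Toy cross-silo search index with per-document ACLs (illustrative only).
from collections import defaultdict

# Pretend these came from connectors to Dropbox, Yammer, a wiki, and so on.
DOCUMENTS = [
    {"id": "dropbox:plan.docx", "text": "2015 marketing launch plan", "allowed": {"marketing"}},
    {"id": "yammer:thread-88", "text": "launch date slipped to June", "allowed": {"marketing", "engineering"}},
    {"id": "wiki:budget-2015", "text": "budget forecast and headcount plan", "allowed": {"finance"}},
]

def build_index(docs):
    index = defaultdict(set)                    # term -> set of document ids
    for doc in docs:
        for term in doc["text"].lower().split():
            index[term].add(doc["id"])
    return index

def search(index, docs, query, user_groups):
    by_id = {d["id"]: d for d in docs}
    hits = set.intersection(*(index.get(t, set()) for t in query.lower().split()))
    # Permission trimming: only return documents the user is entitled to see.
    return [h for h in hits if by_id[h]["allowed"] & user_groups]

index = build_index(DOCUMENTS)
print(search(index, DOCUMENTS, "launch plan", {"marketing"}))   # ['dropbox:plan.docx']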


Git control of your software development assets

There are lots of reasons to use Git as your source-code management system. Whether used as a primary system, or in conjunction with an existing legacy repository, I’m going to argue that if you’re not using Git now, you should be at least testing it out.

Basics of Git: It is open source, and runs on Linux, Unix and Windows servers. It is stable. It is solid. It is fast. It is supported by just about every major tool vendor. Developers love Git. Managers love Git.

Not long ago, much of the world standardized on Concurrent Versions System (CVS) as its version control system. Then Subversion (SVN) came along, and the world standardized on that. Yes, yes, I know there are dozens of other version control systems, ranging from Microsoft’s Visual SourceSafe and Team Foundation Server to IBM Rational’s ClearCase. Those have always been niche products. Some are very successful niche products, but the industry standards have been CVS and SVN for years.

Along came Git, designed by Linus Torvalds in 2005, now headed up by Junio Hamano. For a brief history of Git, read “The Legacy of Linus Torvalds: Linux, Git, and One Giant Flamethrower,” by Robert McMillan, published in Wired in November 2012. For the official history, see the Git website.

What’s so wonderful about Git? I’ll answer in two ways: industry support and impressive functionality.

For industry support, let me refer you to two new articles by SD Times’ Lisa Morgan. Those stories inspired this column. The first is “How to get Git into the enterprise,” and the other is “Git smart about tools: A Buyers Guide.” You’ll see that nearly every major industry player supports Git—even competing SCM systems have worked to ensure interoperability. That’s a heck of an endorsement, and shows the stability and maturity of the platform.

As for the impressive functionality, don’t take my word for it. Instead, let me quote from other bloggers.

Tobias Günther: “Work Offline: What if you want to work while you’re on the move? With a centralized VCS like Subversion or CVS, you’re stranded if you’re not connected to the central repository. With Git, almost everything is possible simply on your local machine: make a commit, browse your project’s complete history, merge or create branches… Git lets you decide where and when you want to work.”

Stephen Ball: “Resolving conflicts is way easier (than SVN): In Git, if I have a private branch from a branch that has been updated with new (conflicting) commits, I can rebase its commits one at a time against the public destination branch. I can resolve conflicts as they arise between my code and the current codebase. This makes dealing with conflicts easy because I get the context of the conflict (my commit message) and only see one conflict at a time.

“In SVN if I merge a branch against another and there are a lot of conflicts, there’s nothing I can do but resolve them all at the same time. What a mess.”

Scott Chacon: “There are tons of fantastic and powerful features in Git that help with debugging, complex diffing and merging, and more. There is also a great developer community to tap into and become a part of and a number of really good free resources online to help you learn and use Git…

“I want to share with you the concept that you can think about version control not as a necessary inconvenience that you need to put up with in order to collaborate, but rather as a powerful framework for managing your work separately in contexts, for being able to switch and merge between those contexts quickly and easily, for being able to make decisions late and craft your work without having to pre-plan everything all the time. Git makes all of these things easy and prioritizes them and should change the way you think about how to approach a problem in any of your projects and version control itself.”

Nicola Paolucci: “If you don’t like speed, being productive and more reliable coding practices, then you shouldn’t use Git.”

Peter Cho: “Most developers would be delighted if they can change their workflow to use Git. Switching over early would be more ideal unless, of course, your SCM relies on a large network of dependent applications. If it’s not viable to change SCM systems, I would highly recommend using it on future projects.

“Git is infamous for having a large suite of tools that even seasoned users need months to master. However, getting into the fundamentals of Git is simple if you’re trying to switch over from SVN or CVS. So give a try sometime.”

Thomas Koch: “Somebody probably already recommended you to switch to Git, because it’s the best VCS. I’d like to go a step further now and talk about the risk you’re taking if you won’t switch soon. By still using SVN (if you’re using CVS you’re doomed anyway), you communicate the following: We’re ignorant about the fact that the rest of the (free) world switched to Git. We don’t invest time to train our developers in new technologies. We don’t care to provide the best development infrastructure. We’re not used to collaborate with external contributors. We’re not aware how much Subversion sucks and that Subversion does not support any decent development process. Yes, our development process most certainly sucks too.”

Günther also wrote, “Go With the Flow: Only dead fish swim with the stream. And sometimes, clever developers do, too. Git is used by more and more well-known companies and Open Source projects: Ruby On Rails, jQuery, Perl, Debian, the Linux Kernel, and many more. A large community often is an advantage by itself because an ecosystem evolves around the system. Lots of tutorials, tools (do I have to mention Tower?) and services make Git even more attractive.”

I’m sure there are arguments against Git. Nearly all the ones I’ve heard have come to me via competing source-code management vendors, not from developers who have actually tried Git for at least one pilot. If you aren’t using Git, check it out. It’s the present and future of version control systems.


Coping with complexity at the SDLC Acceleration Summit

South San Francisco, California — Writing software would be oh, so much simpler if we didn’t have all those darned choices. HTML5 or native apps? Windows Server in the data center or Windows Azure in the cloud? Which Linux distro? Java or C#? Continuous Integration? Continuous Delivery? Git or Subversion or both? NoSQL? Which APIs? Node.js? Follow-the-sun?

In a panel discussion on real-world software delivery bottlenecks, “complexity” was suggested as a main challenge. The panel, held here at the SDLC Acceleration Summit, pointed out that the complexity of constantly evaluating new technologies, techniques and choices can bring uncertainty and doubt and consume valuable mental bandwidth—and those might sometimes negate the benefits of staying on the cutting edge. (Pictured: My friend Arthur Hicken, aka “The Code Curmudgeon,” chief evangelist at Parasoft, which sponsored the event.)

I was the moderator. Sitting on the panel were David Intersimone from Embarcadero Technologies; Paul Dhaliwal from 383 Media; Andrew Binstock, editor of Dr. Dobb’s Journal; and Norman Buck from SQS.

Choices are not simple. Merely keeping up with the latest technologies can consume tons of time. Not only reading resources like SD Times, but also following your favorite Twitter feeds, reading blogs like Stack Overflow, meeting thought leaders at conferences, and, of course, hearing vendor pitches.

While complexity can be overwhelming, the truth is that we can’t opt out. We must keep up with the latest platforms and changes. We must have a mobile strategy. Yes, you can choose to ignore, say, the recent advances in cloud computing, Web APIs and service virtualization, but if you do so, you’re potentially missing out on huge benefits. Yes, technologies like Software Defined Networking (SDN) and OpenFlow may not seem applicable to you today, but odds are that they will be soon. Ignore them now and play catch-up later.

Complexity is not new. If you were writing FORTRAN code back in the 1970s, you had choices of libraries. Developing client/server software for NetWare or AIX? Building with Oracle? We have always had complexity and choices in platforms, tools, methodologies, databases and libraries. We always had to ensure that our code ran (and ran properly) on a variety of different targets, including a wide range of browsers, Java runtimes, rendering engines and more.

Yet today the number of combinations and permutations seems to be significantly greater than at any time in the past. Clouds, virtual machines, mobile devices, APIs, tools. Perhaps we need a new abstraction layer. In any case, though, complexity is a root cause of our challenges with software delivery. We must deal with it.


Microsoft keeps stumbling

Microsoft’s woes are too big to ignore.

Problem area number one: The high-profile Surface tablet/notebook device is flopping. While the 64-bit Intel-based Surface Pro hasn’t sold well, the 32-bit ARM-based Surface RT tanked. Big time. Microsoft just slashed its price — maybe that will help. Too little too late?

To quote from Nathan Ingraham’s recent story in The Verge,

Microsoft just announced earnings for its fiscal Q4 2013, and while the company posted strong results it also revealed some details on how the Surface RT project is costing the business money. Microsoft’s results showed a $900 million loss due to Surface RT “inventory adjustments,” a charge that comes just a few days after the company officially cut Surface RT prices significantly. This $900 million loss comes out of the company’s total Windows revenue, though its worth noting that Windows revenue still increased year-over-year. Unfortunately, Microsoft still doesn’t give specific Windows 8 sales or revenue numbers, but it probably performed well this quarter to make up for the big Surface RT loss.

At the end of the day, though, it looks like Microsoft just made too many Surface RT tablets — we heard late last year that Microsoft was building three to five million Surface RT tablets in the fourth quarter, and we also heard that Microsoft had only sold about one million of those tablets in March. We’ll be listening to Microsoft’s earnings call this afternoon to see if they further address Surface RT sales or future plans.

Microsoft has spent heavily, and invested a lot of its prestige, in the Surface. It needs to fix Windows 8 and make this platform work.

Problem area number two: A dysfunctional structure. A recent story in the New York Times reminded me of this 2011 cartoon describing six tech companies’ org charts. Look at Microsoft. Yup.

Steve Ballmer, who has been CEO since 2000, is finally trying to do something about the battling business units. The new structure, announced on July 11, is called “One Microsoft,” and in a public memo by Ballmer, the goal is described as:

Going forward, our strategy will focus on creating a family of devices and services for individuals and businesses that empower people around the globe at home, at work and on the go, for the activities they value most. 

Editing and restructuring the info in that memo somewhat, here’s what the six key non-administrative groups will look like:

Operating Systems Engineering Group will span all OS work for console, to mobile device, to PC, to back-end systems. The core cloud services for the operating system will be in this group.

Devices and Studios Engineering Group will have all hardware development and supply chain from the smallest to the largest devices, and studios experiences including all games, music, video and other entertainment.

Applications and Services Engineering Group will have broad applications and services core technologies in productivity, communication, search and other information categories.

Cloud and Enterprise Engineering Group will lead development of back-end technologies like datacenter, database and specific technologies for enterprise IT scenarios and development tools, plus datacenter development, construction and operation.

Advanced Strategy and Research Group will be focused on the intersection of technology and policy, and will drive the cross-company looks at key new technology trends.

Business Development and Evangelism Group will focus on key partnerships especially with innovation partners (OEMs, silicon vendors, key developers, Yahoo, Nokia, etc.) and broad work on evangelism and developer outreach. 

If implemented as described, this new organization should certainly eliminate waste, including redundant research and product developments. It might improve compatibility between different platforms and cut down on mixed messages.

However, it may also constrain the freedom to innovate, and promote the unhealthy “Windows everywhere” philosophy that has hamstrung Microsoft for years. It’s bad to spend time creating multiple operating systems, multiple APIs, multiple dev tool chains, multiple support channels. It’s equally bad to make one operating system, API set, dev tool chain and support channel fit all platforms and markets.

Another concern is the movement of developer outreach into a separate group that’s organizationally distinct from the product groups. Will that distance Microsoft’s product developers from customers and ISVs? Maybe. Will the most lucrative products get better developer support? Maybe.

Microsoft has excelled in developer support, and I’d hate to see that suffer as part of the new strategy. 

Read Steve Ballmer’s memo. What do you think?


Cloud failures: It’s not if, it’s when

Apple is sporting a nasty black eye, and the shiner isn’t only because iPad sales are slipping – with a 14% year-on-year decline reported. This time, it’s because QoS on the company’s cloud servers is ugly, ugly, ugly.

As of my writing (on Thursday, July 25), Apple’s developer portal has been offline for days. As you can see on the dashboard, just about everything is down. If you go to a dev center, you see this message:

We apologize for the significant inconvenience caused by our developer website downtime. We’ve been working around the clock to overhaul our developer systems, update our server software, and rebuild our entire database. While we complete the work to bring our systems back online, we want to share the latest with you.

We plan to roll out our updated systems, starting with Certificates, Identifiers & Profiles, Apple Developer Forums, Bug Reporter, pre-release developer libraries, and videos first. Next, we will restore software downloads, so that the latest betas of iOS 7, Xcode 5, and OS X Mavericks will once again be available to program members. We’ll then bring the remaining systems online. To keep you up to date on our progress, we’ve created a status page to display the availability of our systems.

As you may have read elsewhere, the reason for the outage is apparently that a researcher found a massive security hole in the dev center system. To prevent the flaw from being exploited, Apple took the entire system down – on July 18. That’s right, it’s been over a week.

Ouch.

And then, today, July 25, there are reports that the authentication server needed to set up new iPhone accounts is offline. Apple’s IT department certainly isn’t looking too savvy right now – and perhaps this points to bigger challenges within the company’s spending priorities.

However, before anyone piles onto Apple, bear in mind that service outages are not uncommon, especially in the cloud. Certainly, they are not new; I’ve written about them before, such as in 2008’s “When the cloud was good, it was very very good, but when it was bad, it was horrid” and 2011’s “Skynet didn’t take down Amazon Web Services.”

Cloud failure is not a matter of if. It’s a matter of when. When huge corporations like Amazon and Apple can suffer these sorts of outages, anyone can, no matter how big.

What’s the game plan? Do you have a fail-over strategy to spool up a backup provider? Do you have messaging ready for your customers and partners? Alternatives to suggest?

I have no idea how much money Apple is losing due to these outages – or how much its developer partners and customers are affected. Apple, however, is big enough to handle the hit. How about you?


Building on Microsoft Build

If you were at Microsoft Build this week in San Francisco, you hung out with six thousand of your closest friends. At least, your closest friends who are enterprise .NET developers, or who are building apps for some version of Windows 8.

Those aren’t necessarily the same people. The Microsoft world is more bifurcated than ever before.

There’s the solid yet slow-moving world of the Microsoft enterprise stack. Windows Server, SQL Server, Exchange, SharePoint, Azure and all that jazz. This part of Microsoft thinks that it’s Oracle or IBM.

And then there’s the quixotic set of consumer-facing products. Xbox, Windows Phone, the desktop version of Windows 8, and of course, snazzy new hardware like the Surface tablet. This part of Microsoft thinks that it’s Apple or Google – or maybe Samsung.

While the company’s most important (and most loyal) customer base is the enterprise, there’s no doubt that Microsoft wants to be seen as Apple, not IBM. Hip. Creative. Innovative. In touch with consumers.

#Microsoft wants to trend on Twitter.

To thrive in the consumer world, the company must dig deeper and do better. The highlight of Build was the preview of Windows 8.1, with user experience improvements that undo some of the damage done by Windows 8.0.

It’s great that you can now boot into the “desktop,” or traditional Windows. That is important for both desktop and tablet users. Yet the platform remains frenetic, inconsistent and missing key apps in the Tile motif.

While the Tile experience is compelling, it’s incomplete. You can’t live in it 100%. Yet Windows 8.0 locked you away from living in the old “desktop” environment. Windows 8.1 helps, but it’s not enough.

In his keynote address (focused on consumer tech), Microsoft CEO Steve Ballmer pushed two themes.

One was that the company is moving to ship software faster. Citing the one-year timeline between Windows 8.0 and Windows 8.1 — instead of the traditional three-year cycle — the unstated message is that Microsoft is emulating Apple’s annual platform releases. “Rapid Release is the new norm,” Ballmer said.

A second theme is that Microsoft’s story is still Windows, Windows, Windows. This is no change from the past. Yes, Microsoft plays better with other platforms than ever before. Even so, Redmond wants to control every screen — and can’t understand why you might use anything other than Windows.

The more things change, the more they stay the same.


Four common mobile development mistakes

Web sites developed for desktop browsers look, quite frankly, terrible on a mobile device. The look and feel is often wrong, very wrong. Text is the wrong size. Gratuitous clip art on the home page chews up bandwidth. Features like animations won’t behave as expected. Don’t get me started on menus — or on the use-cases for how a mobile user would want to use and navigate the site.

Too often, some higher-up says, “Golly, we must make our website more friendly,” and what that results in is a half-thought-out patch job. Not good. Not the right information, not the right workflow, not the right anything.

One organization, UserTesting.com, says that there are four big pitfalls that developers (and designers) encounter when creating mobile versions of their websites. The company, which focuses on usability testing, says that the biggest issues are:

Trap #1 – Clinging to Legacy: ‘Porting’ a Computer App or Website to Mobile
Trap #2 – Creating Fear: Feeding Mobile Anxiety
Trap #3 – Creating Confusion: Cryptic Interfaces and Crooked Success Paths
Trap #4 – Creating Boredom: Failure to Quickly Engage the User

Makes sense, right? UserTesting.com offers a quite detailed report, “The Four Mobile Traps,” that goes into more detail.

The report says,

Companies creating mobile apps and websites often underestimate how different the mobile world is. They assume incorrectly that they can create for mobile using the same design and business practices they learned in the computing world. As a result, they frequently struggle to succeed in mobile.

These companies can waste large amounts of time and money as they try to understand why their mobile apps and websites don’t meet expectations. What’s worse, their awkward transition to mobile leaves them vulnerable to upstart competitors who design first for mobile and don’t have the same computing baggage holding them back. From giants like Facebook to the smallest web startup, companies are learning that the transition to mobile isn’t just difficult, it’s also risky.

Look at your website. Is it mobile friendly? I mean, truly designed for the needs, devices, software and connectivity of your mobile users?

If not — do something about it.

, , ,

Hurray for COBOL and the mainframe

Perhaps I’m an old fogey, but I can’t help but smile when I see press releases like this: “IBM Unveils New Software to Enable Mainframe Applications on Cloud, Mobile Devices.” 

Everything old is new again, as the late Australian musician Peter Allen famously sang in his song of that name.

Mainframes were all the rage in the 1960s and 1970s. Though large organizations still used mainframes as the basis of their business-critical transaction systems in the 1990s and 2000s, the excitement was around client/server and n-tier architectures built up from racks of low-cost commodity hardware.

Over the past 15 years or so, it’s become clear that distributed processing for Web applications fits neatly into that clustered model. Assemble a few racks of servers and add a load-balancing appliance, and you’ve got all the scalability and reliability anyone needs.

But you know, from the client perspective, the cloud looks like, well, a thundering huge mainframe.

Yes, I am an old fogey, who cut his teeth on FORTRAN, COBOL, PL/1 and CICS on Big Blue’s big iron (that is to say, IBM System/370). Yes, I can’t help but think, “Hmm, that’s just like a mainframe” far too often. And yes, the mainframe is very much alive.

IBM’s press release says that,

Today, nearly 15 percent of all new enterprise application functionality is written in COBOL. The programming language also powers many everyday services such as ATM transactions, check processing, travel booking and insurance claims. With more than 200 billion lines of COBOL code being used across industries such as banking, insurance, retail and human resources, it is crucial for businesses to have the appropriate framework to improve performance, modernize key applications and increase productivity.

I believe that. Sure, there are lots of applications written in Java, C++, C# and JavaScript. Those are on the front end, where a failed database read or write, or a non-responsive screen, is an annoyance, nothing more. On the back end, if you want the fastest possible response time, without playing games with load balancers, and without failures, you’re still looking at a small number of big boxes, not a large number of small boxes.

This fogey is happy that the mainframe is alive and well.

, , , ,

Let’s boost developer velocity by 30x

Not long ago, if the corporate brass wanted to change major functionality in a big piece of software, the IT delivery time might be six to 12 months, maybe longer. Once upon a time, that was acceptable. Not today.

Thanks to agile, many software changes can be delivered in, say, six to 12 weeks. That’s a huge improvement — but not huge enough. Business imperatives might require that IT deploy new application functionality in six to 12 days.

Sounds impossible, right? Maybe. Maybe not. I had dinner a few days ago with S. “Soma” Somasegar, the corporate vice president of Microsoft’s Developer Division. He laughed – and nodded – when I mentioned the need for a 30x shift in software delivery from months to days.

After all, as Soma pointed out, Microsoft is deploying new versions of its cloud-based Team Foundation Service every three weeks. The company has also realized that revving Visual Studio itself every two or three years isn’t serving the needs of developers. That’s why his team has begun rolling out regular updates that include not only bug fixes but also new features. The latest is Update 2 to Visual Studio 2012, released in late April, which added new features for agile planning, quality assurance and line-of-business app development, as well as improvements to the developer experience.

I like what I’m hearing from Soma and Microsoft about their developer tools, and about their direction. For example, the company appears sincere in its engagement of the open source community through Microsoft Open Technologies — but I’ll confess to still being a skeptic, based on Microsoft’s historical hostility toward open source.

Soma said that it’s vital not only for Microsoft to contribute to open source, but also to let open source communities engage with Microsoft. It’s about time!

Soma also cited the company’s new-found dedication to DevOps. He said that future versions of both on-premises and cloud-based tools will help tear down the walls between development and deployment. That’s where the 30x velocity improvement might come from.

Another positive shift is that Microsoft appears to truly accept that other platforms are important to developers and customers. Soma acknowledged that the answer to every problem cannot be to use Microsoft technologies exclusively.

Case in point: Soma said that fully 60% of Microsoft developers are building applications that touch at least three different platforms. He acknowledged that Microsoft still believes that it has the best platforms and tools, but said, “We now know that developers make other choices for valid reasons. We want to meet developers where they are” – that is, engaging with other platforms.

Soma’s words may seem like a modest and obvious statement, but they represent a huge step forward for Microsoft.

, , , , ,

Mobile developer mojo

Tickets for the Apple Worldwide Developer Conference went on sale on Thursday, April 25. They sold out in two minutes.

Who says that the iPhone has lost its allure? Not developers. Sure, Apple’s stock price is down, but at least Apple Maps on iOS doesn’t show the bridge over Hoover Dam dropping into Black Canyon any more.

Two minutes.

To quote from a story on TechCrunch,

Tickets for the developer-focused event at San Francisco’s Moscone West, which features presentations and one-on-one time with Apple’s own in-house engineers, sold out in just two hours in 2012, in under 12 hours in 2011, and in eight days in 2010.

Who attends the Apple WWDC? Independent software developers, enterprise developers and partners. Thousands of them. Many are building for iOS, but there are also developers creating software or services for other aspects of Apple’s huge ecosystem, from e-books to Mac applications.

Two minutes.

Mobile developers love tech conferences. Take Google’s I/O developer conference, scheduled for May 15-17. Tickets sold out super-fast there as well.

The audience for Google I/O is potentially more diverse, mainly because Google offers a wider array of platforms. You’ve got Android, of course, but also Chrome, Maps, Play, AppEngine, Google+, Glass and others besides. My suspicion, though, is that enterprise and entrepreneurial interest in Android is filling the seats.

Mobile. That’s where the money is. I’m looking forward to seeing exactly what Apple will introduce at WWDC, and Google at Google I/O.

Meanwhile, if you are an Android developer and didn’t get into Google I/O before it sold out – or if you are looking for a technical conference 100% dedicated to Android development – let me invite you to register for AnDevCon Boston, May 28-31. We still have a few seats left. Hope to see you there.

, , , , , ,

Android + Chrome = Confusion

What is going on at Google? I’m not sure, and neither are the usual pundits.

Last week, Google announced that Andy Rubin, the long-time head of the Android team, is moving to another role within the company, and will be replaced by Sundar Pichai — the current head of the company’s Chrome efforts.

To quote from Larry Page’s post,

Having exceeded even the crazy ambitious goals we dreamed of for Android—and with a really strong leadership team in place—Andy’s decided it’s time to hand over the reins and start a new chapter at Google. Andy, more moonshots please!

Going forward, Sundar Pichai will lead Android, in addition to his existing work with Chrome and Apps. Sundar has a talent for creating products that are technically excellent yet easy to use—and he loves a big bet. Take Chrome, for example. In 2008, people asked whether the world really needed another browser. Today Chrome has hundreds of millions of happy users and is growing fast thanks to its speed, simplicity and security. So while Andy’s a really hard act to follow, I know Sundar will do a tremendous job doubling down on Android as we work to push the ecosystem forward. 

What is the real story? The obvious speculation is that Google may have too many mobile platforms, and may look to merge the Android and Chrome OS operating systems.

Ryan Tate of Wired wrote, in “Andy Rubin and the Great Narrowing of Google,”

The two operating system chiefs have long clashed as part of a political struggle between Rubin’s Android and Pichai’s Chrome OS, and the very different views of the future each man espouses. The two operating systems, both based on Linux, are converging, with Android growing into tablets and Chrome OS shrinking into smaller and smaller laptops, including some powered by chips using the ARM architecture popular in smartphones.

Tate continues,

There’s a certain logic to consolidating the two operating systems, but it does seem odd that the man in charge of Android – far and away the more successful and promising of the two systems – did not end up on top. And there are hints that the move came as something of a surprise even inside the company; Rubin’s name was dropped from a SXSW keynote just a few days before the Austin, Texas conference began.

Other pundits seem equally confused. Hopefully, we’ll know what’s going on soon. Registration for Google’s I/O conference opened – and closed – on March 13. If you blinked, you missed it. We’ll obviously be covering the Android side of this at our own AnDevCon conference, coming to Boston on May 28-31.

, , ,

Big challenges with data and Big Data

Just about everyone is talking about Big Data, and I’m not only saying that because I’m conference chair for Big Data TechCon, coming up in April in Boston.

Take Microsoft, for example. On Feb. 13, the company released survey results about its big customers’ biggest data challenges, and how those relate to Big Data.

In its “Big Data Trends: 2013” study, Microsoft talked to 282 U.S. IT decision-makers who are responsible for business intelligence, and presumably, other data-related issues. To quote some findings from Microsoft’s summary of that study:

• 32% expect the amount of data they store to double in the next two to three years.

• 62% of respondents currently store at least 100 TB of data. 

• Respondents reported an average of 38% of their current data as unstructured.

• 89% already have a dedicated budget for a Big Data solution.

• 51% of companies surveyed are in the middle stages of planning a Big Data solution.

• 13% have fully deployed a Big Data solution.

• 72% have begun the planning process but have not yet tested or deployed a solution; of those currently planning, 76% expect to have a solution implemented in less than one year.

• 62% said developing near-real-time predictive analytics or data-mining capabilities during the next 24 months is extremely important.

• 58% rated expanding data storage infrastructure and resources as extremely important.

• 53% rated increased amounts of unstructured data to analyze as extremely important.

• Respondents expect an average of 37% growth in data during the next two to three years.

I can’t help but be delighted by the final bullet point from Microsoft’s study. “Most respondents (54 percent) listed industry conferences as one of the two most strategic and reliable sources of information on big data.”

Hope to see you at Big Data TechCon.

, , , ,

The complications of cloud adoption

Cloud computing is seductive. Incredibly so. Reduced capital costs. No more power and cooling of a server closet or data center. High-speed Internet backbones. Outsourced disaster recovery. Advanced edge caching. Deployments are lightning fast, with capacity ramp-ups only a mouse-click away – making the cloud a panacea for Big Data applications.

Cloud computing is scary. Vendors come and vendors go. Failures happen, and they are out of your control. Software is updated, sometimes with your knowledge, sometimes not. You have to take their word for security. And the costs aren’t always lower.

An interesting new study from KPMG, “The Cloud Takes Shape,” digs into the expectations of cloud deployment – and the realities.

According to the study, cloud migration was generally a success, but not without pain. It showed that 33% of senior executives using the cloud said that the implementation, transition and integration costs were too high; 30% cited challenges with data loss and privacy risks; and 30% were worried about the loss of control. Also, 26% were worried about the lack of visibility into future demand and associated costs; 26% fretted about the lack of interoperability standards between cloud providers; and 21% were challenged by the risk of intellectual property theft.

There’s a lot more depth in the study, and I encourage you to download and browse through it. (Given that KPMG is a big financial and tax consulting firm, there’s a lot in the report about the tax challenges and opportunities in cloud computing.)

The study concludes,

Our survey finds that the majority of organizations around the world have already begun to adopt some form of cloud (or ‘as-a-service’) technology within their enterprise, and all signs indicate that this is just the beginning; respondents expect to move more business processes to the cloud in the next 18 months, gain more budget for cloud implementation and spend less time building and defending the cloud business case to their leadership. Clearly, the business is becoming more comfortable with the benefits and associated risks that cloud brings.

With experience comes insight. It is not surprising, therefore, that the top cloud-related challenges facing business and IT leaders has evolved from concerns about security and performance capability to instead focus on some of the ‘nuts and bolts’ of cloud implementation. Tactical challenges such as higher than expected implementation costs, integration challenges and loss of control now loom large on the cloud business agenda, demonstrating that – as organizations expand their usage and gain more experience in the cloud – focus tends to turn towards implementation, operational and governance challenges.

, , ,

Movable walls in the garden

Today’s word is “open.” What does open mean in terms of open platforms and open standards? It’s a tricky concept. Is Windows more open than Mac OS X? Is Linux more open than Solaris? Is Android more open than iOS? Is the Java language more open than C#? Is Firefox more open than Chrome? Is SQL Server more open than DB2?

The answer in all these cases can be summarized in two more words: “That depends.” To some purists, anything that is owned by a non-commercial project or standards body is open. By contrast, anything that is owned by a company, or controlled by a company, is by definition not open.

There are infinite shades of gray. Openness isn’t a line or a spectrum, and it’s not a two-dimensional matrix either. There are countless dimensions.

Take iOS. The language used to program iPhone/iPad apps is Objective-C. It’s pretty open – certainly, some would say that Objective-C is more open than Java, which is owned and controlled by Oracle. Since iOS uses Objective-C, and Android uses Java, doesn’t that make iOS open, and Android not open?

But wait – perhaps when people talk about the openness of the mobile platforms, they mean whether there is a walled garden around a platform’s primary app store. If you want to distribute native apps through Apple’s store, you must meet Apple’s criteria in lots of ways, from the use of APIs to revenue sharing for in-app purchases. That’s not very open. If you want to distribute native apps to Android devices, you can choose Google Play, where the standards for app acceptance are fairly low, or another app store (like Amazon’s), or even set up your own. That’s more open.

If you want to build apps that are distributed and use Microsoft’s new tiled user experience, you have to put them into the Windows Store. In fact, such applications are called Windows Store Apps. Microsoft keeps a 30% cut of sales, and reserves the right not only to kick your app out of the Windows Store, but also to remove your app from customers’ devices. That’s not very open.

The trend these days is for everyone to set up their own app store – whether it’s the Windows Store, Google Play, the Raspberry Pi Store, Salesforce.com AppExchange, Firefox Marketplace, Chrome Web Store, BlackBerry App World, Facebook Apps Center or the Apple App Store. There are lots more. Dozens. Hundreds perhaps.

Every one of these stores affects the openness of the platform – whether the platform is a mobile or desktop device, browser, operating system or cloud-based app. Forget programming language. Forget APIs. The true test of openness is becoming the character of the app store: whether consumers are locked into using only “approved” stores, what restrictions are placed on what may be placed in that app store, and whether developers have the freedom to fully utilize everything the platform can offer. (If the platform vendor’s own apps, or those from preferred partners, can access APIs that are not allowed in the app store, that’s not a good sign.)

Nearly every platform is a walled garden. The walls aren’t simple; they make Calabi-Yau manifolds look like child’s play. The walls twist. They turn. They move.

Forget standards bodies. Today’s openness is the openness of the walled garden.

, , ,

The API as an overloaded operator

Once upon a time, application programming interfaces were hooks that applications used to tap into operating system services. Want to open a port? Call an API. Need to find a printer? Call an API. Open a window? Call an API. Write to a file? Call an API.

Developers still use classic APIs, of course. They are necessary for both native and managed code. Windows, iOS, Android, Unix and Linux are all stuffed to the brim with hundreds and thousands of APIs. In fact, one of the most useful features of an integrated development environment like Visual Studio, Eclipse or Xcode is to provide a handy reference to APIs, check their syntax and arguments, and help fill them out with autocomplete.

Classic APIs are fundamental. Cloud-based APIs, which provide loosely coupled function calls to services over the Internet, are sexier – and more dangerous.

The December issue of SD Times contains a feature by Alexa Weber Morales, “Connecting the World with APIs.” She explains that the variety of cloud-based APIs far exceeds the biggest, most visible examples, such as those from Amazon and Google. APIs are everywhere, from social media players like Facebook and Twitter, to business services like MailChimp and Salesforce.com.

Like electricity from the wall socket, or water from the kitchen faucet, it is easy to take cloud-based APIs for granted. Too easy. We outsource core functionality of our applications to cloud-based services, some free, some paid for by subscription. We expect them to work consistently. We expect them to be monolithic and unchanging. We expect them to be fast. We expect them to be secure.

We must not make any of those assumptions. Our software must be able to detect if a cloud-based API is offline or is running slowly, and should be able to handle such a situation gracefully. (I.e., not hang or crash.) We should never assume that APIs are secure and will keep our data safe or our customers’ data safe. We should not expect the API vendor to proactively notify us if they change some of the functionality within the APIs. It’s our job to be on top of any changes.
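
To make that concrete, here is a minimal sketch in Python of a defensive call to a cloud-based API. The endpoint and function names are hypothetical; a real application would log the failure and fall back to cached data rather than print a message. The point is that the timeout, the retry limit and the fallback are decisions your code makes, because the remote service will not make them for you.

import time
import urllib.request

# Hypothetical endpoint: a stand-in for any third-party cloud API.
PROFILE_API = "https://api.example.com/v1/profile/12345"

def fetch_profile(url, retries=2, timeout=3.0):
    """Call a cloud API defensively: bounded timeout, a couple of retries
    with backoff, and a graceful fallback instead of hanging or crashing."""
    delay = 1.0
    for attempt in range(retries + 1):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as response:
                return response.read()          # success
        except OSError as err:                  # covers URLError and socket timeouts
            if attempt == retries:
                print("API unavailable after %d tries: %s" % (attempt + 1, err))
                return None                     # let the caller degrade gracefully
            time.sleep(delay)                   # brief pause, then try again
            delay *= 2

profile = fetch_profile(PROFILE_API)
if profile is None:
    profile = b"{}"                             # fall back to cached or default data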

The availability of cloud-based APIs – unlike operating system APIs – is out of our hands. Our decision to upgrade a server’s OS is on our schedule, and we have time to read the documentation. When a mobile platform maker, like Apple, Google or Microsoft, releases a new operating system, we get plenty of notice and have plenty of time to learn about the newest APIs, the changed APIs and the deprecated APIs.

Not true with cloud-based APIs. While the three-letter acronym may be the same, our applications’ calls to RESTful cloud-based APIs are not at all the same as our applications’ calls to native operating system services. While convenient, cloud-based APIs are ephemeral, distant and fundamentally unreliable. Never forget it.

, , , , , ,

Happy Thanksgiving

Tomorrow Americans will celebrate Thanksgiving. This is an odd holiday. It’s partly religious, but also partly secular, dating back to the English colonization of eastern North America. A recent tradition is for people to share what they are thankful for. In a lighthearted way, let me share some of my tech-related joys.

• I am thankful for PDF files. Websites that share documents in other formats (such as Microsoft Word) are kludgy, and the document never looks quite right.

• I am thankful for native non-PDF files. Extracting content from PDF files to use in other applications is a time-consuming process that often requires significant post-processing.

• I am thankful that Hewlett-Packard is still in business – for now at least. It’s astonishing how HP bungles acquisition after acquisition after acquisition.

• I am thankful for consistent language specifications, such as C++, Java, HTML4 and JavaScript, which give us a fighting chance at cross-platform compatibility. A world with only proprietary languages would be horrible.

• I am thankful for HTML5 and CSS3, which solve many important problems for application development and deployment.

• I am thankful that most modern operating systems and applications can be updated via the Internet. No more floppies, CDs or DVDs.

• I am thankful that floppies are dead, dead, dead, dead, dead.

• I am thankful that Apple and Microsoft don’t force consumers to purchase applications for their latest desktop operating systems from their app stores. It’s my computer, and I should be able to run any bits that I want.

• I am thankful for Hadoop and its companion Apache projects like Avro, Cassandra, HBase and Pig, which in only a couple of years became the de facto platform for Big Data and a must-know technology for developers.

• I am thankful that Linux exists as a compelling server operating system, as the foundation of Android, and as a driver of innovation.

• I am thankful for RAW photo image files and for Adobe Lightroom to process those RAW files.

• I am thankful for the Microsoft Surface, which is the most exciting new hardware platform since Apple’s iPad and MacBook Air.

• I am thankful to still get a laugh by making the comment, “There’s an app for that!” in random non-tech-related conversations.

• I am thankful for the agile software movement, which has refocused our attention to efficiently creating excellent software, and which has created a new vocabulary for sharing best practices.

• I am thankful for RFID technology, especially as implemented in the East Coast’s E-ZPass and California’s FasTrak toll readers.

• I am thankful that despite the proliferation of e-book readers, technology books are still published on paper. E-books are great for novels and documents meant to be read linearly, but are not so great for learning a new language or studying a platform.

• I am thankful that nobody has figured out how to remotely hack into my car’s telematics systems yet – as far as I know.

• I am thankful for XKCD.

• I am thankful that Oracle seems to be committed to evolving Java and keeping it open.

• I am thankful for the wonderful work done by open-source communities like Apache, Eclipse and Mozilla.

• I am thankful that my Android phone uses an industry-standard Micro-USB connector.

• I am thankful for readers like you, who have made SD Times the leading news source in the software development community.

Happy Thanksgiving to you and yours.

, , ,

The joy of being a geek: 60-core chips, self-driving cars

So much I could write about today. The U.S. presidential elections. Intel’s new 60-core PCIe-based coprocessor chip. The sudden departure of Steven Sinofsky from Microsoft, after three years as president of the Windows Division. The Android 4.2 upgrade that unexpectedly changed the user experience on my Nexus phone. All were candidates.

Nah. All those ideas are off the table. Today, let’s bask in the warm geekiness of the Google Self-Driving Car. The vehicle, an extensively modified Lexus RX 450h hybrid sport utility, lives here in Silicon Valley. The cars are frequently sighted on the highways around here, and in fact my wife Carole saw one in Mountain View last week.

Until today, I had never seen one in action, but at lunchtime, the Self-Driving Car played with me on I-280. If you’re not familiar with the Google Self-Driving Car, here’s a great story in the New York Times about one of the small fleet, “Yes, Driverless Cars Know the Way to San Jose.”

I encountered the Google car going northbound on I-280, and passed it carefully. Many car lengths ahead, I changed into its lane and slowed down slightly — and waited to see what the self-driving car would do.

The Google car approached slowly, signaled, moved into the next lane, and passed me. I was taking pictures out the window — and the Google engineer sitting in the passenger seat smiled and waved. It was just another day for the experimental hardware, software and cloud-based services.

Yet, why do I have the feeling of having a Star Trek-style First Contact with an alien artificial life form? It is wonderful living in Silicon Valley and being a participant in the evolution of modern technology – both at the IDE and behind the wheel.

, , ,

Echoing the echosystem

Echosystem. What a marvelous typo! An email from an analyst firm referred several times to a particular software development ecosystem, but in one of the instances, she misspelled “ecosystem” as “echosystem.” As a technology writer and analyst myself, I found that the misspelling immediately set my mind racing. Echosystem. I love it.

An echosystem would be a type of meme. Not the silly graphics that show up on Twitter and Facebook, but more the type of meme envisioned by Richard Dawkins in his book, The Selfish Gene, where an idea or concept takes on a life of its own. In this case, the echosystem is where a meme is simply echoed, and is believed to be true simply because it is repeated so often. In particular, the echosystem would apply to ideas that are passed around by analysts, technology writers and journalists, influential bloggers, and so on.

In another time and place, what I’m now calling the echosystem would be called the bandwagon. I like the idea of a mashup between the bandwagon and the echo chamber being the echosystem.

We have lots of memes in the software development echosystem. For example, that the RIM BlackBerry is toast. Is the platform doomed? Maybe. But it’s become so casual, so matter-of-fact, for writers and analysts to refer to the BlackBerry as toast that repetition is creating its own truthiness (as Stephen Colbert would say).

Another is the echosystem chatter that skeuomorphs are bad, and that Apple is behind the times (and falling behind Android and Windows 8) because its applications have fake leather textures and fake wooden bookshelves. Heck, I only learned the term recently, but, repeating the chatter, I wrote my own column about it last month, “Fake leather textures on your mobile apps: Good or bad?” True analysis? Maybe. Echoing the echosystem? Definitely.

The echosystem anoints technologies or approaches, and then tears them down again. 

HTML5? The echosystem decided that this draft protocol was the ultimate portable platform, but then pounced when Facebook’s Mark Zuckerberg dissed his company’s efforts.

SOAP? The echosystem loved, loved, loved, loved, loved Simple Object Access Protocol and the WS* methods of implementing Web services, until the new narrative became that RESTful Web services were better. The SOAP bubble popped almost instantly when the meme “WS* is too complicated” spread everywhere.

Echoes in the echosystem pronounced judgment on Windows 8 long before it came out. Echoes weighed in on the future of Java before Oracle’s acquisition of Sun even closed and have chosen JavaScript as the ultimate programming language.

There is a lot of intelligence in the echosystem. Smart people hear what’s being said, repeat it, amplify it and repeat it some more. Sometimes pundits put a lot of thought into their echoes of popular memes. Sometimes pundits are merely hopping onto the bandwagon. The trick is to tell the difference.

, , ,

When the cloud was good, it was very very good. But when it was bad, it was horrid

Cloud computing took a big hit this week amid two significant service outages.

The biggest one, at least as it affects enterprise computing, is the eight-hour failure of Amazon’s Simple Storage Service. Check out the Amazon Web Services service health dashboard, and then select Amazon S3 in the United States for July 20. You’ll see that problems began at 9:05 am Pacific Time with “elevated error rates,” and that service wasn’t reported as being fully restored until 5:00 pm.

About the error, Amazon said,

We wanted to share a brief note about what we observed during yesterday’s event and where we are at this stage. As a distributed system, the different components of Amazon S3 need to be aware of the state of each other. For example, this awareness makes it possible for the system to decide to which redundant physical storage server to route a request. In order to share this state information across the system, we use a gossip protocol. Yesterday, we experienced a problem related to gossiping our internal state information, leaving the system components unable to interact properly and causing customers’ requests to Amazon S3 to fail. After exploring several alternatives, we determined that we had to temporarily take the service offline so that we could clear all gossipped state and restart gossip to rebuild the state.

These are sophisticated systems and it generally takes a while to get to root cause in such a situation. We’re working very hard to do this and will be providing more information here when we’ve fully investigated the incident. We also wanted to let you know that for this particular event, we’ll be waiving our standard SLA process and applying the appropriate service credit to all affected customers for the July billing period. Customers will not need to send us an e-mail to request their credits, as these will be automatically applied. This transaction will be reflected in our customers’ August billing statements.
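
For readers who have not bumped into gossip protocols before, here is a minimal sketch of the idea in Python. This is my own illustration, not Amazon’s code: each node periodically swaps its state table with a randomly chosen peer, the newer version of each entry wins, and information spreads across the cluster without a central coordinator. Real systems layer failure detection, anti-entropy scheduling and careful conflict resolution on top of this.

import random

class Node:
    """A toy cluster node that learns about the rest of the system via gossip."""
    def __init__(self, name):
        self.name = name
        self.state = {}                       # key -> (version, value)

    def update(self, key, version, value):
        self.state[key] = (version, value)

    def merge_from(self, peer):
        # Keep whichever copy of each entry carries the higher version number.
        for key, (version, value) in peer.state.items():
            if key not in self.state or self.state[key][0] < version:
                self.state[key] = (version, value)

def gossip_round(nodes):
    # Every node exchanges state with one randomly chosen peer.
    for node in nodes:
        peer = random.choice([n for n in nodes if n is not node])
        node.merge_from(peer)
        peer.merge_from(node)

cluster = [Node("storage-%d" % i) for i in range(5)]
cluster[0].update("server-42", version=7, value="healthy")
for _ in range(4):                            # a few rounds spread the news
    gossip_round(cluster)
print([node.state.get("server-42") for node in cluster])

As Amazon’s note suggests, the weakness of this approach is that when the gossiped state itself becomes corrupted, the cleanest recovery may be exactly what they did: take the service offline, wipe the shared state and let the nodes rebuild it.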

Kudos to Amazon for issuing a billing adjustment. However, as we all know, the business cost of a service failure like this vastly exceeds the cost you pay for the service. If your applications were offline for eight hours because Amazon S3 was malfunctioning, that really hurts your bottom line. This wasn’t their first service failure, either: Amazon S3 went down in February as well.

Less significant to enterprises, but just as annoying to those affected, was a problem with e-mail accounts hosted on Apple’s MobileMe service. MobileMe is the new name of the .Mac service, and the service was updated in mid-July along with the launch of the iPhone 3G. Unfortunately, not everything worked right. As you can see from Apple’s dashboard, some subscribers can’t access their email. Currently, this affects about 1% of subscribers — but it’s been like that since last Friday.

According to Apple,

We understand this is a serious issue and apologize for this service interruption. We are working hard to restore your service.

This reminds me of the poem from that great Maine writer, Henry Wadsworth Longfellow:

There was a little girl
Who had a little curl
Right in the middle of her forehead;
And when she was good
She was very, very good,
But when she was bad she was horrid.