
Capriza’s clever mobility via HTML screen scraping

HTML browser virtualization, not APIs, may be the best way to mobilize existing enterprise applications like SAP ERP, Oracle E-Business Suite or Microsoft Dynamics.

At least, that’s the perspective of Capriza, a company offering a SaaS-based mobility platform that uses a cloud-based secure virtualized browser to screen-scrape data and context from the enterprise application’s Web interface. That data is then sent to a mobile device (like a phone or tablet), where it’s rendered and presented through Capriza’s app.

The process is bidirectional: New transactional data can be entered into the phone’s Capriza app, which transmits it to the cloud-based platform. The Capriza cloud, in turn, opens up a secure virtual browser session with the enterprise software and performs the transaction.
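Capriza hasn’t published its internals, and every URL, selector and field name below is invented for illustration, but the general pattern (driving an application’s existing Web UI from a scripted headless browser, then returning scraped results to the mobile client) can be sketched in a few lines of TypeScript with Puppeteer:

```typescript
import puppeteer from "puppeteer";

// Hypothetical request shape; not Capriza's actual (proprietary) design.
interface ApprovalRequest {
  user: string;
  password: string;
  reportId: string;
}

async function approveExpense(req: ApprovalRequest): Promise<string> {
  const browser = await puppeteer.launch({ headless: true });
  try {
    const page = await browser.newPage();

    // Log in with the user's own web credentials, so the enterprise
    // application's existing authentication and ACLs apply unchanged.
    await page.goto("https://erp.example.com/login");
    await page.type("#username", req.user);
    await page.type("#password", req.password);
    await Promise.all([page.waitForNavigation(), page.click("#login")]);

    // Walk the same screens a desktop user would, and submit the transaction.
    await page.goto(`https://erp.example.com/expenses/${req.reportId}`);
    await Promise.all([page.waitForNavigation(), page.click("#approve")]);

    // Scrape the confirmation message to send back to the mobile client.
    return await page.$eval("#status-message", (el) => el.textContent ?? "");
  } finally {
    await browser.close();
  }
}
```

The key property of this pattern is that the automation signs in as the actual user, which is why the back-end’s own security model stays in force.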

The Capriza platform, which I saw demonstrated last week, is designed for employees to access enterprise applications from their Android or Apple phones, or from tablets.

The platform isn’t cheap – it’s licensed on a per-seat, per-enterprise-application basis, and you can expect a five-digit or six-digit annual cost, at the least. However, Capriza is solving a pesky problem.

Think about the mainstream way to deploy a mobile application that accesses big enterprise back-end platforms. Of course, if the enterprise software vendor offers a mobile app, and if that app meets your needs, that’s the way to go. But what if the vendor doesn’t have a mobile app – or if the software is homegrown? The traditional approach would be to open up some APIs allowing custom mobile apps to access the back-end systems.

That approach is fraught with peril. It takes a long time. It’s expensive. It could destabilize the platform. It’s hard to ensure security, and often it’s a challenge to synchronize API access policies with client/server or browser-based access policies and ACLs. Even if you can license the APIs from an enterprise software vendor, how comfortable are you exposing them over the public Internet — or even through a VPN?

That’s why I like the Capriza approach of using a virtual browser to access the existing Web-based interface. In theory (and probably in practice), the enterprise software doesn’t have to be touched at all. Since the Capriza SaaS platform has each mobile user log into the enterprise software using the user’s existing Web interface credentials, there should be no security policies and ACLs to replicate or synchronize.

In fact, you can think of Capriza as an intentional man-in-the-middle for mobile users, translating mobile transactions to and from Web transactions on the fly, in real time.

As the company explains it, “Capriza helps companies leverage their multi-million dollar investments in existing enterprise software and leapfrog into the modern mobile era. Rather than recreate the wheel trying to make each enterprise application run on a mobile device, Capriza breaks complex, über business processes into mini ones. Its approach bypasses the myriad of tools, SDKs, coding, integration and APIs required in traditional mobile app development approaches, avoiding the perpetual cost and time requirements, risk and questionable ROI.”

It certainly looks like Capriza wins this week’s game of Buzzword Bingo. Despite the marketing jargon, however, the technology is sound, and Capriza has real customers—and has recently landed a US$27 million investment. That means we’re going to see a lot more of this solution.

Can Capriza do it all? Well, no. It works best on plain vanilla Web sites; no Flash, no Java, no embedded apps. While it’s somewhat resilient, changes to an internal Web site can break the screen-scraping technology. And while the design process for new mobile integrations doesn’t require a real programmer, the designer must be very proficient with the enterprise application, and model all the pathways through the software. This can be tricky to design and test.

Plus, of course, you have to be comfortable letting a third-party SaaS platform act as the man-in-the-middle to your business’s most sensitive applications.

Bottom line: If you are mobilizing enterprise software — either commercial or home-grown — that allows browser access, Capriza offers a solution worth considering.


Next steps for Hewlett-Packard post-split

Neil Sedaka insists that breakin’ up is hard to do. Will that apply to the planned split of Hewlett-Packard into two companies? Let’s be clear: This split is a wonderful idea, and it’s long overdue.

Once upon a time, HP was in three businesses: Electronics test equipment (like gas spectrometers); expensive, high-margin data center products and services (like minicomputers and consulting); and cheap, low-margin commodity tech products (like laptops, small business routers and ink-jet printers).

HP spun off the legacy test-equipment business in 1999 (forming Agilent Technologies) and that was a win-win for both Agilent and for the somewhat-more-focused remainder of HP. Now it’s time to do it again.

There are precious few synergies between the enterprise side of HP and the commodity side. The enterprise side has everything that a big business would want, from high-end hyperscale servers to Big Data, Software Defined Networks, massive storage arrays, e-commerce security, and oh, lots of consulting services.

Over the past few years, HP has been on an acquisitions binge to support its enterprise portfolio, helping make it more competitive against arch-rival IBM. The company has snapped up ArcSight and Fortify Software (software security); Electronic Data Systems (IT services and consulting); 3PAR (storage); Vertica Systems (database analytics); Shunra (network virtualization); Eucalyptus (private and hybrid cloud); Stratavia/ExtraQuest (data center automation); and of course, the absurdly overpriced Autonomy (data management).

Those high-touch, high-cost, high-margin enterprise products and services have little synergy with, say, the HP Deskjet 1010 Color Printer, available for US$29.99 at Staples. Sure, there’s money in printers, toner and ink, monitors, laptops and so on. But that’s a very different market, with a race-to-the-bottom drive for market share, horrible margins, crazy supply chain and little to differentiate one Windows-based product from another.

Analysts and investors have been calling for the breakup of HP for years; the company refused, saying that the unified company benefitted from an economy of scale. It’s good that CEO Meg Whitman has acknowledged what everyone knew: HP is sick, and this breakup into Hewlett-Packard Enterprise and HP Inc. is absolutely necessary.

Is breaking up hard to do? For most companies it’s a challenge at the best of times, but this one should be relatively painless. First of all, HP has split up before, so at least there’s some practice. Second, these businesses are so different that it should be obvious where most of HP’s employees, products, customer relationships, partner relationships and intellectual property will end up.

That’s not to say it’s going to be easy. However, it’s at least feasible.

Both organizations will be attractive takeover targets, that’s for sure. I give it a 50/50 chance that within five years, IBM or Oracle will make a play for Hewlett-Packard Enterprise, or it will combine with a mid-tier player like VMware or EMC.

The high-volume, low-margin HP Inc. will have trouble surviving on its own, because that is an area where scale helps drive down costs and helps manage the supply chain and retail channels. I could see HP Inc. being acquired by Dell or Lenovo, or even by a deep-pocket Internet retailer like Amazon.com.

This breakup is necessary and may be the salvation of Hewlett-Packard’s enterprise business. It may also be the beginning of the end for the most storied company in Silicon Valley.


They want to steal your data

“My name is Patricia from the Bank of America fraud prevention department. This important message is for Mr. Alan Zeichick. We are calling to verify some potentially suspicious activity on your account. It is very important that we speak with you.”

Tuesday’s voicemail from my bank was short and simple. Nobody had pilfered a credit-card receipt or hacked into my account, the representative told me during our conversation. Rather, BofA had been notified by Visa (the credit card clearinghouse) that a retailer had been hacked, and many credit card numbers were stolen. Including mine. As of right now, my card was frozen; the bank will issue me a new card with a new number.

Who was the merchant? According to the BofA representative, Visa didn’t divulge that information due to an ongoing investigation. Nor did the representative know how many credit card numbers were stolen; all she knew was that BofA was given a list of the bank’s customers who were affected.

These stories are coming far too often. Millions of cards were stolen in 2014 from diverse merchants like P.F. Chang’s China Bistro (a restaurant chain), Michaels Stores (art supplies), Sally Beauty (cosmetics), and Shaw’s (grocery stores). And those are only a few of the major vendors. Who knows how many smaller card thefts are either never reported, or aren’t deemed sufficiently juicy by the news media?

Some of you might be thinking, “We don’t take credit card numbers on our websites, so there’s no potential risk exposure.” Wrong. I am frequently astonished by the number of companies that maintain lots of customer data, and have that data pilfered. The Payment Card Industry (PCI) standards say that you should never store sensitive authentication data, and that any cardholder data you do store must be strongly protected. We’ve all seen that those standards are not always followed, sometimes through neglect, and sometimes through flawed architecture, bad coding or lousy testing.
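By way of illustration, here’s a minimal TypeScript sketch of the tokenization pattern the PCI standards push you toward. The gateway and storage interfaces are hypothetical placeholders, not any real vendor’s API:

```typescript
// Hypothetical interfaces for illustration only.
interface CardDetails {
  pan: string; // the full card number, handled in memory only
  expiry: string;
}

interface PaymentGateway {
  // Returns an opaque reference token; the gateway vaults the real number.
  tokenize(card: CardDetails): Promise<string>;
}

interface PaymentStore {
  save(customerId: string, token: string, last4: string): Promise<void>;
}

async function storePaymentMethod(
  gateway: PaymentGateway,
  store: PaymentStore,
  customerId: string,
  card: CardDetails
): Promise<void> {
  // The gateway keeps the card number inside its own PCI-scoped vault.
  const token = await gateway.tokenize(card);

  // Persist only the token plus non-sensitive display data. The full card
  // number never touches our database, logs, or employee laptops.
  await store.save(customerId, token, card.pan.slice(-4));
}
```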

Let’s be clear: Encrypting browser communications does not protect your customers’ personal or financial information. If you are storing that information anywhere—in your data center, in the cloud—it is at real risk. The threats are active. Are your countermeasures active?

What is even more astonishing is that many of these thefts are of personal information stored on employees’ laptops. You may recall a high-profile case in 2013, where nearly 840,000 Horizon Blue Cross Blue Shield customers had their information compromised when two laptops were stolen from the New Jersey-based health insurance company.

To quote from The Star-Ledger’s story:

The stolen laptops were password-protected but had unencrypted data, Horizon said in a statement today. A subsequent investigation determined the computers may have contained files with personal information, including names, addresses, dates of birth, and, in some instances, Social Security numbers and limited clinical information, the insurer said.

How is that possible? No possible scenario should allow customer information to be downloaded onto a desktop or laptop or tablet or phone. Ever. Encrypted or not, the data should never leave the server.

Please tell me you aren’t storing credit card info in files that can be stolen. Please tell me your company has actively sought to ensure that customer information can never ever ever be downloaded from servers.

Data theft is a nuisance, for cardholders like me, and for businesses like yours. Do you protect your customers’ information?


What to do when, not if, your cloud goes down

Cloud-based development tools are great. Until they don’t work.

I don’t know if you were affected by Microsoft’s Azure service outage on Thursday, August 14, 2014. As of my deadline, services had been offline for nearly six hours. On its status page, Microsoft was reporting:

Visual Studio Online – Multi-Region – Full Service Interruption

Starting 22:45 13 Aug, 2014 UTC, Visual Studio Online customers may have experienced issues with latency and extended Execution times. The initial incident mitigated at approximately 14:00 UTC. During investigation at 13:52 14 Aug, 2014 UTC, engineering teams began receiving alerts for a separate issue where customers were unable to log in to their Visual Studio Online services. From 13:52 to 19:45 on 14 Aug, 2014 UTC, customers were unable to access their Visual Studio Online resources. Engineering teams have validated their mitigation efforts for both issues and have confirmed that full service has been restored to our Visual Studio Online users. These incidents are now mitigated.

My goal here isn’t to throw Microsoft under the bus. Azure has been quite stable, and other cloud providers, including Amazon, Apple and Google, have seen similar problems. Actually, Amazon in particular has seen a lot of uptime and stability problems with AWS over the past couple of years, though its dashboard on Thursday afternoon showed full service availability.

Let’s think about the broader issue. What’s your contingency plan if your cloud-based services go down, whether it’s one of those players, or a service like GitHub, Salesforce.com, SourceForge, or you-name-it? Do you have backups, in case code or artifacts are lost or corrupted? (Do you have any way to know if data is lost or corrupted?)
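One cheap piece of a contingency plan is keeping your own mirrors of everything the cloud hosts for you. Here’s a rough sketch in TypeScript (Node.js) that maintains bare Git mirrors of hosted repositories; the repository URLs and backup path are examples, not recommendations:

```typescript
import { execFileSync } from "node:child_process";
import { existsSync } from "node:fs";

const MIRROR_DIR = "/backups/git"; // example path
const REPOS = [
  "https://github.com/example-org/product.git",
  "https://github.com/example-org/infrastructure.git",
];

for (const url of REPOS) {
  const name = url.split("/").pop()!;
  const target = `${MIRROR_DIR}/${name}`;

  if (existsSync(target)) {
    // Refresh an existing mirror: fetch all refs, prune deleted ones.
    execFileSync("git", ["--git-dir", target, "remote", "update", "--prune"]);
  } else {
    // First run: create a bare mirror with every branch and tag.
    execFileSync("git", ["clone", "--mirror", url, target]);
  }
  console.log(`mirrored ${url} -> ${target}`);
}
```

Run nightly from cron, a script like this means an outage at your hosting provider costs you access to issue trackers and dashboards, but never to the code itself.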

This is a worry.

In the case of the August 14 outage, the system wasn’t down for long — but long enough to kill a day’s productivity for many workers. Microsoft’s Visual Studio Online blog has a little bit of insight into the problem, but not much. Posted at 16:56 UTC, Microsoft said:

The actual root cause is still under investigation, but initial investigation is indicating a contention in our core database seems to be causing blocking and performance issues in the services. Our DevOps teams have identified a couple of mitigation steps and currently going thru validations. We will provide an update as soon as we have a mitigation in place. We apologize for the inconvenience and appreciate your patience while working on resolving this issue

This time you can blame Microsoft for any loss of productivity. Next time the service goes down, if you haven’t made contingency plans, the blame is yours.


For your customers, support low- and intermittent-bandwidth mobility

We drove slightly more than 2,500 miles (4,000 kilometers), my wife and I, during a weeklong holiday. We explored different states in the western United States: Arizona (where we live), Colorado, New Mexico and Wyoming. The Rocky Mountains are incredible. Most of our vacation was at altitudes above 6,000 feet (1,800 meters). Many of the mountain peaks were above 14,000 feet (4,200 meters), and one road went above 11,000 feet (3,300 meters). Exciting!

The adventure involved bringing only smartphones, one running Android, one running iOS. We used mobile apps for navigation, for communication, for photography, for reading, for social media, for finding hotels and restaurants, just about everything.

We learned that apps only seem to run well when there is copious bandwidth, either WiFi at a hotel or a fast cellular data link. If a smartphone registered 4G or LTE, all was good. If the phone indicated that the connection was EDGE, GPRS or 3G, all bets were off. It’s not that data loaded slowly. That would be expected. It’s that the apps would crash, or time out, or posting data would fail, or nothing would happen at all. Many modern apps expect or demand lots and lots of bandwidth.

I’m not talking here about apps running completely offline. That’s an entirely different conversation. I’m talking about apps not gracefully handling situations where the bandwidth is narrower than a drinking straw.

Many developers test their mobile apps on simulators, or on devices that have very high-bandwidth connections, such as an office WiFi network or the type of high-speed network you’ll find in Silicon Valley, New York City, or other major tech hubs around the world. Having lots of mobile bandwidth is undoubtedly a blessing for developers, but for many consumers, that’s simply not the case.

Lots of customers live in areas with poor bandwidth, or find themselves traveling in places where connectivity is slow or intermittent. Given the use cases for mobile devices—that is, they are frequently used when not at home or in an office—optimizing apps for bad bandwidth should be mandatory. Hey, this isn’t about streaming 1080p movies. This is about being able to use a search engine, or call up a map, or be able to find a hotel room.

Will people use your apps in poor-bandwidth or intermittent-bandwidth situations? If so, here are some steps you can take to improve the user experience:

  1. Make sure that part of your testing involves low-bandwidth and intermittent-bandwidth scenarios. Find beta testers who live with poor bandwidth or who travel to such locations.
  2. Have your app test for throughput, and not only at application launch. Merely detecting whether the connection is WiFi or cellular is insufficient. If throughput is low, consider degrading the experience, such as by using lower-end graphics, in order to keep data moving (see the sketch after this list).
  3. Cache, cache, cache.
  4. Don’t insist on reloading data each and every time the user launches the app or switches to it. My pet peeves include news and other apps that freeze the UI while loading the latest headlines or content each time the app is brought to the foreground.
  5. If you detect that the device is in a low-bandwidth environment, pause background data syncing, or at least ask the user if he/she would like to do so.
  6. If you are sending audio or video, compress the heck out of it. That may involve choosing different algorithms for different bandwidth situations, with low-bandwidth scenarios using narrower and lossier codecs.
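To make step 2 concrete, here is a rough TypeScript sketch of a throughput probe an app could run periodically. The probe URL, payload size and thresholds are illustrative assumptions, not prescriptions:

```typescript
const PROBE_URL = "https://api.example.com/probe-32kb.bin"; // known 32 KB payload
const PROBE_BYTES = 32 * 1024;

type Quality = "full" | "reduced" | "minimal";

async function measureQuality(): Promise<Quality> {
  const start = Date.now();
  try {
    const res = await fetch(PROBE_URL, { cache: "no-store" });
    await res.arrayBuffer(); // force the body to download completely
  } catch {
    return "minimal"; // request failed or timed out: assume the worst
  }
  const seconds = (Date.now() - start) / 1000;
  const kbps = (PROBE_BYTES * 8) / 1024 / seconds;

  // Re-run this periodically, not just at launch; bandwidth changes as the
  // user moves between WiFi, LTE, and EDGE.
  if (kbps > 1000) return "full";   // comfortable: full-size images, sync on
  if (kbps > 100) return "reduced"; // tight: low-res graphics, lazy loading
  return "minimal";                 // drinking-straw: text first, pause syncing
}
```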

Microsoft keeps stumbling

Microsoft’s woes are too big to ignore.

Problem area number one: The high-profile Surface tablet/notebook device is flopping. While the 64-bit Intel-based Surface Pro hasn’t sold well, the 32-bit ARM-based Surface RT tanked. Big time. Microsoft just slashed its price — maybe that will help. Too little too late?

To quote from Nathan Ingraham’s recent story in The Verge:

Microsoft just announced earnings for its fiscal Q4 2013, and while the company posted strong results it also revealed some details on how the Surface RT project is costing the business money. Microsoft’s results showed a $900 million loss due to Surface RT “inventory adjustments,” a charge that comes just a few days after the company officially cut Surface RT prices significantly. This $900 million loss comes out of the company’s total Windows revenue, though its worth noting that Windows revenue still increased year-over-year. Unfortunately, Microsoft still doesn’t give specific Windows 8 sales or revenue numbers, but it probably performed well this quarter to make up for the big Surface RT loss.

At the end of the day, though, it looks like Microsoft just made too many Surface RT tablets — we heard late last year that Microsoft was building three to five million Surface RT tablets in the fourth quarter, and we also heard that Microsoft had only sold about one million of those tablets in March. We’ll be listening to Microsoft’s earnings call this afternoon to see if they further address Surface RT sales or future plans.

Microsoft has spent heavily, and invested a lot of its prestige, in the Surface. It needs to fix Windows 8 and make this platform work.

Problem area number two: A dysfunctional structure. A recent story in the New York Times reminded me of this 2011 cartoon describing six tech companies’ org charts. Look at Microsoft. Yup.

Steve Ballmer, who has been CEO since 2000, is finally trying to do something about the battling business units. The new structure, announced on July 11, is called “One Microsoft,” and in a public memo by Ballmer, the goal is described as:

Going forward, our strategy will focus on creating a family of devices and services for individuals and businesses that empower people around the globe at home, at work and on the go, for the activities they value most. 

Editing and restructuring the info in that memo somewhat, here’s what the six key non-administrative groups will look like:

Operating Systems Engineering Group will span all OS work, from consoles to mobile devices to PCs to back-end systems. The core cloud services for the operating system will be in this group.

Devices and Studios Engineering Group will have all hardware development and supply chain from the smallest to the largest devices, and studios experiences including all games, music, video and other entertainment.

Applications and Services Engineering Group will have broad applications and services core technologies in productivity, communication, search and other information categories.

Cloud and Enterprise Engineering Group will lead development of back-end technologies like datacenter, database and specific technologies for enterprise IT scenarios and development tools, plus datacenter development, construction and operation.

Advanced Strategy and Research Group will be focused on the intersection of technology and policy, and will drive the cross-company looks at key new technology trends.

Business Development and Evangelism Group will focus on key partnerships especially with innovation partners (OEMs, silicon vendors, key developers, Yahoo, Nokia, etc.) and broad work on evangelism and developer outreach. 

If implemented as described, this new organization should certainly eliminate waste, including redundant research and product development. It might improve compatibility between different platforms and cut down on mixed messages.

However, it may also constrain the freedom to innovate, and promote the unhealthy “Windows everywhere” philosophy that has hamstrung Microsoft for years. It’s bad to spend time creating multiple operating systems, multiple APIs, multiple dev tool chains, multiple support channels. It’s equally bad to make one operating system, API set, dev tool chain and support channel fit all platforms and markets.

Another concern is the movement of developer outreach into a separate group that’s organizationally distinct from the product groups. Will that distance Microsoft’s product developers from customers and ISVs? Maybe. Will the most lucrative products get better developer support? Maybe.

Microsoft has excelled in developer support, and I’d hate to see that suffer as part of the new strategy. 

Read Steve Ballmer’s memo. What do you think?


Let’s boost developer velocity by 30x

Not long ago, if the corporate brass wanted to change major functionality in a big piece of software, the IT delivery time might be six to 12 months, maybe longer. Once upon a time, that was acceptable. Not today.

Thanks to agile, many software changes can be delivered in, say, six to 12 weeks. That’s a huge improvement — but not huge enough. Business imperatives might require that IT deploy new application functionality in six to 12 days.

Sounds impossible, right? Maybe. Maybe not. I had dinner a few days ago with S. “Soma” Somasegar, the corporate vice president of Microsoft’s Developer Division. He laughed – and nodded – when I mentioned the need for a 30x shift in software delivery from months to days.

After all, as Soma pointed out, Microsoft is deploying new versions of its cloud-based Team Foundation Service every three weeks. The company has also realized that revving Visual Studio itself every two or three years isn’t serving the needs of developers. That’s why his team has begun rolling out regular updates that include not only bug fixes but also new features. The latest is Update 2 to Visual Studio 2012, released in late April, which added new features for agile planning, quality assurance and line-of-business app development, plus improvements to the developer experience.

I like what I’m hearing from Soma and Microsoft about their developer tools, and about their direction. For example, the company appears sincere in its engagement of the open source community through Microsoft Open Technologies — but I’ll confess to still being a skeptic, based on Microsoft’s historical hostility toward open source.

Soma said that it’s vital not only for Microsoft to contribute to open source, but also to let open source communities engage with Microsoft. It’s about time!

Soma also cited the company’s new-found dedication to DevOps. He said that future versions of both on-premises and cloud-based tools will help tear down the walls between development and deployment. That’s where the 30x velocity improvement might come from.

Another positive shift is that Microsoft appears to truly accept that other platforms are important to developers and customers. Soma acknowledged that the answer to every problem cannot be to use Microsoft technologies exclusively.

Case in point: Soma said that fully 60% of Microsoft developers are building applications that touch at least three different platforms. He acknowledged that Microsoft still believes that it has the best platforms and tools, but said, “We now know that developers make other choices for valid reasons. We want to meet developers where they are” – that is, engaging with other platforms.

Soma’s words may seem like a modest and obvious statement, but it’s a huge step forward for Microsoft.


The complications of cloud adoption

Cloud computing is seductive. Incredibly so. Reduced capital costs. No more power and cooling of a server closet or data center. High-speed Internet backbones. Outsourced disaster recovery. Advanced edge caching. Deployments are lightning fast, with capacity ramp-ups only a mouse-click away – making the cloud a panacea for Big Data applications.

Cloud computing is scary. Vendors come and vendors go. Failures happen, and they are out of your control. Software is updated, sometimes with your knowledge, sometimes not. You have to take their word for security. And the costs aren’t always lower.

An interesting new study from KPMG, “The Cloud Takes Shape,” digs into the expectations of cloud deployment – and the realities.

According to the study, cloud migration was generally a success, but far from painless. It showed that 33% of senior executives using the cloud said that the implementation, transition and integration costs were too high; 30% cited challenges with data loss and privacy risks; and 30% were worried about the loss of control. Also, 26% were worried about the lack of visibility into future demand and associated costs; 26% fretted about the lack of interoperability standards between cloud providers; and 21% were challenged by the risk of intellectual property theft.

There’s a lot more depth in the study, and I encourage you to download and browse through it. (Given that KPMG is a big financial and tax consulting firm, there’s a lot in the report about the tax challenges and opportunities in cloud computing.)

The study concludes,

Our survey finds that the majority of organizations around the world have already begun to adopt some form of cloud (or ‘as-a-service’) technology within their enterprise, and all signs indicate that this is just the beginning; respondents expect to move more business processes to the cloud in the next 18 months, gain more budget for cloud implementation and spend less time building and defending the cloud business case to their leadership. Clearly, the business is becoming more comfortable with the benefits and associated risks that cloud brings.

With experience comes insight. It is not surprising, therefore, that the top cloud-related challenges facing business and IT leaders has evolved from concerns about security and performance capability to instead focus on some of the ‘nuts and bolts’ of cloud implementation. Tactical challenges such as higher than expected implementation costs, integration challenges and loss of control now loom large on the cloud business agenda, demonstrating that – as organizations expand their usage and gain more experience in the cloud – focus tends to turn towards implementation, operational and governance challenges.


Happy Thanksgiving

Tomorrow Americans will celebrate Thanksgiving. This is an odd holiday. It’s partly religious, but also partly secular, dating back to the English colonization of eastern North America. A recent tradition is for people to share what they are thankful for. In a lighthearted way, let me share some of my tech-related joys.

• I am thankful for PDF files. Websites that share documents in other formats (such as Microsoft Word) are kludgy, and the document never looks quite right.

• I am thankful for native non-PDF files. Extracting content from PDF files to use in other applications is a time-consuming process that often requires significant post-processing.

• I am thankful that Hewlett-Packard is still in business – for now at least. It’s astonishing how HP bungles acquisition after acquisition after acquisition.

• I am thankful for consistent language specifications, such as C++, Java, HTML4 and JavaScript, which give us a fighting chance at cross-platform compatibility. A world with only proprietary languages would be horrible.

• I am thankful for HTML5 and CSS3, which solve many important problems for application development and deployment.

• I am thankful that most modern operating systems and applications can be updated via the Internet. No more floppies, CDs or DVDs.

• I am thankful that floppies are dead, dead, dead, dead, dead.

• I am thankful that Apple and Microsoft don’t force consumers to purchase applications for their latest desktop operating systems from their app stores. It’s my computer, and I should be able to run any bits that I want.

• I am thankful for Hadoop and its companion Apache projects like Avro, Cassandra, HBase and Pig, which in only a couple of years became the de facto platform for Big Data and a must-know technology for developers.

• I am thankful that Linux exists as a compelling server operating system, as the foundation of Android, and as a driver of innovation.

• I am thankful for RAW photo image files and for Adobe Lightroom to process those RAW files.

• I am thankful for the Microsoft Surface, which is the most exciting new hardware platform since Apple’s iPad and MacBook Air.

• I am thankful to still get a laugh by making the comment, “There’s an app for that!” in random non-tech-related conversations.

• I am thankful for the agile software movement, which has refocused our attention to efficiently creating excellent software, and which has created a new vocabulary for sharing best practices.

• I am thankful for RFID technology, especially as implemented in the East Coast’s E-ZPass and California’s FasTrak toll readers.

• I am thankful that despite the proliferation of e-book readers, technology books are still published on paper. E-books are great for novels and documents meant to be read linearly, but are not so great for learning a new language or studying a platform.

• I am thankful that nobody has figured out how to remotely hack into my car’s telematics systems yet – as far as I know.

• I am thankful for XKCD.

• I am thankful that Oracle seems to be committed to evolving Java and keeping it open.

• I am thankful for the wonderful work done by open-source communities like Apache, Eclipse and Mozilla.

• I am thankful that my Android phone uses an industry-standard Micro-USB connector.

• I am thankful for readers like you, who have made SD Times the leading news source in the software development community.

Happy Thanksgiving to you and yours.