
Email clients and 3D paint applications do not belong in operating system releases

No, no, no, no, no!

The email client updates in the 10.12.4 update to macOS Sierra are everything that’s wrong with operating systems today. And so is the planned inclusion of an innovative, fun-sounding 3D painter as part of next week’s Windows 10 Creators Update.

Repeat after me: Applications do not belong in operating systems. Diagnostics, yes. Shared libraries, yes. Essential device drivers, yes. Hardware abstraction layers, yes. File systems, yes. Program loaders and tools, yes. A network stack, yes. A graphical user interface, yes. A scripting/job control language, yes. A basic web browser, yes.

Applications? No, no, no!

Why not?

Applications bloat up the operating system release. What if you don’t need a 3D paint program? What if you don’t want to use the built-in mail client? The binaries are there anyway taking up storage. Whenever the operating system is updated, the binaries are updated, eating up bandwidth and CPU time.

If you do want those applications, bug fixes are tied to OS updates. The Sierra 10.12.4 update fixes a bug in Mail. Why must that be tied to an OS update? The update supports more digital camera RAW formats. Why are those tied to the operating system, and not released as they become available? The 10.12.4 update also fixes a Siri issue regarding cricket scores in the IPL. Why, for heaven’s sake, is that functionality tied to an operating system update? That’s simply insane.

An operating system is easier for the developer to test and verify if it’s smaller. The more things in your OS update release train, the more things can go wrong, whether in the installation process or in the code itself. A smaller OS means less regression testing and fewer bugs.

An operating system is easier for the client to test and verify if it’s smaller. Take your corporate clients: if they are evaluating macOS Sierra 10.12.4 or the Windows 10 Creators Update prior to roll-out, less stuff means an easier validation process.

Performance and memory utilization are better if it’s smaller. The microkernel concept says that the OS should be as small as possible – if something doesn’t have to be in the OS, leave it out. Well, that’s not the case any more, at least in terms of the software release trains.

This isn’t new

No, Alan isn’t off his rocker, at least not more than usual. Operating system releases, especially those for consumers, have been bloated up with applications and junk for decades. I know that. Nothing will change.

Yes, it would be better if productivity applications and games were distributed and installed separately. Maybe as free downloads, as optional components on the release CD/DVD, or even as a separate SKU. Remember Microsoft Plus and Windows Ultimate Extras? Yeah, those were mainly games and garbage. Never mind.

Still, seeing the macOS Sierra Update release notes today inspired this missive. I hope you enjoyed it. </rant>


Three years of the 2013 OWASP Top 10 — and it’s the same vulnerabilities over and over

Can’t we fix injection already? It’s been nearly four years since the most recent iteration of the OWASP Top 10 came out — that’s June 12, 2013. The OWASP Top 10 are the most critical web application security flaws, as determined by a large group of experts. The list doesn’t change much, or change often, because the fundamentals of web application security are consistent.

The 2013 OWASP Top 10 were:

  1. Injection
  2. Broken Authentication and Session Management
  3. Cross-Site Scripting (XSS)
  4. Insecure Direct Object References
  5. Security Misconfiguration
  6. Sensitive Data Exposure
  7. Missing Function Level Access Control
  8. Cross-Site Request Forgery (CSRF)
  9. Using Components with Known Vulnerabilities
  10. Unvalidated Redirects and Forwards

The preceding list came out on April 19, 2010:

  1. Injection
  2. Cross-Site Scripting (XSS)
  3. Broken Authentication and Session Management
  4. Insecure Direct Object References
  5. Cross-Site Request Forgery (CSRF)
  6. Security Misconfiguration
  7. Insecure Cryptographic Storage
  8. Failure to Restrict URL Access
  9. Insufficient Transport Layer Protection
  10. Unvalidated Redirects and Forwards

Looks pretty familiar. If you go back further, to the Open Web Application Security Project’s inaugural 2004 list and then the 2007 list, the pattern of flaws stays the same. That’s because programmers, testers, and code-design tools keep making the same mistakes, over and over again.

Take #1, Injection (often written as SQL Injection, but it’s broader than simply SQL). It’s described as:

Injection flaws occur when an application sends untrusted data to an interpreter. Injection flaws are very prevalent, particularly in legacy code. They are often found in SQL, LDAP, Xpath, or NoSQL queries; OS commands; XML parsers, SMTP Headers, program arguments, etc. Injection flaws are easy to discover when examining code, but frequently hard to discover via testing. Scanners and fuzzers can help attackers find injection flaws.

The technical impact?

Injection can result in data loss or corruption, lack of accountability, or denial of access. Injection can sometimes lead to complete host takeover.

And the business impact?

Consider the business value of the affected data and the platform running the interpreter. All data could be stolen, modified, or deleted. Could your reputation be harmed?

Eliminating the vulnerability to injection attacks is not rocket science. OWASP summarizes three approaches:

Preventing injection requires keeping untrusted data separate from commands and queries.

The preferred option is to use a safe API which avoids the use of the interpreter entirely or provides a parameterized interface. Be careful with APIs, such as stored procedures, that are parameterized, but can still introduce injection under the hood.

If a parameterized API is not available, you should carefully escape special characters using the specific escape syntax for that interpreter. OWASP’s ESAPI provides many of these escaping routines.

Positive or “white list” input validation is also recommended, but is not a complete defense as many applications require special characters in their input. If special characters are required, only approaches 1. and 2. above will make their use safe. OWASP’s ESAPI has an extensible library of white list input validation routines.
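To make the first approach concrete, here is a minimal Java sketch of my own (it is not part of the OWASP text, and the table and column names are invented). The unsafe version concatenates untrusted input into the SQL string; the parameterized version keeps data separate from the command:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class UserLookup {

        // VULNERABLE: untrusted input is concatenated into the query text, so
        // input such as  ' OR '1'='1  changes the meaning of the query itself.
        static ResultSet findUserUnsafe(Connection conn, String email) throws SQLException {
            String sql = "SELECT id, name FROM users WHERE email = '" + email + "'";
            return conn.createStatement().executeQuery(sql);
        }

        // SAFER: a parameterized query keeps the untrusted data separate from the
        // command. The driver binds the value as data; it is never parsed as SQL.
        static ResultSet findUserSafe(Connection conn, String email) throws SQLException {
            PreparedStatement ps = conn.prepareStatement(
                    "SELECT id, name FROM users WHERE email = ?");
            ps.setString(1, email);
            return ps.executeQuery();
        }
    }

The same separation applies to LDAP, XPath, and OS command calls: build the command once, and pass untrusted values only as bound data.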

Not rocket science, not brain surgery — and the same is true of the other vulnerabilities. There’s no excuse for still getting these wrong, folks. Cut down on these top 10, and our web applications will be much safer, and our organizational risk much reduced.

Do you know how often your web developers make the OWASP Top 10 mistakes? The answer should be “never.” They’ve had plenty of time to figure this out.


An intimate take on cybersecurity: Yes, medical devices can be hacked and compromised

Modern medical devices increasingly leverage microprocessors and embedded software, as well as sophisticated communications connections, for life-saving functionality. Insulin pumps, for example, rely on a battery, pump mechanism, microprocessor, sensors, and embedded software. Pacemakers and cardiac monitors also contain batteries, sensors, and software. Many devices also have WiFi- or Bluetooth-based communications capabilities. Even the intravenous drug delivery systems in hospital rooms are controlled by embedded microprocessors and software, and are frequently connected to the institution’s network. But these innovations also mean that a software defect can cause a critical failure or security vulnerability.

In 2007, former vice president Dick Cheney famously had the wireless capabilities of his pacemaker disabled. Why? He was concerned “about reports that attackers could hack the devices and kill their owners.” Since then, the vulnerabilities caused by the larger attack surface area on modern medical devices have gone from hypothetical to demonstrable, in part due to the complexity of the software, and in part due to the failure to properly harden the code.

In October 2011, The Register reported that “a security researcher has devised an attack that hijacks nearby insulin pumps, enabling him to surreptitiously deliver fatal doses to diabetic patients who rely on them.” The attack worked because the pump contained a short-range radio that allows patients and doctors to adjust its functions. The researcher showed that, by using a special antenna and custom-written software, he could locate and seize control of any such device within 300 feet.

A report published by Independent Security Evaluators (ISE) shows the danger. The report examined 12 hospitals, and the organization concluded “that remote adversaries can easily deploy attacks that manipulate records or devices in order to fully compromise patient health” (p. 25). Later in the report, the researchers show how they demonstrated the ability to manipulate the flow of medicine or blood samples within the hospital, resulting in the delivery of improper medication types and dosages (p. 37), and to do all of this from the hospital lobby. They were also able to hack into and remotely control patient monitors and breathing tubes, and to trigger alarms that might cause doctors or nurses to administer unneeded medications.

Read more in my blog post for Parasoft, “What’s the Cure for Software Defects and Vulnerabilities in Medical Devices?”


How to take existing enterprise code to Microsoft Azure or Google Cloud Platform

The best way to have a butt-kicking cloud-native application is to write one from scratch. Leverage the languages, APIs, and architecture of the chosen cloud platform, and exploit its databases, analytics engines, and storage. As I wrote for Ars Technica, this lets you take advantage of the wealth of resources offered by companies like Microsoft, with its Azure PaaS (Platform-as-a-Service) offering, or Google, with the Google App Engine PaaS service on Google Cloud Platform.

Sometimes, however, that’s not the job. Sometimes, you have to take a native application running on a server in your local data center or colocation facility and make it run in the cloud. That means virtual machines.

Before we get into the details, let’s define “native application.” For the purposes of this exercise, it’s an application written in a high-level programming language, like C/C++, C#, or Java. It’s an application running directly on a machine, talking to an operating system like Linux or Windows, that you want to run on a cloud platform like Microsoft Azure, Amazon Web Services (AWS), or Google Cloud Platform (GCP).

What we are not talking about is an application that has already been virtualized, such as one already running within a VMware ESXi or Microsoft Hyper-V virtual machine. Sure, moving an ESXi or Hyper-V application running on-premises into the cloud is an important migration that may improve performance and add elasticity while switching capital expenses to operational expenses. Important, yes, but not a challenge. All the virtual machine giants and cloud hosts have copious documentation to help you make the switch… which amounts to basically copying the virtual machine file onto a cloud server and turning it on.

Many possible scenarios exist for moving a native datacenter application into the cloud. They boil down to two main types of migrations, and there’s no clear reason to choose one over the other:

The first is to create a virtual server within your chosen cloud provider, perhaps running Windows Server or running a flavor of Linux. Once that virtual server has been created, you migrate the application from your on-prem server to the new virtual server—exactly as you would if you were moving from one of your servers to a new server. The benefits: the application migration is straightforward, and you have 100-percent control of the server, the application, and security. The downside: the application doesn’t take advantage of cloud APIs or other special servers. It’s simply a migration that gets a server out of your data center. When you do this, you are leveraging a type of cloud called Infrastructure-as-a-Service (IaaS). You are essentially treating the cloud like a colocation facility.

The second is to see if your application code can be ported to run within the native execution engine provided by the cloud service. This is called Platform-as-a-Service (PaaS). The benefits are that you can leverage a wealth of APIs and other services offered by the cloud provider. The downsides are that you have to ensure that your code can work on the service (which may require recoding or even redesign) in order to use those APIs or even to run at all. You also don’t have full control over the execution environment, which means that security is managed by the cloud provider, not by you.

And of course, there’s the third option mentioned at the beginning: Writing an entirely new application native for the cloud provider’s PaaS. That’s still the best option, if you can do it. But our task today is to focus on migrating an existing application.

Let’s look into this more closely, via my recent article for Ars Technica, “Great app migration takes enterprise “on-prem” applications to the Cloud.”


Cybersecurity alert: Trusted websites can harbor malware, thanks to savvy hackers

According to a recent study, 46% of the top one million websites are considered risky. Why? Because the homepage or background ad sites run software with known vulnerabilities, because the site has been categorized as known-bad for phishing or malware, or because the site had a security incident in the past year.

According to Menlo Security, in its “State of the Web 2016” report introduced mid-December 2016, “… nearly half (46%) of the top million websites are risky.” Indeed, Menlo says, “Primarily due to outdated software, cyber hackers now have their veritable pick of half the web to exploit. And exploitation is becoming more widespread and effective for three reasons: 1. Risky sites have never been easier to exploit; 2. Traditional security products fail to provide adequate protection; 3. Phishing attacks can now utilize legitimate sites.”

This has been a significant issue for years. However, the issue came to the forefront earlier this year when several well-known media sites were essentially hijacked by malicious ads. The New York Times, the BBC, MSN and AOL were hit by tainted advertising that installed ransomware, reports Ars Technica. From their March 15, 2016, article, “Big-name sites hit by rash of malicious ads spreading crypto ransomware”:

The new campaign started last week when ‘Angler,’ a toolkit that sells exploits for Adobe Flash, Microsoft Silverlight, and other widely used Internet software, started pushing laced banner ads through a compromised ad network.

The Guardian, reporting at around the same time, described the results of this attack:

When the infected adverts hit users, they redirect the page to servers hosting the malware, which includes the widely-used (amongst cybercriminals) Angler exploit kit. That kit then attempts to find any back door it can into the target’s computer, where it will install cryptolocker-style software, which encrypts the user’s hard drive and demands payment in bitcoin for the keys to unlock it.

If big-money trusted media sites can be hit, so can nearly any corporate site, e-commerce portal, or other website that uses third-party tools – or that might have unpatched servers and software. That means just about anyone. After all, not all organizations are diligent about monitoring for common vulnerabilities and exposures (CVEs) on their on-premises servers. When companies run their websites on multi-tenant hosting facilities, they don’t even have direct access to the operating system, but rely upon the hosting company to install patches and fixes to Windows Server, Linux, Joomla, WordPress and so on.

A single unpatched operating system, web server platform, database or extension can introduce a vulnerability that can be scanned for. Once found, that CVE can be exploited by a talented hacker — or by a disgruntled teenager with a readily available web exploit kit.

What can you do about it? Well, you can read my complete story on this subject, “Malware explosion: The web is risky,” published on ITProPortal.


Where’s the best Java coding style guide? Not at Oracle

For programmers, a language style guide is essential for learning a language’s standards. A style guide also can resolve potential ambiguities in syntax and usage. Interestingly, though, the official Code Conventions for the Java Programming Language guide has not been updated since April 20, 1999 – long before Oracle bought Sun Microsystems. In fact, the page is listed as being for “Archival Purposes Only.”

What’s up with that? I wrote to Andrew Binstock (@PlatypusGuy), the editor-in-chief of Oracle Java Magazine. In the November/December 2016 issue of the magazine, Andrew explained that according to the Java team, the Code Conventions guide was meant as an internal coding guide – not as an attempt to standardize the language.

Instead of the Code Conventions guide, Mr. B recommends the Google Java Style Guide as a “full set of well-reasoned Java coding guidelines.” So there you have it: If you want good Java guidelines, look to Google — not to Oracle. Here’s the letter and the response.


Medical devices – the wild west for cybersecurity vulnerabilities and savvy hackers

Medical devices are incredibly vulnerable to hacking attacks. In some cases it’s because of software defects that allow for exploits, like buffer overflows, SQL injection or insecure direct object references. In other cases, you can blame misconfigurations, lack of encryption (or weak encryption), non-secure data/control networks, unfettered wireless access, and worse.

Why would hackers go after medical devices? Lots of reasons. To name but one: It’s a potential terrorist threat against real human beings. Remember that Dick Cheney famously had the wireless capabilities of his implanted heart device disabled for fear of an assassination attempt.

Certainly healthcare organizations are being targeted for everything from theft of medical records to ransomware. To quote the report “Hacking Healthcare IT in 2016,” from the Institute for Critical Infrastructure Technology (ICIT):

The Healthcare sector manages very sensitive and diverse data, which ranges from personal identifiable information (PII) to financial information. Data is increasingly stored digitally as electronic Protected Health Information (ePHI). Systems belonging to the Healthcare sector and the Federal Government have recently been targeted because they contain vast amounts of PII and financial data. Both sectors collect, store, and protect data concerning United States citizens and government employees. The government systems are considered more difficult to attack because the United States Government has been investing in cybersecurity for a (slightly) longer period. Healthcare systems attract more attackers because they contain a wider variety of information. An electronic health record (EHR) contains a patient’s personal identifiable information, their private health information, and their financial information.

EHR adoption has increased over the past few years under the Health Information Technology and Economics Clinical Health (HITECH) Act. Stan Wisseman [from Hewlett-Packard] comments, “EHRs enable greater access to patient records and facilitate sharing of information among providers, payers and patients themselves. However, with extensive access, more centralized data storage, and confidential information sent over networks, there is an increased risk of privacy breach through data leakage, theft, loss, or cyber-attack. A cautious approach to IT integration is warranted to ensure that patients’ sensitive information is protected.”

Let’s talk devices. Those could be everything from emergency-room monitors to pacemakers to insulin pumps to X-ray machines whose radiation settings might be changed or overridden by malware. The ICIT report says,

Mobile devices introduce new threat vectors to the organization. Employees and patients expand the attack surface by connecting smartphones, tablets, and computers to the network. Healthcare organizations can address the pervasiveness of mobile devices through an Acceptable Use policy and a Bring-Your-Own-Device policy. Acceptable Use policies govern what data can be accessed on what devices. BYOD policies benefit healthcare organizations by decreasing the cost of infrastructure and by increasing employee productivity. Mobile devices can be corrupted, lost, or stolen. The BYOD policy should address how the information security team will mitigate the risk of compromised devices. One solution is to install software to remotely wipe devices upon command or if they do not reconnect to the network after a fixed period. Another solution is to have mobile devices connect from a secured virtual private network to a virtual environment. The virtual machine should have data loss prevention software that restricts whether data can be accessed or transferred out of the environment.

The Internet of Things – and the increased prevalence of medical devices connected to hospital or home networks – increases the risk. What can you do about it? The ICIT report says,

The best mitigation strategy to ensure trust in a network connected to the internet of things, and to mitigate future cyber events in general, begins with knowing what devices are connected to the network, why those devices are connected to the network, and how those devices are individually configured. Otherwise, attackers can conduct old and innovative attacks without the organization’s knowledge by compromising that one insecure system.

Given how common these devices are, keeping IT in the loop may seem impossible — but we must rise to the challenge, ICIT says:

If a cyber network is a castle, then every insecure device with a connection to the internet is a secret passage that the adversary can exploit to infiltrate the network. Security systems are reactive. They have to know about something before they can recognize it. Modern systems already have difficulty preventing intrusion by slight variations of known malware. Most commercial security solutions such as firewalls, IDS/ IPS, and behavioral analytic systems function by monitoring where the attacker could attack the network and protecting those weakened points. The tools cannot protect systems that IT and the information security team are not aware exist.

The home environment – or any use outside the hospital setting – is another huge concern, says the report:

Remote monitoring devices could enable attackers to track the activity and health information of individuals over time. This possibility could impose a chilling effect on some patients. While the effect may lessen over time as remote monitoring technologies become normal, it could alter patient behavior enough to cause alarm and panic.

Pain medicine pumps and other devices that distribute controlled substances are likely high value targets to some attackers. If compromise of a system is as simple as downloading free malware to a USB and plugging the USB into the pump, then average drug addicts can exploit homecare and other vulnerable patients by fooling the monitors. One of the simpler mitigation strategies would be to combine remote monitoring technologies with sensors that aggregate activity data to match a profile of expected user activity.

A major responsibility falls onto the device makers – and the programmers who create the embedded software. For the most part, they are simply not up to the challenge of designing secure devices, and may not have the policies, practices and tools in place to get cybersecurity right. Regrettably, the ICIT report doesn’t go into much detail about the embedded software, but it does state,

Unlike cell phones and other trendy technologies, embedded devices require years of research and development; sadly, cybersecurity is a new concept to many healthcare manufacturers and it may be years before the next generation of embedded devices incorporates security into its architecture. In other sectors, if a vulnerability is discovered, then developers rush to create and issue a patch. In the healthcare and embedded device environment, this approach is infeasible. Developers must anticipate what the cyber landscape will look like years in advance if they hope to preempt attacks on their devices. This model is unattainable.

In November 2015, Bloomberg Businessweek published a chilling story, “It’s Way too Easy to Hack the Hospital.” The authors, Monte Reel and Jordon Robertson, wrote about one hacker, Billy Rios:

Shortly after flying home from the Mayo gig, Rios ordered his first device—a Hospira Symbiq infusion pump. He wasn’t targeting that particular manufacturer or model to investigate; he simply happened to find one posted on EBay for about $100. It was an odd feeling, putting it in his online shopping cart. Was buying one of these without some sort of license even legal? he wondered. Is it OK to crack this open?

Infusion pumps can be found in almost every hospital room, usually affixed to a metal stand next to the patient’s bed, automatically delivering intravenous drips, injectable drugs, or other fluids into a patient’s bloodstream. Hospira, a company that was bought by Pfizer this year, is a leading manufacturer of the devices, with several different models on the market. On the company’s website, an article explains that “smart pumps” are designed to improve patient safety by automating intravenous drug delivery, which it says accounts for 56 percent of all medication errors.

Rios connected his pump to a computer network, just as a hospital would, and discovered it was possible to remotely take over the machine and “press” the buttons on the device’s touchscreen, as if someone were standing right in front of it. He found that he could set the machine to dump an entire vial of medication into a patient. A doctor or nurse standing in front of the machine might be able to spot such a manipulation and stop the infusion before the entire vial empties, but a hospital staff member keeping an eye on the pump from a centralized monitoring station wouldn’t notice a thing, he says.

 The 97-page ICIT report makes some recommendations, which I heartily agree with.

  • With each item connected to the internet of things there is a universe of vulnerabilities. Empirical evidence of aggressive penetration testing before and after a medical device is released to the public must be a manufacturer requirement.
  • Ongoing training must be paramount in any responsible healthcare organization. Adversarial initiatives typically start with targeting staff via spear phishing and watering hole attacks. The act of an ill-prepared executive clicking on a malicious link can trigger a hurricane of immediate and long-term negative impact on the organization and innocent individuals whose records were exfiltrated or manipulated by bad actors.
  • A cybersecurity-centric culture must demand safer devices from manufacturers, privacy adherence by the healthcare sector as a whole and legislation that expedites the path to a more secure and technologically scalable future by policy makers.

This whole thing is scary. The healthcare industry needs to step up its game on cybersecurity.


We need a new browser security default: Privacy mode for external, untrusted or email links

Be paranoid! When you visit a website for the first time, it can learn a lot about you. If you have cookies on your computer from one of the site’s partners, it can see what else you have been doing. And it can place cookies onto your computer so it can track your future activities.

Many (or most?) browsers have some variation of “private” browsing mode. In that mode, websites shouldn’t be able to read cookies stored on your computer, and they shouldn’t be able to place permanent cookies onto your computer. (They think they can place cookies, but those cookies are deleted at the end of the session.)

Those settings aren’t good enough, because they are either all or nothing, offering a poor balance between ease of use and security/privacy. The industry can and must do better. See why in my essay on NetworkWorld, “We need a better Private Browsing Mode.”

 


Hackathons are great for learning — and great for the industry too

Are you a coder? Architect? Database guru? Network engineer? Mobile developer? User-experience expert? If you have hands-on tech skills, get those hands dirty at a Hackathon.

Full disclosure: Years ago, I thought Hackathons were, well, silly. If you’ve got the skills and extra energy, put them to work coding your own mobile apps. Do a startup! Make some dough! Contribute to an open-source project! Do something productive instead of taking part in coding contests!

Since then, I’ve seen the light, because it’s clear that Hackathons are a win-win-win.

  • They are a win for techies, because they get to hone their abilities, meet people, and learn stuff.
  • They are a win for Hackathon sponsors, because they often give the latest tools, platforms and APIs a real workout.
  • They are a win for the industry, because they help advance the creation and popularization of emerging standards.

One upcoming Hackathon that I’d like to call attention to: The MEF LSO Hackathon will be at the upcoming MEF16 Global Networking Conference, in Baltimore, Nov. 7-10. The work will support Third Network service projects that are built upon key OpenLSO scenarios and OpenCS use cases for constructing Layer 2 and Layer 3 services. You can read about a previous MEF LSO Hackathon here.

Build your skills! Advance the industry! Meet interesting people! Sign up for a Hackathon!


Oracle’s reputation as community steward of Java EE is mixed

What’s it going to mean for Java? When Oracle purchased Sun Microsystems, that was one of the biggest questions on the minds of many software developers, and indeed, the entire industry. In an April 2009 blog post, “Oracle, Sun, Winners, Losers,” written when the deal was announced (it closed in January 2010), I predicted,

Winner: Java. Java is very important to Sun. Expect a lot of investment — in the areas that are important to Oracle.

Loser: The Java Community Process. Oracle is not known for openness. Oracle is not known for embracing competitors, or for collaborating with them to create markets. Instead, Oracle is known to play hardball to dominate its markets.

Looks like I called that one correctly. While Oracle continues to invest in Java, it’s not big on true engagement with the community (aka the Java Community Process). In a story in SD Times, “Java EE awaits its future,” published July 20, 2016, Alex Handy writes about what to expect at the forthcoming JavaOne conference, including the state of Java EE:

When Oracle purchased Sun Microsystems in 2010, the immediate worry in the marketplace was that the company would become a bad actor around Java. Six years later, it would seem that these fears have come true—at least in part. The biggest new platform for Java, Android, remains embroiled in ugly litigation between Google and Oracle.

Despite outward appearances of a danger for mainstream Java, however, it’s undeniable that the OpenJDK has continued along apace, almost at the same rate of change IT experienced at Sun. When Sun open-sourced the OpenJDK under the GPL before it was acquired by Oracle, it was, in a sense, ensuring that no single entity could control Java entirely, as with Linux.

Java EE, however, has lagged behind in its attention from Oracle. Java EE 7 arrived two years ago, and it’s already out of step with the new APIs introduced in OpenJDK 8. The executive committee at the Java Community Process is ready to move the enterprise platform along its road map. Yet something has stopped Java EE dead in its tracks at Oracle. JSR 366 laid out the foundations for this next revision of the platform in the fall of 2015. One would never know that, however, by looking at the Expert Committee mailing lists at the JCP: Those have been completely silent since 2014.

Alex continues,

One person who’s worried that JavaOne won’t reveal any amazing new developments in Java EE is Reza Rahman. He’s a former Java EE evangelist at Oracle, and is now one of the founders of the Java EE Guardians, a group dedicated to goading Oracle into action, or going around them entirely.

“Our principal goal is to move Java EE forward using community involvement. Our biggest concern now is if Oracle is even committed to delivering Java EE. There are various ways of solving it, but the best is for Oracle to commit to and deliver Java EE 8,” said Rahman.

His concerns come from the fact that the Java EE 8 specification has been, essentially, stalled by lack of action on Oracle’s part. The specification leads for the project are stuck in a sort of limbo, with their last chunk of work completed in December, followed by no indication of movement inside Oracle.

Alex quotes an executive at Red Hat, Craig Muzilla, who seems justifiably pessimistic:

The only thing standing in the way of evolving Java EE right now, said Muzilla, is Oracle. “Basically, what Oracle does is they hold the keys to the [Test Compatibility Kit] for certifying in EE, but in terms of creating other ways of using Java, other runtime environments, they don’t have anything other than their name on the language,” he said.

Java is still going strong. Oracle’s commitment to the community and the process – not so much. This is one “told you so” that I’m not proud of, not one bit.


Pick up… or click on… the latest issue of Java Magazine

The newest issue of the second-best software development publication is out – and it’s a doozy. You’ll definitely want to read the July/August 2016 issue of Java Magazine.

(The #1 publication in this space is my own Software Development Times. Yeah, SD Times rules.)

Here is how Andrew Binstock, editor-in-chief of Java Magazine, describes the latest issue:

…in which we look at enterprise Java – not so much at Java EE as a platform, but at individual services that can be useful as part of a larger solution. For example, we examine JSON-P, the two core Java libraries for parsing JSON data; JavaMail, the standalone library for sending and receiving email messages; and JASPIC, which is a custom way to handle security, often used with containers. For Java EE fans, one of the leaders of the JSF team discusses in considerable detail the changes being delivered in the upcoming JSF 2.3 release.

We also show off JShell from Java 9, which is an interactive shell (or REPL) useful for testing Java code snippets. It will surely become one of the most used features of the new language release, especially for testing code interactively without having to set up and run an entire project.

And we continue our series on JVM languages with JRuby, the JVM implementation of the Ruby scripting language. The article’s author, Charlie Nutter, who implemented most of the language, discusses not only the benefits of JRuby but how it became one of the fastest implementations of Ruby.

For new to intermediate programmers, we deliver more of our in-depth tutorials. Michael Kölling concludes his two-part series on generics by explaining the use of and logic behind wildcards in generics. And a book excerpt on NIO.2 illustrates advanced uses of files, paths, and directories, including an example that demonstrates how to monitor a directory for changes to its files.

In addition, we have our usual code quiz with its customary detailed solutions, a book review of a new text on writing maintainable code, an editorial about some of the challenges of writing code using only small classes, and the overview of a Java Enhancement Proposal (JEP) for Java linker. A linker in Java? Have a look.

The story I particularly recommend is “Using the Java APIs for JSON processing.” David Delabassée covers the Java API for JavaScript Object Notation (JSON) Processing (JSR 353) and its two parts: a high-level object model API and a lower-level streaming API.
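If you haven’t tried JSON-P yet, here’s a minimal sketch of both halves of the API. This is my own example, not code from the article, and it assumes the javax.json reference implementation of JSR 353 is on the classpath:

    import java.io.StringReader;
    import javax.json.Json;
    import javax.json.JsonObject;
    import javax.json.JsonReader;
    import javax.json.stream.JsonParser;

    public class JsonPDemo {
        public static void main(String[] args) {
            String payload = "{\"title\":\"Java Magazine\",\"issue\":\"July/August 2016\"}";

            // Object model API: read the whole document into an in-memory JsonObject.
            try (JsonReader reader = Json.createReader(new StringReader(payload))) {
                JsonObject obj = reader.readObject();
                System.out.println(obj.getString("title") + ", " + obj.getString("issue"));
            }

            // Streaming API: pull parse events one at a time, handy for large documents.
            try (JsonParser parser = Json.createParser(new StringReader(payload))) {
                while (parser.hasNext()) {
                    if (parser.next() == JsonParser.Event.KEY_NAME) {
                        System.out.println("key: " + parser.getString());
                    }
                }
            }
        }
    }

Running it prints the title and issue from the object model pass, then the keys from the streaming pass.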

It’s a solid issue. Read it – and subscribe, it’s free!


Driving risks out of embedded automotive software

When it comes to cars, safety means more than strong brakes, good tires, a safety cage, and lots of airbags. It also means software that won’t betray you; software that doesn’t pose a risk to life and property; software that’s working for you, not for a hacker.

Please join me for this upcoming webinar, where I am presenting along with Arthur Hicken, the Code Curmudgeon and technology evangelist for Parasoft. It’s on Thursday, August 18. Arthur and I have been plotting and scheming, and there will be some excellent information presented. Don’t miss it! Click here to register.

Driving Risks out of Embedded Automotive Software

Automobiles are becoming the ultimate mobile computer. Popular models have as many as 100 Electronic Control Units (ECUs), while high-end models push 200 ECUs. Those processors run hundreds of millions of lines of code written by the OEMs’ teams and external contractors—often for black-box assemblies. Modern cars also have increasingly sophisticated high-bandwidth internal networks and unprecedented external connectivity. Considering that no code is 100% error-free, these factors point to an unprecedented need to manage the risks of failure—including protecting life and property, avoiding costly recalls, and reducing the risk of ruinous lawsuits.

This one-hour practical webinar will review the business risks of defective embedded software in today’s connected cars. Led by Arthur Hicken, Parasoft’s automotive technology expert and evangelist, and Alan Zeichick, an independent technology analyst and founding editor of Software Development Times, the webinar will also cover five practical techniques for driving the risks out of embedded automotive software, including:

• Policy enforcement
• Reducing defects during coding
• Effective techniques for acceptance testing
• Using metrics analytics to measure risk
• Converting SDLC analytics into specific tasks to focus on the riskiest software

You can apply the proven techniques you’ll learn to code written and tested by your teams, as well as code supplied by your vendors and contractors.


It’s a fake award for SD Times – thank you, scammers!

Scammers give local businesses a faux award and then try to make money by selling certificates, trophies, and so on.

Going through my spam filter today, I found FIVE copies of the exact same message praising SD Times for winning the “2016 Best of Huntington” award. The emails came from five different email addresses and domains, but the links all went to the same domain. (SD Times is published by BZ Media; I’m the “Z” of BZ Media.)

The messages read:

Sd Times has been selected for the 2016 Best of Huntington Awards for Media & Entertainment.

For details and more information please view our website: [link redacted]

If you click the link (which is not included above), you are given the choice to buy lots of things, including a plaque for $149.99 or a crystal award for $199.99. Such a deal: You can buy both for $229.99, a $349.98 value!! This is probably a lucrative scam, since the cost of sending emails is approximately $0; even a very low response rate could yield a lot of profits.

The site’s FAQ says,

Do I have to pay for an award to be a winner?

No, you do not have to pay for an award to be a winner. Award winners are not chosen based on purchases, however it is your option, to have us send you one of the 2016 Awards that have been designed for display at your place of business.

Shouldn’t my award be free?

No, most business organizations charge their members annual dues and with that money sponsor an annual award program. The Best of Huntington Award Program does not charge membership dues and as an award recipient, there is no membership requirement. We simply ask each award recipient to pay for the cost of their awards.

There is also a link to a free press release. Aren’t you excited on our behalf?

Press Release

FOR IMMEDIATE RELEASE

Sd Times Receives 2016 Best of Huntington Award

Huntington Award Program Honors the Achievement

HUNTINGTON July 2, 2016 — Sd Times has been selected for the 2016 Best of Huntington Award in the Media & Entertainment category by the Huntington Award Program.

Each year, the Huntington Award Program identifies companies that we believe have achieved exceptional marketing success in their local community and business category. These are local companies that enhance the positive image of small business through service to their customers and our community. These exceptional companies help make the Huntington area a great place to live, work and play.

Various sources of information were gathered and analyzed to choose the winners in each category. The 2016 Huntington Award Program focuses on quality, not quantity. Winners are determined based on the information gathered both internally by the Huntington Award Program and data provided by third parties.

About Huntington Award Program

The Huntington Award Program is an annual awards program honoring the achievements and accomplishments of local businesses throughout the Huntington area. Recognition is given to those companies that have shown the ability to use their best practices and implemented programs to generate competitive advantages and long-term value.

The Huntington Award Program was established to recognize the best of local businesses in our community. Our organization works exclusively with local business owners, trade groups, professional associations and other business advertising and marketing groups. Our mission is to recognize the small business community’s contributions to the U.S. economy.

SOURCE: Huntington Award Program


Coding in the Fast Lane: The Multi-Threaded Multi-Core World of AMD64

I wrote five contributions for an ebook from AMD Developer Central — and forgot entirely about it! The book, called “Surviving and Thriving in a Multi-Core World: Taking Advantage of Threads and Cores on AMD64,” popped up in this morning’s Google Alerts report. I have no idea why!

Here are the pieces that I wrote for the book, published in 2006. Darn, they still read well! Other contributors include my friends Anderson Bailey, Alexa Weber Morales and Larry O’Brien.

  • Driving in the Fast Lane: Multi-Core Computing for Programmers, Part 1 (page 5)
  • Driving in the Fast Lane: Multi-Core Computing for Programmers, Part 2 (page 8)
  • Coarse-Grained Vs. Fine-Grained Threading for Native Applications, Part 1 (p. 37)
  • Coarse-Grained Vs. Fine-Grained Threading for Native Applications, Part 2 (p. 40)
  • Device Driver & BIOS Development for AMD Systems (p. 87)

I am still obsessed with questionable automotive analogies. The first article begins with:

The main road near my house, called Skyline Drive, drives me nuts. For several miles, it’s a quasi-limited access highway. But for some inexplicable reason, it keeps alternating between one and two lanes in each direction. In the two-lane part, traffic moves along swiftly, even during rush hour. In the one-lane part, the traffic merges back together, and everything crawls to a standstill. When the next two-lane part appears, things speed up again.

Two lanes are better than one — and not just because they can accommodate twice as many cars. What makes the two-lane section better is that people can overtake. In the one-lane portion (which has a double-yellow line, so there’s no passing), traffic is limited to the slowest truck’s speed, or to little-old-man-peering-over-the-steering-wheel-of-his-Dodge-Dart speed. Wake me when we get there. But in the two-lane section, the traffic can sort itself out. Trucks move to the right, cars pass on the left. Police and other priority traffic weave in and out, using both lanes depending on which has more capacity at any particular moment. Delivery services with a convoy of trucks will exploit both lanes to improve throughput. The entire system becomes more efficient, and net flow of cars through those two-lane sections is considerably higher.

Okay, you’ve figured out that this is all about dual-core and multi-core computing, where cars are analogous to application threads, and the lanes are analogous to processor cores.

I’ll have to admit that my analogy is somewhat simplistic, and purists will say that it’s flawed, because an operating system has more flexibility to schedule tasks in a single-core environment under a preemptive multiprocessing environment. But that flexibility comes at a cost. Yes, if I were really modeling a microprocessor using Skyline Drive, cars would be able to pass each other in the single-lane section, but only if the car in front were to pull over and stop.

Okay, enough about cars. Let’s talk about dual-core and multi-core systems, why businesses are interested in buying them, and what implications all that should have for software developers like us.
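The book’s examples date from 2006, but the idea maps directly onto today’s Java. Here’s a minimal modern sketch of my own (not code from the ebook), splitting one job across all available cores with an ExecutorService:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class MultiCoreSum {
        public static void main(String[] args) throws Exception {
            int cores = Runtime.getRuntime().availableProcessors();
            ExecutorService pool = Executors.newFixedThreadPool(cores);

            // Split one big job (summing 0..n-1) into one chunk per core,
            // the software equivalent of opening up extra traffic lanes.
            long n = 100_000_000L;
            long chunk = n / cores;
            List<Future<Long>> results = new ArrayList<>();
            for (int i = 0; i < cores; i++) {
                long start = i * chunk;
                long end = (i == cores - 1) ? n : start + chunk;
                results.add(pool.submit(() -> {
                    long sum = 0;
                    for (long v = start; v < end; v++) sum += v;
                    return sum;
                }));
            }

            long total = 0;
            for (Future<Long> f : results) total += f.get(); // merge the lanes back together
            System.out.println("cores=" + cores + ", total=" + total);
            pool.shutdown();
        }
    }

In Skyline Drive terms, each task is a car and each pool thread is a lane; the final merge is the one-lane stretch you can’t entirely avoid.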

Download and enjoy the book – it’s not gated and entirely free.


SharePoint 2016 On-Premises – Better than ever with a bright future

Excellent story about SharePoint in Computerworld this week. It gives encouragement to those who prefer to run SharePoint in their own data centers (on-premises), rather than in the cloud. In “The Future of SharePoint,” Brian Alderman writes,

In case you missed it, on May 4 Microsoft made it loud and clear it has resuscitated SharePoint On-Premises and there will be future versions, even beyond SharePoint Server 2016. However, by making you aware of the scenarios most appropriate for On-Premises and the scenarios where you can benefit from SharePoint Online, Microsoft is going to remain adamant about allowing you to create the perfect SharePoint hybrid deployment.

The future of SharePoint begins with SharePoint Online, meaning changes, features and functionality will first be deployed to SharePoint Online, and then rolled out to your SharePoint Server On-Premises deployment. This approach isn’t much of a surprise, being that SharePoint Server 2016 On-Premises was “engineered” from SharePoint Online.

Brian was writing about a post on the Microsoft SharePoint blog, one I had overlooked (else I’d have written about it back in May). In the post, “SharePoint Server 2016—your foundation for the future,” the SharePoint Team says,

We remain committed to our on-premises customers and recognize the need to modernize experiences, patterns and practices in SharePoint Server. While our innovation will be delivered to Office 365 first, we will provide many of the new experiences and frameworks to SharePoint Server 2016 customers with Software Assurance through Feature Packs. This means you won’t have to wait for the next version of SharePoint Server to take advantage of our cloud-born innovation in your datacenter.

The first Feature Pack will be delivered through our public update channel starting in calendar year 2017, and customers will have control over which features are enabled in their on-premises farms. We will provide more detail about our plans for Feature Packs in coming months.

In addition, we will deliver a set of capabilities for SharePoint Server 2016 that address the unique needs of on-premises customers.

Now, make no mistake: The emphasis at Microsoft is squarely on Office 365 and SharePoint Online. Or, as the company puts it, SharePoint Server is “powering your journey to the mobile-first, cloud-first world.” However, it is clear that SharePoint On-Premises will continue for some period of time. Later in the blog post, in the FAQ, this is stated quite definitively:

Is SharePoint Server 2016 the last server release?

No, we remain committed to our customer’s on-premises and do not consider SharePoint Server 2016 to be the last on-premises server release.

The best place to learn about SharePoint 2016 is at BZ Media’s SPTechCon, returning to San Francisco from Dec. 5-8. (I am the Z of BZ Media.) SPTechCon, the SharePoint Technology Conference, offers more than 80 technical classes and tutorials — presented by the most knowledgeable instructors working in SharePoint today — to help you improve your skills and broaden your knowledge of Microsoft’s collaboration and productivity software.

SPTechCon will feature the first conference sessions on SharePoint 2016. Be there! Learn more at http://www.sptechcon.com.


Beyond the fatal Tesla crash: Security and connected autonomous cars

Was it a software failure? The recent fatal crash of a Tesla in Autopilot mode is worrisome, but it’s too soon to blame Tesla’s software. According to Tesla on June 30, here’s what happened:

What we know is that the vehicle was on a divided highway with Autopilot engaged when a tractor trailer drove across the highway perpendicular to the Model S. Neither Autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky, so the brake was not applied. The high ride height of the trailer combined with its positioning across the road and the extremely rare circumstances of the impact caused the Model S to pass under the trailer, with the bottom of the trailer impacting the windshield of the Model S. Had the Model S impacted the front or rear of the trailer, even at high speed, its advanced crash safety system would likely have prevented serious injury as it has in numerous other similar incidents.

We shall have to await the results of the NHTSA investigation to learn more. Even if it does prove to be a software failure, at least the software can be improved to try to avoid similar incidents in the future.

By coincidence, a story that I wrote about the security issues related to advanced vehicles, “Connected and Autonomous Cars Are Wonderful and a Safety-Critical Security Nightmare,” was published today, July 1, on CIO Story. The piece was written several weeks ago, and said,

The good news is that government and industry standards are attempting to address the security issues with connected cars. The bad news is that those standards don’t address security directly; rather, they merely prescribe good software-development practices that should result in secure code. That’s not enough, because those processes don’t address security-related flaws in the design of vehicle systems. Worse, those standards are a hodge-podge of different regulations in different countries, and they don’t address the complexity of autonomous, self-driving vehicles.

Today, commercially available autonomous vehicles can parallel park by themselves. Tomorrow, they may be able to drive completely hands-free on highways, or drive themselves to parking lots without any human on board. The security issues, the hackability issues, are incredibly frightening. Meanwhile, companies as diverse as BMW, General Motors, Google, Mercedes, Tesla and Uber are investing billions of dollars into autonomous, self-driving car technologies.

Please read the whole story here.


Crash! Down goes Google Calendar — cloud services are not perfect

Cloud services crash. Of course, non-cloud services crash too — a server in your data center can go down. At least there you can do something about it; if it’s a critical system, you can plan for redundancy and failover.

Not so much with cloud services, as this morning’s failure of Google Calendar clearly shows. The photo shows Google’s status dashboard as of 6:53am on Thursday, June 30.

I wrote about crashes at Amazon Web Services and Apple’s MobileMe back in 2008 in “When the cloud was good, it was very good. But when it was bad it was rotten.”

More recently, in 2011, I covered another AWS failure in “Skynet didn’t take down Amazon Web Services.”

Overall, cloud services are quite reliable. But they are not perfect, and it’s a mistake to think that just because they are offered by huge corporations, they will be error-free and offer 100% uptime. Be sure to work that into your plans, especially if you and your employees rely upon public cloud services to get your job done, or if your customers interact with you through cloud services.


MEF LSO Hackathon at Euro16 brings together open source, open standards

The MEF recently conducted its second LSO Hackathon at a Rome event called Euro16. You can read my story about it here in DiarioTi: LSO Hackathons Bring Together Open Standards, Open Source.

Alas, my coding skills are too rusty for a Hackathon, unless the objective is to develop a fully buzzword compliant implementation of “Hello World.” Fortunately, there are others with better skills, as well as a broader understanding of today’s toughest problems.

Many Hackathons are thinly veiled marketing exercises by companies, designed to find ways to get programmers hooked on their tools, platforms, APIs, etc. Not all! One of the most interesting Hackathons is from the MEF, an industry group that drives communications interoperability. As a standards defining organization (SDO), the MEF wants to help carriers and equipment vendors design products/services ready for the next generation of connectivity. That means building on a foundation of SDN (software defined networks), NFV (network functions virtualization), LSO (lifecycle service orchestration) and CE2.0 (Carrier Ethernet 2.0).

To make all this happen:

  • What the MEF does: Create open standards for architectures and specifications.
  • What vendors, carriers and open source projects do: Write software to those specifications.
  • What the Hackathon does: Give everyone a chance to work together, make sure the code is truly interoperable, and find areas where the written specs might be ambiguous.

Thus, the MEF LSO Hackathons. They bring together a wide swath of the industry to move beyond the standards documents and actually write and test code that implements those specs.

As mentioned above, the MEF just completed its second Hackathon at Euro16. The first LSO Hackathon was at last year’s MEF GEN15 annual conference in Dallas. Here’s my story about it in Telecom Ramblings: The MEF LSO Hackathon: Building Community, Swatting Bugs, Writing Code.

The third LSO Hackathon will be at this year’s MEF annual conference, MEF16, in Baltimore, Nov. 7-10. I will be there as an observer – alas, without the up-to-date, practical skills to be a coding participant.


When do we want automated emails? Now!

I can hear the protesters. “What do we want? Faster automated emails! When do we want them? In under 20 nanoseconds!”

Some things have to be snappy. A Web page must load fast, or your customers will click away. Moving the mouse has to move the cursor without pauses or hesitations. Streaming video should buffer rarely and unobtrusively; it’s almost always better to temporarily degrade the video quality than to pause the playback. And of course, for a touch interface to work well, it must be snappy, which Apple has learned with iOS, and which Google learned with Project Butter.

The same is true with automated emails. They should be generated and transmitted immediately — that is, in under a minute.

I recently went to book a night’s stay at a Days Inn, a part of the Wyndham Hotel Group, and so I had to log into my Wyndham account. Bad news: I couldn’t remember the password. So, I used the password retrieval system, giving my account number and info. The website said to check my e-mail for the reset link. Kudos: That’s a lot better than saying “We’ll mail you your password,” and then sending it in plain text!!

So, I flipped over to my e-mail client. Checked for new mail. Nothing. Checked again. Nothing. Checked again. Nothing. Checked the spam folder. Nothing. Checked for new mail. Nothing. Checked again. Nothing.

I submitted the request for the password reset at 9:15 a.m. The link appeared in my inbox at 10:08 a.m. By that time, I had already booked the stay with Best Western. Sorry, Days Inn! You snooze, you lose.

What happened? The e-mail header didn’t show a transit delay, so we can’t blame the Internet. Rather, it took nearly an hour for the email to leave the originating server. This is terrible customer service, plain and simple.

It’s not merely Wyndham. When I purchase something from Amazon, the confirmation e-mail generally arrives in less than 30 seconds. When I purchase from Barnes & Noble, a confirmation e-mail can take an hour. The worst is Apple: Confirmations of purchases from the iTunes Store can take three days to appear. Three days!

It’s time to examine your policies for generating automated e-mails. You do have policies, right? I would suggest a delay of no more than one minute from the moment the user performs an action that triggers an e-mail to the moment the message is handed to the SMTP server.

Set the policy. Automated emails should go out in seconds — certainly in under one minute. Design for that and test for that. More importantly, audit the policy on a regular basis, and monitor actual performance. If password resets or order confirmations are taking 53 minutes to hit the Internet, you have a problem.
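To make the point concrete, here is a minimal sketch in Python (the host, sender address and one-minute budget are illustrative assumptions, not anyone’s production code) that hands a password-reset message to the SMTP server immediately and logs how long the handoff took:

import smtplib
import time
from email.message import EmailMessage

LATENCY_BUDGET_SECONDS = 60  # policy: hand the message to the SMTP server within one minute

def send_password_reset(smtp_host, sender, recipient, reset_link):
    """Build and hand off the reset email, returning the handoff latency in seconds."""
    start = time.monotonic()
    msg = EmailMessage()
    msg["Subject"] = "Your password reset link"
    msg["From"] = sender
    msg["To"] = recipient
    msg.set_content(f"Reset your password here: {reset_link}")
    with smtplib.SMTP(smtp_host) as smtp:
        smtp.send_message(msg)
    elapsed = time.monotonic() - start
    if elapsed > LATENCY_BUDGET_SECONDS:
        print(f"WARNING: SMTP handoff took {elapsed:.1f} seconds, over the {LATENCY_BUDGET_SECONDS}s budget")
    return elapsed

# Hypothetical usage:
# send_password_reset("smtp.example.com", "noreply@example.com", "guest@example.net", "https://example.com/reset/abc123")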

, , ,

Celebrating Ada Lovelace and doubling the talent pool

Despite some recent progress, women are still woefully underrepresented in technical fields such as software development. There are many academic programs to bring girls into STEM (science, technology, engineering and math) at various stages in their education, from grade school to high school to college. Corporations are trying hard.

It’s not enough. We all need to try harder.

On Oct. 11, 2016, we will celebrate Ada Lovelace Day, honoring the first computer programmer — male or female. Augusta Ada King-Noel, Countess of Lovelace, wrote what is considered the first computer algorithm, for Charles Babbage’s Analytical Engine, in the mid-1800s.

According to the website Finding Ada, this date doesn’t represent her birthday, which is Dec. 10. Rather, they say, “The date is arbitrary, chosen in an attempt to make the day maximally convenient for the most number of people. We have tried to avoid major public holidays, school holidays, exam season, and times of the year when people might be hibernating.” I’d like to think that the scientifically minded Ada Lovelace would find this amusing.

There are great organizations focused on promoting women in technology, such as Women in Technology International (WITI) and the Anita Borg Institute. There are cool projects, like the Wiki Edit-a-Thon sponsored by Brown University, which seeks to correct the historic (and inaccurate) underrepresentation of female scientists in Wikipedia.

Those are good efforts. They still aren’t enough.

Are women good at STEM fields, including software development? Yes. But all too often, they are gender-stereotyped into non-coding parts of the field—when they are hired at all. And certainly the hyper-competitive environment in many tech teams and the death-march culture are not friendly to anyone (male or female) who wants to have a life outside the startup.

Let me share the Anita Borg Institute’s 10 best practices to foster retention of women in technical roles:

  • Collect, analyze and report retention data as it pertains to women in technical roles.
  • Formally train managers in best practices, and hold them accountable for retention.
  • Embed collaboration in the corporate culture to encourage diverse ideas.
  • Offer training programs that raise awareness of and counteract microinequities and unconscious biases.
  • Provide development and visibility opportunities to women that increase technical credibility.
  • Fund and support workshops and conferences that focus on career path experiences and challenges faced by women technologists.
  • Establish mentoring programs on technical and career development.
  • Sponsor employee resource groups for mutual support and networking.
  • Institute flexible work arrangements and tools that facilitate work/life integration.
  • Enact employee-leave policies, and provide services that support work/life integration.

Does your organization have a solid representation of women in technical jobs (not only in technical departments)? Are those women given equal pay for equal work? Are women provided with solid opportunities for professional growth and career advancement? Are you following any of the above best practices?

If so, that’s great news. I’d love to hear about it and help tell your story.

,

A good HR department is the No. 1 secret for a successful startup

It’s not intellectual property. It’s not having code warriors who can turn pizza into algorithms. It’s not even having great angel investors. If you want a successful startup that’s going to keep you in the headlines for your technology and market prowess, you need a great Human Resources department.

Whether your organization has three employees, 30 or 300, it’s a company. That means a certain level of professionalism in administering it. Yes, tech companies love to be led by hotshot engineers who often brag about their inexperience as CEOs. Yes, those companies are often the darlings of the venture capital community. Yes, those CEOs get lots of visibility in the technology media, the financial media and most importantly, social media.

That is not enough. That’s explained very well in Claire Cain Miller’s essay in The New York Times, “Yes, Silicon Valley, Sometimes You Need More Bureaucracy.”

Miller focuses on the 2014 GitHub scandal, where a lack of professionalism in HR led to deep problems in hiring, management and culture.

“GitHub is not unusual. Tech startups with 100 or fewer employees have half as many personnel professionals as companies of the same size in other industries, according to data from PayScale, which makes compensation software and analyzed about 2,830 companies,” Miller writes.

Is HR something that’s simply soft and squishy, a distraction from the main business of cranking out code and generating viral marketing? No. It’s a core function of every business that’s large enough to have employees.

Miller cites a study that found that companies with personnel departments were nearly 40% less likely to fail than the norm, and nearly 40% more likely to go public. That 36-page study, “Organizational Blueprints for Success in High-Tech Startups,” from the University of California, Berkeley, was published in 2002, but provides some interesting food for thought.

The authors, James Baron and Michael Hannan, wrote,

It is by no means uncommon to see a founder spend more time and energy fretting about the scalability of the phone system or IT platform than about the scalability of the culture and practices for managing employees, even in case where that same founder would declare with great passion and sincerity that ‘people are the ultimate source of competitive advantage in my business.’

The study continues,

Any plan for launching a new enterprise should include a road map for evolving the organizational structure and HR system, which parallels the timeline for financial, technological, and growth milestones. We have yet to meet an entrepreneur who told us, on reflection, he or she believes they spent too much time worrying about people issues in the early days of their venture.

What does that mean for you?

• If you are part of the leadership team of a startup or small company, look beyond the tech industry for best practices in human resources management. Just because other small tech firms gloss over HR doesn’t mean that you should. In fact, better HR might be a way to out-innovate your competitors.

• If you are looking at joining a startup or a small company, look at the HR department and the culture. If HR seems casual or ad hoc, and if everyone in the company looks the same, perhaps that’s a company not poised for long-term success. Look for a culture that cares about having a healthy and genuinely diverse workforce—and for policies that talk about ways to resolve problems.

Human resources are as important as technology and financial resources. Without the right leadership in all three areas, you’re in for a rough ride.

, , ,

Retrospective: 2010’s ESDC, the Enterprise Software Development Conference

Today’s serendipitous discovery: A blog post about the Enterprise Software Development Conference (ESDC), produced by BZ Media in March 2010. I was the conference chair of that event; our goal was to try to replicate the wonderful SD West conference, which CMP had discontinued the year before. (I am the “Z” of BZ Media.)

Unfortunately, ESDC was not viable from a business perspective, so we only ran it one time. Even so, we had a great conference, and the attendees, presenters and exhibitors were delighted with the event’s quality and technical content.

One of our top exhibitors was OutSystems. Mike Jones, one of their executives, wrote about the conference in a thoughtful blog post, “ESDC Retrospective.” Mike started with

Last week, the OutSystems team attended the Enterprise Software Development Conference (ESDC) in San Mateo California. This is the first year for this show and, as Alan Zeichick notes, it takes up where the old SD West conference left off. As gold sponsors of the show, we got to both attend the sessions and talk to the conference attendees at the OutSystems booth. I just wanted to share a few highlights & take-aways from the show.

One of his cited highlights was

Another highlight: Kent Beck‘s keynote on “Responsive Design: Efficiency Through Safety.”  This was the first time I had heard Kent speak. He started off by referencing Ed Yourdon‘s work on Systems Design and how it led him to try and distill his own working process for design. This was the premise for his presentation. My take-away was that no matter what you do, your design will change. I think we all accept this as fact – especially for application software. Kent then explained his techniques to reduce the risk when making design changes. For each of his examples I found myself thinking ‘This is not really a problem with the Agile Platform because the TrueChange™ engine will keep you from breaking stuff you did not intend to break, allowing you to move very fast with little risk.” If you are hand-coding, then Kent’s four techniques (as described here by Alan Zeichick) to reduce risk when making change is great advice, but why do that if you don’t have to? BTW, I think Kent would love the Agile Platform.

Thanks, Mike, for the thoughtful writeup. Hard to believe ESDC was more than six years ago. (Read the whole post here.)

, , ,

Special Mac option key symbols – your handy reference

I am often looking for these symbols and can’t find them. So here they are for English language Mac keyboards, in a handy blog format. They all use the Option key.

Note: The Option key is not the Command key, which is marked with the ⌘ (looped square) symbol. Rather, the Option key sits between Control and Command on many (most?) Mac keyboards. These key combinations won’t work on the numeric keypad; you have to use the main part of the keyboard.

The case of the letter/key pressed with the Option key matters. For example, Option+v is the root √ and Option+V (in other words, Option+Shift+v) is the diamond ◊. Another example: Option+7 is the paragraph ¶ and Option+& (that is, Option+Shift+7) is the double dagger ‡. You may simply copy/paste the symbols, if that’s more convenient.

These key combinations should work in most modern Mac applications, and be visible in most typefaces. No guarantees. Your mileage may vary.

SYMBOLS

¡ Option+1 (inverted exclamation)
¿ Option+? (inverted question)
« Option+\ (open double angle quote)
» Option+| (close double angle quote)
© Option+g (copyright)
® Option+r (registered copyright)
™ Option+2 (trademark)
¶ Option+7 (paragraph)
§ Option+6 (section)
• Option+8 (dot)
· Option+( (small dot)
◊ Option+V (diamond)
– Option+- (en-dash)
— Option+_ (em-dash)
† Option+t (dagger)
‡ Option+& (double dagger)
¢ Option+4 (cent)
£ Option+3 (pound)
¥ Option+y (yen)
€ Option+@ (euro)

ACCENTS AND SPECIAL LETTERS

ó Ó Option+e then letter (acute)
ô Ô Option+i then letter (circumflex)
ò Ò Option+` then letter (grave)
õ Õ Option+n then letter (tilde)
ö Ö Option+u then letter (umlaut)
å Å Option+a or Option+A (a-ring)
ø Ø Option+o or Option+O (o-slash)
æ Æ Option+’ or Option+” (ae ligature)
œ Œ Option+q or Option+Q (oe ligature)
fi Option+% (fi ligature)
fl Option+^ (fl ligature)
ç Ç Option+c or Option+C (cedilla)
ß Option+s (German sharp s / eszett)

MATH AND ENGINEERING

÷ Option+/ (division)
± Option++ (plus/minus)
° Option+* (degrees)
¬ Option+l (logical not)
≠ Option+= (not equal)
≥ Option+> (greater or equal)
≤ Option+< (less or equal)
√ Option+v (root)
∞ Option+5 (infinity)
≈ Option+x (approximately equal)
∆ Option+j (delta)
∑ Option+w (summation sign)
Ω Option+z (ohm)
π Option+p (pi)
µ Option+m (micro)
∂ Option+d (derivative)
∫ Option+b (integral)
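
If you ever need to double-check what one of these symbols actually is, a quick Python sketch using the standard unicodedata module will print each character’s code point and official Unicode name. The string of symbols below is just a sample from the tables above:

import unicodedata

symbols = "¡¿«»©®™¶§•·◊–—†‡¢£¥€÷±°¬≠≥≤√∞≈∆∑Ωπµ∂∫"

for ch in symbols:
    # unicodedata.name() raises ValueError for unnamed characters, so supply a default
    print(f"U+{ord(ch):04X}  {ch}  {unicodedata.name(ch, 'UNKNOWN')}")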

, , , ,

Quantify the risk of automotive software failures: The SRR Warranty and Recall Report

Summary of Recall Trends. Source: SRR.

The costs of an automobile recall can be immense for an OEM automobile or light truck manufacturer – and potentially ruinous for a member of the industry’s supply chain. Think about the ongoing Takata airbag scandal, which Bloomberg says could cost US$24 billion. General Motors’ ignition locks recall may have reached $4.1 billion. In 2001, the exploding Firestone tires on the Ford Explorer cost $3 billion to recall. The list goes on and on. That’s all about hardware problems. What about bits and bytes?

Until now, it’s been difficult to quantify the impact of software defects on the automotive industry. Thanks to a new analysis from SRR called “Industry Insights for the Road Ahead: Automotive Warranty and Recall Report 2016,” we have a good handle on this elusive area.

According to the report, there were 63 software-related vehicle recalls from late 2012 to June 2015. That’s based on data from the United States’ National Highway Traffic Safety Administration (NHTSA). The SRR report derived that count of 63 software-related recalls using this methodology (p. 22),

To classify a recall as a software component recall, SRR searched the “Defect Summary” and “Corrective Action” fields of NHTSA’s Recall flat file for the term “software.” SRR’s inquiry captured descriptions of software-related defects identified specifically as such, as well as defects that were to be fixed by updating or changing a vehicle’s software.
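
As a rough illustration of that kind of text filter (this is not SRR’s actual code, and the column names DESC_DEFECT, CORRECTIVE_ACTION and CAMPNO are my assumptions about the layout of NHTSA’s recall flat file), a few lines of Python with pandas would do the job:

import pandas as pd

def find_software_recalls(recalls):
    """Return the subset of recall campaigns whose defect summary or corrective action mentions 'software'."""
    text = recalls["DESC_DEFECT"].fillna("") + " " + recalls["CORRECTIVE_ACTION"].fillna("")
    mask = text.str.contains("software", case=False)
    return recalls.loc[mask]

# Hypothetical usage, assuming a tab-delimited export with a header row:
# recalls = pd.read_csv("FLAT_RCL.txt", sep="\t", dtype=str)
# software_recalls = find_software_recalls(recalls)
# print(software_recalls["CAMPNO"].nunique(), "unique software-related campaigns")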

That methodology led to this analysis (p. 22),

Since the end of 2012, there has been a marked increase in recall activity due to software issues. For the primary light vehicle makes and models we studied, 32 unique software-related recalls affected about 3.6 million vehicles from 2005–2012. However, in a much shorter time period from the end of 2012 to June 2015, there were 63 software-related recalls affecting 6.4 million more vehicles.

And continuing (p. 23),

From less than 5 percent of all recalls in 2011, software-related recalls have risen to almost 15 percent in 2015. Overall, the amount of unique campaigns involving software has climbed dramatically, with nine times as many in 2015 than in 2011…

No surprises there given the dramatically increased complexity of today’s connected vehicles, with sophisticated internal networks, dozens of ECUs (electronic control units with microprocessors, memory, software and network connections), and extensive remote connectivity.

These software defects are not occurring only in systems where one expects to find sophisticated microprocessors and software, such as engine management controls and Internet-connected entertainment platforms. Microprocessors are being used to analyze everything from the driver’s position and state of alertness, to road hazards, to lane changes — and to offer advanced features such as automatic parallel parking.

Where in the car are the software-related vehicle recalls? Since 2006, says the report, recalls have been prompted by defects in areas as diverse as locks/latches, power train, fuel system, vehicle speed control, air bags, electrical systems, engine and engine cooling, exterior lighting, steering, hybrid propulsion – and even the parking brake system.

That’s not all — because not every software defect results in a public and costly recall. That’s the last resort, from the OEM’s perspective. Whenever possible, the defects are either ignored by the vehicle manufacturer, or quietly addressed by a software update next time the car visits a dealer. (If the car doesn’t visit an official dealer for service, the owner may never know that a software update is available.) Says the report (p. 25),

In addition, SRR noted an increase in software-related Technical Service Bulletins (TSB), which identify issues with specific components, yet stop short of a recall. TSBs are issued when manufacturers provide recommended procedures to dealerships’ service departments for fixing problematic components.

A major role of the NHTSA is to record and analyze vehicle failures, and attempt to determine the cause. Not all failures result in a recall, or even in a TSB. However, they are tracked by the agency via Early Warning Reporting (EWR). Explains the report (p. 26),

In 2015, three new software-related categories reported data for the first time:

• Automatic Braking, listed on 21 EWR reports, resulting in 26 injuries and 1 fatality

• Electronic Stability, listed on 6 EWR reports, resulting in 7 injuries and 1 fatality

• Forward Collision Avoidance, listed in 1 EWR report, resulting in 1 injury and no fatalities

The bottom line here, beyond protecting life and property, is the bottom line for the automobile and its supply chain. As the report says in its conclusion (p. 33),

Suppliers that help OEMs get the newest software-aided components to market should be prepared for the increased financial exposure they could face if these parts fail.

About the Report

“Industry Insights for the Road Ahead: Automotive Warranty and Recall Report 2016” was published by SRR: Stout Risius Ross, which offers global financial advisory services. SRR has been in the automotive industry for 25 years, and says, “SRR professionals have more automotive experience in these service areas than any other advisory firm, period.”

This brilliant report — which is free to download in its entirety — was written by Neil Steinkamp, a Managing Director at SRR. He has extensive experience in providing a broad range of business and financial advice to corporate executives, risk managers, in-house counsel and trial lawyers. Mr. Steinkamp has provided consulting services and has been engaged as an expert in numerous matters involving automotive warranty and recall costs. His practice also includes consulting services for automotive OEMs, suppliers and their advisors regarding valuation, transactions and disputes.

, ,

The legacy application decommissioning ceremony

I once designed and coded a campus parking pass management system for an East Coast university. If you had a faculty, staff, student or visitor parking sticker for the campus, it was processed using my green-screen application, which went online in 1983. The university used the mainframe program with minimal changes for about a decade, until a new client/server parking system was implemented.

Today, that sticker application exists on a nine-track tape reel hanging on my wall — and probably nowhere else.

Decommissioning the parking-sticker app was relatively straightforward for the data center team, though of course I hope that it was emotionally traumatic. Data about the stickers was stored in several tables. One contained information about humans: name, address, phone number, relationship with the university. The other was about vehicles: make, year and color; license plate number; date of sticker assignment; sticker type and serial number; expiration date; date of cancellation. We handled some interesting exceptions. For example, some faculty were issued “floating” stickers that weren’t assigned to specific vehicles. That sort of thing.
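
Purely for illustration (the original was a 1983 green-screen mainframe application, not SQL, and these field names are my reconstruction rather than the real schema), the shape of that data might look something like this:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE person (
    person_id   INTEGER PRIMARY KEY,
    name        TEXT,
    address     TEXT,
    phone       TEXT,
    affiliation TEXT   -- faculty, staff, student or visitor
);
CREATE TABLE sticker (
    sticker_serial TEXT PRIMARY KEY,
    person_id      INTEGER REFERENCES person(person_id),
    sticker_type   TEXT,   -- includes 'floating' stickers not tied to one vehicle
    vehicle_make   TEXT,
    vehicle_year   INTEGER,
    vehicle_color  TEXT,
    license_plate  TEXT,
    assigned_on    TEXT,
    expires_on     TEXT,
    cancelled_on   TEXT
);
""")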

Fortunately, historical info in the sticker system was not needed past a year or two. While important for campus security (“Who does that car parked in a no-parking zone belong to?”), it wasn’t data that needed to be retained for any length of time for legal or compliance reasons. Shutting off the legacy application was as simple as, well, shutting off the legacy application.

It’s not always that simple. Other software on campus in the 1980s — and much of the software that your team writes — needed to be retained, sometimes according to campus internal regulations, other times due to government or industry rules. How long do you need to keep payroll data? Transaction data for sales from your website? Bids for products and services, and the documentation that explains how the bids were solicited?

Any time you get into regulated industries, you have this issue. Financial services, aerospace, safety-oriented embedded systems, insurance, human resources, or medical: Information must be retained for compliance, and must be produced on demand by auditors, queries from litigators during eDiscovery, regulatory investigations, even court subpoenas.

That can make it hard — very hard — to turn off an application you no longer need. Even if the software is recording no new transactions, retention policies might necessitate keeping it alive for years. Maybe even decades, depending on the type of data being retained, and on the regulatory requirements of your industry. Think about drug records from pharmaceutical companies, or component sourcing for automobile manufacturers.

Data and application retention has many enterprise implications. For example: Before you deploy an application and its data onto a cloud provider or SaaS platform, you should ask: Will that application and its data need to be retained? If so, will the provider still be around to provide access to it? If not, you need a plan to bring the systems back in-house (even if they are no longer needed), archive the data outside the application in a way that conforms with regulatory requirements for retention and access, and only then decommission the application.

A word of caution: I don’t know how long nine-track tapes last, especially if they are not well shielded. My 20-year-old tape was not protected against heat or magnetism — hey, it was thrown into a box. There’s a better-than-good chance it is totally unreadable. Don’t rely upon unproven technology or suppliers for your own data archive, especially if the data must be retained for compliance purposes.

, , , ,

Blast from the past: Facebook’s tech infrastructure from 2008

Fire up the WABAC Machine, Mr. Peabody: In June 2008, I wrote a piece for MIT Technology Review explaining “How Facebook Works.”

The story started with this:

Facebook is a wonderful example of the network effect, in which the value of a network to a user is exponentially proportional to the number of other users that network has.

Facebook’s power derives from what Jeff Rothschild, its vice president of technology, calls the “social graph”–the sum of the wildly various connections between the site’s users and their friends; between people and events; between events and photos; between photos and people; and between a huge number of discrete objects linked by metadata describing them and their connections.

Facebook maintains data centers in Santa Clara, CA; San Francisco; and Northern Virginia. The centers are built on the backs of three tiers of x86 servers loaded up with open-source software, some that Facebook has created itself.

Let’s look at the main facility, in Santa Clara, and then show how it interacts with its siblings.

Read the whole story here… and check out Facebook’s current Open Source project pages too.

, , ,

Apple WWDC 2016 becomes Apple WTF – No show stoppers there

San Francisco – Apple’s Worldwide Developers Conference 2016 had plenty of developers, and plenty of WWDC news about updated operating systems, redesigned apps, sexy APIs, expansion of Apple Pay and a long-awaited version of Siri for the Macintosh.

Call me underwhelmed. There was nothing, nothing, nothing, to make me stand up and cheer. Nothing inspired me to reach for my wallet. (Yes, I know it’s a developer conference, but still.) I’m an everyday Apple user who is typing this on a MacBook Air, who reads news and updates Facebook on an iPad mini, and who carries an iPhone as my primary mobile phone. Yawn.

If you haven’t read all the announcements from Apple this week, or didn’t catch the WWDC keynote live or streaming, Wired has the best single-story write-up.

Arguably the biggest “news” is that Apple has changed its desktop operating system naming convention again. It used to be Mac OS, then Mac OS X, then just OS X. Now it is macOS. The next version will be macOS 10.12 “Sierra.” Yawn.

I am pleased that Siri, Apple’s voice recognition software, is finally coming to the Mac. However, Siri itself is not impressive. It’s terrible for dictation – Dragon is better. On the iPhone, it misinterprets commands far more often than Microsoft’s Cortana does, and its sphere of influence is pretty limited: It can launch third-party apps, for example, but can’t control them because the APIs are locked down.

Will Siri on macOS be better? We can be hopeful, since Apple will provide some API access. Still, I give Microsoft the edge with Cortana, and both are lightyears behind Amazon’s Alexa software for the Echo family of smart home devices.

There are updates to iOS, but they are mainly window dressing. There’s tighter integration between iOS and the Mac, but none of it is going to move the needle. Use an iPhone to unlock a Mac? Copy-paste from iOS to the Mac? Be able to hide built-in Apple apps on the phone? Some of the apps have a new look? Nice incremental upgrades. No excitement.

Apple Watch. I haven’t paid much attention to watchOS, which is being upgraded, because I can’t get excited about the Apple Watch until next-generation hardware has multiple-day battery life and an always-on time display. Until then, I’ll stick with my Pebble Time, thank you.

There are other areas where I don’t have much of an opinion, like the updates to Apple Pay and Apple’s streaming music services. Similarly, I don’t have much experience with Apple TV and tvOS. Those may be important. Or maybe not. Since my focus is on business computing, and I don’t use those products personally, they fall outside my domain.

So why were these announcements from WWDC so — well — uninspiring? Perhaps Apple is hitting a dry patch. Perhaps they need to find a new product category to dominate; remember, Apple doesn’t invent things, it “thinks different” and enters and captures markets by creating stylish products that are often better than other companies’ clunky first-gen offerings. That’s been true in desktop computers, notebooks, smartphones, tablets, smart watches, cloud services and streaming music – Apple didn’t invent those categories, and was not first to market, not even close.

Apple needs to do something bold to reignite excitement and to truly usher in the Tim Cook era. Bringing Siri to the desktop, redesigning its Maps app, using the iPhone to unlock your desktop Mac, and adding a snazzy Minnie Mouse watch face don’t move the needle.

I wonder what we’ll see at WWDC 2017. Hopefully a game-changer.

, , , ,

A Seven-Point Plan for Automotive Cybersecurity

I am hoovering directly from the blog of my friend Arthur Hicken, the Code Curmudgeon:

Last week with Alan Zeichick and I did a webinar for Parasoft on automotive cybersecurity. Now Alan thinks that cybersecurity is an odd term, especially as it applies to automotive and I mostly agree with him. But appsec is also pretty poorly fitted to automotive so maybe we should be calling it AutoSec. Feel free to chime-in using the comments below or on twitter.

I guess the point is that as cars get more complicated and get more “smart” parts and get more connected (The connected car) as part of the “internet of things”, you will start to see more and more automotive security breaches occurring. From taking over the car to stealing data to triggering airbags we’ve already had several high-profile incidents which you can see in my IoT Hall-of-Shame.

To help out we’ve put together a high-level overview of a 7-point plan to get you started. In the near future we’ll be diving into detail on each of these topics, including how standards can help you not only get quality but safety and security, the role of black-box, pen-test, and DAST as well as how to get ahead of the curve and harden your vehicle software using (SAST) and hybrid testing (IAST).

The webinar was recorded for your convenience, so be sure and check it out. If you have automotive software topics that are near and dear to your heart, but sure to let me know in the comments or on Twitter or Facebook.

Okay, the webinar was back in February, but the info didn’t appear on my blog then. Here it is now. My apologies for the oversight. Watch and enjoy the webinar!

, , , ,

The most important plug-in for Customer Experience Management software: Humans

No smart software would make the angry customer less angry. No customer relationship management platform could understand the problem. No sophisticated HubSpot or Salesforce or Marketo algorithm could comprehend that a piece of artwork, brought to a nationwide framing store location in October, wouldn’t be finished before Christmas – as promised. While an online order tracking system would keep the customer informed, it wouldn’t keep the customer satisfied.

Customer Experience Management (CEM). That’s the hot new buzzword for directly engaging the customer. Contrast that with Customer Relationship Management (CRM), which is more about the back-end tracking of customers, leads and orders.

Think about how Amazon.com or FedEx or Netflix keep you constantly informed about what’s happening with your products and services. They have realized that the key to customer success is equal parts product/service excellence and communications excellence. When I was a kid, you mailed a check and an order form to Sears Roebuck, and a few weeks later a box showed up in the mail. That was great customer service in the 1960s and 1970s. No more. We demand communications. Proactive communications. Effective, empathetic communications.

One of the best ways to make an unhappy customer happy is to empower a human to do whatever it takes to get things right. If possible, that should be the first person the customer talks to, so the problem gets solved as quickly as possible, and without adding “dropped calls” or “too many transfers” to the litany of complaints. A CEM platform should be designed with this in mind.

I’ve written a story about the non-software factors required for effective CEM platforms for Pipeline Magazine. Read the story: “CEM — Now with Humans!”

, , ,

Too slow, didn’t wait: The five modern causes of slow website loads

Let’s explore the causes of slow website loads. There are obviously some delays that are beyond our control — like the user being on a very slow mobile connection.

For the most part, though, our website’s load time is up to us as developers and administrators. We need to do everything possible to accelerate the experience, and in fact I would argue that load time may be the single most important aspect of your site. That’s especially true of your home page, but also of other pages, especially if there are deep links to them from search engines, other Internet sites, or your own marketing emails and tweets.

We used to say that the biggest cause of slow websites was large images, especially too-large images that are downloaded to the browser and dynamically resized. Those are real issues, even today, and you should optimize your site to push out small graphics instead of very large images. Images are no longer the main culprit, however.
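
One practical way to do that, sketched here with the Pillow imaging library (the target widths and quality setting are arbitrary examples), is to pre-generate smaller variants at build or upload time so the browser is never handed a huge original just to scale it down:

from pathlib import Path
from PIL import Image

TARGET_WIDTHS = (480, 960, 1600)

def make_responsive_variants(source, out_dir="resized"):
    """Write one resized JPEG per target width and return the new file paths."""
    Path(out_dir).mkdir(exist_ok=True)
    written = []
    with Image.open(source) as img:
        for width in TARGET_WIDTHS:
            if width >= img.width:
                continue  # never upscale
            height = round(img.height * width / img.width)
            variant = img.resize((width, height), Image.LANCZOS)
            out_path = Path(out_dir) / f"{Path(source).stem}-{width}w.jpg"
            variant.convert("RGB").save(out_path, "JPEG", quality=80, optimize=True)
            written.append(out_path)
    return written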

Read my recent article in the GoDaddy Garage, “Are slow website load times costing you money and pageviews?” to see the five main causes of slow website loads, and get some advice about what to do about them.