Nothing you share on the Internet is guaranteed to be private to you and your intended recipient(s). Not on Twitter, not on Facebook, not on Google+, not using Slack or HipChat or WhatsApp, not in closed social-media groups, not via password-protected blogs, not via text message, not via email.

Yes, there are “privacy settings” on Facebook and other social media tools, but those are imperfect at best. You should not trust Facebook to keep your secrets.

If you put posts or photos onto the Internet, they are no longer yours to control. Accept that they can be appropriated and redistributed by others. How? Many ways, including:

  • Your emails and texts can be forwarded
  • Your Facebook and Twitter posts and direct-messages can be screen-captured
  • Your photos can be downloaded and then uploaded by someone else

Once the genie is out of the bottle, it’s gone forever. Poof! So if there’s something you definitely don’t want to become public, don’t put it on the Internet.

(I wrote this after seeing a dear friend angered that photos of her little children, which she shared with her friends on Facebook, had been re-posted by a troll.)

I can’t trust the Internet of Things. Neither can you. There are too many players and too many suppliers of the technology that can introduce vulnerabilities in our homes, our networks – or elsewhere. It’s dangerous, my friends. Quite dangerous. In fact, it can be thought of as a sort of Fifth Column, but not in the way many of us expected.

Merriam-Webster defines a Fifth Column as “a group of secret sympathizers or supporters of an enemy that engage in espionage or sabotage within defense lines or national borders.” In today’s politics, there’s a lot of talk about secret sympathizers sneaking across national borders, such as terrorists posing as students or refugees. Such “bad actors” are generally part of an organization, recruited by state actors, and embedded into enemy countries for long-term penetration of society.

There have been many real-life Fifth Column activists in recent global history. Think about Kim Philby and Anthony Blunt, part of the “Cambridge Five” who worked for spy agencies in the United Kingdom in the post-World War II era, but who themselves turned out to be double agents working for the Soviet Union. Fiction, too, is replete with Fifth Column spies. They’re everywhere in James Bond movies and John le Carré novels.

Am I too paranoid?

Let’s bring our paranoia (or at least, my paranoia) to the Internet of Things, and start by way of the late 1990s and early 2000s. I remember quite clearly the introduction of telco and network routers by Huawei, and concerns that the Chinese government may have embedded software into those routers in order to surreptitiously listen to telecom networks and network traffic, to steal intellectual property, or to do other mischief like disable networks in the event of a conflict. (This was before the term “cyberwarfare” was widely used.)

Recall that Huawei was founded by a former engineer in the Chinese People’s Liberation Army, and the company was heavily supported by Beijing. There were also lawsuits alleging that Huawei infringed on Cisco’s intellectual property – i.e., stole its source code. Thus, there was plenty of concern surrounding the company and its products.

Read my full story about this, published in Pipeline Magazine, “The Surprising and Dangerous Fifth Column Hiding Within the Internet of Things.”

Modern medical devices increasingly leverage microprocessors and embedded software, as well as sophisticated communications connections, for life-saving functionality. Insulin pumps, for example, rely on a battery, pump mechanism, microprocessor, sensors, and embedded software. Pacemakers and cardiac monitors also contain batteries, sensors, and software. Many devices also have WiFi- or Bluetooth-based communications capabilities. Even the intravenous drug-delivery systems in hospital rooms are controlled by embedded microprocessors and software, and are frequently connected to the institution’s network. But these innovations also mean that a software defect can cause a critical failure or security vulnerability.

In 2007, former vice president Dick Cheney famously had the wireless capabilities of his pacemaker disabled. Why? He was concerned “about reports that attackers could hack the devices and kill their owners.” Since then, the vulnerabilities caused by the larger attack surface area on modern medical devices have gone from hypothetical to demonstrable, in part due to the complexity of the software, and in part due to the failure to properly harden the code.

In October 2011, The Register reported that “a security researcher has devised an attack that hijacks nearby insulin pumps, enabling him to surreptitiously deliver fatal doses to diabetic patients who rely on them.” The attack worked because the pump contained a short-range radio that allows patients and doctors to adjust its functions. The researcher showed that, by using a special antenna and custom-written software, he could locate and seize control of any such device within 300 feet.

A report published by Independent Security Evaluators (ISE) shows the danger. The report examined 12 hospitals, and the organization concluded “that remote adversaries can easily deploy attacks that manipulate records or devices in order to fully compromise patient health” (p. 25). Later in the report, the researchers show how they demonstrated the ability to manipulate the flow of medicine or blood samples within the hospital, resulting in the delivery of improper medication types and dosages (p. 37) – and to do all this from the hospital lobby. They were also able to hack into and remotely control patient monitors and breathing tubes – and trigger alarms that might cause doctors or nurses to administer unneeded medications.

Read more in my blog post for Parasoft, “What’s the Cure for Software Defects and Vulnerabilities in Medical Devices?”

Think about alarm systems in cars. By default, many automobiles don’t come with an alarm system installed from the factory. That’s for three main reasons: it lowers the base sticker price on the car, creates a lucrative up-sell opportunity, and allows for variations on alarms to suit local regulations.

My old 2004 BMW 3-series convertible (E46), for example, came pre-wired for an alarm. All the dealer had to do, upon request (and payment of $$$) was install a couple of sensors and activate the alarm in the car’s firmware. Voilà! Instant protection. Third-party auto supply houses and garages, too, were delighted that the car didn’t include the alarm, since that made it easier to sell one to worried customers, along with a great deal on a color-changing stereo head unit, megawatt amplifier and earth-shattering sub-woofer.

Let’s move from cars to cybersecurity. The dangers are real, and as an industry, it’s in our best interest to solve this problem, not by sticking our heads in the sand, not by selling aftermarket products, but by a two-fold approach: 1) encouraging companies to make more secure products; and 2) encouraging customers to upgrade or replace vulnerable products — even if there’s not a dollar, pound, euro, yen or renminbi of profit in it for us. Too often, though, the industry’s attitude has been:

  • If you’re a security hardware, software, or service company, the problem of malicious bits traveling over broadband, wireless and the Internet backbone is not your problem. Rather, it’s an opportunity to sell products. Hurray for one-time sales, double hurray for recurring subscriptions.
  • If you’re a carrier, the argument goes, all you care about is the packets, and the reliability of your network. The service level agreement provided to consumers and enterprises talks about guaranteed bandwidth, up-time availability, and time to recover from failures; it certainly doesn’t promise that devices connected to your service will be free of malware or safe from hacking. Let customers buy firewalls and endpoint protection – and hey, if we offer that as a service, that’s a money-making opportunity.

Read more about this subject in my latest article for Pipeline Magazine, “An Advocate for Safer Things.”

Cloud-based firewalls come in two delicious flavors: vanilla and strawberry. Both flavors are software that checks incoming and outgoing packets to filter against access policies and block malicious traffic. Yet they are also quite different. Think of them as two essential network security tools: Both are designed to protect you, your network, and your real and virtual assets, but in different contexts.

Disclosure: I made up the terms “vanilla firewall” and “strawberry firewall” for this discussion. Hopefully they help us differentiate between the two models as we dig deeper.

Let’s start with a quick overview:

  • Vanilla firewalls are usually stand-alone products or services designed to protect an enterprise network and its users — like an on-premises firewall appliance, except that it’s in the cloud. Service providers call this a software-as-a-service (SaaS) firewall, security as a service (SECaaS), or even firewall as a service (FaaS).
  • Strawberry firewalls are cloud-based services that are designed to run in a virtual data center using your own servers in a platform-as-a-service (PaaS) or infrastructure-as-a-service (IaaS) model. In these cases, the firewall application runs on the virtual servers and protects traffic going to, from, and between applications in the cloud. The industry sometimes calls these next-generation firewalls, though the term is inconsistently applied and sometimes refers to any advanced firewall system running on-prem or in the cloud.
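
Whichever flavor you choose, the underlying mechanism is the same: compare each packet against an ordered list of access rules and block whatever the policy forbids. Here is a minimal sketch of that rule-matching logic in Python. It is my own illustration, with made-up rules, not any vendor's implementation:

```python
import ipaddress

# Each rule: (action, source network, destination port). First match wins.
# These rules are hypothetical examples, not a recommended policy.
RULES = [
    ("allow", "10.0.0.0/8", 443),   # internal clients may reach HTTPS
    ("deny",  "0.0.0.0/0",  23),    # block telnet from anywhere
    ("allow", "0.0.0.0/0",  80),    # public HTTP is allowed
]

def evaluate(src_ip: str, dst_port: int) -> str:
    """Return the action of the first matching rule, defaulting to deny."""
    for action, network, port in RULES:
        if dst_port == port and ipaddress.ip_address(src_ip) in ipaddress.ip_network(network):
            return action
    return "deny"  # default-deny is the usual firewall posture

print(evaluate("10.1.2.3", 443))    # allow
print(evaluate("203.0.113.9", 23))  # deny
```

The difference between the two flavors is where this logic runs and whose traffic flows through it, not the filtering itself.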

So why do we need these new firewalls? Why not stick a 1U firewall appliance into a rack, connect it up to the router, and call it good? Easy: Because the definition of the network perimeter has changed. Firewalls used to be like guards at the entrance to a secured facility. Only authorized people could enter that facility, and packages were searched as they entered and left the building. Moreover, your users worked inside the facility, and the data center and its servers were also inside. Thus, securing the perimeter was fairly easy. Everything inside was secure, everything outside was not secure, and the only way in and out was through the guard station.

Intrigued? Hungry? Both? Please read the rest of my story, called “Understanding cloud-based firewalls,” published on Enterprise.nxt.

What’s the biggest tool in the security industry’s toolkit? The patent application. Security thrives on innovation, and always has, because throughout recorded history, the bad guys have always had the good guys at a disadvantage. The only way to respond is to fight back smarter.

Sadly, fighting back smarter isn’t always what happens. At least, it wasn’t much in evidence looking over the vendor offerings at RSA Conference 2017, held mid-February in San Francisco. Some of the products and services wouldn’t have seemed out of place a decade ago. Oh, look, a firewall! Oh look, a hardware device that sits on the network and scans for intrusions! Oh, look, a service that trains employees not to click on phishing spam!

Fortunately, some companies and big thinkers are thinking in new ways about the types of attacks… and about the best ways to protect against them, detect when those protections fail, respond when attacks are detected, and share information about those attacks.

The battle, after all, is asymmetric. Think about your typical target: It’s a business or a government organization or a military or a person. It is known. It can be identified. It can’t hide, or it can’t hide for long. Its defenses, or at least its outer perimeter, can be seen and tested. And its defenses can be neutralized by someone who spills its security secrets and vulnerabilities, whether through spying or social engineering.

Knowing the enemy

By contrast, while attackers know who the target is, the target doesn’t know who the attackers are. There may be many attackers, and they can shift targets on short notice, going after the biggest prize or the weakest one. They can swamp the target with attacks. If one attacker is neutralized, the other attackers are still a threat. And in fact, even the attackers don’t know who the other attackers are. Their lack of coordination is a strength.

In cyberwarfare, as in real warfare, a single successful incursion can have incredible consequences. With one solid foothold in an endpoint – whether that endpoint is on a phone or a laptop, on a server or in the cloud – the bad guys are in a good position to gain more intelligence, seek out credentials, undermine defenses, and establish new footholds.

A Failed Approach

The posture of the cybersecurity industry – and of info sec professionals and the CISO – must shift. For years, the focus was almost exclusively on prevention. Install a firewall, and keep that firewall up to date! Install antivirus software, and keep adding signatures! Install intrusion detection systems, and then upgrade them to intrusion prevention systems!

That approach failed, just as an approach to medicine that focuses exclusively on wellness, healthy eating and taking vitamins will fail. The truth is that breaches happen, in part because organizations don’t do a perfect job with their prevention methods, and in part because bad guys find new weaknesses that nobody considered, from zero-day software vulnerabilities to clever new spearphishing techniques. A breach is inevitable, the industry has admitted. Now, the challenge is to detect that breach quickly, move swiftly to isolate the damage, and then identify root causes so that future attacks using that same vulnerability won’t succeed.

Meanwhile, threat intelligence tools allow businesses to share information, carefully and confidentially. When one company is attacked, others can learn how to guard against that same attack vector. Hey, criminals share information about vulnerabilities using the dark web – so let’s learn from their example.

At RSA Conference 2017, most of the messages were same-old, same-old. Fortunately, not all: I was delighted to see a renewed emphasis at some companies, and in some keynotes, on innovation. Not merely to keep up with the competition or to realize a short-term advantage over cybercriminals, but rather continuous, long-term investment focused on the constantly changing nature of cybersecurity. Security thrives on innovation. Because the bad guys innovate too.

Everyone has received those crude emails claiming to be from your bank’s “Secuirty Team,” telling you to click a link to “reset you account password.” It’s pretty easy to spot those emails, with all the misspellings, the terrible formatting, and the bizarre “reply to” email addresses at domains halfway around the world. Other emails of that sort ask you to review an unclothed photo of an A-list celebrity, or open an attached document that tells you what you’ve won.

We can laugh. However, many people fall for those phishing scams — and willingly surrender their bank account numbers and passwords, or install malware, such as ransomware.

Less obvious, and more effective, are attacks carefully crafted to appeal to a high-value individual, such as a corporate executive or systems administrator. Despite their usual technological sophistication, even these people can be fooled if the spearphishing email is good enough – spearphishing being the term for a phishing email designed specifically to entrap a certain person.

What’s the danger? Plenty. Spearphishing emails that pretend to be from the CEO can convince a corporate accounting manager to wire money to an overseas account. Called the “Wire Transfer Scam,” this ploy has been around for several years and still works, costing hundreds of millions of dollars, according to the FBI.

These types of scams can hurt individuals as well, by exposing their private financial information. In February 2017, about 7,700 employees of the Manatee School District in Florida had their taxpayer numbers stolen when a payroll employee responded to what she thought was a legitimate query from a district officer:

Forward all schools employees 2016 W2 forms to me attached and sent in PDF, I will like to have them as soon as possible for board review. Thanks.

It was a scam, and the scammers now have each employee’s W-2, a key tax document in the United States. The cost of the damage: unknown at this point, but it’s bad for the school district and for the employees as well. Sadly, this is not a new threat: The U.S. Internal Revenue Service had warned about this exact phishing scam in March 2016.

The cybercriminals behind spearphishing are continuing to innovate. Fortunately, the industry is fighting back. Menlo Security, a leading security company, recently uncovered a sophisticated spearphishing attack at a well-known enterprise. While it’s understandable that the victim would decline to be identified, Menlo Security was able to provide some details on the scheme – which incorporated multiple scripts to truly customize the attack and trick the victim into disclosing key credentials.

  • The attackers performed various checks on the password entered by the victim and their IP address to determine whether it was a true compromise versus somebody who had figured out the attack.
  • The attackers supported various email providers, serving custom pages based on the email domain. For example, a victim with a Gmail address would be served a page that looked like a Gmail login page. (A sketch after this list shows how little code that customization takes.)
  • The attackers exfiltrated the victim’s personally identifiable information (PII) to an attacker-controlled account.
  • The attacker relied heavily on several key scripts to execute the phishing campaign, and to obtain the victim’s IP address in addition to the victim’s country and city.
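
To make concrete how cheap that per-domain customization is, here is an illustrative sketch (mine, not a reconstruction of the actual attack scripts; the template names are invented). The serving logic only needs to map the victim's email domain to a look-alike login page:

```python
# Illustrative only: shows why per-domain phishing pages are trivial to build.
# The template filenames are hypothetical.
LOOKALIKE_TEMPLATES = {
    "gmail.com":   "gmail_login.html",
    "outlook.com": "outlook_login.html",
    "yahoo.com":   "yahoo_login.html",
}

def pick_template(victim_email: str) -> str:
    """Choose the look-alike page that matches the victim's email provider."""
    domain = victim_email.rsplit("@", 1)[-1].lower()
    # Fall back to a generic page when the domain isn't recognized.
    return LOOKALIKE_TEMPLATES.get(domain, "generic_login.html")

print(pick_template("someone@gmail.com"))  # gmail_login.html
```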

Phishing and spearphishing have come a long way from those crude emails – which still work, believe it or not. We can’t count on spotting bad spelling and laughable return-address domains on emails to help identify the fraud, because the hackers will figure that out, and use native English speakers and spellcheck. The only solution will be, must be, a technological one.

What’s on the industry’s mind? Security and mobility are front-and-center of the cerebral cortex, as two of the year’s most important events prepare to kick off.

The Security Story

At the RSA Conference 2017 (February 13-17 in San Francisco), expect to see the best of the security industry, from solutions providers to technology firms to analysts. RSA can’t come too soon.

Ransomware, which exploded into the public’s mind last year with high-profile incidents, continues to run rampant. Attackers are turning to ever-bigger targets, with ever-bigger fallout. It’s not enough that hospitals are still being crippled (this was big in 2016), but hotel guests are locked out of their rooms, police departments are losing important crime evidence, and even CCTV footage has been locked away.

What makes ransomware work? Human weakness, for the most part. Many successful ransomware attacks begin with either generalized phishing or highly sophisticated and targeted spearphishing. Once the target user has clicked on a link in a malicious email or website, odds are good that his/her computer will be infected. From there, the malware can do more than encrypt data and request a payout. It can also spread to other computers on the network, install spyware, search for unpatched vulnerabilities and cause untold havoc.

Expect to hear a lot about increasingly sophisticated ransomware at RSA. We’ll also see solutions to help, ranging from ever-more-sophisticated email scanners to endpoint security tools, isolation platforms, and tools to prevent malware from spreading beyond the initially affected machine.

Also expect to hear plenty about artificial intelligence as the key to preventing and detecting attacks that evade traditional technologies like signatures. AI has the ability to learn and respond in ways that go far beyond anything that humans can do – and when coupled with increasingly sophisticated threat intelligence systems, AI may be the future of computer security.

The Mobility Story

Halfway around the world, mobility is only part of the story at Mobile World Congress (February 27 – March 2 in Barcelona). There will be many sessions about 5G wireless, which can provision not only traditional mobile users, but also industrial controls and the Internet of Things. AT&T recently announced that it will launch 5G service (with peak speeds of 400Mbps or better) in two American cities, Austin and Indianapolis. While the standards are not yet complete, that’s not stopping carriers and the industry from moving ahead.

Also key to the success of all mobile platforms is cloud computing. Microsoft is moving more aggressively to the cloud, going beyond Azure and Office 365 with a new Windows 10 Cloud edition, a simplified experience designed to compete against Google’s Chrome platform.

The Internet of Things is also roaring to life, and it means a lot more than fitness bands and traffic sensors. IoT applications are showing up in everything from industrial controls to embedded medical devices to increasingly intelligent cars and trucks. What makes it work? Big batteries, reliable wireless, industry standards and strong security. Every type of security player is involved with IoT, from the cloud to wireless to endpoint protection. You’ll hear more about security at Mobile World Congress than in the past, because the threats are bigger than ever. And so are the solutions.

The best way to have a butt-kicking cloud-native application is to write one from scratch. Leverage the languages, APIs, and architecture of the chosen cloud platform before exploiting its databases, analytics engines, and storage. As I wrote for Ars Technica, this will allow you to take advantage of the wealth of resources offered by companies like Microsoft, with their Azure PaaS (Platform-as-a-Service) offering or by Google Cloud Platform’s Google App Engine PaaS service.

Sometimes, however, that’s not the job. Sometimes, you have to take a native application running on a server in your local data center or colocation facility and make it run in the cloud. That means virtual machines.

Before we get into the details, let’s define “native application.” For the purposes of this exercise, it’s an application written in a high-level programming language, like C/C++, C#, or Java. It’s an application running directly on a machine talking to an operating system, like Linux or Windows, that you want to run on a cloud platform like Windows Azure, Amazon Web Services (AWS), or Google Cloud Platform (GCP).

What we are not talking about is an application that has already been virtualized, such as one already running within VMware’s ESXi or Microsoft’s Hyper-V virtual machines. Sure, moving an ESXi or Hyper-V application running on-premises into the cloud is an important migration that may improve performance and add elasticity while switching capital expenses to operational expenses. Important, yes, but not a challenge. All the virtual machine giants and cloud hosts have copious documentation to help you make the switch… which amounts to basically copying the virtual machine file onto a cloud server and turning it on.

Many possible scenarios exist for moving a native datacenter application into the cloud. They boil down to two main types of migrations, and neither is automatically the right choice:

The first is to create a virtual server within your chosen cloud provider, perhaps running Windows Server or running a flavor of Linux. Once that virtual server has been created, you migrate the application from your on-prem server to the new virtual server—exactly as you would if you were moving from one of your servers to a new server. The benefits: the application migration is straightforward, and you have 100-percent control of the server, the application, and security. The downside: the application doesn’t take advantage of cloud APIs or other special servers. It’s simply a migration that gets a server out of your data center. When you do this, you are leveraging a type of cloud called Infrastructure-as-a-Service (IaaS). You are essentially treating the cloud like a colocation facility.
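
As a concrete illustration of the IaaS route, here is roughly what creating that virtual server looks like with AWS's boto3 SDK for Python. This is a hedged sketch: the AMI ID, key pair, region, and instance type are placeholders you would replace with your own values.

```python
import boto3

# Create a client for the EC2 service; the region is an example.
ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch one virtual server. ImageId and KeyName are placeholders --
# substitute the machine image and key pair from your own account.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical Linux AMI
    InstanceType="t2.medium",
    KeyName="my-keypair",
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched {instance_id}; migrate the app as you would to any new server.")
```

From there, the migration is conventional: install your dependencies, copy the application over, and cut over DNS.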

The second is to see if your application code can be ported to run within the native execution engine provided by the cloud service. This is called Platform-as-a-Service (PaaS). The benefits are that you can leverage a wealth of APIs and other services offered by the cloud provider. The downsides are that you have to ensure that your code can work on the service (which may require recoding or even redesign) in order to use those APIs or even to run at all. You also don’t have full control over the execution environment, which means that security is managed by the cloud provider, not by you.
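
What “porting to the platform” means in practice is reshaping your code to fit the provider's execution model. Here is a minimal sketch, assuming Google App Engine's Python standard environment and the Flask framework it supports (the names and routes are illustrative):

```python
# main.py -- the minimal shape of a web app for a PaaS execution engine.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # On PaaS you write request handlers; the provider owns the server,
    # the operating system, the scaling, and much of the security surface.
    return "Hello from the cloud!"

if __name__ == "__main__":
    # Local testing only; in production, the platform starts the app itself.
    app.run(host="127.0.0.1", port=8080, debug=True)
```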

And of course, there’s the third option mentioned at the beginning: writing an entirely new application, native to the cloud provider’s PaaS. That’s still the best option, if you can do it. But our task today is to focus on migrating an existing application.

Let’s look into this more closely, via my recent article for Ars Technica, “Great app migration takes enterprise “on-prem” applications to the Cloud.”

Las Vegas, January 2017 — “Alexa, secure the enterprise against ransomware.” Artificial intelligence is making tremendous headway, as seen at this year’s huge Consumer Electronics Show (CES). We’re seeing advances that leverage AI in everything from speech recognition to the Internet of Things (IoT) to robotics to home entertainment.

Not sure what type of music to play? Don’t worry, the AI engine in your cloud-based music service knows your taste better than you do. Want to read a book whilst driving to the office? Self-driving cars are here today in limited applications, and we’ll see a lot more of them in 2017.

Want to make brushing your teeth more fun, all while promoting good dental health? The Ara is the “1st toothbrush with Artificial Intelligence,” claims Kolibree, a French company that introduced the product at CES 2017.

Gadgets dominate CES. While crowds are lining up to see the AI-powered televisions, cookers and robots, the real power of AI is hidden behind the scenes, not part of the consumer context. Unknown to happy shoppers exploring AI-based barbecues, artificial intelligence is keeping our networks safe, detecting ransomware, helping improve the efficiency of advertising and marketing, streamlining business processes, diagnosing telecommunication faults in undersea cables, detecting fraud in banking and stock-market transactions, and even helping doctors track the spread of infectious diseases.

Medical applications capture the popular imagination because they’re so fast and effective. The IBM Watson AI-enabled supercomputer, for example, can read 200 million pages of text in three seconds — and understand what it reads. An oncology application running on Watson analyzes a patient’s medical records, and then combines attributes from the patient’s file with clinical expertise, external research, and data. Based on that information, Watson for Oncology identifies potential treatment plans for a patient. This means doctors can consider the treatment options provided by Watson when making decisions for individual patients. Watson even offers supporting evidence in the form of administration information, as well as warnings and toxicities for each drug.

Doctor AI Can Cure Cybersecurity Ills

Moving beyond medicine, AI is proving essential for protecting computer networks — and their users — against intrusion. Traditional non-AI-based anti-virus and anti-malware products can’t protect against advanced threats, and that’s where companies like Cylance come in. They use neural networks and other machine-learning techniques to study millions of malicious files, from executables to documents to PDFs to images. Using pattern recognition, Cylance has developed a revolutionary machine-learning platform that can identify suspicious files that might be seen on websites or arrive as email attachments, even if it has never seen that particular type of malware before. Nothing but AI can get the job done, not in an era when over a million new pieces of malware, ranging from phishing to ransomware, appear every single day.
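
Cylance hasn't published its model, but the general shape of machine-learning file classification is easy to sketch. Here is a toy illustration (mine, with invented features and training data, nothing like a production system): extract numeric features from a file, such as size and byte entropy, and let a trained model score it.

```python
import math
from sklearn.ensemble import RandomForestClassifier

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string; packed or encrypted malware skews high."""
    if not data:
        return 0.0
    total = len(data)
    counts = [data.count(b) for b in range(256)]
    return -sum((c / total) * math.log2(c / total) for c in counts if c)

def features(data: bytes) -> list:
    # A two-element feature vector: length and entropy. Real products derive
    # thousands of features from headers, imports, strings, and structure.
    return [len(data), byte_entropy(data)]

# Hypothetical labeled samples: 0 = benign, 1 = malicious.
train_X = [features(b"MZ\x90\x00" + b"\x00" * 400),  # plain executable stub
           features(bytes(range(256)) * 4)]          # high-entropy blob
train_y = [0, 1]

model = RandomForestClassifier(n_estimators=10, random_state=0).fit(train_X, train_y)
print(model.predict([features(bytes(range(256)) * 2)]))  # likely flagged as 1
```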

Menlo Security is another network-protection company that leverages artificial intelligence. The Menlo Security Isolation Platform uses AI to prevent Internet-based malware from ever reaching an endpoint, such as a desktop or mobile device, because email and websites are accessed inside the cloud — not on the client’s computer. Only safe, malware-free rendering information is sent to the user’s endpoint, eliminating the possibility of malware reaching the user’s device. An artificial intelligence engine constantly scans the Internet session to provide protection against spear-phishing and other email attacks.

What if a machine does become compromised? It’s unlikely, but it can happen — and the price of a single breach can be incredible, especially if a hacker can take full control of the compromised device and use it to attack other assets within the enterprise, such as servers, routers or executives’ computers. If a breach does occur, that’s when the AI technology of Javelin Networks leaps into action, detecting that the attack is in progress, alerting security teams, and isolating the device from the network — while simultaneously tricking the attackers into believing they’ve succeeded in their attack, thereby keeping them “on the line” while real-time forensics gather the information needed to identify the attacker and help shut them down for good.

Socializing Artificial Intelligence

There’s a lot more to enterprise-scale AI than medicine and computer security, of course. QSocialNow, an incredibly innovative company in Argentina, uses AI-based Big Data and predictive analytics to watch an organization’s social media accounts — and empower it not only to analyze trends, but to respond in mere seconds to an unexpected event, such as a rise in customer complaints, the emergence of a social protest, or even a physical disaster like an earthquake or tornado. Yes, humans can watch Twitter, Facebook and other networks, but they can’t act as fast as AI — or spot subtle trends that only advanced machine learning can observe through mathematics.

Robots can be powerful helpers for humanity, and AI-based toothbrushes can help us and our kids keep our teeth healthy. While the jury may be out on the implications of self-driving cars on our city streets, there’s no doubt that AI is keeping us — and our businesses — safe and secure. Let’s celebrate the consumer devices unveiled at CES, and the artificial intelligence working behind the scenes, far from the Las Vegas Strip, for our own benefit.

According to a recent study, 46% of the top one million websites are considered risky. Why? Because the homepage or background ad sites are running software with known vulnerabilities, the site was categorized as a known bad for phishing or malware, or the site had a security incident in the past year.

According to Menlo Security, in its “State of the Web 2016” report introduced mid-December 2016, “… nearly half (46%) of the top million websites are risky.” Indeed, Menlo says, “Primarily due to outdated software, cyber hackers now have their veritable pick of half the web to exploit. And exploitation is becoming more widespread and effective for three reasons: 1. Risky sites have never been easier to exploit; 2. Traditional security products fail to provide adequate protection; 3. Phishing attacks can now utilize legitimate sites.”

This has been a significant issue for years. However, the issue came to the forefront earlier this year when several well-known media sites were essentially hijacked by malicious ads. The New York Times, the BBC, MSN and AOL were hit by tainted advertising that installed ransomware, reports Ars Technica. From their March 15, 2016, article, “Big-name sites hit by rash of malicious ads spreading crypto ransomware”:

The new campaign started last week when ‘Angler,’ a toolkit that sells exploits for Adobe Flash, Microsoft Silverlight, and other widely used Internet software, started pushing laced banner ads through a compromised ad network.

The results of this attack, reported The Guardian at around the same time: 

When the infected adverts hit users, they redirect the page to servers hosting the malware, which includes the widely-used (amongst cybercriminals) Angler exploit kit. That kit then attempts to find any back door it can into the target’s computer, where it will install cryptolocker-style software, which encrypts the user’s hard drive and demands payment in bitcoin for the keys to unlock it.

If big-money trusted media sites can be hit, so can nearly any corporate site, e-commerce portal, or any website that uses third-party tools – or where there might be the possibility of unpatched servers and software. That means just about anyone. After all, not all organizations are diligent about monitoring for Common Vulnerabilities and Exposures (CVEs) on their on-premises servers. And when companies run their websites on multi-tenant hosting facilities, they don’t even have direct access to the operating system, but rely upon the hosting company to install patches and fixes to Windows Server, Linux, Joomla, WordPress and so on.

A single unpatched operating system, web server platform, database or extension can introduce a vulnerability which can be scanned for. Once found, that CVE can be exploited by a talented hacker — or by a disgruntled teenager with a readily available web exploit kit.
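
Scanning for that low-hanging fruit is genuinely easy, which is the point. Here is a hedged sketch of the simplest form of the idea: compare a server's advertised software version against a list of known-vulnerable releases. The banner strings below are invented for illustration; real scanners match against CVE feeds such as NVD.

```python
import requests

# Hypothetical known-vulnerable version banners, for illustration only.
VULNERABLE_BANNERS = {
    "Apache/2.2.15",
    "nginx/1.4.0",
    "PHP/5.3.3",
}

def check_site(url: str) -> None:
    """Fetch response headers and compare version banners to the bad list."""
    resp = requests.head(url, timeout=5)
    # Many servers advertise their software and version in these headers.
    for header in ("Server", "X-Powered-By"):
        banner = resp.headers.get(header)
        if banner in VULNERABLE_BANNERS:
            print(f"{url}: {banner} has known CVEs; patch it, or stop advertising it")
        elif banner:
            print(f"{url}: advertises {banner}")

check_site("https://example.com")
```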

What can you do about it? Well, you can read my complete story on this subject, “Malware explosion: The web is risky,” published on ITProPortal.

“If you give your security team the work they hate to do day in and day out, you won’t be able to retain that team.” Eoin Keary should know. As founder, director and CTO of edgescan, a fast-growing managed security service provider (MSSP), his company frees up enterprise security teams to focus on the more strategic, more interesting, more business-critical aspects of InfoSec while his team deals with the stuff they know and do best: the monotony of full-stack vulnerability management.

It’s a perfect match, Keary says. By using an MSSP, customers can focus on business-critical issues, save money, have better security—and not have to replace expensive, highly trained employees who quit after a few months out of boredom. “We are experts in vulnerability management, have built the technology and can deliver very efficiently.”

BCC Risk Advisory Ltd, edgescan’s parent company, based in Dublin, Ireland, was formed in 2011 with “me and a laptop,” explains Keary. He expects his company to end the 2016 fiscal year with seven-figure revenues and a growth trajectory of circa 400% compared to 2015. Its secret cyberweapon is a cloud-based SaaS called edgescan, which detects security weaknesses across the customer’s full stack of technology assets, from servers to networks, from websites to apps to mobile devices. It also provides continuous asset profiling and virtual patching coupled with expert support.

edgescan assesses clients’ systems on a continuous basis. “We have a lot of intelligence and automation in the platform to determine what needs to be addressed,” explains Keary.

There’s a lot more to my interview with Eoin Keary — you can read the whole story, “Apparently We Love To Do What Companies Hate. Lucky You!” published in ITSP Magazine.

Once upon a time, goes the story, there was honor between thieves and victims. They held a member of your family for ransom; you paid the ransom; they left you alone. The local mob boss demanded protection money; if you didn’t pay, your business burned down, but if you did pay and didn’t rat him out to the police, he and his gang honestly tried to protect you. And hackers may have been operating outside legal boundaries, but for the most part, they were explorers and do-gooders intending to shine a bright light on the darkness of the Internet – not vandals, miscreants, hooligans and ne’er-do-wells.

That’s not true any more, perhaps. As I write in IT Pro Portal, “Faceless and faithless: A true depiction of today’s cyber-criminals?”:

Not that long ago, hackers emerged as modern-day Robin Hoods, digital heroes who relentlessly uncovered weaknesses in applications and networks to reveal the dangers of using technology carelessly. They are curious, provocative; love to know how things work and how they can be improved.

Today, however, there is blight on their good name. Hackers have been maligned by those who do not have our best interests at heart, but are instead motivated by money – attackers who steal our assets and hold organisations such as banks and hospitals to ransom.

(My apologies for the British spelling – it’s a British website and they’re funny that way.)

It’s hard to lose faith in hackers, but perhaps we need to. Sure, not all are cybercriminals, but with the rise of ransomware, nasty action by state actors, and some pretty nasty attacks like the new single-pixel malvertising exploit written up yesterday by Dan Goodin in Ars Technica (which came out after I wrote this story), it’s hard to trust that most hackers secretly have our best interests at heart.

This reminds me of Ranscam. In a blog post, “When Paying Out Doesn’t Pay Off,” Talos reports that:

Ranscam is one of these new ransomware variants. It lacks complexity and also tries to use various scare tactics to entice the user to paying, one such method used by Ranscam is to inform the user they will delete their files during every unverified payment click, which turns out to be a lie. There is no longer honor amongst thieves. Similar to threats like AnonPop, Ranscam simply delete victims’ files, and provides yet another example of why threat actors cannot always be trusted to recover a victim’s files, even if the victim complies with the ransomware author’s demands. With some organizations likely choosing to pay the ransomware author following an infection,  Ranscam further justifies the importance of ensuring that you have a sound, offline backup strategy in place rather than a sound ransom payout strategy. Not only does having a good backup strategy in place help ensure that systems can be restored, it also ensures that attackers are no longer able to collect revenue that they can then reinvest into the future development of their criminal enterprise.

Scary — and it shows that often, crime does pay. No honor among thieves indeed.

From company-issued tablets to BYOD (bring your own device) smartphones, employees are making the case that mobile devices are essential for productivity, job satisfaction, and competitive advantage. Except in the most regulated industries, phones and tablets are part of the landscape, but their presence requires a strong security focus, especially in the era of non-stop malware, high-profile hacks, and new vulnerabilities found in popular mobile platforms. Here are four specific ways of examining this challenge that can help drive the choice of both policies and technologies for reducing mobile risk.

Protect the network: Letting any mobile device onto the business network is a risk, because if the device is compromised, the network (and all of its servers and other assets) may be compromised as well. Consider isolating internal WiFi links to secured network segments, and only permit external access via virtual private networks (VPNs). Install firewalls that guard the network by recognizing not only authorized devices, but also authorized users — and authorized applications. Be sure to keep careful tabs on which devices access the network, from where, and when.

Protect the device: A mobile device can be compromised in many ways: It might be stolen, or the user might install malware that provides a gateway for a hacker. Each mobile device should be protected by strong passwords, not only for the device itself but also for critical business apps. Don’t allow corporate data to be stored on the device itself. Ensure that there are remote-wipe capabilities if the device is lost. And consider installing a Mobile Device Management (MDM) platform that can give IT full control over the mobile device – or at least those portions of an employee-owned device that might ever be used for business purposes.

Protect the data: To be productive with their mobile devices, employees want access to important corporate assets, such as email, internal websites, ERP or CRM applications, document repositories, as well as cloud-based services. Ensure that permissions are granted specifically for needed services, and that all access is encrypted and logged. As mentioned above, never let corporate data – including documents, emails, chats, internal social media, contacts, and passwords – be stored or cached on the mobile device. Never allow co-mingling of personal and business data, such as email accounts. Yes, it’s a nuisance, but make the employee log into the network, and authenticate into enterprise-authorized applications, each and every time. MDM platforms can help enforce those policies as well.

Protect the business: The policies regarding mobile access should be worked out along with corporate counsel, and communicated clearly to all employees before they are given access to applications and data. The goal isn’t to be heavy-handed, but rather, to gain their support. If employees understand the stakes, they become allies in helping protect business interests. Mobile access is risky for enterprises, and with today’s aggressive malware, the potential for harm has never been higher. It’s not too soon to take it seriously.

When an employee account is compromised by malware, the malware establishes a foothold on the user’s computer – and immediately tries to gain access to additional resources. It turns out that with the right data gathering tools, and with the right Big Data analytics and machine-learning methodologies, the anomalous network traffic caused by this activity can be detected – and thwarted.

That’s the role played by Blindspotter, a new anti-malware system that seems like a specialized version of a network intrusion detection/prevention system (IDPS). Blindspotter can help against many types of malware attacks. Those include one of the most insidious and successful hack vectors today: spear phishing. That’s when a high-level target in your company is singled out for attack by malicious emails or by compromised websites. All the victim has to do is open an email, or click on a link, and wham – malware is quietly installed and operating. (High-level targets include top executives, financial staff and IT administrators.)

My colleague Wayne Rash recently wrote about this network monitoring solution and its creator, Balabit, for eWeek in “Blindspotter Uses Machine Learning to Find Suspicious Network Activity”:

The idea behind Balabit’s Blindspotter and Shell Control Box is that if you gather enough data and subject it to analysis comparing activity that’s expected with actual activity on an active network, it’s possible to tell if someone is using a person’s credentials who shouldn’t be or whether a privileged user is abusing their access rights.

 The Balabit Shell Control Box is an appliance that monitors all network activity and records the activity of users, including all privileged users, right down to every keystroke and mouse movement. Because privileged users such as network administrators are a key target for breaches it can pay special attention to them.

The Blindspotter software sifts through the data collected by the Shell Control Box and looks for anything out of the ordinary. In addition to spotting things like a user coming into the network from a strange IP address or at an unusual time of day—something that other security software can do—Blindspotter is able to analyze what’s happening with each user, but is able to spot what is not happening, in other words deviations from normal behavior.
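
The underlying idea, flagging deviations from a per-user baseline, can be sketched in a few lines. This is my own toy illustration of behavioral anomaly detection, not Balabit's algorithm: model each user's typical login hour, then flag logins that fall far outside the norm.

```python
from statistics import mean, stdev

# Hypothetical history of login hours (0-23) for one administrator.
login_hours = [9, 9, 10, 8, 9, 11, 10, 9, 8, 10]

def is_anomalous(hour: int, history: list, threshold: float = 3.0) -> bool:
    """Flag a login whose hour deviates more than `threshold` standard deviations."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return hour != mu
    return abs(hour - mu) / sigma > threshold

print(is_anomalous(10, login_hours))  # False: within the normal routine
print(is_anomalous(3, login_hours))   # True: a 3 a.m. login stands out
```

Real systems baseline far more than the clock (source IPs, commands issued, even keystroke cadence), but the principle is the same.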

Read the whole story here. Thank you, Wayne, for telling us about Blindspotter.

Medical devices are incredibly vulnerable to hacking attacks. In some cases it’s because of software defects that allow for exploits, like buffer overflows, SQL injection or insecure direct object references. In other cases, you can blame misconfigurations, lack of encryption (or weak encryption), non-secure data/control networks, unfettered wireless access, and worse.

Why would hackers go after medical devices? Lots of reasons. To name but one: It’s a potential terrorist threat against real human beings. Remember that Dick Cheney famously disabled the wireless capabilities of his implanted heart monitor for fear of an assassination attack.

Certainly healthcare organizations are being targeted for everything from theft of medical records to ransomware. To quote the report “Hacking Healthcare IT in 2016,” from the Institute for Critical Infrastructure Technology (ICIT):

The Healthcare sector manages very sensitive and diverse data, which ranges from personal identifiable information (PII) to financial information. Data is increasingly stored digitally as electronic Protected Health Information (ePHI). Systems belonging to the Healthcare sector and the Federal Government have recently been targeted because they contain vast amounts of PII and financial data. Both sectors collect, store, and protect data concerning United States citizens and government employees. The government systems are considered more difficult to attack because the United States Government has been investing in cybersecurity for a (slightly) longer period. Healthcare systems attract more attackers because they contain a wider variety of information. An electronic health record (EHR) contains a patient’s personal identifiable information, their private health information, and their financial information.

EHR adoption has increased over the past few years under the Health Information Technology and Economics Clinical Health (HITECH) Act. Stan Wisseman [from Hewlett-Packard] comments, “EHRs enable greater access to patient records and facilitate sharing of information among providers, payers and patients themselves. However, with extensive access, more centralized data storage, and confidential information sent over networks, there is an increased risk of privacy breach through data leakage, theft, loss, or cyber-attack. A cautious approach to IT integration is warranted to ensure that patients’ sensitive information is protected.”

Let’s talk devices. Those could be everything from emergency-room monitors to pacemakers to insulin pumps to X-ray machines whose radiation settings might be changed or overridden by malware. The ICIT report says,

Mobile devices introduce new threat vectors to the organization. Employees and patients expand the attack surface by connecting smartphones, tablets, and computers to the network. Healthcare organizations can address the pervasiveness of mobile devices through an Acceptable Use policy and a Bring-Your-Own-Device policy. Acceptable Use policies govern what data can be accessed on what devices. BYOD policies benefit healthcare organizations by decreasing the cost of infrastructure and by increasing employee productivity. Mobile devices can be corrupted, lost, or stolen. The BYOD policy should address how the information security team will mitigate the risk of compromised devices. One solution is to install software to remotely wipe devices upon command or if they do not reconnect to the network after a fixed period. Another solution is to have mobile devices connect from a secured virtual private network to a virtual environment. The virtual machine should have data loss prevention software that restricts whether data can be accessed or transferred out of the environment.

The Internet of Things – and the increased prevalence of medical devices connected to hospital or home networks – increases the risk. What can you do about it? The ICIT report says,

The best mitigation strategy to ensure trust in a network connected to the internet of things, and to mitigate future cyber events in general, begins with knowing what devices are connected to the network, why those devices are connected to the network, and how those devices are individually configured. Otherwise, attackers can conduct old and innovative attacks without the organization’s knowledge by compromising that one insecure system.
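
“Knowing what devices are connected” can start as simply as sweeping each subnet with ARP requests and comparing the answers against an approved inventory. Here is a hedged sketch using the scapy library; the subnet and MAC addresses are placeholders, and running it requires root privileges and permission to scan the network.

```python
from scapy.all import ARP, Ether, srp

# Placeholder approved inventory of MAC addresses.
KNOWN_DEVICES = {"aa:bb:cc:dd:ee:01", "aa:bb:cc:dd:ee:02"}

def inventory_scan(subnet: str = "192.168.1.0/24") -> None:
    """Broadcast an ARP who-has for the subnet and report unknown responders."""
    packet = Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=subnet)
    answered, _ = srp(packet, timeout=2, verbose=False)
    for _, reply in answered:
        status = "known" if reply.hwsrc in KNOWN_DEVICES else "UNKNOWN, investigate"
        print(f"{reply.psrc}  {reply.hwsrc}  {status}")

inventory_scan()
```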

Given how common these devices are, keeping IT in the loop may seem impossible — but we must rise to the challenge, ICIT says:

If a cyber network is a castle, then every insecure device with a connection to the internet is a secret passage that the adversary can exploit to infiltrate the network. Security systems are reactive. They have to know about something before they can recognize it. Modern systems already have difficulty preventing intrusion by slight variations of known malware. Most commercial security solutions such as firewalls, IDS/ IPS, and behavioral analytic systems function by monitoring where the attacker could attack the network and protecting those weakened points. The tools cannot protect systems that IT and the information security team are not aware exist.

The home environment – or any use outside the hospital setting – is another huge concern, says the report:

Remote monitoring devices could enable attackers to track the activity and health information of individuals over time. This possibility could impose a chilling effect on some patients. While the effect may lessen over time as remote monitoring technologies become normal, it could alter patient behavior enough to cause alarm and panic.

Pain medicine pumps and other devices that distribute controlled substances are likely high value targets to some attackers. If compromise of a system is as simple as downloading free malware to a USB and plugging the USB into the pump, then average drug addicts can exploit homecare and other vulnerable patients by fooling the monitors. One of the simpler mitigation strategies would be to combine remote monitoring technologies with sensors that aggregate activity data to match a profile of expected user activity.

A major responsibility falls onto the device makers – and the programmers who create the embedded software. For the most part, they are simply not up to the challenge of designing secure devices, and may not have the policies, practices and tools in place to get cybersecurity right. Regrettably, the ICIT report doesn’t go into much detail about the embedded software, but it does state,

Unlike cell phones and other trendy technologies, embedded devices require years of research and development; sadly, cybersecurity is a new concept to many healthcare manufacturers and it may be years before the next generation of embedded devices incorporates security into its architecture. In other sectors, if a vulnerability is discovered, then developers rush to create and issue a patch. In the healthcare and embedded device environment, this approach is infeasible. Developers must anticipate what the cyber landscape will look like years in advance if they hope to preempt attacks on their devices. This model is unattainable.

In November 2015, Bloomberg Businessweek published a chilling story, “It’s Way too Easy to Hack the Hospital.” The authors, Monte Reel and Jordan Robertson, wrote about one hacker, Billy Rios:

Shortly after flying home from the Mayo gig, Rios ordered his first device—a Hospira Symbiq infusion pump. He wasn’t targeting that particular manufacturer or model to investigate; he simply happened to find one posted on EBay for about $100. It was an odd feeling, putting it in his online shopping cart. Was buying one of these without some sort of license even legal? he wondered. Is it OK to crack this open?

Infusion pumps can be found in almost every hospital room, usually affixed to a metal stand next to the patient’s bed, automatically delivering intravenous drips, injectable drugs, or other fluids into a patient’s bloodstream. Hospira, a company that was bought by Pfizer this year, is a leading manufacturer of the devices, with several different models on the market. On the company’s website, an article explains that “smart pumps” are designed to improve patient safety by automating intravenous drug delivery, which it says accounts for 56 percent of all medication errors.

Rios connected his pump to a computer network, just as a hospital would, and discovered it was possible to remotely take over the machine and “press” the buttons on the device’s touchscreen, as if someone were standing right in front of it. He found that he could set the machine to dump an entire vial of medication into a patient. A doctor or nurse standing in front of the machine might be able to spot such a manipulation and stop the infusion before the entire vial empties, but a hospital staff member keeping an eye on the pump from a centralized monitoring station wouldn’t notice a thing, he says.

 The 97-page ICIT report makes some recommendations, which I heartily agree with.

  • With each item connected to the internet of things there is a universe of vulnerabilities. Empirical evidence of aggressive penetration testing before and after a medical device is released to the public must be a manufacturer requirement.
  • Ongoing training must be paramount in any responsible healthcare organization. Adversarial initiatives typically start with targeting staff via spear phishing and watering hole attacks. The act of an ill-prepared executive clicking on a malicious link can trigger a hurricane of immediate and long term negative impact on the organization and innocent individuals whose records were exfiltrated or manipulated by bad actors.
  • A cybersecurity-centric culture must demand safer devices from manufacturers, privacy adherence by the healthcare sector as a whole and legislation that expedites the path to a more secure and technologically scalable future by policy makers.

This whole thing is scary. The healthcare industry needs to step up its game on cybersecurity.

Be paranoid! When you visit a website for the first time, it can learn a lot about you. If you have cookies on your computer from one of the site’s partners, it can see what else you have been doing. And it can place cookies onto your computer so it can track your future activities.

Many (or most?) browsers have some variation of “private” browsing mode. In that mode, websites shouldn’t be able to read cookies stored on your computer, and they shouldn’t be able to place permanent cookies onto your computer. (They think they can place cookies, but those cookies are deleted at the end of the session.)
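
The mechanics are easy to picture: in private mode, the cookie jar lives only in memory and is discarded when the session ends. Here is a loose analogy in Python using the requests library, a model of the behavior rather than how browsers actually implement it:

```python
import requests

# A Session keeps its cookies in memory only; nothing is written to disk.
with requests.Session() as private_session:
    private_session.get("https://example.com")  # the site may set cookies
    print(dict(private_session.cookies))        # visible during the session
# When the "with" block ends, the session and its cookies are gone,
# much like closing a private-browsing window.
```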

Those settings aren’t good enough, because they are either all or nothing, and offer a poor balance between ease-of-use and security/privacy. The industry can and must do better. See why in my essay on NetworkWorld, “We need a better Private Browsing Mode.”

 

Nothing is scarier than getting together with a buyer (or a seller) to exchange dollars for a product advertised on Craig’s List, eBay or another online service… and then being mugged or robbed. There are certainly plenty of news stories on this subject, but the danger continues. Here are some recent reports:

Don’t be a victim! The Phoenix Police Department has released an advisory. It’s good advice. Follow it.

Phoenix Police Media Advisory:

Internet Exchange Related Crimes

The Phoenix Police Department has recently experienced reported crimes specific to the usage of internet exchange sites that allow sellers to advertise items for sale and then interact with buyers. Subsequent to the online interaction, the two parties usually meet and exchange money for goods in a private party transaction at an agreed-upon location. However, due to circumstances surrounding the nature of these interactions, many criminals are using them for their own purposes.

 Specifically, the Phoenix Police Department has seen an increase in robberies of one of the involved parties by the other party during these exchanges. However, crimes as serious as homicide and kidnapping have been linked to these transactions. Although no strategy is 100% effective when trying to be safe, there are a number of steps one can take to ensure the transaction is done under the safest possible circumstances. The department is urging those involved in these private, internet-based sales transactions to consider the following while finalizing the deal and making safety their primary consideration:

  • If the deal seems too good to be true, it probably is.
  • The location of the exchange should be somewhere in public that has many people around like a mall, a well-traveled parking lot, or a public area. Do not agree to meet at someone’s house, a secluded place, a vacant house, or the like.
  • Try to schedule the transaction while it is still daylight, or at least in a place that is very well lit.
  • Ask why the person is selling the item and what type of payment they will accept. Be wary of agreeing to a cash payment and then travelling to the deal with a large sum of cash.
  • Bring a friend with you to the meet and let someone who isn’t going with you know where you are going and when you can be expected back.
  • Know the fair market value of the item you are purchasing.
  •  Trust your instinct! If something seems suspicious, or you get a bad feeling, pass on the deal!

Other good advice that I’ve seen:

  • Never agree to move the meeting to a second location when, upon arriving at the agreed-upon place, you receive a text message redirecting you somewhere else.
  • Never give the other party your home address. If you must do so (because they are picking up a large item from your house), bring the item outside; don’t let them into your house. Inform your neighbors what’s going on.
  • Call your local police department and ask if they can recommend an Internet Purchase Exchange Location, also known as a Safe Exchange Zone.

Be careful out there, my friends.

Can someone steal the data off your old computer? The short answer is yes. A determined criminal can grab the bits, including documents, images, spreadsheets, and even passwords.

If you donate, sell or recycle a computer, whoever gets hold of it can recover the information on its hard drive or solid-state drive (SSD). The platform doesn’t matter: Whether it’s Windows or Linux or Mac OS, you can’t 100% eliminate sensitive data by, say, deleting user accounts or erasing files!

You can make the job harder by using the computer’s disk utilities to format the hard drive. Be aware, however, that formatting will thwart a casual thief, but not a determined hacker.
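If you want to go a step beyond formatting, wiping software overwrites the data in place. Here is a minimal sketch of the idea; consider it illustrative only, because journaling file systems and SSD wear-leveling can quietly preserve older copies of your data no matter how many passes you make.

```python
# Minimal sketch: overwrite a file with random bytes before deleting it.
# A best-effort software wipe, not a guarantee: journaling file systems
# and SSD wear-leveling may keep stale copies of the data elsewhere.
import os
import secrets

def overwrite_and_delete(path: str, passes: int = 3) -> None:
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            remaining = size
            while remaining > 0:
                chunk = min(remaining, 1 << 20)  # write 1 MiB at a time
                f.write(secrets.token_bytes(chunk))
                remaining -= chunk
            f.flush()
            os.fsync(f.fileno())  # force the bytes out to the device
    os.remove(path)

overwrite_and_delete("old-tax-return.pdf")  # hypothetical file
```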

The only truly safe way to destroy the data is to physically destroy the storage media. For years, businesses have physically removed and destroyed the hard drives in desktops, servers and laptops. It used to be easy to remove the hard drive: take out a couple of screws, pop open a cover, unplug a cable, and lift the drive right out.

Once the hard drive is identified and removed, you can smash it with a hammer, drill holes in it, even take it apart (which is fun, albeit time-consuming). Some businesses will put the hard drive into an industrial shredder, which is a scaled-up version of an office paper shredder. Some also use magnetism to attempt to destroy the data. I’m not sure how effective that is, however, and magnets won’t work at all on SSDs.

It’s much harder to remove the storage from today’s ultra-thin, tightly sealed notebooks, such as a Microsoft Surface or Apple MacBook Air, or even from tablets. What if you want to destroy the storage in order to prevent hackers from gaining access? It’s a real challenge.

If you have access to an industrial shredder, an option is to shred the entire computer. It seems wasteful, and I can imagine that it’s not good to shred lithium-ion batteries – many of which are not easily removable, again, as in the Microsoft Surface or Apple MacBook Air. You don’t want those chemicals lying around. Still, that works, and works well.

Note that an industrial shredder is kinda big and expensive – you can see some from SSL World. However, if you live in any sort of medium-sized or larger urban area, you can probably find a shredding service that will destroy the computer right in front of you. I’ve found one such service here in Phoenix, Assured Document Destruction Inc., that claims to be compliant with industry regulations for privacy, such as HIPAA and Sarbanes-Oxley.

Don’t want to shred the whole computer? Let’s say the computer uses a standard hard drive, usually in a 3.5-inch form factor (desktops and servers) or 2.5-inch form factor (notebooks). If you have a set of small screwdrivers, you should be able to dismantle the computer, remove the storage device, and kill it – such as by smashing it with a maul, drilling holes in it, or taking it completely apart. Note that driving over it in your car, while satisfying, may not cause significant damage.

What about solid-state storage? The same applies to SSDs, but it’s a bit trickier. Sometimes the drive still looks like a standard 2.5-inch hard drive. But sometimes the “solid state drive” is merely a few exposed chips on the motherboard or a smaller circuit board. You’ve got to smash that sucker. Remove it from the computer. Hulk Smash! Break up the circuit board, pulverize the chips. Only then will it be dead dead dead. (Though one could argue that government agencies like the NSA could still put Humpty Dumpty back together again.)

In short: Even if the computer itself seems totally worthless, its storage can be removed, connected to a working computer, and accessed by a skilled techie. If you want to ensure that your data remains private, you must destroy it.

Web filtering. The phrase connotes keeping employees from spending too much time monitoring Beanie Baby auctions on eBay, and stopping school children from encountering (accidentally or deliberately) naughty images on the internet. Were it that simple — but nowadays, web filtering goes far beyond monitoring staff productivity and maintaining the innocence of childhood. For nearly every organization today, web filtering should be considered an absolute necessity. Small business, K-12 school district, Fortune 500, non-profit or government… it doesn’t matter. The unfiltered internet is not your friend, and legally, it’s a liability: a lawsuit waiting to happen.

Web filtering means blocking internet applications – including browsers – from contacting or retrieving content from websites that violate an Acceptable Use Policy (AUP). The policy might set rules blocking some specific websites (like a competitor’s website). It might block some types of content (like pornography), or detected malware, or even access to external email systems via browser or dedicated clients. In some cases, the AUP might include what we might call government-mandated restrictions (like certain websites in hostile countries, or specific news sources).
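At its core, the mechanism is a simple decision made for every outbound request. Here is a minimal sketch of that decision; the domains and categories are hypothetical, and real products implement this inside a proxy or firewall with vendor-maintained URL-categorization databases.

```python
# Minimal sketch of a web filter's core decision: test each outbound
# request against AUP rules before allowing it. Domains and categories
# here are illustrative stand-ins, not any vendor's actual lists.
from urllib.parse import urlparse

BLOCKED_DOMAINS = {"competitor.example", "webmail.example"}
BLOCKED_CATEGORIES = {"adult", "malware", "external-email"}

def categorize(host: str) -> str:
    # Stand-in for a real URL-categorization lookup.
    return "external-email" if host == "webmail.example" else "uncategorized"

def allowed_by_aup(url: str) -> bool:
    host = urlparse(url).hostname or ""
    if host in BLOCKED_DOMAINS:
        return False
    return categorize(host) not in BLOCKED_CATEGORIES

print(allowed_by_aup("https://webmail.example/inbox"))     # False: blocked
print(allowed_by_aup("https://supplier.example/catalog"))  # True: allowed
```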

Unacceptable use in the AUP

The specifics of the AUP might be up to the organization to define entirely on its own; that would be the case for a small business, perhaps. Government organizations, such as schools or military contractors, might have specific AUP requirements placed on them by funders or government regulators, thereby becoming a compliance/governance issue as well. And of course, legal counsel should be sought when creating policies that balance an employee’s ability to access content of his/her choice, against the company’s obligations to protect the employee (or the company) from unwanted content.

It sounds easy – the organization sets an AUP, consulting legal, IT and the executive suite. The IT department implements the AUP through web filtering, perhaps with software installed and configured on devices; perhaps through firewall settings at the network level; and perhaps through filters managed by the internet service provider. It’s not simple, however. The internet is constantly changing, employees are adept at finding ways around web filters; and besides, it’s tricky to translate policies written in English (as in the legal policy document) into technological actions. We’ll get into that a bit more shortly. First, let’s look more closely at why organizations need those Acceptable Use Policies, and what should be in them.

  • Improving employee productivity. This is the low-hanging fruit. You may not want employees spending too much time on Facebook on their company computers. (Of course, if they are permitted to bring mobile devices into the office, they can still access social media via cellular.) That’s a policy consideration, though the jury is out on whether a blanket block is the best way to improve productivity.
  • Preserving bandwidth. For technical reasons, you may not want employees streaming Netflix movies or Hulu-hosted classic TV shows across the business network. Seinfeld is fun, but not on company bandwidth. As with social media, this is truly up to the organization to decide.
  • Blocking email access. Many organizations do not want their employees accessing external email services from the business computers. That’s not only for productivity purposes, but also makes it difficult to engage in unapproved communications – such as emailing confidential documents to yourself. Merely configuring your corporate email server to block the exfiltration of intellectual property is not enough if users can access personal gmail.com or hushmail.com accounts. Blocking external email requires filtering multiple protocols as well as specific email hosts, and may be required to protect not only your IP, but also customers’ data, in addition to complying with regulations from organizations like the U.S. Securities and Exchange Commission.
  • Blocking access to pornography and NSFW content. It’s not that you are being a stick-in-the-mud prude, or protecting children. The initials NSFW (not safe for work) are often used as a joke, but in reality, some content can be construed as contributing to a hostile work environment. Just as you must maintain a physically safe work environment – no blocked fire exits, for example – so too must you maintain a safe internet environment. If users can be unwillingly subjected to offensive content by other employees, there may be significant legal, financial and even public-relations consequences if it’s seen as harassment.
  • Blocking access to malware. A senior manager receives a spear-phishing email that looks legit. He clicks the link and, wham: ransomware is on his computer. Or spyware, like a keylogger. Or perhaps a back door that allows further access by hackers. You can train employees over and over, and they will still click on unsafe links in email or on web pages. Anti-malware software on the computer can help, but web filtering is part of a layered approach to anti-malware protection. This applies to trackers as well: As part of the AUP, the web filters may be configured to block ad networks, behavior trackers and other web services that attempt to glean information about your company and its workers.
  • Blocking access to specific internet applications. Whether you consider it Shadow IT or simply an individual’s personal preference, it’s up to an AUP to decide which online services should be accessible; either through an installed application or via a web interface. Think about online storage repositories such as Microsoft OneDrive, Google Drive, Dropbox or Box: Personal accounts can be high-bandwidth conduits for exfiltration of vast quantities of valuable IP. Web filtering can help manage the situation.
  • Compliance with government regulations. Whether it’s a military base commander making a ruling, or a government restricting access to news sites out-of-favor with the current regime; those are rules that often must be followed without question. It’s not my purpose here to discuss whether this is “censorship,” though in some cases it certainly is. However, the laws of the United States do not apply outside the United States, and blocking some internet sites or types of web content may be part of the requirements for doing business in some countries or with some governments. What’s important here is to ensure that you have effective controls and technology in place to implement the AUP – but don’t go broadly beyond it.
  • Compliance with industry requirements. Let’s use the example of the requirements that schools or public libraries must protect students (and the general public) from content deemed to be unacceptable in that environment. After all, just because a patron is an adult doesn’t mean he/she is allowed to watch pornography on one of the library’s publicly accessible computers, or even on his/her computer on the library’s Wi-Fi network.

What about children?

A key ingredient in creating an AUP for schools and libraries in the United States is the Children’s Internet Protection Act (CIPA). In order to receive government subsidies or discounts, schools and libraries must comply with these regulations. (Other countries may have an equivalent to these policies.)

Learn more about how the CIPA should drive the AUP for any organization where minors can be found, and how best to implement an AUP for secure protection. That’s all covered in my article for Upgrade Magazine, “Web filtering for business: Keep your secrets safe, and keep your employees happy.”

Here’s a popular article that I wrote on email security for Sophos’ “Naked Security” blog.

“5 things you should know about email unsubscribe links before you click” starts with:

We all get emails we don’t want, and cleaning them up can be as easy as clicking ‘unsubscribe’ at the bottom of the email. However, some of those handy little links can cause more trouble than they solve. You may end up giving the sender a lot of information about you, or even an opportunity to infect you with malware.

Read the whole article here.

When it comes to cars, safety means more than strong brakes, good tires, a safety cage, and lots of airbags. It also means software that won’t betray you; software that doesn’t pose a risk to life and property; software that’s working for you, not for a hacker.

Please join me for this upcoming webinar, where I am presenting along with Arthur Hicken, the Code Curmudgeon and technology evangelist for Parasoft. It’s on Thursday, August 18. Arthur and I have been plotting and scheming, and there will be some excellent information presented. Don’t miss it! Click here to register.

Driving Risks out of Embedded Automotive Software

Automobiles are becoming the ultimate mobile computer. Popular models have as many as 100 Electronic Control Units (ECUs), while high-end models push 200 ECUs. Those processors run hundreds of millions of lines of code written by the OEMs’ teams and external contractors—often for black-box assemblies. Modern cars also have increasingly sophisticated high-bandwidth internal networks and unprecedented external connectivity. Considering that no code is 100% error-free, these factors point to an unprecedented need to manage the risks of failure—including protecting life and property, avoiding costly recalls, and reducing the risk of ruinous lawsuits.

This one-hour practical webinar will review the business risks of defective embedded software in today’s connected cars. Led by Arthur Hicken, Parasoft’s automotive technology expert and evangelist, and Alan Zeichick, an independent technology analyst and founding editor of Software Development Times, the webinar will also cover five practical techniques for driving the risks out of embedded automotive software, including:

• Policy enforcement
• Reducing defects during coding
• Effective techniques for acceptance testing
• Using metrics analytics to measure risk
• Converting SDLC analytics into specific tasks to focus on the riskiest software

You can apply the proven techniques you’ll learn to code written and tested by your teams, as well as code supplied by your vendors and contractors.

News websites are an irresistible target for hackers because they are so popular. Why? Because they are trusted brands, and because — by their very nature — they contain many external links and use lots of outside content providers and analytics/tracking services. It doesn’t take much to corrupt one of those websites, or one of the myriad partner sites they rely upon, like ad networks, content feeds or behavioral trackers.

Malware injected on any well-trafficked news website could potentially infect tremendous numbers of people with ransomware, keyloggers, zombie code, or worse. Alarmist? Perhaps, but with good reason. News websites, which include both traditional media (like the Chicago Tribune and the BBC) and new-media platforms (such as BuzzFeed or Business Insider), attract a tremendous number of visitors, especially when there is a breaking news story of intense interest, like a natural disaster, political event or celebrity shenanigans.

Publishing companies are not technology companies. They are content providers who do their honest best to offer a secure experience, but can’t be responsible for external links. In fact, many say so right in their terms of use statements or privacy policies. What they can be responsible for are the third-party networks that provide content or services to their platforms, but in reality, the search for profits and/or a competitive advantage outweighs any other considerations. And of course, their platforms can be hacked as well.

According to a story in the BBC, Russian news sites, including the Moscow Echo radio station, the opposition newspaper New Times, and the Kommersant business newspaper, were hacked back in March 2012. In November 2014, the Syrian Electronic Army claimed to have hacked news sites including Canada’s CBC News.

Also in November 2014, one of the U.K.’s most popular sites, The Telegraph, tweeted, “A part of our website run by a third-party was compromised earlier today. We’ve removed the component. No Telegraph user data was affected.”

A year earlier, in January 2013, the New York Times self-reported, “Hackers in China Attacked The Times for Last 4 Months.” The story said that, “The attackers first installed malware — malicious software — that enabled them to gain entry to any computer on The Times’s network. The malware was identified by computer security experts as a specific strain associated with computer attacks originating in China.”

Regional news outlets can also be targets. On September 18, 2015, CBS Local in San Francisco reported that hackers took control of the five news websites of Palo Alto-based Embarcadero Media Group on Thursday night. The websites of Palo Alto Weekly, The Almanac, Mountain View Voice and Pleasanton Weekly were all reportedly attacked at about 10:30 p.m. Thursday.

I talked recently with Jason Steer of Menlo Security, a security company based in Menlo Park, Calif. He put it very clearly:

You are taking active code from a source you didn’t request, and you are running it inside your PC and your network, without any inspection whatsoever. Because of the high volumes of users, it only takes a small number of successes to make the hacking worthwhile. Antivirus can’t really help here, either consumer or enterprise. Antivirus may not detect ransomware being installed from a site you visit, or malicious activity from a bad advertisement or bad JavaScript.

Jason pointed me to his blog post from November 12, 2015, “Top 50 UK Website Security Report.” His post says, in part,

Across the top 50 sites, a number of important findings were made:

• On average, when visiting a top 50 U.K. website, your browser will execute 19 scripts

• The top UK website executed 125 unique scripts when requested

His blog continued with a particularly scary observation:

15 of the top 50 sites (i.e. 30 percent) were running vulnerable versions of web-server code at time of testing. Microsoft IIS version 7.5 was the most prominent vulnerable version reported with known software vulnerabilities going back more than five years.

How many scripts are running on your browser from how many external servers? According to Jason’s research, if you visit the BBC website, your browser might be running 92 scripts pushed to it from 11 different servers. The Daily Mail? 127 scripts from 35 servers. The Financial Times? 199 scripts from 31 servers. The New Yorker? 113 scripts from 33 sites. The Economist? 185 scripts from 46 sites. The New York Times? 76 scripts from 29 servers. And Forbes, 100 scripts from 49 servers.
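Want a rough count for a site you frequent? Here is a quick sketch of the idea. A static fetch like this undercounts badly, because scripts loaded at runtime pull in still more scripts, which is why audits like Jason’s drive a real browser; treat the result as a lower bound.

```python
# Rough sketch: count external <script src="..."> tags on a page and the
# distinct hosts serving them. A static fetch misses scripts that other
# scripts load at runtime, so real audits execute the page in a browser.
import re
from urllib.parse import urlparse
from urllib.request import urlopen

def tally_scripts(url: str) -> tuple[int, int]:
    html = urlopen(url).read().decode("utf-8", errors="replace")
    srcs = re.findall(r'<script[^>]+src=["\']([^"\']+)', html, re.IGNORECASE)
    hosts = {urlparse(src).hostname for src in srcs} - {None}
    return len(srcs), len(hosts)

scripts, servers = tally_scripts("https://www.bbc.com")
print(f"{scripts} external script tags from {servers} distinct hosts")
```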

Most of those servers and scripts are benign. But if they’re not, they’re not. The headline on Ars Technica on March 15, 2016, says it all: “Big-name sites hit by rash of malicious ads spreading crypto ransomware.” The story begins,

Mainstream websites, including those published by The New York Times, the BBC, MSN, and AOL, are falling victim to a new rash of malicious ads that attempt to surreptitiously install crypto ransomware and other malware on the computers of unsuspecting visitors, security firms warned.

The tainted ads may have exposed tens of thousands of people over the past 24 hours alone, according to a blog post published Monday by Trend Micro. The new campaign started last week when “Angler,” a toolkit that sells exploits for Adobe Flash, Microsoft Silverlight, and other widely used Internet software, started pushing laced banner ads through a compromised ad network.

 According to a separate blog post from Trustwave’s SpiderLabs group, one JSON-based file being served in the ads has more than 12,000 lines of heavily obfuscated code. When researchers deciphered the code, they discovered it enumerated a long list of security products and tools it avoided in an attempt to remain undetected.

Let me share my favorite news website hack story, because of its sheer audacity. According to Jason’s blog, ad delivery systems can be turned into malware delivery systems, and nobody might ever know:

If we take one such example in March 2016, one attacker waited patiently for the domain ‘brentsmedia[.]com’ to expire, registered in Utah, USA, a known ad network content provider. The domain in question had expired ownership for 66 days, was then taken over by an attacker in Russia (Pavel G Astahov) and 1 day later was serving up malicious ads to visitors of sites including the BBC, AOL & New York Times. No-one told any of these popular websites until the malicious ads had already appeared.

Jason recently published an article on this subject in SC Magazine, “Brexit leads to pageviews — pageviews lead to malware.” Check it out. And be aware that when you visit a trusted news website, you have no idea what code is being executed on your computer, what that code does, and who wrote that code.

Thank you, NetGear, for taking care of your valued customers. On July 1, the company announced that it would be shutting down the proprietary back-end cloud services required for its VueZone cameras to work – turning them into expensive camera-shaped paperweights. See “Throwing our IoT investment in the trash thanks to NetGear.”

The next day, I was contacted by the company’s global communications manager. He defended the policy, arguing that NetGear was not only giving 18 months’ notice of the shutdown, but they are “doing our best to help VueZone customers migrate to the Arlo platform by offering significant discounts, exclusive to our VueZone customers.” See “A response from NetGear regarding the VueZone IoT trashcan story.”

And now, the company has done a 180° turn. NetGear will not turn off the service, at least not at this time. Well done. Here’s the email that arrived a few minutes ago. The good news for VueZone customers is that they can keep using their cameras. On the other hand, let’s not party too heartily. The danger posed by proprietary cloud services driving IoT devices remains. When the vendor decides to turn the service off, all you have is recycle-ware and, potentially, one heck of a migration issue.

Subject: VueZone Services to Continue Beyond January 1, 2018

Dear valued VueZone customer,

On July 1, 2016, NETGEAR announced the planned discontinuation of services for the VueZone video monitoring product line, which was scheduled to begin as of January 1, 2018.

Since the announcement, we have received overwhelming feedback from our VueZone customers expressing a desire for continued services and support for the VueZone camera system. We have heard your passionate response and have decided to extend service for the VueZone product line. Although NETGEAR no longer manufactures or sells VueZone hardware, NETGEAR will continue to support existing VueZone customers beyond January 1, 2018.

We truly appreciate the loyalty of our customers and we will continue our commitment of delivering the highest quality and most innovative solutions for consumers and businesses. Thank you for choosing us.

Best regards,

The NETGEAR VueZone Team

July 19, 2016

Was it a software failure? The recent fatal crash of a Tesla in Autopilot mode is worrisome, but it’s too soon to blame Tesla’s software. According to Tesla on June 30, here’s what happened:

What we know is that the vehicle was on a divided highway with Autopilot engaged when a tractor trailer drove across the highway perpendicular to the Model S. Neither Autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky, so the brake was not applied. The high ride height of the trailer combined with its positioning across the road and the extremely rare circumstances of the impact caused the Model S to pass under the trailer, with the bottom of the trailer impacting the windshield of the Model S. Had the Model S impacted the front or rear of the trailer, even at high speed, its advanced crash safety system would likely have prevented serious injury as it has in numerous other similar incidents.

We shall have to await the results of the NHTSA investigation to learn more. Even if it does prove to be a software failure, at least the software can be improved to try to avoid similar incidents in the future.

By coincidence, a story that I wrote about the security issues related to advanced vehicles, “Connected and Autonomous Cars Are Wonderful and a Safety-Critical Security Nightmare,” was published today, July 1, on CIO Story. The piece was written several weeks ago, and said,

The good news is that government and industry standards are attempting to address the security issues with connected cars. The bad news is that those standards don’t address security directly; rather, they merely prescribe good software-development practices that should result in secure code. That’s not enough, because those processes don’t address security-related flaws in the design of vehicle systems. Worse, those standards are a hodge-podge of different regulations in different countries, and they don’t address the complexity of autonomous, self-driving vehicles.

Today, commercially available autonomous vehicles can parallel park by themselves. Tomorrow, they may be able to drive completely hands-free on highways, or drive themselves to parking lots without any human on board. The security issues, the hackability issues, are incredibly frightening. Meanwhile, companies as diverse as BMW, General Motors, Google, Mercedes, Tesla and Uber are investing billions of dollars into autonomous, self-driving car technologies.

Please read the whole story here.

Summary of Recall Trends. Source: SRR.

The costs of an automobile recall can be immense for an OEM automobile or light truck manufacturer – and potentially ruinous for a member of the industry’s supply chain. Think about the ongoing Takata airbag scandal, which Bloomberg says could cost US$24 billion. General Motors’ ignition-switch recall may have reached $4.1 billion. In 2001, the exploding Firestone tires on the Ford Explorer cost $3 billion to recall. The list goes on and on. That’s all about hardware problems. What about bits and bytes?

Until now, it’s been difficult to quantify the impact of software defects on the automotive industry. Thanks to a new analysis from SRR called “Industry Insights for the Road Ahead: Automotive Warranty and Recall Report 2016,” we have a good handle on this elusive area.

According to the report, there were 63 software-related vehicle recalls from late 2012 to June 2015. That’s based on data from the United States’ National Highway Traffic Safety Administration (NHTSA). The SRR report derived that count of 63 software-related recalls using this methodology (p. 22),

To classify a recall as a software component recall, SRR searched the “Defect Summary” and “Corrective Action” fields of NHTSA’s Recall flat file for the term “software.” SRR’s inquiry captured descriptions of software-related defects identified specifically as such, as well as defects that were to be fixed by updating or changing a vehicle’s software.
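That methodology is simple enough to replicate at home against NHTSA’s downloadable recall flat file. Here is a sketch; the file name, delimiter and column positions are my assumptions, so check the record-layout document that accompanies the download.

```python
# Sketch of SRR's stated methodology: scan NHTSA's recall flat file for
# the word "software" in the defect-summary and corrective-action fields.
# Column indexes below are assumptions; verify them against NHTSA's
# published record layout before trusting the counts.
CAMPAIGN_NO, DEFECT_SUMMARY, CORRECTIVE_ACTION = 0, 20, 21

software_recalls = set()
with open("FLAT_RCL.txt", encoding="latin-1") as flatfile:
    for line in flatfile:
        fields = line.rstrip("\n").split("\t")
        if len(fields) <= CORRECTIVE_ACTION:
            continue  # malformed or short record
        text = (fields[DEFECT_SUMMARY] + " " + fields[CORRECTIVE_ACTION]).lower()
        if "software" in text:
            software_recalls.add(fields[CAMPAIGN_NO])

print(len(software_recalls), "software-related recall campaigns found")
```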

That led to this analysis (p. 22),

Since the end of 2012, there has been a marked increase in recall activity due to software issues. For the primary light vehicle makes and models we studied, 32 unique software-related recalls affected about 3.6 million vehicles from 2005–2012. However, in a much shorter time period from the end of 2012 to June 2015, there were 63 software-related recalls affecting 6.4 million more vehicles.

And continuing (p. 23),

From less than 5 percent of all recalls in 2011, software-related recalls have risen to almost 15 percent in 2015. Overall, the amount of unique campaigns involving software has climbed dramatically, with nine times as many in 2015 than in 2011…

No surprises there given the dramatically increased complexity of today’s connected vehicles, with sophisticated internal networks, dozens of ECUs (electronic control units with microprocessors, memory, software and network connections), and extensive remote connectivity.

These software defects are not occurring only in systems where one expects to find sophisticated microprocessors and software, such as engine management controls and Internet-connected entertainment platforms. Microprocessors are being used to analyze everything from the driver’s position and state of alertness, to road hazards, to lane changes — and to offer advanced features such as automatic parallel parking.

Where in the car are the software-related vehicle recalls? Since 2006, says the report, recalls have been prompted by defects in areas as diverse as locks/latches, power train, fuel system, vehicle speed control, air bags, electrical systems, engine and engine cooling, exterior lighting, steering, hybrid propulsion – and even the parking brake system.

That’s not all — because not every software defect results in a public and costly recall. That’s the last resort, from the OEM’s perspective. Whenever possible, the defects are either ignored by the vehicle manufacturer, or quietly addressed by a software update next time the car visits a dealer. (If the car doesn’t visit an official dealer for service, the owner may never know that a software update is available.) Says the report (p. 25),

In addition, SRR noted an increase in software-related Technical Service Bulletins (TSB), which identify issues with specific components, yet stop short of a recall. TSBs are issued when manufacturers provide recommended procedures to dealerships’ service departments for fixing problematic components.

A major role of the NHTSA is to record and analyze vehicle failures, and attempt to determine the cause. Not all failures result in a recall, or even in a TSB. However, they are tracked by the agency via Early Warning Reporting (EWR). Explains the report (p. 26),

In 2015, three new software-related categories reported data for the first time:

• Automatic Braking, listed on 21 EWR reports, resulting in 26 injuries and 1 fatality

• Electronic Stability, listed on 6 EWR reports, resulting in 7 injuries and 1 fatality

• Forward Collision Avoidance, listed in 1 EWR report, resulting in 1 injury and no fatalities

The bottom line here, beyond protecting life and property, is the financial bottom line for the automakers and their supply chain. As the report says in its conclusion (p. 33),

Suppliers that help OEMs get the newest software-aided components to market should be prepared for the increased financial exposure they could face if these parts fail.

About the Report

“Industry Insights for the Road Ahead: Automotive Warranty and Recall Report 2016” was published by SRR: Stout Risius Ross, which offers global financial advisory services. SRR has been in the automotive industry for 25 years, and says, “SRR professionals have more automotive experience in these service areas than any other advisory firm, period.”

This brilliant report — which is free to download in its entirety — was written by Neil Steinkamp, a Managing Director at SRR. He has extensive experience in providing a broad range of business and financial advice to corporate executives, risk managers, in-house counsel and trial lawyers. Mr. Steinkamp has provided consulting services and has been engaged as an expert in numerous matters involving automotive warranty and recall costs. His practice also includes consulting services for automotive OEMs, suppliers and their advisors regarding valuation, transactions and disputes.

Connected cars are vulnerable because of the radios that link them to the outside world. Consider cellular data links, such as the one in the Mercedes M-Class SUV that my family owned for a while: they allow remote access to more than diagnostics. Using the system, called mbrace, an authorized M-B support center can unlock the doors via that link. Owners can use the M-B mobile app to

Start your vehicle from anywhere, and heat or cool the interior of your vehicle to the last set temperature. You can also remotely lock or unlock, sound the horn or find your vehicle via the Mobile App or website.

Nearly all high-end car manufacturers offer remote access systems, also referred to as telematics. Other popular systems with door-unlock capability include General Motors’ OnStar, BMW’s Assist, Hyundai’s BlueLink and Infiniti’s Connection. Each represents a potential attack vector, as do after-market add-ons.

In a blog post on Car & Driver, Bob Sorokanich writes,

It’s been a busy summer for automotive hackers, and the latest development is bad news for luxury-car owners: Good-guy digital security researcher Samy Kamkar just revealed that BMW, Mercedes-Benz, Chrysler, and aftermarket Viper connected-car systems are all theoretically vulnerable to the same hack that allowed him to remotely control functions in OnStar-equipped vehicles.

Consider yourself warned. The Federal Bureau of Investigation released a public service announcement, “Motor Vehicles Increasingly Vulnerable to Remote Exploits.” The PSA says:

Vulnerabilities may exist within a vehicle’s wireless communication functions, within a mobile device – such as a cellular phone or tablet connected to the vehicle via USB, Bluetooth, or Wi-Fi – or within a third-party device connected through a vehicle diagnostic port. In these cases, it may be possible for an attacker to remotely exploit these vulnerabilities and gain access to the vehicle’s controller network or to data stored on the vehicle. Although vulnerabilities may not always result in an attacker being able to access all parts of the system, the safety risk to consumers could increase significantly if the access involves the ability to manipulate critical vehicle control systems.

The PSA continues,

Over the past year, researchers identified a number of vulnerabilities in the radio module of a MY2014 passenger vehicle and reported its detailed findings in a whitepaper published in August 2015. The vehicle studied was unaltered and purchased directly from a dealer. In this study, which was conducted over a period of several months, researchers developed exploits targeting the active cellular wireless and optionally user-enabled Wi-Fi hotspot communication functions. Attacks on the vehicle that were conducted over Wi-Fi were limited to a distance of less than about 100 feet from the vehicle. However, an attacker making a cellular connection to the vehicle’s cellular carrier – from anywhere on the carrier’s nationwide network – could communicate with and perform exploits on the vehicle via an Internet Protocol (IP) address.

In the aforementioned case, the radio module contained multiple wireless communication and entertainment functions and was connected to two controller area network (CAN) buses in the vehicle. Following are some of the vehicle function manipulations that researchers were able to accomplish.

In a target vehicle, at low speeds (5-10 mph):

  • Engine shutdown
  • Disable brakes
  • Steering

In a target vehicle, at any speed:

  • Door locks
  • Turn signal
  • Tachometer
  • Radio, HVAC, GPS

(The whitepaper referenced above is “Remote Exploitation of an Unaltered Passenger Vehicle,” by IOActive Security Services.)
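Incidentally, the CAN traffic at the heart of these exploits is easy to observe, read-only, on your own car with an inexpensive adapter on the OBD-II port. Here is a minimal sketch using the python-can library; it assumes a Linux SocketCAN interface named can0, and the frame IDs it prints mean different things on every make and model.

```python
# Minimal read-only sketch: watch raw CAN frames via python-can over a
# Linux SocketCAN interface (e.g., a USB-to-CAN adapter on the OBD-II
# port). It only prints frames; decoding them is per-make guesswork.
import can

bus = can.interface.Bus(channel="can0", bustype="socketcan")
try:
    while True:
        msg = bus.recv(timeout=5.0)
        if msg is None:
            break  # bus is quiet, or we picked the wrong channel
        print(f"ID=0x{msg.arbitration_id:03X}  data={msg.data.hex(' ')}")
finally:
    bus.shutdown()
```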

How can you protect yourself — and your vehicle? The FBI offers four excellent suggestions – read the PSA for more details on them:

  1. Ensure your vehicle software is up to date
  2. Be careful when making any modifications to vehicle software
  3. Maintain awareness and exercise discretion when connecting third-party devices to your vehicle
  4. Be aware of who has physical access to your vehicle

To those I would add: Choose security over convenience, and if possible, disable the remote-access capabilities of your vehicle. You may not be able to prevent every possible attack — some of those systems can’t be turned off, and if a hacker is able to get physical access to the vehicle’s OBD-II diagnostics port or other electronics, all bets are off. You can live without being able to use a mobile app to start your car, or without the manufacturer performing remote engine diagnostics. Heck, our ’91 Honda doesn’t even have a clicker; we have to open the door with a key. Be safe!

There are several types of dangers presented by a lost Bring Your Own Device (BYOD) smartphone or tablet. Many IT professionals and security specialists think only about some of them. They are all problematic. Does your company have policies about lost personal devices?

  • If you have those policies, what are they?
  • Does the employee know about those policies?
  • Does the employee know how to notify the correct people in case his or her device is lost?

Let’s say you have policies. Let’s say the employee calls the security office and says, “My personal phone is gone. I use it to access company resources, and I don’t think it was securely locked.” What happens?

Does the company have all the information necessary to take all the proper actions, including the telephone number, carrier, manufacturer and model, serial number, and other characteristics? Who gets notified? How long do you wait before taking an irreversible action? Can the security desk respond in an effective way? Can it respond instantly, including nights, weekends and holidays?

If you don’t have those policies — with people and knowledge to make them effective — you’ve got a serious problem.

Read my latest story in NetworkWorld, “Dude, where’s my phone? BYOD means enterprise security exposure.” It discusses the four biggest obvious threats from a lost BYOD device, and what you can do to address those threats.

I once designed and coded a campus parking pass management system for an East Coast university. If you had a faculty, staff, student or visitor parking sticker for the campus, it was processed using my green-screen application, which went online in 1983. The university used the mainframe program with minimal changes for about a decade, until a new client/server parking system was implemented.

Today, that sticker application exists on a nine-track tape reel hanging on my wall — and probably nowhere else.

Decommissioning the parking-sticker app was relatively straightforward for the data center team, though of course I hope that it was emotionally traumatic. Data about the stickers was stored in several tables. One contained information about humans: name, address, phone number, relationship with the university. The other was about vehicles: make, year and color; license plate number; date of sticker assignment; sticker type and serial number; expiration date; date of cancellation. We handled some interesting exceptions. For example, some faculty were issued “floating” stickers that weren’t assigned to specific vehicles. That sort of thing.
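For nostalgia’s sake, the design would look something like this in modern terms. This is a sketch reconstructed from memory in SQLite; the column names are illustrative, not the original 1983 definitions.

```python
# Sketch of the two-table design, reconstructed from memory in SQLite.
# Names and types are illustrative; the 1983 original was mainframe tables.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE person (
    person_id   INTEGER PRIMARY KEY,
    name        TEXT NOT NULL,
    address     TEXT,
    phone       TEXT,
    affiliation TEXT CHECK (affiliation IN
                 ('faculty', 'staff', 'student', 'visitor'))
);
CREATE TABLE vehicle (
    sticker_serial TEXT PRIMARY KEY,
    person_id      INTEGER REFERENCES person(person_id),
    plate          TEXT,    -- NULL for faculty 'floating' stickers
    make           TEXT,
    year           INTEGER,
    color          TEXT,
    sticker_type   TEXT,
    assigned       DATE,
    expires        DATE,
    cancelled      DATE
);
""")
print("schema created")
```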

Fortunately, historical info in the sticker system was not needed past a year or two. While important for campus security (“Who does that car parked in a no-parking zone belong to?”), it wasn’t data that needed to be retained for any length of time for legal or compliance reasons. Shutting off the legacy application was as simple as, well, shutting off the legacy application.

It’s not always that simple. Other software on campus in the 1980s — and much of the software that your team writes — needed to be retained, sometimes according to campus internal regulations, other times due to government or industry rules. How long do you need to keep payroll data? Transaction data for sales from your website? Bids for products and services, and the documentation that explains how the bids were solicited?

Any time you get into regulated industries, you have this issue. Financial services, aerospace, safety-oriented embedded systems, insurance, human resources, or medical: Information must be retained for compliance, and must be produced on demand by auditors, queries from litigators during eDiscovery, regulatory investigations, even court subpoenas.

That can make it hard — very hard — to turn off an application you no longer need. Even if the software is recording no new transactions, retention policies might necessitate keeping it alive for years. Maybe even decades, depending on the type of data being retained, and on the regulatory requirements of your industry. Think about drug records from pharmaceutical companies, or component sourcing for automobile manufacturers.

Data and application retention has many enterprise implications. For example: Before you deploy an application and its data onto a cloud provider or SaaS platform, you should ask whether that application and its data will need to be retained. If so, will the provider still be around, and still provide access, for the full retention period? If not, make sure there’s a plan to bring the systems in-house (even if they are no longer needed), archive the data outside the application in a way that conforms with regulatory requirements for retention and access, and then decommission the application.

A word of caution: I don’t know how long nine-track tapes last, especially if they are not well shielded. My 20-year-old tape was not protected against heat or magnetism — hey, it was thrown into a box. There’s a better-than-good chance it is totally unreadable. Don’t rely upon unproven technology or suppliers for your own data archive, especially if the data must be retained for compliance purposes.

Security standards for cellular communications are pretty much invisible. The security standards, created by groups like the 3GPP, play out behind the scenes, embedded into broader cellular protocols like 3G, 4G, LTE and the oft-discussed forthcoming 5G. Due to the nature of the security and other cellular specs, they evolve very slowly and deliberately; it’s a snail-like pace compared to, say, WiFi or Bluetooth.

Why the glacial pace? One reason is that cellular standards of all sorts must be carefully designed and tested in order to work in a transparent global marketplace. There are also a huge number of participants in the value chain, from handset makers to handset firmware makers to radio manufacturers to tower equipment vendors to carriers… the list goes on and on.

Another reason cellular software, including security protocols and algorithms, evolves slowly is that it’s all bound up in large platform versions. The current cellular security system is unlikely to change significantly before the roll-out of 5G… and even then, older devices will continue to use the security protocols embedded in their platform, unless a bug forces a software patch. Those security protocols cover everything from authentication of the cellular device to the tower, to the authentication of the tower to the device, to encryption of voice and data traffic.

We can only hope that end users will move swiftly to 5G. Why? Because 4G and older platforms aren’t incredibly secure. Sure, they are good enough today, but that’s only “good enough.” The downside is that everything is pretty fuzzy when it comes to what 5G will actually offer… or even how many 5G standards there will be.

Read more in my story in Pipeline Magazine, “Wireless Security Standards.”