The word went out Wednesday, March 22, spreading from techie to techie. “Better change your iCloud password, and change it fast.” What’s going on? According to ZDNet, “Hackers are demanding Apple pay a ransom in bitcoin or they’ll blow the lid off millions of iCloud account credentials.”

A hacker group claims to have access to 250 million iCloud and other Apple accounts. They are threatening to reset all the passwords on those accounts – and then remotely wipe those phones using lost-phone capabilities — unless Apple pays up with untraceable bitcoins or Apple gift cards. The ransom is a laughably small $75,000.

What’s Happening at Apple?

According to various sources, at least some of the stolen account credentials appear to be legitimate. Whether that means all 250 million accounts are in peril, of course, is unknowable.

Apple seems to have acknowledged that there is a genuine problem. The company told CNET, “The alleged list of email addresses and passwords appears to have been obtained from previously compromised third-party services.” We obviously don’t know what Apple is going to do, or what Apple can do. It hasn’t put out a general call, at least as of Thursday, for users to change their passwords, which would seem to be prudent. It also hasn’t encouraged users to enable two-factor authentication, which should make it much more difficult for hackers to reset iCloud passwords without physical access to a user’s iPhone, iPad, or Mac.

Unless the hackers alter the demands, Apple has a two-week window to respond. From its end, it could temporarily disable password reset capabilities for iCloud accounts, or at least make the process difficult to automate, access programmatically, or even access more than once from a given IP address. So, it’s not “game over” for iCloud users and iPhone owners by any means.

It could be that the hackers are asking for such a low ransom because they know their attack is unlikely to succeed. They’re possibly hoping that Apple will figure it’s easier to pay a small amount than to take any real action. My guess is they are wrong, and Apple will lock them out before the April 7 deadline.

Where Did This Come From?


Too many criminal networks have access to too much data. Where are they getting it? Everywhere. The problem multiplies because people reuse usernames and passwords. For nearly every site nowadays, the username is the email address. That means if you know my email address (and it’s not hard to find), you know my username for Facebook, for iCloud, for Dropbox, for Salesforce.com, for Windows Live, for Yelp. Using the email address for the login is superficially good for consumers: They are unlikely to forget their login.

The bad news is that account access now depends on a single piece of hidden information: the password. And people reuse passwords and choose weak passwords. So if someone steals a database from a major retailer with a million account usernames (which are email addresses) and passwords, many of those will also be Facebook logins. And Twitter. And iCloud.
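Defenders can turn that same logic around: given a leaked dump of email/password pairs, check which local accounts reused a breached password. The accounts and dump below are invented, and the fast hash is purely illustrative (real password storage should use bcrypt, scrypt, or Argon2):

```python
import hashlib

def hash_pw(password: str, salt: bytes) -> str:
    # Illustration only -- production systems need a slow, salted password hash.
    return hashlib.sha256(salt + password.encode()).hexdigest()

# Hypothetical local accounts: email -> (salt, stored hash)
local = {
    "alice@example.com": (b"a9", hash_pw("hunter2", b"a9")),
    "bob@example.com":   (b"4f", hash_pw("correct horse", b"4f")),
}

# Hypothetical pairs leaked from some third-party breach
leaked = [("alice@example.com", "hunter2"), ("bob@example.com", "letmein")]

def at_risk(local, leaked):
    """Return local accounts whose password matches a leaked third-party credential."""
    hits = []
    for email, password in leaked:
        if email in local:
            salt, stored = local[email]
            if hash_pw(password, salt) == stored:
                hits.append(email)
    return hits

# at_risk(local, leaked) -> ['alice@example.com']
```

Alice reused her breached password, so her account is exposed; Bob used a different one, so the stolen pair is worthless against this site.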

That’s how hackers can quietly accumulate what they claim are 250 million iCloud passwords. They probably have 250 million email address / password pairs amalgamated from various sources: A million from this retailer, ten million from that social network. It adds up. How many of those will work in iTunes? Unknown. Not 250 million. But maybe 10 million? Or 20 million? Either way, it’s a nightmare for customers and a disaster for Apple, if those accounts are locked, or if phones are bricked.

What’s the Answer?

As long as we use passwords, and users have the ability to reuse passwords, this problem will exist. Hackers are excellent at stealing data. Companies are bad at detecting breaches, and even worse about disclosing them unless legally obligated to do so.

Can Apple prevent those 250 million accounts from being seized? Probably. Will problems like this happen again and again and again? For sure, until we move away from any possibility of shared credentials. And that’s not happening any time soon.

We received this realistic-looking email today claiming to be from a payment company called FrontStream. If you click the links, it tries to get you to activate an account and provide bank details. However… We never requested an account from this company. Therefore, we label it phishing — and an attempt to defraud.

If you receive a message like this, delete it. Don’t click any of the links, and don’t reply to it either. You’ve been warned.

From: billing [email address at frontstream.com]
Sent: Wed, Mar 22, 2017 10:34 am
Subject: New Account Ready for Activation

Dear [redacted],

Your account is now available at our FrontStream Invoicing Website for you to view your existing outstanding invoices and make payment. You can directly activate your account here:

[link redacted]

Or you can go to the FrontStream Invoicing website [link redacted], select ‘REGISTER’ option and go through the activation process. Below is your detailed account information from our record. They’re required in order to complete your account activation.

Customer Number: [redacted]

Phone Number: [redacted]

Activation Code: [redacted]

Sincerely,

Accounts Receivable

UPDATE MARCH 22

I tweeted about this blog post, and @FrontStream replied:

@zeichick Sorry for the confusion! The email was sent in error from our customer invoicing system. We’ll be following up with more details.

Given that we aren’t a FrontStream customer, this is peculiar. Will update again if there are more details.

UPDATE MARCH 27

Nothing more from FrontStream.

Let’s talk about the practical application of artificial intelligence to cybersecurity. Or rather, let’s read about it. My friend Sean Martin has written a three-part series on the topic for ITSP Magazine, exploring AI, machine learning, and other related topics. I provided review and commentary for the series.

The first part, “It’s a Marketing Mess! Artificial Intelligence vs Machine Learning,” explores probably the biggest challenge about AI: Hyperbole. That, and inconsistency. Every lab, every vendor, every conference, and every analyst defines even the most basic terminology differently — when they bother to define it at all. Vagueness begets vagueness, and so the terms “artificial intelligence” and “machine learning” are thrown around with wanton abandon. As Sean writes,

The latest marketing discovery of AI as a cybersecurity product term only exacerbates an already complex landscape of jingoisms with like muddled understanding. A raft of these associated terms, such as big data, smart data, heuristics (which can be a branch of AI), behavioral analytics, statistics, data science, machine learning and deep learning. Few experts agree on exactly what those terms mean, so how can consumers of the solutions that sport these fancy features properly understand what those things are?

“Machine Learning: The More Intelligent Artificial Intelligence,” the second installment, picks up by digging into pattern recognition. Specifically, the story is about when AI software can discern patterns based on its own examination of raw data. Sean also digs into deep learning:

Deep Learning (also known as deep structured learning, hierarchical learning or deep machine learning) is a branch of machine learning based on a set of algorithms that attempt to model high level abstractions in data by using a deep graph with multiple processing layers, composed of multiple linear and non-linear transformations.

In the conclusion, “The Actual Benefits of Artificial Intelligence and Machine Learning,” Sean brings it home to your business. How can you tell if an AI solution is real? How can you tell what it really does? That means going beyond the marketing material’s attempts to obfuscate:

The bottom line on AI-based technologies in the security world: Whether it’s called machine learning or some flavor of analytics, look beyond the terminology – and the oooh, ahhh hype of artificial intelligence – to see what the technology does. As the saying goes, pay for the steak – not the artificial intelligent marketing sizzle.

It was a pleasure working on this series with Sean, and we hope you enjoy reading it.

Was the Russian government behind the 2014 theft of data on about 500 million Yahoo subscribers? The U.S. Justice Department thinks so: It accused two Russian intelligence officers of directing the hacking efforts, and also named two hackers as being part of the conspiracy to steal the data.

According to Mary B. McCord, Acting Assistant Attorney General,

The defendants include two officers of the Russian Federal Security Service (FSB), an intelligence and law enforcement agency of the Russian Federation and two criminal hackers with whom they conspired to accomplish these intrusions. Dmitry Dokuchaev and Igor Sushchin, both FSB officers, protected, directed, facilitated and paid criminal hackers to collect information through computer intrusions in the United States and elsewhere.

Ms. McCord added that the scheme targeted Yahoo accounts of Russian and U.S. government officials, including security staff, diplomats and military personnel. “They also targeted Russian journalists; numerous employees of other providers whose networks the conspirators sought to exploit; and employees of financial services and other commercial entities,” she said.

From a technological perspective, the hackers first broke into computers of American companies providing email and internet-related services. From there, they harvested information, including information about individual users and the private contents of their accounts. The hackers, explained Ms. McCord, were hired to gather information for the FSB officers — classic espionage. However, they quietly went farther, stealing financial information, such as gift card and credit card numbers, from users’ email accounts — and also using millions of stolen Yahoo accounts to set up an email spam scheme.

Was this state-sponsored cybertheft? Probably, but it’s not certain. What we have are serious allegations, but we don’t know if the FSB agents were working on orders from the Kremlin, or if they were running their own operation for their own private benefit. It’s simply too soon to tell.

The Turkish/Dutch Hacking Connection

Similarly, it’s too soon to know who is behind this week’s use of hijacked Twitter accounts to fling some nasty rhetoric against the Netherlands. This comes on the heels of the Dutch government’s efforts to block Turkish government ministers from traveling to the Netherlands to encourage Turkish ex-pats to vote in a Turkish referendum. At the same time, the Netherlands itself was holding an important election, with one of the leading candidates offering an isolationist, anti-Muslim platform. According to Reuters,

A diplomatic spat between Turkey, the Netherlands and Germany spread online on Wednesday when a large number of Twitter accounts were hijacked and replaced with anti-Nazi messages in Turkish.

The attacks, using the hashtags #Nazialmanya (NaziGermany) or #Nazihollanda (NaziHolland), took over accounts of high-profile CEOs, publishers, government agencies, politicians and also some ordinary Twitter users.

The account hijackings took place as the Dutch began voting on Wednesday in a parliamentary election that is seen as a test of anti-establishment and anti-immigrant sentiment.

The hackers did a good job getting access to Twitter accounts. Reuters continued,

The hacked accounts featured tweets with Nazi symbols, a variety of hashtags and the phrase “See you on April 16”, the date of a planned referendum in Turkey on extending Erdogan’s presidential powers.

Among them were the accounts of the European Parliament and the personal profile of French conservative politician Alain Juppe.

They also included the UK Department of Health and BBC North America, along with the profile of Marcelo Claure, the chief executive of U.S. telecoms operator Sprint Corp.

Other accounts included publishing sites for Die Welt, Forbes and Reuters Japan and several non-profit agencies including Amnesty International and UNICEF USA, as well as Duke University in the United States.

How did the hackers get access to Twitter? In part by breaking into a Dutch audience analytics company, which would have had access to some or all of those accounts. As Reuters reported,

At least some of the hijacked tweets appear to have been delivered via Twitter Counter, a Netherlands-based Twitter audience analytics company. Twitter Counter Chief Executive Omer Ginor acknowledged via email that the service had been hacked.

Meanwhile in a separate action, Reuters said,

Last Saturday, denial of service attacks staged by a Turkish hacking group hit the websites of Rotterdam airport and anti-Islam firebrand Geert Wilders, whose Freedom Party is vying to form the biggest party in the Dutch parliament.

So – as with the Yahoo hack in 2014 – are these the work of state-sponsored hackers? Or of hackers who believe in a cause, and who are working on their own to support that cause? It’s too soon to tell, and in this case, we may never know; it’s unclear if any organizations as powerful as the U.S. Justice Department and FBI are investigating. What we do know, though, is that nearly everything is vulnerable. A reputable analytics service can be hacked in order to provide a backdoor means to take over Twitter accounts. Internet access companies can be subverted and used for espionage or for staging man-in-the-middle attacks.

How many more of these attacks will be unveiled in the weeks, months and years ahead? One safe prediction: There will be many more attacks — whether state sponsors are behind them or not.

To absolutely nobody’s surprise, the U.S. Central Intelligence Agency can spy on mobile phones, both Android and iPhone, and can also monitor the microphones on smart home devices like televisions.

This week’s disclosure of CIA programs by WikiLeaks has been billed as the largest-ever publication of confidential documents from the American spy agency. The document dump will appear in pieces; the first installment has 8,761 documents and files from the CIA’s Center for Cyber Intelligence, says WikiLeaks.

According to WikiLeaks, the CIA malware and hacking tools are built by EDG (Engineering Development Group), a software development group within the CIA’s Directorate for Digital Innovation. WikiLeaks says the EDG is responsible for the development, testing and operational support of all backdoors, exploits, malicious payloads, trojans, viruses and any other kind of malware used by the CIA.

Smart TV = Spy TV?

Another part of the covert program, code-named “Weeping Angel,” turns smart TVs into secret microphones. After infestation, Weeping Angel places the target TV in a ‘Fake-Off’ mode. The owner falsely believes the TV is off when it is on. In ‘Fake-Off’ mode the TV operates as a bug, recording conversations in the room and sending them over the Internet to a covert CIA server.

The New York Times reports the CIA has refused to explicitly confirm the authenticity of the documents. However, the government strongly implied their authenticity when the agency put out a statement to defend its work and chastise WikiLeaks, saying the disclosures “equip our adversaries with tools and information to do us harm.”

The WikiLeaks data dump talked about efforts to infect and control non-mobile systems. That includes desktops, notebooks and servers running Windows, Linux, Mac OS and Unix. The malware is distributed in many ways, including website viruses, software on CDs or DVDs, and portable USB storage devices.

Going mobile with spyware

What about the iPhone? Again, according to WikiLeaks, the CIA produces malware to infest, control and exfiltrate data from Apple products running iOS, such as iPhones and iPads. Similarly, other programs target Android. Says WikiLeaks, “These techniques permit the CIA to bypass the encryption of WhatsApp, Signal, Telegram, Wiebo, Confide and Cloackman by hacking the smart phones that they run on and collecting audio and message traffic before encryption is applied.”

The tech industry is scrambling to patch the vulnerabilities revealed by the WikiLeaks data dump. For example, Apple said,

Apple is deeply committed to safeguarding our customers’ privacy and security. The technology built into today’s iPhone represents the best data security available to consumers, and we’re constantly working to keep it that way. Our products and software are designed to quickly get security updates into the hands of our customers, with nearly 80 percent of users running the latest version of our operating system. While our initial analysis indicates that many of the issues leaked today were already patched in the latest iOS, we will continue work to rapidly address any identified vulnerabilities. We always urge customers to download the latest iOS to make sure they have the most recent security updates.

Enterprises should expect patches to come from every major hardware and software vendor. IT must be vigilant about applying those security updates. In addition, everyone should attempt to identify unpatched devices on the network, and deny those devices access to critical resources until they are properly patched and tested. We don’t want mobile devices to become spy devices.

Cybercriminals want your credentials and your employees’ credentials. When hackers succeed in stealing that information, it can be bad for individuals – and even worse for corporations and other organizations. This scourge is bad, and it will remain bad.

Credentials come in two types. There are personal credentials, such as the login and password for an email account, bank and retirement accounts, credit-card numbers, airline membership programs, online shopping sites, and social media. When hackers manage to obtain those credentials, such as through phishing, they can steal money, order goods and services, and engage in identity theft. This can be extremely costly and inconvenient for victims, but the damage is generally contained to that one unfortunate individual.

Corporate digital credentials, on the other hand, are the keys to an organization’s network. Consider a manager, executive or information-technology worker within a typical medium-size or larger-size business. Somewhere in the organization is a database that describes that employee – and describes which digital assets that employee is authorized to use. If cybercriminals manage to steal the employee’s corporate digital credentials, the criminals can then access those same assets, without setting off any alarm bells. Why? Because they have valid credentials.
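In miniature, that database is just a credential-to-assets map, and that's exactly why valid stolen credentials set off no alarm bells: the authorization check below cannot distinguish the real employee from an impostor presenting the same credential. All names and assets are hypothetical:

```python
# Hypothetical corporate directory: credential -> authorized assets.
DIRECTORY = {
    "jsmith":   {"file-server", "email-archive"},
    "it-admin": {"file-server", "email-archive", "hr-database", "firewall-config"},
}

def authorize(user: str, asset: str) -> bool:
    # A valid credential grants access with no further identity check --
    # the lookup has no way to know who is actually typing.
    return asset in DIRECTORY.get(user, set())
```

A thief holding the "it-admin" credential passes every check the real administrator would pass, which is why credential theft is so much more dangerous than any single compromised account.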

What might those assets be? Depending on the employee:

  • It might include file servers that contain intellectual property, such as pricing sheets, product blueprints, or patent applications.
  • It might include email archives that describe business plans. Or accounting servers that contain important financial information that could help competitors or allow for “insider trading.”
  • It might be human resources data that can help the hackers attack other individuals. Or engage in identity theft or even blackmail.

What if the stolen credentials are for individuals in the IT or information security department? The hackers can learn a great deal about the company’s technology infrastructure, perhaps including passwords to make changes to configurations, open up backdoors, or even disable security systems.

Read my whole story about this — including what to do about it — in Telecom Times, “The CyberSecurity Scourge of Credentials Theft.”

Nothing you share on the Internet is guaranteed to be private to you and your intended recipient(s). Not on Twitter, not on Facebook, not on Google+, not using Slack or HipChat or WhatsApp, not in closed social-media groups, not via password-protected blogs, not via text message, not via email.

Yes, there are “privacy settings” on FB and other social media tools, but those are imperfect at best. You should not trust Facebook to keep your secrets.

If you put posts or photos onto the Internet, they are not yours to control any more. Accept that they can be appropriated and redistributed by others. How? Many ways, including:

  • Your emails and texts can be forwarded
  • Your Facebook and Twitter posts and direct-messages can be screen-captured
  • Your photos can be downloaded and then uploaded by someone else

Once the genie is out of the bottle, it’s gone forever. Poof! So if there’s something you definitely don’t want to become public, don’t put it on the Internet.

(I wrote this after seeing a dear friend angered that photos of her little children, which she shared with her friends on Facebook, had been re-posted by a troll.)

I can’t trust the Internet of Things. Neither can you. There are too many players and too many suppliers of the technology that can introduce vulnerabilities in our homes, our networks – or elsewhere. It’s dangerous, my friends. Quite dangerous. In fact, it can be thought of as a sort of Fifth Column, but not in the way many of us expected.

Merriam-Webster defines a Fifth Column as “a group of secret sympathizers or supporters of an enemy that engage in espionage or sabotage within defense lines or national borders.” In today’s politics, there’s a lot of talk about secret sympathizers sneaking across national borders, such as terrorists posing as students or refugees. Such “bad actors” are generally part of an organization, recruited by state actors, and embedded into enemy countries for long-term penetration of society.

There have been many real-life Fifth Column activists in recent global history. Think about Kim Philby and Anthony Blunt, part of the “Cambridge Five” who worked for spy agencies in the United Kingdom in the post-World War II era, but who themselves turned out to be double agents working for the Soviet Union. Fiction, too, is replete with Fifth Column spies. They’re everywhere in James Bond movies and John le Carré novels.

Am I too paranoid?

Let’s bring our paranoia (or at least, my paranoia) to the Internet of Things, and start by way of the late 1990s and early 2000s. I remember quite clearly the introduction of telco and network routers by Huawei, and concerns that the Chinese government may have embedded software into those routers in order to surreptitiously listen to telecom networks and network traffic, to steal intellectual property, or to do other mischief like disable networks in the event of a conflict. (This was before the term “cyberwarfare” was widely used.)

Recall that Huawei was founded by a former engineer in the Chinese People’s Liberation Army. The company was heavily supported by Beijing. Also there were lawsuits alleging that Huawei infringed on Cisco’s intellectual property – i.e., stole its source code. Thus, there was lots of concern surrounding the company and its products.

Read my full story about this, published in Pipeline Magazine, “The Surprising and Dangerous Fifth Column Hiding Within the Internet of Things.”

Modern medical devices increasingly leverage microprocessors and embedded software, as well as sophisticated communications connections, for life-saving functionality. Insulin pumps, for example, rely on a battery, pump mechanism, microprocessor, sensors, and embedded software. Pacemakers and cardiac monitors also contain batteries, sensors, and software. Many devices also have WiFi- or Bluetooth-based communications capabilities. Even hospital rooms with intravenous drug delivery systems are controlled by embedded microprocessors and software, which are frequently connected to the institution’s network. But these innovations also mean that a software defect can cause a critical failure or security vulnerability.

In 2007, former vice president Dick Cheney famously had the wireless capabilities of his pacemaker disabled. Why? He was concerned “about reports that attackers could hack the devices and kill their owners.” Since then, the vulnerabilities caused by the larger attack surface area on modern medical devices have gone from hypothetical to demonstrable, in part due to the complexity of the software, and in part due to the failure to properly harden the code.

In October 2011, The Register reported that “a security researcher has devised an attack that hijacks nearby insulin pumps, enabling him to surreptitiously deliver fatal doses to diabetic patients who rely on them.” The attack worked because the pump contained a short-range radio that allows patients and doctors to adjust its functions. The researcher showed that, by using a special antenna and custom-written software, he could locate and seize control of any such device within 300 feet.

A report published by Independent Security Evaluators (ISE) shows the danger. The report examined 12 hospitals, and the organization concluded “that remote adversaries can easily deploy attacks that manipulate records or devices in order to fully compromise patient health” (p. 25). Later in the report, the researchers show how they demonstrated the ability to manipulate the flow of medicine or blood samples within the hospital, resulting in the delivery of improper medication types and dosages (p. 37) – and to do all this from the hospital lobby. They were also able to hack into and remotely control patient monitors and breathing tubes – and trigger alarms that might cause doctors or nurses to administer unneeded medications.

Read more in my blog post for Parasoft, “What’s the Cure for Software Defects and Vulnerabilities in Medical Devices?”

Think about alarm systems in cars. By default, many automobiles don’t come with an alarm system installed from the factory. That’s for three main reasons: it lowers the base sticker price on the car; it creates a lucrative up-sell opportunity; and it allows for variations on alarms to suit local regulations.

My old 2004 BMW 3-series convertible (E46), for example, came pre-wired for an alarm. All the dealer had to do, upon request (and payment of $$$) was install a couple of sensors and activate the alarm in the car’s firmware. Voilà! Instant protection. Third-party auto supply houses and garages, too, were delighted that the car didn’t include the alarm, since that made it easier to sell one to worried customers, along with a great deal on a color-changing stereo head unit, megawatt amplifier and earth-shattering sub-woofer.

Let’s move from cars to cybersecurity. The dangers are real, and as an industry, it’s in our best interest to solve this problem, not by sticking our heads in the sand, not by selling aftermarket products, but by a two-fold approach: 1) encouraging companies to make more secure products; and 2) encouraging customers to upgrade or replace vulnerable products, even if there’s not a dollar, pound, euro, yen or renminbi of profit in it for us. Contrast that with the arguments we too often hear:

  • If you’re a security hardware, software, or service company, the problem of malicious bits traveling over broadband, wireless and the Internet backbone is not your problem. Rather, it’s an opportunity to sell products. Hurray for one-time sales, double hurray for recurring subscriptions.
  • If you’re a carrier, the argument goes, all you care about is the packets, and the reliability of your network. The service level agreement provided to consumers and enterprises talks about guaranteed bandwidth, up-time availability, and time to recover from failures; it certainly doesn’t promise that devices connected to your service will be free of malware or safe from hacking. Let customers buy firewalls and endpoint protection – and hey, if we offer that as a service, that’s a money-making opportunity.

Read more about this subject in my latest article for Pipeline Magazine, “An Advocate for Safer Things.”

Cloud-based firewalls come in two delicious flavors: vanilla and strawberry. Both flavors are software that checks incoming and outgoing packets to filter against access policies and block malicious traffic. Yet they are also quite different. Think of them as two essential network security tools: Both are designed to protect you, your network, and your real and virtual assets, but in different contexts.

Disclosure: I made up the terms “vanilla firewall” and “strawberry firewall” for this discussion. Hopefully they help us differentiate between the two models as we dig deeper.

Let’s start with a quick overview:

  • Vanilla firewalls are usually stand-alone products or services designed to protect an enterprise network and its users — like an on-premises firewall appliance, except that it’s in the cloud. Service providers call this a software-as-a-service (SaaS) firewall, security as a service (SECaaS), or even firewall as a service (FaaS).
  • Strawberry firewalls are cloud-based services that are designed to run in a virtual data center using your own servers in a platform-as-a-service (PaaS) or infrastructure-as-a-service (IaaS) model. In these cases, the firewall application runs on the virtual servers and protects traffic going to, from, and between applications in the cloud. The industry sometimes calls these next-generation firewalls, though the term is inconsistently applied and sometimes refers to any advanced firewall system running on-prem or in the cloud.
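Whatever the flavor, the core job is the same: match each packet against an ordered access policy and default-deny anything unmatched. A toy first-match rule check in Python (the rules and their format are invented for illustration, not any vendor's syntax):

```python
from ipaddress import ip_address, ip_network

# Hypothetical rule set: first match wins; anything unmatched is denied.
RULES = [
    ("allow", ip_network("10.0.0.0/8"), 443),   # internal clients may use HTTPS
    ("deny",  ip_network("0.0.0.0/0"), 23),     # block telnet from everywhere
    ("allow", ip_network("0.0.0.0/0"), 443),    # public HTTPS is permitted
]

def check(src_ip: str, dst_port: int) -> bool:
    """Return True if the packet is allowed under the first matching rule."""
    for action, net, port in RULES:
        if ip_address(src_ip) in net and dst_port == port:
            return action == "allow"
    return False  # default deny

# check("10.1.2.3", 443) -> True; check("8.8.8.8", 23) -> False
```

A vanilla firewall evaluates rules like these for an enterprise's users; a strawberry firewall evaluates them between your own cloud-hosted applications. The logic is shared; the deployment context differs.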

So why do we need these new firewalls? Why not stick a 1U firewall appliance into a rack, connect it up to the router, and call it good? Easy: Because the definition of the network perimeter has changed. Firewalls used to be like guards at the entrance to a secured facility. Only authorized people could enter that facility, and packages were searched as they entered and left the building. Moreover, your users worked inside the facility, and the data center and its servers were also inside. Thus, securing the perimeter was fairly easy. Everything inside was secure, everything outside was not secure, and the only way in and out was through the guard station.

Intrigued? Hungry? Both? Please read the rest of my story, called “Understanding cloud-based firewalls,” published on Enterprise.nxt.

What’s the biggest tool in the security industry’s toolkit? The patent application. Security thrives on innovation, and always has, because throughout recorded history, the bad guys have always had the good guys at a disadvantage. The only way to respond is to fight back smarter.

Sadly, fighting back smarter isn’t always what happens. At least, it wasn’t evident when looking over the vendor offerings at RSA Conference 2017, held mid-February in San Francisco. Some of the products and services wouldn’t have seemed out of place a decade ago. Oh, look, a firewall! Oh look, a hardware device that sits on the network and scans for intrusions! Oh, look, a service that trains employees not to click on phishing spam!

Fortunately, some companies and big thinkers are thinking in new ways about the types of attacks… and about the best ways to protect against them, to detect when those protections fail, to respond when attacks are detected, and to share information about those attacks.

The battle, after all, is asymmetric. Think about your typical target: it’s a business, a government organization, a military, or a person. It is known. It can be identified. It can’t hide, or at least it can’t hide for long. Its defenses, or at least its outer perimeter, can be seen and tested. Its security measures can be neutralized by someone who spills its secrets, whether through spying or social engineering.

Knowing the enemy

By contrast, while attackers know who the target is, the target doesn’t know who the attackers are. There may be many attackers, and they can shift targets on short notice, going after the biggest prize or the weakest one. They can swamp the target with attacks. If one attacker is neutralized, the other attackers are still a threat. And in fact, even the attackers don’t know who the other attackers are. Their lack of coordination is a strength.

In cyberwarfare, as in real warfare, a single successful incursion can have incredible consequences. With one solid foothold in an endpoint – whether that endpoint is on a phone or a laptop, on a server or in the cloud – the bad guys are in a good position to gain more intelligence, seek out credentials, undermine defenses, and take over new footholds.

A Failed Approach

The posture of the cybersecurity industry – and of info sec professionals and the CISO – must shift. For years, the focus was almost exclusively on prevention. Install a firewall, and keep that firewall up to date! Install antivirus software, and keep adding signatures! Install intrusion detection systems, and then upgrade them to intrusion prevention systems!

That approach failed, just as an approach to medicine that focuses exclusively on wellness, healthy eating, and taking vitamins will fail. The truth is that breaches happen, in part because organizations don’t do a perfect job with their prevention methods, and in part because bad guys find new weaknesses that nobody considered, from zero-day software vulnerabilities to clever new spearphishing techniques. A breach is inevitable, the industry has admitted. Now, the challenge is to detect that breach quickly, move swiftly to isolate the damage, and then identify root causes so that future attacks using that same vulnerability won’t succeed.

Meanwhile, threat intelligence tools allow businesses to share information, carefully and confidentially. When one company is attacked, others can learn how to guard against that same attack vector. Hey, criminals share information about vulnerabilities using the dark web – so let’s learn from their example.

At RSA Conference 2017, most of the messages were same-old, same-old. Not all, fortunately. I was delighted to see a renewed emphasis at some companies, and in some keynotes, on innovation: not merely to keep up with the competition or to realize a short-term advantage over cybercriminals, but rather continuous, long-term investment focused on the constantly changing nature of cybersecurity. Security thrives on innovation. Because the bad guys innovate too.

Everyone has received those crude emails claiming to be from your bank’s “Secuirty Team,” telling you to click a link to “reset you account password.” It’s pretty easy to spot those emails, with all the misspellings, the terrible formatting, and the bizarre reply-to email addresses at domains halfway around the world. Other emails of that sort ask you to review an unclothed photo of an A-list celebrity, or to open an attached document that tells you what you’ve won.

We can laugh. However, many people fall for those phishing scams — and willingly surrender their bank account numbers and passwords, or install malware, such as ransomware.
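One of the telltale signs mentioned above, a reply-to address at a domain that has nothing to do with the purported sender, is simple to check mechanically. A minimal sketch (real mail filters do far more, and the addresses below are hypothetical):

```python
# Minimal sketch: flag an email whose Reply-To domain differs from the
# From domain -- one of the crude-phishing tells described above.
# (Addresses are hypothetical; real filters check much more than this.)

def domain_of(address: str) -> str:
    """Extract the domain part of an email address, lowercased."""
    return address.rsplit("@", 1)[-1].lower()

def reply_to_mismatch(from_addr: str, reply_to_addr: str) -> bool:
    """True when the Reply-To domain doesn't match the From domain."""
    return domain_of(from_addr) != domain_of(reply_to_addr)
```

A message "from" alerts@mybank.com with replies routed to a look-alike domain would trip this check immediately.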

Less obvious, and more effective, are attacks carefully crafted to appeal to a high-value individual, such as a corporate executive or systems administrator. Despite their usual technological sophistication, even these targets can be fooled if the spearphishing email is good enough; spearphishing is the term for phishing emails designed specifically to entrap a certain person.

What’s the danger? Plenty. Spearphishing emails that pretend to be from the CEO can convince a corporate accounting manager to wire money to an overseas account. Called the “Wire Transfer Scam,” this scheme has been around for several years and still works, costing hundreds of millions of dollars, according to the FBI.

These types of scams can hurt individuals as well, getting access to their private financial information. In February 2017, about 7,700 employees of the Manatee School District in Florida had their taxpayer numbers stolen when a payroll employee responded to what she thought was a legitimate query from a district officer:

Forward all schools employees 2016 W2 forms to me attached and sent in PDF, I will like to have them as soon as possible for board review. Thanks.

It was a scam, and the scammers now have each employee’s W-2, a key tax document in the United States. The cost of the damage is unknown at this point, but it’s bad for the school district and for the employees as well. Sadly, this is not a new threat: the U.S. Internal Revenue Service warned about this exact phishing scam in March 2016.

The cybercriminals behind spearphishing are continuing to innovate. Fortunately, the industry is fighting back. Menlo Security, a leading security company, recently uncovered a sophisticated spearphishing attack at a well-known enterprise. While it’s understandable that the victim would decline to be identified, Menlo Security was able to provide some details on the scheme, which incorporated multiple scripts to truly customize the attack and trick the victim into disclosing key credentials.

  • The attackers performed various checks on the password entered by the victim and their IP address to determine whether it was a true compromise versus somebody who had figured out the attack.
  • The attackers supported various email providers. This was determined by the fact that they served custom pages based on the email domain. For example, a victim with a Gmail address would be served a page that looked like a Gmail login page.
  • The attackers exfiltrated the victim’s personally identifiable information (PII) to an attacker-controlled account.
  • The attacker relied heavily on several key scripts to execute the phishing campaign, and to obtain the victim’s IP address in addition to the victim’s country and city.
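To illustrate the domain-keyed customization described in the bullets above: serving a provider-specific lookalike page can be as simple as keying a template off the victim’s email domain. A hedged sketch, with hypothetical template names:

```python
# Illustrative sketch of the domain-keyed customization described above:
# the phishing kit picks a lookalike login template based on the victim's
# email domain. Template names here are hypothetical.
TEMPLATES = {
    "gmail.com": "gmail_login.html",
    "outlook.com": "outlook_login.html",
    "yahoo.com": "yahoo_login.html",
}

def pick_template(victim_email: str) -> str:
    """Return the provider-specific page, or a generic fallback."""
    domain = victim_email.rsplit("@", 1)[-1].lower()
    return TEMPLATES.get(domain, "generic_login.html")
```

A one-line lookup like this is all it takes to make a phishing page match whatever the victim expects to see, which is part of why these campaigns are so effective.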

Phishing and spearphishing have come a long way from those crude emails – which still work, believe it or not. We can’t count on spotting bad spelling and laughable return-address domains on emails to help identify the fraud, because the hackers will figure that out, and use native English speakers and spellcheck. The only solution will be, must be, a technological one.

What’s on the industry’s mind? Security and mobility are front-and-center of the cerebral cortex, as two of the year’s most important events prepare to kick off.

The Security Story

At the RSA Conference 2017 (February 13-17 in San Francisco), expect to see the best of the security industry, from solutions providers to technology firms to analysts. RSA can’t come too soon.

Ransomware, which exploded into the public’s mind last year with high-profile incidents, continues to run rampant. Attackers are turning to ever-bigger targets, with ever-bigger fallout. It’s not enough that hospitals are still being crippled (this was big in 2016): now hotel guests are locked out of their rooms, police departments are losing important crime evidence, and even CCTV footage has been locked away.

What makes ransomware work? Human weakness, for the most part. Many successful ransomware attacks begin with either generalized phishing or highly sophisticated and targeted spearphishing. Once the target user has clicked on a link in a malicious email or website, odds are good that his/her computer will be infected. From there, the malware can do more than encrypt data and request a payout. It can also spread to other computers on the network, install spyware, search for unpatched vulnerabilities and cause untold havoc.

Expect to hear a lot about increasingly sophisticated ransomware at RSA. We’ll see solutions to help, ranging from ever-more-sophisticated email scanners to endpoint security tools, isolation platforms, and tools to prevent malware from spreading beyond the initially affected machine.

Also expect to hear plenty about artificial intelligence as the key to preventing and detecting attacks that evade traditional technologies like signatures. AI has the ability to learn and respond in ways that go far beyond anything that humans can do – and when coupled with increasingly sophisticated threat intelligence systems, AI may be the future of computer security.

The Mobility Story

Halfway around the world, mobility is only part of the story at Mobile World Congress (February 27 – March 2 in Barcelona). There will be many sessions about 5G wireless, which can provision not only traditional mobile users, but also industrial controls and the Internet of Things. AT&T recently announced that it will launch 5G service (with peak speeds of 400Mbps or better) in two American cities, Austin and Indianapolis. While the standards are not yet complete, that’s not stopping carriers and the industry from moving ahead.

Also key to the success of all mobile platforms is cloud computing. Microsoft is moving more aggressively to the cloud, going beyond Azure and Office 365 with a new Windows 10 Cloud edition, a simplified experience designed to compete against Google’s Chrome platform.

The Internet of Things is also roaring to life, and it means a lot more than fitness bands and traffic sensors. IoT applications are showing up in everything from industrial controls to embedded medical devices to increasingly intelligent cars and trucks. What makes it work? Big batteries, reliable wireless, industry standards and strong security. Every type of security player is involved with IoT, from the cloud to wireless to endpoint protection. You’ll hear more about security at Mobile World Congress than in the past, because the threats are bigger than ever. And so are the solutions.

The best way to have a butt-kicking cloud-native application is to write one from scratch. Leverage the languages, APIs, and architecture of the chosen cloud platform before exploiting its databases, analytics engines, and storage. As I wrote for Ars Technica, this will allow you to take advantage of the wealth of resources offered by companies like Microsoft, with its Azure PaaS (Platform-as-a-Service) offering, or Google, with Google Cloud Platform’s Google App Engine PaaS service.

Sometimes, however, that’s not the job. Sometimes, you have to take a native application running on a server in your local data center or colocation facility and make it run in the cloud. That means virtual machines.

Before we get into the details, let’s define “native application.” For the purposes of this exercise, it’s an application written in a high-level programming language, like C/C++, C#, or Java. It’s an application running directly on a machine talking to an operating system, like Linux or Windows, that you want to run on a cloud platform like Microsoft Azure, Amazon Web Services (AWS), or Google Cloud Platform (GCP).

What we are not talking about is an application that has already been virtualized, such as one already running within VMware’s ESXi or Microsoft’s Hyper-V virtual machines. Sure, moving an ESXi or Hyper-V application running on-premises into the cloud is an important migration that may improve performance and add elasticity while switching capital expenses to operational expenses. Important, yes, but not a challenge. All the virtual machine giants and cloud hosts have copious documentation to help you make the switch… which amounts to basically copying the virtual machine file onto a cloud server and turning it on.

Many possible scenarios exist for moving a native datacenter application into the cloud. They boil down to two main types of migrations, and the right choice depends on your application and your goals:

The first is to create a virtual server within your chosen cloud provider, perhaps running Windows Server or running a flavor of Linux. Once that virtual server has been created, you migrate the application from your on-prem server to the new virtual server—exactly as you would if you were moving from one of your servers to a new server. The benefits: the application migration is straightforward, and you have 100-percent control of the server, the application, and security. The downside: the application doesn’t take advantage of cloud APIs or other special servers. It’s simply a migration that gets a server out of your data center. When you do this, you are leveraging a type of cloud called Infrastructure-as-a-Service (IaaS). You are essentially treating the cloud like a colocation facility.

The second is to see if your application code can be ported to run within the native execution engine provided by the cloud service. This is called Platform-as-a-Service (PaaS). The benefits are that you can leverage a wealth of APIs and other services offered by the cloud provider. The downsides are that you have to ensure that your code can work on the service (which may require recoding or even redesign) in order to use those APIs or even to run at all. You also don’t have full control over the execution environment, which means that security is managed by the cloud provider, not by you.

And of course, there’s the third option mentioned at the beginning: Writing an entirely new application native for the cloud provider’s PaaS. That’s still the best option, if you can do it. But our task today is to focus on migrating an existing application.

Let’s look into this more closely, via my recent article for Ars Technica, “Great app migration takes enterprise ‘on-prem’ applications to the Cloud.”

Las Vegas, January 2017 — “Alexa, secure the enterprise against ransomware.” Artificial intelligence is making tremendous headway, as seen at this year’s huge Consumer Electronics Show (CES). We’re seeing advances that leverage AI in everything from speech recognition to the Internet of Things (IoT) to robotics to home entertainment.

Not sure what type of music to play? Don’t worry, the AI engine in your cloud-based music service knows your taste better than you do. Want to read a book whilst driving to the office? Self-driving cars are here today in limited applications, and we’ll see a lot more of them in 2017.

Want to make brushing your teeth more fun, all while promoting good dental health? The Ara is the “1st toothbrush with Artificial Intelligence,” claims Kolibree, a French company that introduced the product at CES 2017.

Gadgets dominate CES. While crowds are lining up to see the AI-powered televisions, cookers and robots, the real power of AI is hidden, behind the scenes, and not part of the consumer context. Unknown to happy shoppers exploring AI-based barbecues, artificial intelligence is keeping our networks safe, detecting ransomware, helping improve the efficiency of advertising and marketing, streamlining business efficiencies, diagnosing telecommunication faults in undersea cables, detecting fraud in banking and stock-market transactions, and even helping doctors track the spread of infectious diseases.

Medical applications capture the popular imagination because they’re so fast and effective. The IBM Watson AI-enabled supercomputer, for example, can read 200 million pages of text in three seconds — and understand what it reads. An oncology application running on Watson analyzes a patient’s medical records, and then combines attributes from the patient’s file with clinical expertise, external research, and data. Based on that information, Watson for Oncology identifies potential treatment plans for a patient. This means doctors can consider the treatment options provided by Watson when making decisions for individual patients. Watson even offers supporting evidence in the form of administration information, as well as warnings and toxicities for each drug.

Doctor AI Can Cure Cybersecurity Ills

Moving beyond medicine, AI is proving essential for protecting computer networks, and their users, against intrusion. Traditional non-AI-based anti-virus and anti-malware products can’t protect against advanced threats, and that’s where companies like Cylance come in, using neural networks and other machine-learning techniques to study millions of malicious files, from executables to documents to PDFs to images. Using pattern recognition, Cylance has developed a machine-learning platform that can identify suspicious files that might be seen on websites or as email attachments, even if it has never seen that particular type of malware before. Nothing but AI can get the job done, not in an era when over a million new pieces of malware, ranging from phishing to ransomware, appear every single day.

Menlo Security is another network-protection company that leverages artificial intelligence. The Menlo Security Isolation Platform uses AI to prevent Internet-based malware from ever reaching an endpoint, such as a desktop or mobile device, because email and websites are accessed inside the cloud — not on the client’s computer. Only safe, malware-free rendering information is sent to the user’s endpoint, eliminating the possibility of malware reaching the user’s device. An artificial intelligence engine constantly scans the Internet session to provide protection against spear-phishing and other email attacks.

What if a machine does become compromised? It’s unlikely, but it can happen — and the price of a single breach can be incredible, especially if a hacker can take full control of the compromised device and use it to attack other assets within the enterprise, such as servers, routers or executives’ computers. If a breach does occur, that’s when the AI technology of Javelin Networks leaps into action, detecting that the attack is in progress, alerting security teams, and isolating the device from the network — while simultaneously tricking the attackers into believing they’ve succeeded in their attack, thereby keeping them “on the line” while real-time forensics gather information needed to identify the attacker and help shut them down for good.

Socializing Artificial Intelligence

There’s a lot more to enterprise-scale AI than medicine and computer security, of course. QSocialNow, an incredibly innovative company in Argentina, uses AI-based Big Data and predictive analytics to watch an organization’s social media accounts — and empower it not only to analyze trends, but to respond in mere seconds to an unexpected event, such as a rise in customer complaints, the emergence of a social protest, or even a physical disaster like an earthquake or tornado. Yes, humans can watch Twitter, Facebook and other networks, but they can’t act as fast as AI — or spot subtle trends that only advanced machine learning can observe through mathematics.

Robots can be powerful helpers for humanity, and AI-based toothbrushes can help us and our kids keep our teeth healthy. While the jury may be out on the implications of self-driving cars on our city streets, there’s no doubt that AI is keeping us — and our businesses — safe and secure. Let’s celebrate the consumer devices unveiled at CES, and the artificial intelligence working behind the scenes, far from the Las Vegas Strip, for our own benefit.

According to a recent study, 46% of the top one million websites are considered risky. Why? Because the homepage or background ad sites are running software with known vulnerabilities, because the site was categorized as known-bad for phishing or malware, or because the site had a security incident in the past year.

According to Menlo Security, in its “State of the Web 2016” report introduced mid-December 2016, “… nearly half (46%) of the top million websites are risky.” Indeed, Menlo says, “Primarily due to outdated software, cyber hackers now have their veritable pick of half the web to exploit. And exploitation is becoming more widespread and effective for three reasons: 1. Risky sites have never been easier to exploit; 2. Traditional security products fail to provide adequate protection; 3. Phishing attacks can now utilize legitimate sites.”

This has been a significant issue for years. However, the issue came to the forefront earlier this year when several well-known media sites were essentially hijacked by malicious ads. The New York Times, the BBC, MSN and AOL were hit by tainted advertising that installed ransomware, reports Ars Technica. From their March 15, 2016, article, “Big-name sites hit by rash of malicious ads spreading crypto ransomware”:

The new campaign started last week when ‘Angler,’ a toolkit that sells exploits for Adobe Flash, Microsoft Silverlight, and other widely used Internet software, started pushing laced banner ads through a compromised ad network.

The results of this attack, reported The Guardian at around the same time: 

When the infected adverts hit users, they redirect the page to servers hosting the malware, which includes the widely-used (amongst cybercriminals) Angler exploit kit. That kit then attempts to find any back door it can into the target’s computer, where it will install cryptolocker-style software, which encrypts the user’s hard drive and demands payment in bitcoin for the keys to unlock it.

If big-money trusted media sites can be hit, so can nearly any corporate site, e-commerce portal, or any website that uses third-party tools – or where there might be the possibility of unpatched servers and software. That means just about anyone. After all, not all organizations are diligent about monitoring for common vulnerabilities and exploits (CVE) on their on-premises servers. When companies run their websites on multi-tenant hosting facilities, they don’t even have access to the operating system directly, but rely upon the hosting company to install patches and fixes to Windows Server, Linux, Joomla, WordPress and so-on.

A single unpatched operating system, web server platform, database, or extension can introduce a vulnerability that can be scanned for. Once found, that CVE can be exploited by a talented hacker — or by a disgruntled teenager with a readily available web exploit kit.
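To make the scanning step concrete, here is a hedged sketch of one of the simplest techniques: comparing the version a web server advertises in its HTTP Server header against an assumed patched release. The minimum-version table below is hypothetical; real scanners match banners against actual CVE databases.

```python
# Hedged sketch: flag a server whose HTTP "Server" header advertises a
# version below an assumed patched release. The minimum-version table is
# hypothetical; real scanners consult CVE databases, not a hardcoded dict.
ASSUMED_FIXED = {"Apache": (2, 4, 25), "nginx": (1, 10, 3)}

def parse_server_header(value: str):
    """Parse e.g. 'Apache/2.2.15 (CentOS)' into ('Apache', (2, 2, 15))."""
    product, _, rest = value.partition("/")
    first = rest.split()[0] if rest.split() else ""
    version = tuple(int(p) for p in first.split(".") if p.isdigit())
    return product, version

def looks_outdated(server_header: str) -> bool:
    """True when the advertised version precedes the assumed fixed one."""
    product, version = parse_server_header(server_header)
    fixed = ASSUMED_FIXED.get(product)
    return bool(fixed and version and version < fixed)
```

This is exactly why security guides recommend suppressing detailed version banners: an attacker’s kit can triage half the web with nothing more sophisticated than this comparison.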

What can you do about it? Well, you can read my complete story on this subject, “Malware explosion: The web is risky,” published on ITProPortal.

“If you give your security team the work they hate to do day in and day out, you won’t be able to retain that team.” Eoin Keary should know. As founder, director, and CTO of edgescan, a fast-growing managed security service provider (MSSP), his company frees up enterprise security teams to focus on the more strategic, more interesting, more business-critical aspects of InfoSec, while his team handles what it knows and does best: the monotony of full-stack vulnerability management.

It’s a perfect match, Keary says. By using an MSSP, customers can focus on business-critical issues, save money, have better security—and not have to replace expensive, highly trained employees who quit after a few months out of boredom. “We are experts in vulnerability management, have built the technology and can deliver very efficiently.”

BCC Risk Advisory Ltd, edgescan’s parent company, based in Dublin, Ireland, was formed in 2011 with “me and a laptop,” explains Keary. He expects his company to end the 2016 fiscal year with seven-figure revenues and a growth trajectory of circa 400% compared to 2015. Its secret cyberweapon is a cloud-based SaaS called edgescan, which detects security weaknesses across the customer’s full stack of technology assets, from servers to networks, from websites to apps to mobile devices. It also provides continuous asset profiling and virtual patching, coupled with expert support.

edgescan assesses clients’ systems continuously. “We have a lot of intelligence and automation in the platform to determine what needs to be addressed,” explains Keary.

There’s a lot more to my interview with Eoin Keary — you can read the whole story, “Apparently We Love To Do What Companies Hate. Lucky You!” published in ITSP Magazine.

Once upon a time, goes the story, there was honor between thieves and victims. They held a member of your family for ransom; you paid the ransom; they left you alone. The local mob boss demanded protection money; if you didn’t pay, your business burned down, but if you did pay and didn’t rat him out to the police, he and his gang honestly tried to protect you. And hackers may have been operating outside legal boundaries, but for the most part, they were explorers and do-gooders intending to shine a bright light on the darkness of the Internet – not vandals, miscreants, hooligans and ne’er-do-wells.

That’s not true any more, perhaps. As I write in IT Pro Portal, “Faceless and faithless: A true depiction of today’s cyber-criminals?”:

Not that long ago, hackers emerged as modern-day Robin Hoods, digital heroes who relentlessly uncovered weaknesses in applications and networks to reveal the dangers of using technology carelessly. They are curious and provocative; they love to know how things work and how they can be improved.

Today, however, there is blight on their good name. Hackers have been maligned by those who do not have our best interests at heart, but are instead motivated by money – attackers who steal our assets and hold organisations such as banks and hospitals to ransom.

(My apologies for the British spelling – it’s a British website and they’re funny that way.)

It’s hard to lose faith in hackers, but perhaps we need to. Sure, not all are cybercriminals, but with the rise of ransomware, nasty action by state actors, and some pretty vicious attacks like the new single-pixel malvertising exploit written up yesterday by Dan Goodin in Ars Technica (which came out after I wrote this story), it’s hard to trust that most hackers secretly have our best interests at heart.

This reminds me of Ranscam. In a blog post, “When Paying Out Doesn’t Pay Off,” Talos reports that:

Ranscam is one of these new ransomware variants. It lacks complexity and also tries to use various scare tactics to entice the user to paying, one such method used by Ranscam is to inform the user they will delete their files during every unverified payment click, which turns out to be a lie. There is no longer honor amongst thieves. Similar to threats like AnonPop, Ranscam simply delete victims’ files, and provides yet another example of why threat actors cannot always be trusted to recover a victim’s files, even if the victim complies with the ransomware author’s demands. With some organizations likely choosing to pay the ransomware author following an infection,  Ranscam further justifies the importance of ensuring that you have a sound, offline backup strategy in place rather than a sound ransom payout strategy. Not only does having a good backup strategy in place help ensure that systems can be restored, it also ensures that attackers are no longer able to collect revenue that they can then reinvest into the future development of their criminal enterprise.

Scary — and it shows that often, crime does pay. No honor among thieves indeed.

From company-issued tablets to BYOD (bring your own device) smartphones, employees are making the case that mobile devices are essential for productivity, job satisfaction, and competitive advantage. Except in the most regulated industries, phones and tablets are part of the landscape, but their presence requires a strong security focus, especially in the era of non-stop malware, high-profile hacks, and new vulnerabilities found in popular mobile platforms. Here are four specific ways of examining this challenge that can help drive the choice of both policies and technologies for reducing mobile risk.

Protect the network: Letting any mobile device onto the business network is a risk, because if the device is compromised, the network (and all of its servers and other assets) may be compromised as well. Consider isolating internal WiFi links to secured network segments, and only permit external access via virtual private networks (VPNs). Install firewalls that guard the network by recognizing not only authorized devices, but also authorized users — and authorized applications. Be sure to keep careful tabs on which devices access the network, from where, and when.
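The "authorized devices, users, and applications" idea above can be expressed as a simple policy check. A deliberately minimal sketch, with hypothetical identifiers (real firewalls and NAC systems evaluate far richer context):

```python
# Deliberately minimal sketch of the firewall policy described above:
# admit traffic only when the device, the user, AND the application are
# all approved together. Identifiers here are hypothetical.
APPROVED = {
    ("tablet-042", "alice", "email"),
    ("phone-017", "bob", "crm"),
}

def admit(device_id: str, user: str, app: str) -> bool:
    """Allow the connection only for an approved (device, user, app) triple."""
    return (device_id, user, app) in APPROVED
```

Note that the check is on the triple, not on each element separately: a known device in the hands of the wrong user, or a known user running an unapproved app, is still denied.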

Protect the device: A mobile device can be compromised in many ways: it might be stolen, or the user might install malware that provides a gateway for a hacker. Each mobile device should be protected by strong passwords, not only for the device itself but also for critical business apps. Don’t allow corporate data to be stored on the device itself. Ensure that there are remote-wipe capabilities in case the device is lost. And consider installing a Mobile Device Management (MDM) platform that can give IT full control over the mobile device – or at least over those portions of an employee-owned device that might ever be used for business purposes.

Protect the data: To be productive with their mobile devices, employees want access to important corporate assets, such as email, internal websites, ERP or CRM applications, document repositories, and cloud-based services. Ensure that permissions are granted specifically for needed services, and that all access is encrypted and logged. As mentioned above, never let corporate data – including documents, emails, chats, internal social media, contacts, and passwords – be stored or cached on the mobile device. Never allow co-mingling of personal and business data, such as email accounts. Yes, it’s a nuisance, but make the employee log into the network, and authenticate into enterprise-authorized applications, each and every time. MDM platforms can help enforce those policies as well.

Protect the business: The policies regarding mobile access should be worked out along with corporate counsel, and communicated clearly to all employees before they are given access to applications and data. The goal isn’t to be heavy-handed, but rather, to gain their support. If employees understand the stakes, they become allies in helping protect business interests. Mobile access is risky for enterprises, and with today’s aggressive malware, the potential for harm has never been higher. It’s not too soon to take it seriously.

When an employee account is compromised by malware, the malware establishes a foothold on the user’s computer – and immediately tries to gain access to additional resources. It turns out that with the right data gathering tools, and with the right Big Data analytics and machine-learning methodologies, the anomalous network traffic caused by this activity can be detected – and thwarted.

That’s the role played by Blindspotter, a new anti-malware system that seems like a specialized version of a network intrusion detection/prevention system (IDPS). Blindspotter can help against many types of malware attacks. Those include one of the most insidious and successful hack vectors today: spear phishing. That’s when a high-level target in your company is singled out for attack by malicious emails or by compromised websites. All the victim has to do is open an email, or click on a link, and wham – malware is quietly installed and operating. (High-level targets include top executives, financial staff and IT administrators.)

My colleague Wayne Rash recently wrote about this network monitoring solution and its creator, Balabit, for eWeek in “Blindspotter Uses Machine Learning to Find Suspicious Network Activity”:

The idea behind Balabit’s Blindspotter and Shell Control Box is that if you gather enough data and subject it to analysis comparing activity that’s expected with actual activity on an active network, it’s possible to tell if someone is using a person’s credentials who shouldn’t be or whether a privileged user is abusing their access rights.

 The Balabit Shell Control Box is an appliance that monitors all network activity and records the activity of users, including all privileged users, right down to every keystroke and mouse movement. Because privileged users such as network administrators are a key target for breaches it can pay special attention to them.

The Blindspotter software sifts through the data collected by the Shell Control Box and looks for anything out of the ordinary. In addition to spotting things like a user coming into the network from a strange IP address or at an unusual time of day—something that other security software can do—Blindspotter is able to analyze what’s happening with each user, but is able to spot what is not happening, in other words deviations from normal behavior.

Read the whole story here. Thank you, Wayne, for telling us about Blindspotter.

Medical devices are incredibly vulnerable to hacking attacks. In some cases it’s because of software defects that allow for exploits, like buffer overflows, SQL injection or insecure direct object references. In other cases, you can blame misconfigurations, lack of encryption (or weak encryption), non-secure data/control networks, unfettered wireless access, and worse.
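
The SQL injection defect class mentioned above is easy to demonstrate. A minimal sketch, using an invented patients table, showing why string-built queries are exploitable while bound parameters are not:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (id INTEGER, name TEXT)")
conn.execute("INSERT INTO patients VALUES (1, 'Alice'), (2, 'Bob')")

user_input = "1 OR 1=1"  # attacker-controlled value

# VULNERABLE: string concatenation lets the input rewrite the query,
# so the WHERE clause now matches every row in the table
rows_unsafe = conn.execute(
    "SELECT name FROM patients WHERE id = " + user_input).fetchall()

# SAFE: a bound parameter is treated strictly as a value, never as SQL
rows_safe = conn.execute(
    "SELECT name FROM patients WHERE id = ?", (user_input,)).fetchall()

print(len(rows_unsafe), len(rows_safe))  # every patient vs. no match
```

The same pattern applies regardless of database engine: anywhere a device or backend concatenates untrusted input into a query, an attacker can pull out (or alter) records wholesale.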

Why would hackers go after medical devices? Lots of reasons. To name but one: it’s a potential terrorist threat against real human beings. Remember that Dick Cheney famously disabled the wireless capabilities of his implanted heart monitor for fear of an assassination attempt.

Certainly healthcare organizations are being targeted for everything from theft of medical records to ransomware. To quote the report “Hacking Healthcare IT in 2016,” from the Institute for Critical Infrastructure Technology (ICIT):

The Healthcare sector manages very sensitive and diverse data, which ranges from personal identifiable information (PII) to financial information. Data is increasingly stored digitally as electronic Protected Health Information (ePHI). Systems belonging to the Healthcare sector and the Federal Government have recently been targeted because they contain vast amounts of PII and financial data. Both sectors collect, store, and protect data concerning United States citizens and government employees. The government systems are considered more difficult to attack because the United States Government has been investing in cybersecurity for a (slightly) longer period. Healthcare systems attract more attackers because they contain a wider variety of information. An electronic health record (EHR) contains a patient’s personal identifiable information, their private health information, and their financial information.

EHR adoption has increased over the past few years under the Health Information Technology for Economic and Clinical Health (HITECH) Act. Stan Wisseman [from Hewlett-Packard] comments, “EHRs enable greater access to patient records and facilitate sharing of information among providers, payers and patients themselves. However, with extensive access, more centralized data storage, and confidential information sent over networks, there is an increased risk of privacy breach through data leakage, theft, loss, or cyber-attack. A cautious approach to IT integration is warranted to ensure that patients’ sensitive information is protected.”

Let’s talk devices. Those could be everything from emergency-room monitors to pacemakers to insulin pumps to X-ray machines whose radiation settings might be changed or overridden by malware. The ICIT report says,

Mobile devices introduce new threat vectors to the organization. Employees and patients expand the attack surface by connecting smartphones, tablets, and computers to the network. Healthcare organizations can address the pervasiveness of mobile devices through an Acceptable Use policy and a Bring-Your-Own-Device policy. Acceptable Use policies govern what data can be accessed on what devices. BYOD policies benefit healthcare organizations by decreasing the cost of infrastructure and by increasing employee productivity. Mobile devices can be corrupted, lost, or stolen. The BYOD policy should address how the information security team will mitigate the risk of compromised devices. One solution is to install software to remotely wipe devices upon command or if they do not reconnect to the network after a fixed period. Another solution is to have mobile devices connect from a secured virtual private network to a virtual environment. The virtual machine should have data loss prevention software that restricts whether data can be accessed or transferred out of the environment.

The Internet of Things – and the increased prevalence of medical devices connected to hospital or home networks – increases the risk. What can you do about it? The ICIT report says,

The best mitigation strategy to ensure trust in a network connected to the internet of things, and to mitigate future cyber events in general, begins with knowing what devices are connected to the network, why those devices are connected to the network, and how those devices are individually configured. Otherwise, attackers can conduct old and innovative attacks without the organization’s knowledge by compromising that one insecure system.

Given how common these devices are, keeping IT in the loop may seem impossible — but we must rise to the challenge, ICIT says:

If a cyber network is a castle, then every insecure device with a connection to the internet is a secret passage that the adversary can exploit to infiltrate the network. Security systems are reactive. They have to know about something before they can recognize it. Modern systems already have difficulty preventing intrusion by slight variations of known malware. Most commercial security solutions such as firewalls, IDS/IPS, and behavioral analytic systems function by monitoring where the attacker could attack the network and protecting those weakened points. The tools cannot protect systems that IT and the information security team are not aware exist.

The home environment – or any use outside the hospital setting – is another huge concern, says the report:

Remote monitoring devices could enable attackers to track the activity and health information of individuals over time. This possibility could impose a chilling effect on some patients. While the effect may lessen over time as remote monitoring technologies become normal, it could alter patient behavior enough to cause alarm and panic.

Pain medicine pumps and other devices that distribute controlled substances are likely high value targets to some attackers. If compromise of a system is as simple as downloading free malware to a USB and plugging the USB into the pump, then average drug addicts can exploit homecare and other vulnerable patients by fooling the monitors. One of the simpler mitigation strategies would be to combine remote monitoring technologies with sensors that aggregate activity data to match a profile of expected user activity.

A major responsibility falls on the device makers – and the programmers who create the embedded software. For the most part, they are simply not up to the challenge of designing secure devices, and may not have the policies, practices and tools in place to get cybersecurity right. Regrettably, the ICIT report doesn’t go into much detail about the embedded software, but it does state,

Unlike cell phones and other trendy technologies, embedded devices require years of research and development; sadly, cybersecurity is a new concept to many healthcare manufacturers and it may be years before the next generation of embedded devices incorporates security into its architecture. In other sectors, if a vulnerability is discovered, then developers rush to create and issue a patch. In the healthcare and embedded device environment, this approach is infeasible. Developers must anticipate what the cyber landscape will look like years in advance if they hope to preempt attacks on their devices. This model is unattainable.

In November 2015, Bloomberg Businessweek published a chilling story, “It’s Way Too Easy to Hack the Hospital.” The authors, Monte Reel and Jordan Robertson, wrote about one hacker, Billy Rios:

Shortly after flying home from the Mayo gig, Rios ordered his first device—a Hospira Symbiq infusion pump. He wasn’t targeting that particular manufacturer or model to investigate; he simply happened to find one posted on EBay for about $100. It was an odd feeling, putting it in his online shopping cart. Was buying one of these without some sort of license even legal? he wondered. Is it OK to crack this open?

Infusion pumps can be found in almost every hospital room, usually affixed to a metal stand next to the patient’s bed, automatically delivering intravenous drips, injectable drugs, or other fluids into a patient’s bloodstream. Hospira, a company that was bought by Pfizer this year, is a leading manufacturer of the devices, with several different models on the market. On the company’s website, an article explains that “smart pumps” are designed to improve patient safety by automating intravenous drug delivery, which it says accounts for 56 percent of all medication errors.

Rios connected his pump to a computer network, just as a hospital would, and discovered it was possible to remotely take over the machine and “press” the buttons on the device’s touchscreen, as if someone were standing right in front of it. He found that he could set the machine to dump an entire vial of medication into a patient. A doctor or nurse standing in front of the machine might be able to spot such a manipulation and stop the infusion before the entire vial empties, but a hospital staff member keeping an eye on the pump from a centralized monitoring station wouldn’t notice a thing, he says.

The 97-page ICIT report makes some recommendations, which I heartily agree with.

  • With each item connected to the internet of things there is a universe of vulnerabilities. Empirical evidence of aggressive penetration testing before and after a medical device is released to the public must be a manufacturer requirement.
  • Ongoing training must be paramount in any responsible healthcare organization. Adversarial initiatives typically start with targeting staff via spear phishing and watering hole attacks. The act of an ill-prepared executive clicking on a malicious link can trigger a hurricane of immediate and long-term negative impact on the organization and innocent individuals whose records were exfiltrated or manipulated by bad actors.
  • A cybersecurity-centric culture must demand safer devices from manufacturers, privacy adherence by the healthcare sector as a whole and legislation that expedites the path to a more secure and technologically scalable future by policy makers.

This whole thing is scary. The healthcare industry needs to step up its game on cybersecurity.

Be paranoid! When you visit a website for the first time, it can learn a lot about you. If you have cookies on your computer from one of the site’s partners, it can see what else you have been doing. And it can place cookies onto your computer so it can track your future activities.

Many (or most?) browsers have some variation of “private” browsing mode. In that mode, websites shouldn’t be able to read cookies stored on your computer, and they shouldn’t be able to place permanent cookies onto your computer. (They think they can place cookies, but those cookies are deleted at the end of the session.)
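
The session-versus-permanent distinction can be made concrete: a cookie carrying an Expires or Max-Age attribute is meant to survive a browser restart, while one without lives only for the session. Private browsing effectively treats every cookie like the latter. A sketch, with made-up cookie names, that classifies a Set-Cookie header accordingly:

```python
from http.cookies import SimpleCookie

def classify(set_cookie_header):
    """Split cookies into session-only vs. persistent names.

    A cookie with an Expires or Max-Age attribute persists across
    browser restarts; one without is discarded at end of session.
    """
    jar = SimpleCookie(set_cookie_header)
    session, persistent = [], []
    for name, morsel in jar.items():
        if morsel["expires"] or morsel["max-age"]:
            persistent.append(name)
        else:
            session.append(name)
    return session, persistent

# Made-up cookies: a login token vs. a year-long tracking cookie.
session, persistent = classify("sid=abc123; tracker=xyz; Max-Age=31536000")
print(session, persistent)  # ['sid'] ['tracker']
```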

Those settings aren’t good enough, because they are either all or nothing, and offer a poor balance between ease-of-use and security/privacy. The industry can and must do better. See why in my essay on NetworkWorld, “We need a better Private Browsing Mode.”

Nothing is scarier than getting together with a buyer (or a seller) to exchange dollars for a product advertised on Craigslist, eBay or another online service… and then being mugged or robbed. There are certainly plenty of news stories on this subject, but the danger continues. Here are some recent reports:

Don’t be a victim! The Phoenix Police Department has released an advisory. It’s good advice. Follow it.

Phoenix Police Media Advisory:

Internet Exchange Related Crimes

The Phoenix Police Department has recently experienced reported crimes specific to the usage of internet exchange sites that allow sellers to advertise items for sale and then interact with buyers. Subsequent to the online interaction, the two parties usually meet and exchange money for goods in a private party transaction at an agreed-upon location. However, due to circumstances surrounding the nature of these interactions, many criminals are using them for their own purposes.

 Specifically, the Phoenix Police Department has seen an increase in robberies of one of the involved parties by the other party during these exchanges. However, crimes as serious as homicide and kidnapping have been linked to these transactions. Although no strategy is 100% effective when trying to be safe, there are a number of steps one can take to ensure the transaction is done under the safest possible circumstances. The department is urging those involved in these private, internet-based sales transactions to consider the following while finalizing the deal and making safety their primary consideration:

  • If the deal seems too good to be true, it probably is.
  • The location of the exchange should be somewhere in public that has many people around like a mall, a well-traveled parking lot, or a public area. Do not agree to meet at someone’s house, a secluded place, a vacant house, or the like.
  • Try to schedule the transaction while it is still daylight, or at least in a place that is very well lit.
  • Ask why the person is selling the item and what type of payment they will accept. Be wary of agreeing to a cash payment and then travelling to the deal with a large sum of cash.
  • Bring a friend with you to the meet and let someone who isn’t going with you know where you are going and when you can be expected back.
  • Know the fair market value of the item you are purchasing.
  • Trust your instinct! If something seems suspicious, or you get a bad feeling, pass on the deal!

Other good advice that I’ve seen:

  • Never agree to meet at a second location if, when you show up at the agreed-upon place, you receive a text message redirecting you somewhere else.
  • Never give the other party your home address. If you must do so (because they are picking up a large item from your house), bring the item outside; don’t let them into your house. Inform your neighbors what’s going on.
  • Call your local police department and ask if they can recommend an Internet Purchase Exchange Location, also known as a Safe Exchange Zone.

Be careful out there, my friends.

Can someone steal the data off your old computer? The short answer is yes. A determined criminal can grab the bits, including documents, images, spreadsheets, and even passwords.

If you donate, sell or recycle a computer, whoever gets hold of it can recover the information on its hard drive or solid-state drive (SSD). The platform doesn’t matter: Whether it’s Windows or Linux or Mac OS, you can’t 100% eliminate sensitive data by, say, deleting user accounts or erasing files!

You can make the job harder by using the computer’s disk utilities to format the hard drive. Be aware, however, that formatting will thwart a casual thief, but not a determined hacker.
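
Between formatting and physical destruction sits software overwriting: replacing the data in place before disposal. A minimal sketch (the function name and pass count are my own; dedicated tools do this far more thoroughly). Note that on SSDs, wear leveling may leave copies of the old blocks elsewhere in flash, so this raises the bar rather than guaranteeing destruction:

```python
import os
import tempfile

def overwrite_file(path, passes=3):
    """Overwrite a file's bytes with random data, then delete it.

    Caveat: on SSDs, wear leveling can preserve stale copies of the
    data in flash blocks this write never touches.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))  # replace contents with noise
            f.flush()
            os.fsync(f.fileno())       # push this pass to disk
    os.remove(path)

# Demo on a scratch file standing in for a sensitive document.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"old tax returns and passwords")
overwrite_file(path)
print(os.path.exists(path))  # False
```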

The only truly safe way to destroy the data is to physically destroy the storage media. For years, businesses have physically removed and destroyed the hard drives in desktops, servers and laptops. It used to be easy to remove the hard drive: take out a couple of screws, pop open a cover, unplug a cable, and lift the drive right out.

Once the hard drive is identified and removed, you can smash it with a hammer, drill holes in it, even take it apart (which is fun, albeit time-consuming). Some businesses will put the hard drive into an industrial shredder, which is a scaled-up version of an office paper shredder. Some also use magnetism to attempt to destroy the data. I’m not sure how effective that is, however, and magnets won’t work at all on SSDs.

It’s much harder to remove the storage from today’s ultra-thin, tightly sealed notebooks, such as a Microsoft Surface or Apple MacBook Air, or even from tablets. What if you want to destroy the storage in order to prevent hackers from gaining access? It’s a real challenge.

If you have access to an industrial shredder, an option is to shred the entire computer. It seems wasteful, and I can imagine that it’s not good to shred lithium-ion batteries – many of which are not easily removable, again, as in the Microsoft Surface or Apple MacBook Air. You don’t want those chemicals lying around. Still, that works, and works well.

Note that an industrial shredder is kinda big and expensive – you can see some from SSL World. However, if you live in any sort of medium-sized or larger urban area, you can probably find a shredding service that will destroy the computer right in front of you. I’ve found one such service here in Phoenix, Assured Document Destruction Inc., that claims to be compliant with industry regulations for privacy, such as HIPAA and Sarbanes-Oxley.

Don’t want to shred the whole computer? Let’s say the computer uses a standard hard drive, usually in a 3.5-inch form factor (desktops and servers) or 2.5-inch form factor (notebooks). If you have a set of small screwdrivers, you should be able to dismantle the computer, remove the storage device, and kill it – such as by smashing it with a maul, drilling holes in it, or taking it completely apart. Note that driving over it in your car, while satisfying, may not cause significant damage.

What about solid state storage? The same applies to SSDs, but it’s a bit trickier. Sometimes the drive still looks like a standard 2.5-inch hard drive. But sometimes the “solid state drive” is merely a few exposed chips on the motherboard or a smaller circuit board. You’ve got to smash that sucker. Remove it from the computer. Hulk Smash! Break up the circuit board, pulverize the chips. Only then will it be dead dead dead. (Though one could argue that government agencies like the NSA could still put Humpty Dumpty back together again.)

In short: Even if the computer itself seems totally worthless, its storage can be removed, connected to a working computer, and accessed by a skilled techie. If you want to ensure that your data remains private, you must destroy it.

Web filtering. The phrase connotes keeping employees from spending too much time monitoring Beanie Baby auctions on eBay, and stopping school children from encountering (accidentally or deliberately) naughty images on the internet. If only it were that simple. Nowadays, web filtering goes far beyond monitoring staff productivity and maintaining the innocence of childhood. For nearly every organization today, web filtering should be considered an absolute necessity. Small business, K-12 school district, Fortune 500, non-profit or government… it doesn’t matter. The unfiltered internet is not your friend, and legally, it’s a liability: a lawsuit waiting to happen.

Web filtering means blocking internet applications – including browsers – from contacting or retrieving content from websites that violate an Acceptable Use Policy (AUP). The policy might set rules blocking some specific websites (like a competitor’s website). It might block some types of content (like pornography), or detected malware, or even access to external email systems via browser or dedicated clients. In some cases, the AUP might include what we might call government-mandated restrictions (like certain websites in hostile countries, or specific news sources).
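
At its simplest, that kind of policy enforcement is a domain lookup. A minimal sketch, with an invented two-category blocklist standing in for the vendor-maintained lists a real deployment would use:

```python
from urllib.parse import urlparse

# Hypothetical AUP categories mapped to blocked domains; a production
# filter would pull these from a categorization service, not hard-code them.
BLOCKLIST = {
    "streaming": {"netflix.com", "hulu.com"},
    "webmail": {"gmail.com", "hushmail.com"},
}

def blocked_category(url, policy=BLOCKLIST):
    """Return the AUP category that blocks this URL, or None if allowed."""
    host = urlparse(url).hostname or ""
    for category, domains in policy.items():
        # match the domain itself and any subdomain of it
        if any(host == d or host.endswith("." + d) for d in domains):
            return category
    return None

print(blocked_category("https://www.netflix.com/watch/123"))  # streaming
print(blocked_category("https://example.com/"))               # None
```

The hard part, as discussed below, isn’t the lookup; it’s keeping the category lists current and translating a policy written in English into rules like these.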

Unacceptable use in the AUP

The specifics of the AUP might be up to the organization to define entirely on its own; that would be the case for a small business, perhaps. Government organizations, such as schools or military contractors, might have specific AUP requirements placed on them by funders or government regulators, thereby becoming a compliance/governance issue as well. And of course, legal counsel should be sought when creating policies that balance an employee’s ability to access content of his/her choice, against the company’s obligations to protect the employee (or the company) from unwanted content.

It sounds easy – the organization sets an AUP, consulting legal, IT and the executive suite. The IT department implements the AUP through web filtering, perhaps with software installed and configured on devices; perhaps through firewall settings at the network level; and perhaps through filters managed by the internet service provider. It’s not simple, however. The internet is constantly changing, employees are adept at finding ways around web filters, and besides, it’s tricky to translate policies written in English (as in the legal policy document) into technological actions. We’ll get into that a bit more shortly. First, let’s look more closely at why organizations need those Acceptable Use Policies, and what should be in them.

  • Improving employee productivity. This is the low-hanging fruit. You may not want employees spending too much time on Facebook on their company computers. (Of course, if they are permitted to bring mobile devices into the office, they can still access social media via cellular.) That’s a policy consideration, though the jury is out on whether a blanket ban is the best way to improve productivity.
  • Preserving bandwidth. For technical reasons, you may not want employees streaming Netflix movies or Hulu-hosted classic TV shows across the business network. Seinfeld is fun, but not on company bandwidth. As with social media, this is truly up to the organization to decide.
  • Blocking email access. Many organizations do not want their employees accessing external email services from the business computers. That’s not only for productivity purposes; it also makes it more difficult to engage in unapproved communications – such as emailing confidential documents to yourself. Merely configuring your corporate email server to block the exfiltration of intellectual property is not enough if users can access personal gmail.com or hushmail.com accounts. Blocking external email requires filtering multiple protocols as well as specific email hosts, and may be required to protect not only your IP, but also customers’ data, in addition to complying with regulations from organizations like the U.S. Securities and Exchange Commission.
  • Blocking access to pornography and NSFW content. It’s not that you are being a stick-in-the-mud prude, or protecting children. The initials NSFW (not safe for work) are often used as a joke, but in reality, some content can be construed as contributing to a hostile work environment. Just like the need to maintain a physically safe work environment – no blocked fire exits, for example – so too must you maintain a safe internet environment. If users can be unwillingly subjected to offensive content by other employees, there may be significant legal, financial and even public-relations consequences if it’s seen as harassment.
  • Blocking access to malware. A senior manager receives a spear-phishing email that looks legit. He clicks the link and, wham: ransomware is on his computer. Or spyware, like a keylogger. Or perhaps a back door that allows other access by hackers. You can train employees over and over, and they will still click on unsafe email links or web pages. Anti-malware software on the computer can help, but web filtering is part of a layered approach to anti-malware protection. This applies to trackers as well: As part of the AUP, the web filters may be configured to block ad networks, behavior trackers and other web services that attempt to glean information about your company and its workers.
  • Blocking access to specific internet applications. Whether you consider it Shadow IT or simply an individual’s personal preference, it’s up to an AUP to decide which online services should be accessible; either through an installed application or via a web interface. Think about online storage repositories such as Microsoft OneDrive, Google Drive, Dropbox or Box: Personal accounts can be high-bandwidth conduits for exfiltration of vast quantities of valuable IP. Web filtering can help manage the situation.
  • Compliance with government regulations. Whether it’s a military base commander making a ruling, or a government restricting access to news sites out of favor with the current regime, those are rules that often must be followed without question. It’s not my purpose here to discuss whether this is “censorship,” though in some cases it certainly is. However, the laws of the United States do not apply outside the United States, and blocking some internet sites or types of web content may be part of the requirements for doing business in some countries or with some governments. What’s important here is to ensure that you have effective controls and technology in place to implement the AUP – but don’t go broadly beyond it.
  • Compliance with industry requirements. Let’s use the example of the requirement that schools and public libraries protect students (and the general public) from content deemed unacceptable in that environment. After all, just because a patron is an adult doesn’t mean he/she is allowed to watch pornography on one of the library’s publicly accessible computers, or even on his/her computer on the library’s Wi-Fi network.

What about children?

A key ingredient in creating an AUP for schools and libraries in the United States is the Children’s Internet Protection Act (CIPA). In order to receive government subsidies or discounts, schools and libraries must comply with these regulations. (Other countries may have an equivalent to these policies.)

Learn more about how CIPA should drive the AUP for any organization where minors can be found, and how best to implement one. That’s all covered in my article for Upgrade Magazine, “Web filtering for business: Keep your secrets safe, and keep your employees happy.”

Here’s a popular article that I wrote on email security for Sophos’ “Naked Security” blog.

“5 things you should know about email unsubscribe links before you click” starts with:

We all get emails we don’t want, and cleaning them up can be as easy as clicking ‘unsubscribe’ at the bottom of the email. However, some of those handy little links can cause more trouble than they solve. You may end up giving the sender a lot of information about you, or even an opportunity to infect you with malware.
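
One reason those links leak information: many of them embed a per-recipient identifier right in the URL. A hypothetical example (the mailer.example.com link and its parameters are invented) showing how following the link can confirm your exact address to the sender:

```python
import base64
from urllib.parse import urlparse, parse_qs

# Hypothetical link in the style many mailing systems generate: the 'u'
# parameter is just a base64-encoded email address, so clicking the link
# tells the sender exactly which recipient opened the message.
recipient = b"alice@example.com"
link = ("https://mailer.example.com/unsub?c=1842&u="
        + base64.urlsafe_b64encode(recipient).decode())

# What the sender's server recovers from the click:
params = parse_qs(urlparse(link).query)
leaked = base64.urlsafe_b64decode(params["u"][0]).decode()
print(leaked)  # alice@example.com
```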

Read the whole article here.

When it comes to cars, safety means more than strong brakes, good tires, a safety cage, and lots of airbags. It also means software that won’t betray you; software that doesn’t pose a risk to life and property; software that’s working for you, not for a hacker.

Please join me for this upcoming webinar, where I am presenting along with Arthur Hicken, the Code Curmudgeon and technology evangelist for Parasoft. It’s on Thursday, August 18. Arthur and I have been plotting and scheming, and there will be some excellent information presented. Don’t miss it! Click here to register.

Driving Risks out of Embedded Automotive Software

Automobiles are becoming the ultimate mobile computer. Popular models have as many as 100 Electronic Control Units (ECUs), while high-end models push 200 ECUs. Those processors run hundreds of millions of lines of code written by the OEMs’ teams and external contractors—often for black-box assemblies. Modern cars also have increasingly sophisticated high-bandwidth internal networks and unprecedented external connectivity. Considering that no code is 100% error-free, these factors point to an unprecedented need to manage the risks of failure—including protecting life and property, avoiding costly recalls, and reducing the risk of ruinous lawsuits.
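
Much of that internal traffic rides on the CAN bus, whose frames are tiny and carry no authentication; any node that can write to the bus can impersonate any other. A sketch of decoding one raw frame, using the classic Linux SocketCAN layout (the example ID and payload are made up):

```python
import struct

# Classic Linux SocketCAN frame layout: 32-bit arbitration ID, one-byte
# data length code (DLC), three bytes of padding, then 8 data bytes.
CAN_FRAME = struct.Struct("<IB3x8s")

def parse_can_frame(raw):
    """Decode one raw 16-byte CAN frame into (arbitration_id, payload)."""
    can_id, dlc, data = CAN_FRAME.unpack(raw)
    # mask off the extended-frame/RTR/error flag bits in the top of can_id
    return can_id & 0x1FFFFFFF, data[:dlc]

# A made-up frame: ID 0x244 carrying three payload bytes.
raw = CAN_FRAME.pack(0x244, 3, bytes([0x12, 0x34, 0x56]))
arb_id, payload = parse_can_frame(raw)
print(hex(arb_id), payload.hex())
```

Nothing in the frame says who sent it, which is exactly why compromised ECUs and external connectivity are such a potent combination.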

This one-hour practical webinar will review the business risks of defective embedded software in today’s connected cars. Led by Arthur Hicken, Parasoft’s automotive technology expert and evangelist, and Alan Zeichick, an independent technology analyst and founding editor of Software Development Times, the webinar will also cover five practical techniques for driving the risks out of embedded automotive software, including:

• Policy enforcement
• Reducing defects during coding
• Effective techniques for acceptance testing
• Using metrics analytics to measure risk
• Converting SDLC analytics into specific tasks to focus on the riskiest software

You can apply the proven techniques you’ll learn to code written and tested by your teams, as well as code supplied by your vendors and contractors.

News websites are an irresistible target for hackers because they are so popular. Why? Because they are trusted brands, and because — by their very nature — they contain many external links and use lots of outside content providers and analytics/tracking services. It doesn’t take much to corrupt one of those websites, or one of the myriad partner sites they rely upon, like ad networks, content feeds or behavioral trackers.

Potentially, malware injected into any well-trafficked news website could infect tremendous numbers of people with ransomware, keyloggers, zombie code, or worse. Alarmist? Perhaps, but with good reason. News websites, which include both traditional media (like the Chicago Tribune and the BBC) and new-media platforms (such as BuzzFeed or Business Insider), attract a tremendous number of visitors, especially when there is a breaking news story of widespread interest, like a natural disaster, political event or celebrity shenanigans.

Publishing companies are not technology companies. They are content providers who do their honest best to offer a secure experience, but can’t be responsible for external links. In fact, many say so right in their terms of use statements or privacy policies. What they can be responsible for are the third-party networks that provide content or services to their platforms, but in reality, the search for profits and/or a competitive advantage outweighs any other considerations. And of course, their platforms can be hacked as well.

According to a story in the BBC, news sites in Russia, including the Moscow Echo radio station, opposition newspaper New Times, and the Kommersant business newspaper, were hacked back in March 2012. In November 2014, the Syrian Electronic Army claimed to have hacked news sites, including Canada’s CBC News.

Also in November 2014, one of the U.K.’s most popular sites, The Telegraph, tweeted, “A part of our website run by a third-party was compromised earlier today. We’ve removed the component. No Telegraph user data was affected.”

A year earlier, in January 2013, the New York Times self-reported, “Hackers in China Attacked The Times for Last 4 Months.” The story said that, “The attackers first installed malware — malicious software — that enabled them to gain entry to any computer on The Times’s network. The malware was identified by computer security experts as a specific strain associated with computer attacks originating in China.”

Regional news outlets can also be targets. On September 18, 2015, CBS Local in San Francisco reported that hackers took control of the five news websites of Palo Alto-based Embarcadero Media Group. The websites of the Palo Alto Weekly, The Almanac, Mountain View Voice and Pleasanton Weekly were all reportedly attacked at about 10:30 p.m. that Thursday.

I talked recently with Jason Steer of Menlo Security, a security company based in Menlo Park, Calif. He put it very clearly:

You are taking active code from a source you didn’t request, and you are running it inside your PC and your network, without any inspection whatsoever. Because of the high volumes of users, it only takes a small number of successes to make the hacking worthwhile. Antivirus can’t really help here, either consumer or enterprise. Antivirus may not detect ransomware being installed from a site you visit, or malicious activity from a bad advertisement or bad JavaScript.

Jason pointed me to his blog post from November 12, 2015, “Top 50 UK Website Security Report.” His post says, in part,

Across the top 50 sites, a number of important findings were made:

• On average, when visiting a top 50 U.K. website, your browser will execute 19 scripts

• The top UK website executed 125 unique scripts when requested

His blog continued with a particularly scary observation:

15 of the top 50 sites (i.e. 30 percent) were running vulnerable versions of web-server code at time of testing. Microsoft IIS version 7.5 was the most prominent vulnerable version reported with known software vulnerabilities going back more than five years.

How many scripts are running on your browser from how many external servers? According to Jason’s research, if you visit the BBC website, your browser might be running 92 scripts pushed to it from 11 different servers. The Daily Mail? 127 scripts from 35 servers. The Financial Times? 199 scripts from 31 servers. The New Yorker? 113 scripts from 33 servers. The Economist? 185 scripts from 46 servers. The New York Times? 76 scripts from 29 servers. And Forbes? 100 scripts from 49 servers.
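Counts like these are straightforward to gather yourself. Here is a minimal sketch in Python that parses a page’s HTML and tallies the script tags and the unique external hosts they load from. The sample HTML and the host names in it are made up for illustration; for a real site you would feed the parser the fetched page source instead.

```python
# Sketch: count <script> tags and the unique external servers they
# reference. The sample_html below is fabricated for illustration.
from html.parser import HTMLParser
from urllib.parse import urlparse

class ScriptCounter(HTMLParser):
    """Counts <script> tags and collects the hosts their src URLs point at."""
    def __init__(self):
        super().__init__()
        self.script_count = 0
        self.hosts = set()

    def handle_starttag(self, tag, attrs):
        if tag != "script":
            return
        self.script_count += 1
        src = dict(attrs).get("src")
        if src:
            host = urlparse(src).netloc
            if host:                      # ignore inline and relative scripts
                self.hosts.add(host)

sample_html = """
<html><head>
<script src="https://cdn.example.com/lib.js"></script>
<script src="https://ads.tracker.net/beacon.js"></script>
<script>var inline = true;</script>
</head><body>
<script src="https://cdn.example.com/app.js"></script>
</body></html>
"""

counter = ScriptCounter()
counter.feed(sample_html)
print(counter.script_count, "scripts from", len(counter.hosts), "servers")
# → 4 scripts from 2 servers
```

Note that this only sees scripts present in the initial HTML; scripts injected later by other scripts (as ad networks routinely do) require a real browser to observe, which is why the measured numbers are so high.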

Most of those servers and scripts are benign. But if they’re not, they’re not. The headline on Ars Technica on March 15, 2016, says it all: “Big-name sites hit by rash of malicious ads spreading crypto ransomware.” The story begins,

Mainstream websites, including those published by The New York Times, the BBC, MSN, and AOL, are falling victim to a new rash of malicious ads that attempt to surreptitiously install crypto ransomware and other malware on the computers of unsuspecting visitors, security firms warned.

The tainted ads may have exposed tens of thousands of people over the past 24 hours alone, according to a blog post published Monday by Trend Micro. The new campaign started last week when “Angler,” a toolkit that sells exploits for Adobe Flash, Microsoft Silverlight, and other widely used Internet software, started pushing laced banner ads through a compromised ad network.

According to a separate blog post from Trustwave’s SpiderLabs group, one JSON-based file being served in the ads has more than 12,000 lines of heavily obfuscated code. When researchers deciphered the code, they discovered it enumerated a long list of security products and tools it avoided in an attempt to remain undetected.

Let me share my favorite news website hack story, because of its sheer audacity. According to Jason’s blog, ad delivery systems can be turned into malware delivery systems, and nobody might ever know:

If we take one such example in March 2016, one attacker waited patiently for the domain ‘brentsmedia[.]com’ to expire, registered in Utah, USA, a known ad network content provider. The domain in question had expired ownership for 66 days, was then taken over by an attacker in Russia (Pavel G Astahov) and 1 day later was serving up malicious ads to visitors of sites including the BBC, AOL & New York Times. No-one told any of these popular websites until the malicious ads had already appeared.

Jason recently published an article on this subject in SC Magazine, “Brexit leads to pageviews — pageviews lead to malware.” Check it out. And be aware that when you visit a trusted news website, you have no idea what code is being executed on your computer, what that code does, and who wrote that code.

Thank you, NetGear, for taking care of your valued customers. On July 1, the company announced that it would be shutting down the proprietary back-end cloud services required for its VueZone cameras to work – turning them into expensive camera-shaped paperweights. See “Throwing our IoT investment in the trash thanks to NetGear.”

The next day, I was contacted by the company’s global communications manager. He defended the policy, arguing that NetGear was not only giving 18 months’ notice of the shutdown but also “doing our best to help VueZone customers migrate to the Arlo platform by offering significant discounts, exclusive to our VueZone customers.” See “A response from NetGear regarding the VueZone IoT trashcan story.”

And now, the company has done a 180° turn: NetGear will not turn off the service, at least not at this time. Well done. Here’s the email that arrived a few minutes ago. The good news for VueZone customers is that their cameras will keep working. On the other hand, let’s not party too heartily. The danger posed by proprietary cloud services driving IoT devices remains: when the vendor decides to turn one off, all you have is recycle-ware and, potentially, one heck of a migration issue.

Subject: VueZone Services to Continue Beyond January 1, 2018

Dear valued VueZone customer,

On July 1, 2016, NETGEAR announced the planned discontinuation of services for the VueZone video monitoring product line, which was scheduled to begin as of January 1, 2018.

Since the announcement, we have received overwhelming feedback from our VueZone customers expressing a desire for continued services and support for the VueZone camera system. We have heard your passionate response and have decided to extend service for the VueZone product line. Although NETGEAR no longer manufactures or sells VueZone hardware, NETGEAR will continue to support existing VueZone customers beyond January 1, 2018.

We truly appreciate the loyalty of our customers and we will continue our commitment of delivering the highest quality and most innovative solutions for consumers and businesses. Thank you for choosing us.

Best regards,

The NETGEAR VueZone Team

July 19, 2016