
Last year’s top hacker tactics may surprise you

Did you know that last year, 75% of data breaches were perpetrated by outsiders, and fully 25% involved internal actors? Did you know that 18% were conducted by state-affiliated actors, and 51% involved organized criminal groups?

That’s according to the newly released 2017 Data Breach Investigations Report from Verizon. It’s the 10th edition of the DBIR, and as always, it’s fascinating – and frightening at the same time.

The most successful tactic, if you want to call it that, used by hackers: stolen or weak (i.e., easily guessed) passwords. They were used in 81% of breaches. The report says that 62% of breaches featured hacking of some sort, and 51% involved malware.

More disturbing is that fully 66% of malware was installed via malicious email attachments. This means we’re doing a poor job of training our employees not to click links and open documents. We teach, we train, we test, we yell, we scream, and workers open documents anyway. Sigh. According to the report,

People are still falling for phishing—yes still. This year’s DBIR found that around 1 in 14 users were tricked into following a link or opening an attachment — and a quarter of those went on to be duped more than once. Where phishing successfully opened the door, malware was then typically put to work to capture and export data—or take control of systems.

There is a wealth of information in the 2017 DBIR, covering everything from cyber-espionage to the dangers caused by failing to keep up with patches, fixes, and updates. There’s a major section on ransomware, which has grown tremendously in the past year. There are also industry-specific breakouts, covering healthcare, finance, and so-on. It’s a big report, but worth reading. And sharing.

Learn more by reading my latest for Zonic News, “Verizon Describes 2016’s Hackers — And Their Top Tactics.”


No security plan? It’s like riding a bicycle in traffic in the rain without a helmet

Every company should have formal processes for implementing cybersecurity. That includes evaluating systems, describing activities, testing those policies, and authorizing action. After all, in this area, businesses can’t afford to wing it, thinking, “if something happens, we’ll figure out what to do.” In many cases, without the proper technology, a breach may not be discovered for months or years – or ever. At least not until the lawsuits begin.

Indeed, running without cybersecurity accreditations is like riding a bicycle in a rainstorm. Without a helmet. In heavy traffic. At night. A disaster is bound to happen sooner or later: That’s especially true when businesses are facing off against professional hackers. And when script-kiddies, who can launch a thousand variations of Ransomware-as-a-Service with a single keystroke, stumble across them as juicy victims.

Yet, according to the British Chambers of Commerce (BCC), small and very small businesses are extremely deficient in terms of having cybersecurity plans. According to the BCC, in the U.K. only 10% of one-person businesses and 15% of those with 1-4 employees have any formal cybersecurity accreditations. Contrast that with larger companies: 47% of businesses with more than 100 employees have formal accreditations.

While a CEO may want to focus on his/her primary business, in reality, it’s irresponsible to neglect cybersecurity planning. Indeed, it’s also not good for long-term business success. According to the BCC study, 21% of businesses believe the threat of cyber-crime is preventing their company from growing. And of the businesses that do have cybersecurity accreditations, half (49%) believe it gives their business a competitive advantage over rival companies, and a third (33%) consider it important in creating a more secure environment when trading with other businesses.

Read more about this in my latest for Zonic News, “One In Five Businesses Were Successfully Cyber-Attacked Last Year — Here’s Why.”


Manage the network, Hal

Some large percentage of IT and security tasks and alerts require simple responses. On a small network, there aren’t many alerts, and so administrators can easily accommodate them: Fixing a connection here, approving external VPN access there, updating router firmware, giving users the latest patches to Microsoft Office, evaluating a security warning, dismissing a security warning, making sure that a newly spun-up virtual machine has the proper agents and firewall settings, reviewing log activity. That sort of thing.

On a large network, those tasks become tedious… and on a very large network, they can escalate unmanageably. As networks scale to hundreds, thousands, and hundreds of thousands of devices, thanks to mobility and the Internet of Things, the load expands exponentially – and so do routine IT tasks and alerts, especially when the network, its devices, users and applications are in constant flux.

Most tasks can be automated, yes, but it’s not easy to spell out in a standard policy-based system exactly what to do. Similarly, the proper way of handling alerts can be automated, but given the tremendous variety of situations, variables, combinations and permutations, that too can be challenging. Merely programming a large number of possible situations, and their responses, would be a tremendous task — and not even worth the effort, since the scripts would be brittle and would themselves require constant review and maintenance.

That’s why in many organizations, only responses to the very simplest of tasks and alert responses are programmed in rule-based systems. The rest are shunted over to IT and security professionals, whose highly trained brains can rapidly decide what to do and execute the proper response.
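To make the idea concrete, a rule-based system of that sort might look like the toy dispatcher below. This is purely illustrative — the rule conditions, alert fields, and action names are all invented — but it shows the pattern: a short list of hard-coded rules handles the trivial cases, and everything else falls through to the human queue, which is exactly where the volume piles up.

```python
# Toy rule-based alert dispatcher. Each rule pairs a condition with an
# automated response; any alert matching no rule goes to the human queue.
RULES = [
    (lambda a: a.get("type") == "cert_expiring", "renew certificate"),
    (lambda a: a.get("type") == "patch_available" and a.get("severity") == "low",
     "schedule patch window"),
]

def dispatch(alert, human_queue):
    """Return the automated action for a matching rule, or queue for a human."""
    for condition, action in RULES:
        if condition(alert):
            return action  # handled automatically, no human needed
    human_queue.append(alert)  # unmatched: shunted to IT/security staff
    return None

queue = []
dispatch({"type": "cert_expiring"}, queue)               # automated
dispatch({"type": "anomalous_login", "user": "pat"}, queue)  # lands on a person
```

Every rule you add is another script to review and maintain, which is the brittleness described above.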

At the same time, those highly trained brains turn into mush because handling routine, easy-to-solve problems is mind-numbing and not intellectually challenging. Solving a problem once is exciting. Solving nearly the same problem a hundred times every day, five days a week, 52 weeks a year (not counting holidays) is inspiration for updating the C.V… and finding a more interesting job.

How do we solve this? Read my newest piece for Zonic News, “Artificial Intelligence Is The Right Answer To IT And Security Scalability Issues — And AI Won’t Get Bored.”


Look who’s talking – and controlling your home speech-enabled technology

“Alexa! Unlock the front door!” No, that won’t work, even if you have an intelligent lock designed to work with the Amazon Echo. That’s because Amazon is smart enough to know that someone could shout those five words into an open window, and gain entry to your house.

Presumably Amazon doesn’t allow voice control of “Alexa! Turn off the security system!” but that’s purely conjecture. It’s not something I’ve tried. And certainly it’s possible to use programming or a clever workaround to enable voice-activated door unlocking or force-field deactivation. That’s why, while our home contains a fair amount of cutting-edge AI-based automation, perimeter security is not hooked up to any of it. We’ll rely upon old-fashioned locks and keys and alarm keypads, thank you very much.

And sorry, no voice-enabled safes for me either. It didn’t work so well to protect the CIA against Jason Bourne, did it?

Unlike the fictional CIA safe and the equally fictional computer on the Starship Enterprise, Echo, Google Home, Siri, Android, and their friends can’t identify specific voices with any degree of accuracy. In most cases, they can’t do so at all. So, don’t look to be able to train Alexa to set up access control lists (ACLs) based on voiceprints. That’ll have to wait for the 23rd century, or at least for another couple of years.

The inability of today’s AI-based assistants to discriminate allows for some foolishness – and some shenanigans. We have an Echo in our family room, and every so often, while watching a movie, Alexa will suddenly proclaim, “Sorry, I didn’t understand that command,” or some such. What set the system off? No idea. But it’s amusing.

Less amusing was Burger King’s advertising prank which intentionally tried to get Google Home to help sell more hamburgers. As Fast Company explains:

A new Whopper ad from Burger King turns Google’s voice-activated speaker into an unwitting shill. In the 15-second spot, a store employee utters the words “OK Google, what is the Whopper burger?” This should wake up any Google Home speakers present, and trigger a partial readout of the Whopper’s Wikipedia page. (Android phones also support “OK Google” commands, but use voice training to block out unauthorized speakers.)

Fortunately, Google was as annoyed as everyone else, and took swift action, said the story:

Update: Google has stopped the commercial from working – presumably by blacklisting the specific audio clip from the ad – though Google Home users can still inquire about the Whopper in their own words.

Burger King wasn’t the first to try this stunt. Other similar tricks have succeeded against Home and Echo, and sometimes, the devices are activated accidentally by TV shows and news reports. Look forward to more of this.

It reminds me of the very first time I saw a prototype Echo. What did I say? “Alexa, Format See Colon.” Darn. It didn’t erase anything. But at least it’s better than a cat running around on your laptop keyboard, erasing your term paper. Or a TV show unlocking your doors. Right?


Listen to Sir Tim Berners-Lee: Don’t weaken encryption!

It’s always a bad idea to intentionally weaken the security that protects hardware, software, and data. Why? Many reasons, including the basic right (in many societies) of individuals to engage in legal activities anonymously. An additional reason: Because knowledge about weakened encryption, back doors and secret keys could be leaked or stolen, leading to unintended consequences and breaches by bad actors.

Sir Tim Berners-Lee, the inventor of the World Wide Web, is worried. Some officials in the United States and the United Kingdom want to force technology companies to weaken encryption and/or provide back doors to government investigators.

In comments to the BBC, Sir Tim said that there could be serious consequences to giving keys to unlock coded messages and forcing carriers to help with espionage. The BBC story said:

“Now I know that if you’re trying to catch terrorists it’s really tempting to demand to be able to break all that encryption but if you break that encryption then guess what – so could other people and guess what – they may end up getting better at it than you are,” he said.

Sir Tim also criticized moves by legislators on both sides of the Atlantic, which he sees as an assault on the privacy of web users. He attacked the UK’s recent Investigatory Powers Act, which he had criticised when it went through Parliament: “The idea that all ISPs should be required to spy on citizens and hold the data for six months is appalling.”

The Investigatory Powers Act 2016, which became U.K. law last November, gives broad powers to the government to intercept communications. It requires telecommunications providers to cooperate with government requests for assistance with such interception.

Read more about this topic — including real-world examples of stolen encryption keys, and why the government wants those back doors. It’s all in my piece for Zonic News, “Don’t Weaken Encryption with Back Doors and Intentional Flaws.”


Nearly four years of the 2013 OWASP Top 10 — and it’s the same vulnerabilities over and over

Can’t we fix injection already? It’s been nearly four years since the most recent iteration of the OWASP Top 10 came out — that’s June 12, 2013. The OWASP Top 10 are the most critical web application security flaws, as determined by a large group of experts. The list doesn’t change much, or change often, because the fundamentals of web application security are consistent.

The 2013 OWASP Top 10 were

  1. Injection
  2. Broken Authentication and Session Management
  3. Cross-Site Scripting (XSS)
  4. Insecure Direct Object References
  5. Security Misconfiguration
  6. Sensitive Data Exposure
  7. Missing Function Level Access Control
  8. Cross-Site Request Forgery (CSRF)
  9. Using Components with Known Vulnerabilities
  10. Unvalidated Redirects and Forwards

The preceding list came out on April 19, 2010:

  1. Injection
  2. Cross-Site Scripting (XSS)
  3. Broken Authentication and Session Management
  4. Insecure Direct Object References
  5. Cross-Site Request Forgery (CSRF)
  6. Security Misconfiguration
  7. Insecure Cryptographic Storage
  8. Failure to Restrict URL Access
  9. Insufficient Transport Layer Protection
  10. Unvalidated Redirects and Forwards

Looks pretty familiar. If you go back further to the inaugural Open Web Application Security Project 2004 and then the 2007 lists, the pattern of flaws stays the same. That’s because programmers, testers, and code-design tools keep making the same mistakes, over and over again.

Take #1, Injection (often written as SQL Injection, but it’s broader than simply SQL). It’s described as:

Injection flaws occur when an application sends untrusted data to an interpreter. Injection flaws are very prevalent, particularly in legacy code. They are often found in SQL, LDAP, Xpath, or NoSQL queries; OS commands; XML parsers, SMTP Headers, program arguments, etc. Injection flaws are easy to discover when examining code, but frequently hard to discover via testing. Scanners and fuzzers can help attackers find injection flaws.

The technical impact?

Injection can result in data loss or corruption, lack of accountability, or denial of access. Injection can sometimes lead to complete host takeover.

And the business impact?

Consider the business value of the affected data and the platform running the interpreter. All data could be stolen, modified, or deleted. Could your reputation be harmed?

Eliminating the vulnerability to injection attacks is not rocket science. OWASP summarizes three approaches:

Preventing injection requires keeping untrusted data separate from commands and queries.

The preferred option is to use a safe API which avoids the use of the interpreter entirely or provides a parameterized interface. Be careful with APIs, such as stored procedures, that are parameterized, but can still introduce injection under the hood.

If a parameterized API is not available, you should carefully escape special characters using the specific escape syntax for that interpreter. OWASP’s ESAPI provides many of these escaping routines.

Positive or “white list” input validation is also recommended, but is not a complete defense as many applications require special characters in their input. If special characters are required, only approaches 1. and 2. above will make their use safe. OWASP’s ESAPI has an extensible library of white list input validation routines.
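The first approach — a parameterized interface — is easy to demonstrate. Here’s a minimal sketch using Python’s standard-library sqlite3 module; the `users` table and the sample rows are invented for illustration. The same string reaches the database two ways: concatenated into the SQL text (injectable) and bound as a parameter (safe).

```python
import sqlite3

# Throwaway in-memory database with a hypothetical users table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

payload = "nobody' OR '1'='1"  # classic injection string

# VULNERABLE: untrusted data spliced directly into the SQL text.
# The payload rewrites the WHERE clause and every row comes back.
unsafe = conn.execute(
    "SELECT name FROM users WHERE name = '%s'" % payload
).fetchall()

# SAFE: a parameterized query binds the payload as a plain value,
# so the database looks for a user literally named "nobody' OR '1'='1".
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (payload,)
).fetchall()

print(unsafe)  # both rows leak
print(safe)    # no rows match
```

The `?` placeholder keeps untrusted data separate from the command, which is exactly what the OWASP guidance above calls for.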

Not rocket science, not brain surgery — and the same is true of the other vulnerabilities. There’s no excuse for still getting these wrong, folks. Cut down on these top 10, and our web applications will be much safer, and our organizational risk much reduced.

Do you know how often your web developers make the OWASP Top 10 mistakes? The answer should be “never.” They’ve had plenty of time to figure this out.


What’s the deal with Apple iCloud accounts being hacked?

The word went out Wednesday, March 22, spreading from techie to techie. “Better change your iCloud password, and change it fast.” What’s going on? According to ZDNet, “Hackers are demanding Apple pay a ransom in bitcoin or they’ll blow the lid off millions of iCloud account credentials.”

A hacker group claims to have access to 250 million iCloud and other Apple accounts. They are threatening to reset all the passwords on those accounts – and then remotely wipe those phones using lost-phone capabilities — unless Apple pays up with untraceable bitcoins or Apple gift cards. The ransom is a laughably small $75,000.

According to various sources, at least some of the stolen account credentials appear to be legitimate. Whether that means all 250 million accounts are in peril, of course, is unknowable.

Apple seems to have acknowledged that there is a genuine problem. The company told CNET, “The alleged list of email addresses and passwords appears to have been obtained from previously compromised third-party services.”

We obviously don’t know what Apple is going to do, or what Apple can do. It hasn’t put out a general call, at least as of Thursday, for users to change their passwords, which would seem to be prudent. It also hasn’t encouraged users to enable two-factor authentication, which should make it much more difficult for hackers to reset iCloud passwords without physical access to a user’s iPhone, iPad, or Mac.

Unless the hackers alter the demands, Apple has a two-week window to respond. From its end, it could temporarily disable password reset capabilities for iCloud accounts, or at least make the process difficult to automate, access programmatically, or even access more than once from a given IP address. So, it’s not “game over” for iCloud users and iPhone owners by any means.
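One of those mitigations — throttling repeated reset attempts from a single IP address — is simple to sketch. The snippet below is purely illustrative: the three-per-hour quota is an invented policy, and a real service would track attempts in a shared store rather than an in-process dict.

```python
import time
from collections import defaultdict, deque

# Hypothetical policy: at most 3 password-reset attempts per IP per hour.
MAX_ATTEMPTS = 3
WINDOW_SECONDS = 3600

_attempts = defaultdict(deque)  # ip -> timestamps of recent attempts

def allow_password_reset(ip, now=None):
    """Return True if this IP is still under its reset quota."""
    now = time.time() if now is None else now
    recent = _attempts[ip]
    # Discard attempts that have aged out of the sliding window.
    while recent and now - recent[0] > WINDOW_SECONDS:
        recent.popleft()
    if len(recent) >= MAX_ATTEMPTS:
        return False  # throttled: too many recent attempts from this IP
    recent.append(now)
    return True
```

Even a crude throttle like this would make a 250-million-account reset campaign slow and expensive to automate.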

It could be that the hackers are asking for such a low ransom because they know their attack is unlikely to succeed. They’re possibly hoping that Apple will figure it’s easier to pay a small amount than to take any real action. My guess is they are wrong, and Apple will lock them out before the April 7 deadline.

So what’s really going on, and what can be done about it? Read more in my essay, “Apple iCloud Accounts Hacked — Or Maybe Not,” on Zonic News.


New phishing scam referencing a company called FrontStream

We received this realistic-looking email today claiming to be from a payment company called FrontStream. If you click the links, it tries to get you to activate an account and provide bank details. However… we never requested an account from this company. Therefore, we label it phishing — and an attempt to defraud.

If you receive a message like this, delete it. Don’t click any of the links, and don’t reply to it either. You’ve been warned.

From: billing [email address at frontstream.com]
Sent: Wed, Mar 22, 2017 10:34 am
Subject: New Account Ready for Activation

Dear [redacted],

Your account is now available at our FrontStream Invoicing Website for you to view your existing outstanding invoices and make payment. You can directly activate your account here:

[link redacted]

Or you can go to the FrontStream Invoicing website [link redacted], select ‘REGISTER’ option and go through the activation process. Below is your detailed account information from our record. They’re required in order to complete your account activation.

Customer Number: [redacted]

Phone Number: [redacted]

Activation Code: [redacted]

Sincerely,

Accounts Receivable

UPDATE MARCH 22

I tweeted about this blog post, and @FrontStream replied:

@zeichick Sorry for the confusion! The email was sent in error from our customer invoicing system. We’ll be following up with more details.

Given that we aren’t a FrontStream customer, this is peculiar. Will update again if there are more details.

UPDATE MARCH 27

Nothing more from FrontStream.


The cybersecurity benefits of artificial intelligence and machine learning

Let’s talk about the practical application of artificial intelligence to cybersecurity. Or rather, let’s read about it. My friend Sean Martin has written a three-part series on the topic for ITSP Magazine, exploring AI, machine learning, and other related topics. I provided review and commentary into the series.

The first part, “It’s a Marketing Mess! Artificial Intelligence vs Machine Learning,” explores probably the biggest challenge about AI: Hyperbole. That, and inconsistency. Every lab, every vendor, every conference, every analyst defines even the most basic terminology differently — when they bother to define it at all. Vagueness begets vagueness, and so the terms “artificial intelligence” and “machine learning” are thrown around with wanton abandon. As Sean writes,

The latest marketing discovery of AI as a cybersecurity product term only exacerbates an already complex landscape of jingoisms with like muddled understanding. A raft of these associated terms, such as big data, smart data, heuristics (which can be a branch of AI), behavioral analytics, statistics, data science, machine learning and deep learning. Few experts agree on exactly what those terms mean, so how can consumers of the solutions that sport these fancy features properly understand what those things are?

“Machine Learning: The More Intelligent Artificial Intelligence,” the second installment, picks up by digging into pattern recognition. Specifically, the story is about when AI software can discern patterns based on its own examination of raw data. Sean also digs into deep learning:

Deep Learning (also known as deep structured learning, hierarchical learning or deep machine learning) is a branch of machine learning based on a set of algorithms that attempt to model high level abstractions in data by using a deep graph with multiple processing layers, composed of multiple linear and non-linear transformations.

In the conclusion, “The Actual Benefits of Artificial Intelligence and Machine Learning,” Sean brings it home to your business. How can you tell if an AI solution is real? How can you tell what it really does? That means going beyond the marketing material’s attempts to obfuscate:

The bottom line on AI-based technologies in the security world: Whether it’s called machine learning or some flavor of analytics, look beyond the terminology – and the oooh, ahhh hype of artificial intelligence – to see what the technology does. As the saying goes, pay for the steak – not the artificial intelligent marketing sizzle.

It was a pleasure working on this series with Sean, and we hope you enjoy reading it.


The Russians are hacking! One if by phishing, two if by Twitter

Was the Russian government behind the 2014 theft of data on about 500 million Yahoo subscribers? The U.S. Justice Department thinks so: It accused two Russian intelligence officers of directing the hacking efforts, and also named two hackers as being part of the conspiracy to steal the data.

According to Mary B. McCord, Acting Assistant Attorney General,

The defendants include two officers of the Russian Federal Security Service (FSB), an intelligence and law enforcement agency of the Russian Federation and two criminal hackers with whom they conspired to accomplish these intrusions. Dmitry Dokuchaev and Igor Sushchin, both FSB officers, protected, directed, facilitated and paid criminal hackers to collect information through computer intrusions in the United States and elsewhere.

Ms. McCord added that the scheme targeted Yahoo accounts of Russian and U.S. government officials, including security staff, diplomats and military personnel. “They also targeted Russian journalists; numerous employees of other providers whose networks the conspirators sought to exploit; and employees of financial services and other commercial entities,” she said.

From a technological perspective, the hackers first broke into computers of American companies providing email and internet-related services. From there, they harvested information, including information about individual users and the private contents of their accounts.

The harm? The hackers, explained Ms. McCord, were hired to gather information for the FSB officers — classic espionage. However, they quietly went farther to steal financial information, such as gift card and credit card numbers, from users’ email accounts — and also use millions of stolen Yahoo accounts to set up an email spam scheme.

You can read more about this — and also about Twitter hacking in the escalating war-of-words between Turkey and the Netherlands. See my post for Zonic News, “State-Sponsored Hacking? Activists Who Support A Cause? Both? Neither?”


Look out iOS, Android and IoT, here comes the CIA, says WikiLeaks

To absolutely nobody’s surprise, the U.S. Central Intelligence Agency can spy on mobile phones, both Android and iPhone — and can also monitor the microphones on smart home devices like televisions.

This week’s disclosure of CIA programs by WikiLeaks has been billed as the largest-ever publication of confidential documents from the American spy agency. The document dump will appear in pieces; the first installment has 8,761 documents and files from the CIA’s Center for Cyber Intelligence, says WikiLeaks. According to WikiLeaks, the CIA malware and hacking tools are built by EDG (Engineering Development Group), a software development group within the CIA’s Directorate for Digital Innovation. WikiLeaks says the EDG is responsible for the development, testing and operational support of all backdoors, exploits, malicious payloads, trojans, viruses and any other kind of malware used by the CIA.

Another part of the program, code-named “Weeping Angel,” turns smart TVs into secret microphones. After infestation, Weeping Angel places the target TV in a ‘Fake-Off’ mode. The owner falsely believes the TV is off when it is on. In ‘Fake-Off’ mode the TV operates as a bug, recording conversations in the room and sending them over the Internet to a covert CIA server.

According to the New York Times, the CIA has refused to explicitly confirm the authenticity of the documents. However, the government strongly implied their authenticity when the agency put out a statement to defend its work and chastise WikiLeaks, saying the disclosures “equip our adversaries with tools and information to do us harm.”

The WikiLeaks data dump talked about efforts to infect and control non-mobile systems. That includes desktops, notebooks and servers running Windows, Linux, Mac OS and Unix. The malware is distributed in many ways, including website viruses, software on CDs or DVDs, and portable USB storage devices.

Enterprises should expect many updates to come from every major hardware and software vendor – and be vigilant about applying those security updates. In addition, attempt to identify unpatched devices on the network, and deny them access to critical resources until they are patched and tested.

To read more about this, including Apple’s reaction to the targeting of iOS devices, see my full story, “WikiLeaks Exposes CIA Spyware On Mobile, IoT Devices,” on the Zonic News blog.


What to do about credentials theft – the scourge of cybersecurity

Cybercriminals want your credentials and your employees’ credentials. When those hackers succeed in stealing that information, it can be bad for individuals – and even worse for corporations and other organizations. This scourge is bad, and it’s going to stay bad.

Credentials come in two types. There are personal credentials, such as the login and password for an email account, bank and retirement accounts, credit-card numbers, airline membership program, online shopping and social media. When hackers manage to obtain those credentials, such as through phishing, they can steal money, order goods and services, and engage in identity theft. This can be extremely costly and inconvenient for victims, but the damage is generally contained to that one unfortunate individual.

Corporate digital credentials, on the other hand, are the keys to an organization’s network. Consider a manager, executive or information-technology worker within a typical medium-size or larger-size business. Somewhere in the organization is a database that describes that employee – and describes which digital assets that employee is authorized to use. If cybercriminals manage to steal the employee’s corporate digital credentials, the criminals can then access those same assets, without setting off any alarm bells. Why? Because they have valid credentials.

What might those assets be? Depending on the employee, the list might include file servers that contain intellectual property, such as pricing sheets, product blueprints, or patent applications.

It might include email archives that describe business plans. Or accounting servers that contain important financial information that could help competitors or allow for “insider trading.”

It might be human resources data that can help the hackers attack other individuals. Or engage in identity theft or even blackmail.

What if the stolen credentials are for individuals in the IT or information security department? The hackers can learn a great deal about the company’s technology infrastructure, perhaps including passwords to make changes to configurations, open up backdoors, or even disable security systems.

Read my whole story about this —including what to do about it — in Telecom Times, “The CyberSecurity Scourge of Credentials Theft.”


Don’t trust Facebook to keep your secrets

Nothing you share on the Internet is guaranteed to be private to you and your intended recipient(s). Not on Twitter, not on Facebook, not on Google+, not using Slack or HipChat or WhatsApp, not in closed social-media groups, not via password-protected blogs, not via text message, not via email.

Yes, there are “privacy settings” on FB and other social media tools, but those are imperfect at best. You should not trust Facebook to keep your secrets.

If you put posts or photos onto the Internet, they are not yours to control any more. Accept that they can be appropriated and redistributed by others. How? Many ways, including:

  • Your emails and texts can be forwarded
  • Your Facebook and Twitter posts and direct-messages can be screen-captured
  • Your photos can be downloaded and then uploaded by someone else

Once the genie is out of the bottle, it’s gone forever. Poof! So if there’s something you definitely don’t want to become public, don’t put it on the Internet.

(I wrote this after seeing a dear friend angered that photos of her little children, which she shared with her friends on Facebook, had been re-posted by a troll.)


The Fifth Column hiding in the Internet of Things (IoT)

I can’t trust the Internet of Things. Neither can you. There are too many players and too many suppliers of the technology that can introduce vulnerabilities in our homes, our networks – or elsewhere. It’s dangerous, my friends. Quite dangerous. In fact, it can be thought of as a sort of Fifth Column, but not in the way many of us expected.

Merriam-Webster defines a Fifth Column as “a group of secret sympathizers or supporters of an enemy that engage in espionage or sabotage within defense lines or national borders.” In today’s politics, there’s a lot of talk about secret sympathizers sneaking across national borders, such as terrorists posing as students or refugees. Such “bad actors” are generally part of an organization, recruited by state actors, and embedded into enemy countries for long-term penetration of society.

There have been many real-life Fifth Column activists in recent global history. Think about Kim Philby and Anthony Blunt, part of the “Cambridge Five” who worked for spy agencies in the United Kingdom in the post-World War II era, but who themselves turned out to be double agents working for the Soviet Union. Fiction, too, is replete with Fifth Column spies. They’re everywhere in James Bond movies and John le Carré novels.

Am I too paranoid?

Let’s bring our paranoia (or at least, my paranoia) to the Internet of Things, and start by way of the late 1990s and early 2000s. I remember quite clearly the introduction of telco and network routers by Huawei, and concerns that the Chinese government may have embedded software into those routers in order to surreptitiously listen to telecom networks and network traffic, to steal intellectual property, or to do other mischief like disable networks in the event of a conflict. (This was before the term “cyberwarfare” was widely used.)

Recall that Huawei was founded by a former engineer in the Chinese People’s Liberation Army. The company was heavily supported by Beijing. Also there were lawsuits alleging that Huawei infringed on Cisco’s intellectual property – i.e., stole its source code. Thus, there was lots of concern surrounding the company and its products.

Read my full story about this, published in Pipeline Magazine, “The Surprising and Dangerous Fifth Column Hiding Within the Internet of Things.”


An intimate take on cybersecurity: Yes, medical devices can be hacked and compromised

Modern medical devices increasingly leverage microprocessors and embedded software, as well as sophisticated communications connections, for life-saving functionality. Insulin pumps, for example, rely on a battery, pump mechanism, microprocessor, sensors, and embedded software. Pacemakers and cardiac monitors also contain batteries, sensors, and software. Many devices also have WiFi- or Bluetooth-based communications capabilities. Even hospital rooms with intravenous drug delivery systems are controlled by embedded microprocessors and software, which are frequently connected to the institution’s network. But these innovations also mean that a software defect can cause a critical failure or security vulnerability.

In 2007, former vice president Dick Cheney famously had the wireless capabilities of his pacemaker disabled. Why? He was concerned “about reports that attackers could hack the devices and kill their owners.” Since then, the vulnerabilities caused by the larger attack surface area on modern medical devices have gone from hypothetical to demonstrable, in part due to the complexity of the software, and in part due to the failure to properly harden the code.

In October 2011, The Register reported that “a security researcher has devised an attack that hijacks nearby insulin pumps, enabling him to surreptitiously deliver fatal doses to diabetic patients who rely on them.” The attack worked because the pump contained a short-range radio that allows patients and doctors to adjust its functions. The researcher showed that, by using a special antenna and custom-written software, he could locate and seize control of any such device within 300 feet.

A report published by Independent Security Evaluators (ISE) shows the danger. After examining 12 hospitals, the organization concluded “that remote adversaries can easily deploy attacks that manipulate records or devices in order to fully compromise patient health” (p. 25). Later in the report, the researchers demonstrate the ability to manipulate the flow of medicine or blood samples within the hospital, resulting in the delivery of improper medication types and dosages (p. 37) – all from the hospital lobby. They were also able to hack into and remotely control patient monitors and breathing tubes – and trigger alarms that might cause doctors or nurses to administer unneeded medications.

Read more in my blog post for Parasoft, “What’s the Cure for Software Defects and Vulnerabilities in Medical Devices?”


Advocating for safer things: On the road, in the home, in business, everywhere

Think about alarm systems in cars. By default, many automobiles don’t come with an alarm system installed from the factory. That’s for three main reasons: it lowers the base sticker price on the car; it creates a lucrative up-sell opportunity; and it allows for variations on alarms to suit local regulations.

My old 2004 BMW 3-series convertible (E46), for example, came pre-wired for an alarm. All the dealer had to do, upon request (and payment of $$$) was install a couple of sensors and activate the alarm in the car’s firmware. Voilà! Instant protection. Third-party auto supply houses and garages, too, were delighted that the car didn’t include the alarm, since that made it easier to sell one to worried customers, along with a great deal on a color-changing stereo head unit, megawatt amplifier and earth-shattering sub-woofer.

Let’s move from cars to cybersecurity. The dangers are real, and as an industry, it’s in our best interest to solve this problem, not by sticking our head in the sand, not by selling aftermarket products, but by a two-fold approach: 1) encouraging companies to make more secure products; and 2) encouraging customers to upgrade or replace vulnerable products — even if there’s not a dollar, pound, euro, yen or renminbi of profit in it for us:

  • If you’re a security hardware, software, or service company, the problem of malicious bits traveling over broadband, wireless, and the Internet backbone isn’t really your problem. Rather, it’s an opportunity to sell products. Hurray for one-time sales, double hurray for recurring subscriptions.
  • If you’re a carrier, the argument goes, all you care about is the packets, and the reliability of your network. The service level agreement provided to consumers and enterprises talks about guaranteed bandwidth, up-time availability, and time to recover from failures; it certainly doesn’t promise that devices connected to your service will be free of malware or safe from hacking. Let customers buy firewalls and endpoint protection – and hey, if we offer that as a service, that’s a money-making opportunity.

Read more about this subject in my latest article for Pipeline Magazine, “An Advocate for Safer Things.”


There are two types of cloud firewalls: Vanilla and Strawberry

Cloud-based firewalls come in two delicious flavors: vanilla and strawberry. Both flavors are software that checks incoming and outgoing packets to filter against access policies and block malicious traffic. Yet they are also quite different. Think of them as two essential network security tools: Both are designed to protect you, your network, and your real and virtual assets, but in different contexts.

Disclosure: I made up the terms “vanilla firewall” and “strawberry firewall” for this discussion. Hopefully they help us differentiate between the two models as we dig deeper.

Let’s start with a quick overview:

  • Vanilla firewalls are usually stand-alone products or services designed to protect an enterprise network and its users — like an on-premises firewall appliance, except that it’s in the cloud. Service providers call this a software-as-a-service (SaaS) firewall, security as a service (SECaaS), or even firewall as a service (FaaS).
  • Strawberry firewalls are cloud-based services that are designed to run in a virtual data center using your own servers in a platform-as-a-service (PaaS) or infrastructure-as-a-service (IaaS) model. In these cases, the firewall application runs on the virtual servers and protects traffic going to, from, and between applications in the cloud. The industry sometimes calls these next-generation firewalls, though the term is inconsistently applied and sometimes refers to any advanced firewall system running on-prem or in the cloud.
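However they’re packaged, both flavors share the same core behavior described above: match each packet against an ordered set of access rules and block what the policy doesn’t allow. Here’s a minimal Python sketch of that logic; the `Rule` class, the sample rules, and the default-deny posture are my own illustration, not any vendor’s implementation.

```python
from dataclasses import dataclass
from ipaddress import ip_address, ip_network
from typing import List, Optional

@dataclass
class Rule:
    action: str          # "allow" or "deny"
    network: str         # CIDR block the rule applies to
    port: Optional[int]  # None matches any destination port

def evaluate(rules: List[Rule], src_ip: str, dst_port: int) -> str:
    """Return the action of the first matching rule; deny by default."""
    for rule in rules:
        in_net = ip_address(src_ip) in ip_network(rule.network)
        port_ok = rule.port is None or rule.port == dst_port
        if in_net and port_ok:
            return rule.action
    return "deny"  # nothing matched: default-deny posture

rules = [
    Rule("deny", "203.0.113.0/24", None),  # known-bad range: block everything
    Rule("allow", "0.0.0.0/0", 443),       # HTTPS from anywhere
]
```

Whether this logic runs as a service in front of an enterprise network (vanilla) or alongside workloads inside a virtual data center (strawberry), the rule-matching core is the same; only the placement, and the traffic it sees, differ.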

So why do we need these new firewalls? Why not stick a 1U firewall appliance into a rack, connect it up to the router, and call it good? Easy: Because the definition of the network perimeter has changed. Firewalls used to be like guards at the entrance to a secured facility. Only authorized people could enter that facility, and packages were searched as they entered and left the building. Moreover, your users worked inside the facility, and the data center and its servers were also inside. Thus, securing the perimeter was fairly easy. Everything inside was secure, everything outside was not secure, and the only way in and out was through the guard station.

Intrigued? Hungry? Both? Please read the rest of my story, called “Understanding cloud-based firewalls,” published on Enterprise.nxt.


Thinking new about cyberattacks — and fighting back smarter

What’s the biggest tool in the security industry’s toolkit? The patent application. Security thrives on innovation, and always has, because throughout recorded history, the bad guys have always had the good guys at the disadvantage. The only way to respond is to fight back smarter.

Sadly, fighting back smarter isn’t always the case. At least, not when looking over the vendor offerings at RSA 2017, held mid-February in San Francisco. Sadly, some of the products and services wouldn’t have seemed out of place a decade ago. Oh, look, a firewall! Oh look, a hardware device that sits on the network and scans for intrusions! Oh, look, a service that trains employees not to click on phishing spam!

Fortunately, some companies and big thinkers are thinking new about the types of attacks… and the best ways to protect against them, detect when those protections end, how to respond when attacks are detected, and ways to share information about those attacks.

Read more about this in my latest story for Zonic News, “InfoSec Requires Innovation.”


Phishing and ransomware attacks against you and your company are getting smarter

Everyone has received those crude emails claiming to be from your bank’s “Secuirty Team” telling you that you need to click a link to “reset you account password.” It’s pretty easy to spot those emails, with all the misspellings, the terrible formatting, and the bizarre “reply to” email addresses at domains halfway around the world. Other emails of that sort ask you to review an unclothed photo of an A-list celebrity, or open up an attached document that tells you what you’ve won.

We can laugh. However, many people fall for those phishing scams — and willingly surrender their bank account numbers and passwords, or install malware, such as ransomware.

Less obvious, and more effective, are attacks that are carefully crafted to appeal to a high-value individual, such as a corporate executive or systems administrator. Despite their usual technological sophistication, anyone can be fooled, if the spearphishing email is good enough – spearphishing being the term for phishing emails designed specifically to entrap a certain person.

What’s the danger? Plenty. Spearphishing emails that pretend to be from the CEO can convince a corporate accounting manager to wire money to an overseas account. Called the “Wire Transfer Scam,” this has been around for several years and still works, costing hundreds of millions of dollars, said the FBI.
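The crude variety, at least, can often be caught mechanically. Here’s a minimal Python sketch of one such heuristic: flag sender domains that are a typo or two away from a trusted domain. The trusted list and the distance threshold are invented for illustration.

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,            # deletion
                            curr[j - 1] + 1,        # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

TRUSTED = ["paypal.com", "mybank.com"]  # hypothetical allowlist

def looks_like_phish(sender_domain: str) -> bool:
    # An exact match with a trusted domain is fine; a near-miss is suspicious.
    for good in TRUSTED:
        d = edit_distance(sender_domain, good)
        if 0 < d <= 2:
            return True
    return False
```

Spearphishing is far harder to catch this way, of course; a well-crafted message may arrive from a perfectly legitimate-looking domain, which is exactly why it works on high-value targets.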

Read more in my latest for Zonic News, “Phishing and Spearphishing: Delivery Vehicles for Ransomware, Theft and More.”


Mobility and security at two big shows: RSA and Mobile World Conference

What’s on the industry’s mind? Security and mobility are front-and-center of the cerebral cortex, as two of the year’s most important events prepare to kick off.

The Security Story: At RSA (February 13-17 in San Francisco), expect to see the best of the security industry, from solutions providers to technology firms to analysts. The conference can’t come too soon.

Ransomware, which exploded into the public’s mind last year with high-profile incidents, continues to run rampant. Attackers are turning to ever-bigger targets, with ever-bigger fallout. Not only are hospitals still being crippled (this was big in 2016), but hotel guests are being locked out of their rooms, police departments are losing important crime evidence, and even CCTV footage has been locked away.

The Mobility Story: Halfway around the world, mobility is only part of the story at Mobile World Congress (February 27 – March 2 in Barcelona). There will be many sessions about 5G wireless, which can provision not only traditional mobile users, but also industrial controls and the Internet of Things. AT&T recently announced that it will launch 5G service (with peak speeds of 400Mbps or better) in two American cities, Austin and Indianapolis. While the standards are not yet complete, that’s not stopping carriers and the industry from moving ahead.

Also key to the success of all mobile platforms is cloud computing. Microsoft is moving more aggressively to the cloud, going beyond Azure and Office 365 with a new Windows 10 Cloud edition, a simplified experience designed to compete against Google’s Chrome platform.

Read more about what to expect in security and mobility in my latest for Zonic News, “Get ready for RSA and Mobile World Congress.”


How to take existing enterprise code to Microsoft Azure or Google Cloud Platform

The best way to have a butt-kicking cloud-native application is to write one from scratch. Leverage the languages, APIs, and architecture of the chosen cloud platform before exploiting its databases, analytics engines, and storage. As I wrote for Ars Technica, this will allow you to take advantage of the wealth of resources offered by companies like Microsoft, with their Azure PaaS (Platform-as-a-Service) offering or by Google Cloud Platform’s Google App Engine PaaS service.

Sometimes, however, that’s not the job. Sometimes, you have to take a native application running on a server in your local data center or colocation facility and make it run in the cloud. That means virtual machines.

Before we get into the details, let’s define “native application.” For the purposes of this exercise, it’s an application written in a high-level programming language, like C/C++, C#, or Java, running directly on a machine talking to an operating system, like Linux or Windows, that you want to run on a cloud platform like Microsoft Azure, Amazon Web Services (AWS), or Google Cloud Platform (GCP).

What we are not talking about is an application that has already been virtualized, such as already running within VMware’s ESXi or Microsoft’s Hyper-V virtual machine. Sure, moving an ESXi or Hyper-V application running on-premises into the cloud is an important migration that may improve performance and add elasticity while switching capital expenses to operational expenses. Important, yes, but not a challenge. All the virtual machine giants and cloud hosts have copious documentation to help you make the switch… which amounts to basically copying the virtual machine file onto a cloud server and turning it on.

Many possible scenarios exist for moving a native datacenter application into the cloud. They boil down to two main types of migrations, and the right choice depends on your application and your goals:

The first is to create a virtual server within your chosen cloud provider, perhaps running Windows Server or running a flavor of Linux. Once that virtual server has been created, you migrate the application from your on-prem server to the new virtual server—exactly as you would if you were moving from one of your servers to a new server. The benefits: the application migration is straightforward, and you have 100-percent control of the server, the application, and security. The downside: the application doesn’t take advantage of cloud APIs or other special servers. It’s simply a migration that gets a server out of your data center. When you do this, you are leveraging a type of cloud called Infrastructure-as-a-Service (IaaS). You are essentially treating the cloud like a colocation facility.

The second is to see if your application code can be ported to run within the native execution engine provided by the cloud service. This is called Platform-as-a-Service (PaaS). The benefits are that you can leverage a wealth of APIs and other services offered by the cloud provider. The downsides are that you have to ensure that your code can work on the service (which may require recoding or even redesign) in order to use those APIs or even to run at all. You also don’t have full control over the execution environment, which means that security is managed by the cloud provider, not by you.

And of course, there’s the third option mentioned at the beginning: Writing an entirely new application native for the cloud provider’s PaaS. That’s still the best option, if you can do it. But our task today is to focus on migrating an existing application.

Let’s look into this more closely, via my recent article for Ars Technica, “Great app migration takes enterprise ‘on-prem’ applications to the Cloud.”


Cybersecurity alert: Trusted websites can harbor malware, thanks to savvy hackers

According to a recent study, 46% of the top one million websites are considered risky. Why? Because the homepage or background ad sites are running software with known vulnerabilities, the site was categorized as known-bad for phishing or malware, or the site had a security incident in the past year.

According to Menlo Security, in its “State of the Web 2016” report introduced mid-December 2016, “… nearly half (46%) of the top million websites are risky.” Indeed, Menlo says, “Primarily due to outdated software, cyber hackers now have their veritable pick of half the web to exploit. And exploitation is becoming more widespread and effective for three reasons: 1. Risky sites have never been easier to exploit; 2. Traditional security products fail to provide adequate protection; 3. Phishing attacks can now utilize legitimate sites.”

This has been a significant issue for years. However, the issue came to the forefront earlier this year when several well-known media sites were essentially hijacked by malicious ads. The New York Times, the BBC, MSN and AOL were hit by tainted advertising that installed ransomware, reports Ars Technica. From their March 15, 2016, article, “Big-name sites hit by rash of malicious ads spreading crypto ransomware”:

The new campaign started last week when ‘Angler,’ a toolkit that sells exploits for Adobe Flash, Microsoft Silverlight, and other widely used Internet software, started pushing laced banner ads through a compromised ad network.

The results of this attack, reported The Guardian at around the same time: 

When the infected adverts hit users, they redirect the page to servers hosting the malware, which includes the widely-used (amongst cybercriminals) Angler exploit kit. That kit then attempts to find any back door it can into the target’s computer, where it will install cryptolocker-style software, which encrypts the user’s hard drive and demands payment in bitcoin for the keys to unlock it.

If big-money trusted media sites can be hit, so can nearly any corporate site, e-commerce portal, or any website that uses third-party tools – or where there might be the possibility of unpatched servers and software. That means just about anyone. After all, not all organizations are diligent about monitoring for common vulnerabilities and exposures (CVEs) on their on-premises servers. When companies run their websites on multi-tenant hosting facilities, they don’t even have direct access to the operating system, but rely upon the hosting company to install patches and fixes to Windows Server, Linux, Joomla, WordPress, and so on.

A single unpatched operating system, web server platform, database, or extension can introduce a vulnerability that can be scanned for. Once found, that CVE can be exploited by a talented hacker – or by a disgruntled teenager with a readily available web exploit kit.
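Staying ahead of those scanners starts with knowing whether anything in your stack is older than its last security fix. Here’s a toy sketch of that check; the component names and “fixed-in” versions below are invented, and real data would come from a vulnerability feed such as NVD.

```python
def parse(v: str):
    """Turn '6.4.3' into (6, 4, 3) so versions compare numerically."""
    return tuple(int(x) for x in v.split("."))

MIN_PATCHED = {            # hypothetical fixed-in versions, per component
    "wordpress": "6.4.3",
    "openssl": "3.0.13",
}

def vulnerable(component: str, installed: str) -> bool:
    """True if the installed version predates the minimum patched version."""
    fixed = MIN_PATCHED.get(component)
    return fixed is not None and parse(installed) < parse(fixed)

inventory = {"wordpress": "6.2.0", "openssl": "3.0.13"}
flags = {c: vulnerable(c, v) for c, v in inventory.items()}
print(flags)  # {'wordpress': True, 'openssl': False}
```

The real work, of course, is keeping the “fixed-in” table current – which is exactly what multi-tenant hosting customers are trusting their provider to do.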

What can you do about it? Well, you can read my complete story on this subject, “Malware explosion: The web is risky,” published on ITProPortal.


Edgescan loves to do what most people hate. Lucky you!

“If you give your security team the work they hate to do day in and day out, you won’t be able to retain that team.” Eoin Keary should know. As founder, director, and CTO of edgescan, a fast-growing managed security service provider (MSSP), his company frees up enterprise security teams to focus on the more strategic, more interesting, more business-critical aspects of InfoSec while his team handles what it knows and does best: the monotony of full-stack vulnerability management.

It’s a perfect match, Keary says. By using an MSSP, customers can focus on business-critical issues, save money, have better security—and not have to replace expensive, highly trained employees who quit after a few months out of boredom. “We are experts in vulnerability management, have built the technology and can deliver very efficiently.”

BCC Risk Advisory Ltd, edgescan’s parent company, based in Dublin, Ireland, was formed in 2011 with “me and a laptop,” explains Keary. He expects his company to end the 2016 fiscal year with seven-figure revenues and a growth trajectory of circa 400% compared to 2015. Its secret cyberweapon is a cloud-based SaaS called edgescan, which detects security weaknesses across the customer’s full stack of technology assets, from servers to networks, from websites to apps to mobile devices. It also provides continuous asset profiling and virtual patching coupled with expert support.

edgescan assesses clients’ systems on a continuous basis. “We have a lot of intelligence and automation in the platform to determine what needs to be addressed,” explains Keary.

There’s a lot more to my interview with Eoin Keary — you can read the whole story, “Apparently We Love To Do What Companies Hate. Lucky You!” published in ITSP Magazine.


No honor: You can’t trust cybercriminals any more

Once upon a time, goes the story, there was honor between thieves and victims. They held a member of your family for ransom; you paid the ransom; they left you alone. The local mob boss demanded protection money; if you didn’t pay, your business burned down, but if you did pay and didn’t rat him out to the police, he and his gang honestly tried to protect you. And hackers may have been operating outside legal boundaries, but for the most part, they were explorers and do-gooders intending to shine a bright light on the darkness of the Internet – not vandals, miscreants, hooligans, and ne’er-do-wells.

That’s not true any more, perhaps. As I write in IT Pro Portal, “Faceless and faithless: A true depiction of today’s cyber-criminals?”

Not that long ago, hackers emerged as modern-day Robin Hoods, digital heroes who relentlessly uncovered weaknesses in applications and networks to reveal the dangers of using technology carelessly. They are curious, provocative; love to know how things work and how they can be improved.

Today, however, there is blight on their good name. Hackers have been maligned by those who do not have our best interests at heart, but are instead motivated by money – attackers who steal our assets and hold organisations such as banks and hospitals to ransom.

(My apologies for the British spelling – it’s a British website and they’re funny that way.)

It’s hard to lose faith in hackers, but perhaps we need to. Sure, not all are cybercriminals, but with the rise of ransomware, nasty action by state actors, and ugly new attacks like the single-pixel malvertising exploit written up yesterday by Dan Goodin in Ars Technica (which came out after I wrote this story), it’s hard to trust that most hackers secretly have our best interests at heart.

This reminds me of Ranscam. In a blog post, “When Paying Out Doesn’t Pay Off,” Talos reports that:

Ranscam is one of these new ransomware variants. It lacks complexity and also tries to use various scare tactics to entice the user to paying, one such method used by Ranscam is to inform the user they will delete their files during every unverified payment click, which turns out to be a lie. There is no longer honor amongst thieves. Similar to threats like AnonPop, Ranscam simply delete victims’ files, and provides yet another example of why threat actors cannot always be trusted to recover a victim’s files, even if the victim complies with the ransomware author’s demands. With some organizations likely choosing to pay the ransomware author following an infection,  Ranscam further justifies the importance of ensuring that you have a sound, offline backup strategy in place rather than a sound ransom payout strategy. Not only does having a good backup strategy in place help ensure that systems can be restored, it also ensures that attackers are no longer able to collect revenue that they can then reinvest into the future development of their criminal enterprise.

Scary — and it shows that often, crime does pay. No honor among thieves indeed.


Four ways enterprise IT can reduce mobile risk

From company-issued tablets to BYOD (bring your own device) smartphones, employees are making the case that mobile devices are essential for productivity, job satisfaction, and competitive advantage. Except in the most regulated industries, phones and tablets are part of the landscape, but their presence requires a strong security focus, especially in the era of non-stop malware, high-profile hacks, and new vulnerabilities found in popular mobile platforms. Here are four specific ways of examining this challenge that can help drive the choice of both policies and technologies for reducing mobile risk.

Protect the network: Letting any mobile device on the business network is a risk, because if the device is compromised, the network (and all of its servers and other assets) may be compromised as well. Consider isolating internal WiFi links to secured network segments, and only permit external access via virtual private networks (VPNs). Install firewalls that guard the network by recognizing not only authorized devices, but also authorized users — and authorized applications. Be sure to keep careful tabs on devices accessing the network, from where, and when.
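That triple check – authorized device, authorized user, authorized application – amounts to a default-deny gate on the network edge, with every decision logged. A minimal Python sketch, with allowlists and identifiers invented for illustration:

```python
# Hypothetical allowlists; in practice these would come from an MDM
# or identity system, not hard-coded sets.
AUTHORIZED_DEVICES = {"d-1001", "d-1002"}
AUTHORIZED_USERS = {"alice", "bob"}
AUTHORIZED_APPS = {"mail", "crm"}

access_log = []  # "keep careful tabs": record every admission decision

def admit(device_id: str, user: str, app: str) -> bool:
    """Default-deny: all three identifiers must be on their allowlists."""
    ok = (device_id in AUTHORIZED_DEVICES
          and user in AUTHORIZED_USERS
          and app in AUTHORIZED_APPS)
    access_log.append((device_id, user, app, ok))
    return ok
```

A real deployment would also record the source address and timestamp, so that the “from where, and when” question can be answered after the fact.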

Protect the device: A mobile device can be compromised in many ways: It might be stolen, or the user might install malware that provides a gateway for a hacker. Each mobile device should be protected by strong passwords, not only for the device itself but also on critical business apps. Don’t allow corporate data to be stored on the device itself. Ensure that there are remote-wipe capabilities if the device is lost. And consider installing a Mobile Device Management (MDM) platform that can give IT full control over the mobile device – or at least those portions of an employee-owned device that might ever be used for business purposes.

Protect the data: To be productive with their mobile devices, employees want access to important corporate assets, such as email, internal websites, ERP or CRM applications, document repositories, as well as cloud-based services. Ensure that permissions are granted specifically for needed services, and that all access is encrypted and logged. As mentioned above, never let corporate data – including documents, emails, chats, internal social media, contacts, and passwords – be stored or cached on the mobile device. Never allow co-mingling of personal and business data, such as email accounts. Yes, it’s a nuisance, but make the employee log into the network, and authenticate into enterprise-authorized applications, each and every time. MDM platforms can help enforce those policies as well.

Protect the business: The policies regarding mobile access should be worked out along with corporate counsel, and communicated clearly to all employees before they are given access to applications and data. The goal isn’t to be heavy-handed, but rather, to gain their support. If employees understand the stakes, they become allies in helping protect business interests. Mobile access is risky for enterprises, and with today’s aggressive malware, the potential for harm has never been higher. It’s not too soon to take it seriously.


Blindspotter: Big Data and machine learning can help detect early-stage hack attacks

When an employee account is compromised by malware, the malware establishes a foothold on the user’s computer – and immediately tries to gain access to additional resources. It turns out that with the right data gathering tools, and with the right Big Data analytics and machine-learning methodologies, the anomalous network traffic caused by this activity can be detected – and thwarted.

That’s the role played by Blindspotter, a new anti-malware system that seems like a specialized version of a network intrusion detection/prevention system (IDPS). Blindspotter can help against many types of malware attacks. Those include one of the most insidious and successful hack vectors today: spear phishing. That’s when a high-level target in your company is singled out for attack by malicious emails or by compromised websites. All the victim has to do is open an email, or click on a link, and wham – malware is quietly installed and operating. (High-level targets include top executives, financial staff and IT administrators.)

My colleague Wayne Rash recently wrote about this network monitoring solution and its creator, Balabit, for eWeek in “Blindspotter Uses Machine Learning to Find Suspicious Network Activity”:

The idea behind Balabit’s Blindspotter and Shell Control Box is that if you gather enough data and subject it to analysis comparing activity that’s expected with actual activity on an active network, it’s possible to tell if someone is using a person’s credentials who shouldn’t be or whether a privileged user is abusing their access rights.

 The Balabit Shell Control Box is an appliance that monitors all network activity and records the activity of users, including all privileged users, right down to every keystroke and mouse movement. Because privileged users such as network administrators are a key target for breaches it can pay special attention to them.

The Blindspotter software sifts through the data collected by the Shell Control Box and looks for anything out of the ordinary. In addition to spotting things like a user coming into the network from a strange IP address or at an unusual time of day—something that other security software can do—Blindspotter is able to analyze what’s happening with each user, but is able to spot what is not happening, in other words deviations from normal behavior.
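The “deviation from normal behavior” idea can be sketched with nothing fancier than a per-user baseline and a z-score. Real products model far richer features – keystrokes, commands, mouse movement – and this is not Balabit’s algorithm; the numbers below are invented for illustration.

```python
from statistics import mean, stdev

def is_anomalous(history, observed, threshold=3.0):
    """Flag an observation more than `threshold` std-devs from the baseline."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return observed != mu  # flat baseline: any change is anomalous
    return abs(observed - mu) / sigma > threshold

# Hypothetical daily counts of privileged commands for one admin, two weeks.
baseline = [12, 15, 11, 14, 13, 12, 16, 14, 13, 12, 15, 13, 14, 12]
print(is_anomalous(baseline, 14))  # False: an ordinary day
print(is_anomalous(baseline, 90))  # True: credentials possibly misused
```

The hard part in production is not the statistics but the feature engineering: deciding which behaviors to baseline, per user and per role, so that the alerts are worth a human’s attention.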

Read the whole story here. Thank you, Wayne, for telling us about Blindspotter.


Medical devices – the wild west for cybersecurity vulnerabilities and savvy hackers

Medical devices are incredibly vulnerable to hacking attacks. In some cases it’s because of software defects that allow for exploits, like buffer overflows, SQL injection or insecure direct object references. In other cases, you can blame misconfigurations, lack of encryption (or weak encryption), non-secure data/control networks, unfettered wireless access, and worse.
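Of the defect classes named above, SQL injection is the easiest to demonstrate – and to fix. Here’s a sketch using Python’s built-in sqlite3 module; the table and data are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (id INTEGER, name TEXT)")
conn.execute("INSERT INTO patients VALUES (1, 'Ada')")

def lookup_unsafe(name):
    # VULNERABLE: attacker-controlled text is spliced into the SQL string.
    return conn.execute(
        f"SELECT id FROM patients WHERE name = '{name}'").fetchall()

def lookup_safe(name):
    # Parameterized query: the driver treats `name` strictly as data.
    return conn.execute(
        "SELECT id FROM patients WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(lookup_unsafe(payload))  # returns every row: injection succeeded
print(lookup_safe(payload))    # []: the payload matches no real name
```

The fix is a one-line change, which is precisely why unhardened device and hospital software is so frustrating: the known defenses are cheap, but only if they’re applied.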

Why would hackers go after medical devices? Lots of reasons. To name but one: It’s a potential terrorist threat against real human beings. Remember that Dick Cheney famously disabled the wireless capabilities of his implanted heart monitor for fear of an assassination attack.

Certainly healthcare organizations are being targeted for everything from theft of medical records to ransomware. To quote the report “Hacking Healthcare IT in 2016,” from the Institute for Critical Infrastructure Technology (ICIT):

The Healthcare sector manages very sensitive and diverse data, which ranges from personal identifiable information (PII) to financial information. Data is increasingly stored digitally as electronic Protected Health Information (ePHI). Systems belonging to the Healthcare sector and the Federal Government have recently been targeted because they contain vast amounts of PII and financial data. Both sectors collect, store, and protect data concerning United States citizens and government employees. The government systems are considered more difficult to attack because the United States Government has been investing in cybersecurity for a (slightly) longer period. Healthcare systems attract more attackers because they contain a wider variety of information. An electronic health record (EHR) contains a patient’s personal identifiable information, their private health information, and their financial information.

EHR adoption has increased over the past few years under the Health Information Technology and Economics Clinical Health (HITECH) Act. Stan Wisseman [from Hewlett-Packard] comments, “EHRs enable greater access to patient records and facilitate sharing of information among providers, payers and patients themselves. However, with extensive access, more centralized data storage, and confidential information sent over networks, there is an increased risk of privacy breach through data leakage, theft, loss, or cyber-attack. A cautious approach to IT integration is warranted to ensure that patients’ sensitive information is protected.”

Let’s talk devices. Those could be everything from emergency-room monitors to pacemakers to insulin pumps to X-ray machines whose radiation settings might be changed or overridden by malware. The ICIT report says,

Mobile devices introduce new threat vectors to the organization. Employees and patients expand the attack surface by connecting smartphones, tablets, and computers to the network. Healthcare organizations can address the pervasiveness of mobile devices through an Acceptable Use policy and a Bring-Your-Own-Device policy. Acceptable Use policies govern what data can be accessed on what devices. BYOD policies benefit healthcare organizations by decreasing the cost of infrastructure and by increasing employee productivity. Mobile devices can be corrupted, lost, or stolen. The BYOD policy should address how the information security team will mitigate the risk of compromised devices. One solution is to install software to remotely wipe devices upon command or if they do not reconnect to the network after a fixed period. Another solution is to have mobile devices connect from a secured virtual private network to a virtual environment. The virtual machine should have data loss prevention software that restricts whether data can be accessed or transferred out of the environment.

The Internet of Things – and the increasing prevalence of medical devices connected to hospital or home networks – increases the risk. What can you do about it? The ICIT report says,

The best mitigation strategy to ensure trust in a network connected to the internet of things, and to mitigate future cyber events in general, begins with knowing what devices are connected to the network, why those devices are connected to the network, and how those devices are individually configured. Otherwise, attackers can conduct old and innovative attacks without the organization’s knowledge by compromising that one insecure system.
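The report’s starting point – know every device on your network and why it’s there – can be made concrete with a simple inventory check. Here’s a minimal Python sketch of the idea; the MAC addresses, device names, and function names are all invented for illustration, and a real deployment would feed this from an actual discovery scan:

```python
# Sketch of the ICIT inventory idea: compare devices actually seen on the
# network against a maintained register, and surface anything unknown.
# All addresses and device names below are hypothetical.

known_devices = {
    "00:1a:2b:3c:4d:5e": "ER monitor, room 4",
    "00:1a:2b:3c:4d:5f": "infusion pump, room 7",
}

def unknown_devices(seen_macs):
    """Return MACs observed on the network but absent from the register."""
    return sorted(mac for mac in seen_macs if mac not in known_devices)

# e.g. results of an ARP scan: one registered monitor, one mystery device
seen = ["00:1a:2b:3c:4d:5e", "de:ad:be:ef:00:01"]
assert unknown_devices(seen) == ["de:ad:be:ef:00:01"]
```

The point isn’t the code, it’s the discipline: an attacker’s “secret passage” only stays secret if nobody is comparing what’s on the wire against what’s supposed to be there.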

Given how common these devices are, keeping IT in the loop may seem impossible — but we must rise to the challenge, ICIT says:

If a cyber network is a castle, then every insecure device with a connection to the internet is a secret passage that the adversary can exploit to infiltrate the network. Security systems are reactive. They have to know about something before they can recognize it. Modern systems already have difficulty preventing intrusion by slight variations of known malware. Most commercial security solutions such as firewalls, IDS/IPS, and behavioral analytic systems function by monitoring where the attacker could attack the network and protecting those weakened points. The tools cannot protect systems that IT and the information security team are not aware exist.

The home environment – or any use outside the hospital setting – is another huge concern, says the report:

Remote monitoring devices could enable attackers to track the activity and health information of individuals over time. This possibility could impose a chilling effect on some patients. While the effect may lessen over time as remote monitoring technologies become normal, it could alter patient behavior enough to cause alarm and panic.

Pain medicine pumps and other devices that distribute controlled substances are likely high value targets to some attackers. If compromise of a system is as simple as downloading free malware to a USB and plugging the USB into the pump, then average drug addicts can exploit homecare and other vulnerable patients by fooling the monitors. One of the simpler mitigation strategies would be to combine remote monitoring technologies with sensors that aggregate activity data to match a profile of expected user activity.
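The report’s suggested mitigation – matching aggregated activity data against a profile of expected user activity – is essentially simple anomaly detection. Here’s a minimal Python sketch under made-up assumptions (the threshold, the hourly dose counts, and the function name are all invented for illustration):

```python
# Sketch of the report's mitigation: compare observed device activity
# against a profile of expected behavior and flag deviations.
# Threshold and data are hypothetical.

EXPECTED_MAX_DOSES_PER_HOUR = 4   # invented profile for one patient

def flag_anomalies(dose_events_per_hour):
    """Return the hours whose dose count exceeds the expected profile."""
    return [hour for hour, count in enumerate(dose_events_per_hour)
            if count > EXPECTED_MAX_DOSES_PER_HOUR]

# 24 hours of dose counts; the burst at hour 13 is consistent with a
# compromised or abused pump and would trigger an alert.
observed = [1, 0, 1, 2, 1, 0, 0, 1, 2, 1, 1, 0, 1, 9,
            1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
assert flag_anomalies(observed) == [13]
```

Real systems would need per-patient profiles and clinician review of alerts, but even a crude check like this raises the bar against the “plug in a USB and fool the monitor” attack the report describes.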

A major responsibility falls on the device makers – and the programmers who create the embedded software. For the most part, they are simply not up to the challenge of designing secure devices, and may not have the policies, practices, and tools in place to get cybersecurity right. Regrettably, the ICIT report doesn’t go into much detail about the embedded software, but it does state,

Unlike cell phones and other trendy technologies, embedded devices require years of research and development; sadly, cybersecurity is a new concept to many healthcare manufacturers and it may be years before the next generation of embedded devices incorporates security into its architecture. In other sectors, if a vulnerability is discovered, then developers rush to create and issue a patch. In the healthcare and embedded device environment, this approach is infeasible. Developers must anticipate what the cyber landscape will look like years in advance if they hope to preempt attacks on their devices. This model is unattainable.

In November 2015, Bloomberg Businessweek published a chilling story, “It’s Way Too Easy to Hack the Hospital.” The authors, Monte Reel and Jordan Robertson, wrote about one hacker, Billy Rios:

Shortly after flying home from the Mayo gig, Rios ordered his first device—a Hospira Symbiq infusion pump. He wasn’t targeting that particular manufacturer or model to investigate; he simply happened to find one posted on eBay for about $100. It was an odd feeling, putting it in his online shopping cart. Was buying one of these without some sort of license even legal? he wondered. Is it OK to crack this open?

Infusion pumps can be found in almost every hospital room, usually affixed to a metal stand next to the patient’s bed, automatically delivering intravenous drips, injectable drugs, or other fluids into a patient’s bloodstream. Hospira, a company that was bought by Pfizer this year, is a leading manufacturer of the devices, with several different models on the market. On the company’s website, an article explains that “smart pumps” are designed to improve patient safety by automating intravenous drug delivery, which it says accounts for 56 percent of all medication errors.

Rios connected his pump to a computer network, just as a hospital would, and discovered it was possible to remotely take over the machine and “press” the buttons on the device’s touchscreen, as if someone were standing right in front of it. He found that he could set the machine to dump an entire vial of medication into a patient. A doctor or nurse standing in front of the machine might be able to spot such a manipulation and stop the infusion before the entire vial empties, but a hospital staff member keeping an eye on the pump from a centralized monitoring station wouldn’t notice a thing, he says.

The 97-page ICIT report makes some recommendations, which I heartily agree with.

  • With each item connected to the internet of things there is a universe of vulnerabilities. Empirical evidence of aggressive penetration testing before and after a medical device is released to the public must be a manufacturer requirement.
  • Ongoing training must be paramount in any responsible healthcare organization. Adversarial initiatives typically start with targeting staff via spear phishing and watering hole attacks. The act of an ill-prepared executive clicking on a malicious link can trigger a hurricane of immediate and long-term negative impact on the organization and innocent individuals whose records were exfiltrated or manipulated by bad actors.
  • A cybersecurity-centric culture must demand safer devices from manufacturers, privacy adherence by the healthcare sector as a whole and legislation that expedites the path to a more secure and technologically scalable future by policy makers.

This whole thing is scary. The healthcare industry needs to step up its game on cybersecurity.


We need a new browser security default: Privacy mode for external, untrusted or email links

Be paranoid! When you visit a website for the first time, it can learn a lot about you. If you have cookies on your computer from one of the site’s partners, it can see what else you have been doing. And it can place cookies onto your computer so it can track your future activities.

Many (or most?) browsers have some variation of “private” browsing mode. In that mode, websites shouldn’t be able to read cookies stored on your computer, and they shouldn’t be able to place permanent cookies onto your computer. (They think they can place cookies, but those cookies are deleted at the end of the session.)
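Conceptually, private mode swaps the browser’s persistent, on-disk cookie store for an in-memory one that is thrown away when the session ends. Here’s a minimal Python sketch of that behavior; the class and method names are hypothetical, purely to illustrate the lifecycle:

```python
# Conceptual sketch of private-mode cookie handling: the site "thinks"
# its cookie was stored, but nothing survives the end of the session.
# All names here are hypothetical illustrations, not a browser API.

class PrivateSession:
    """Cookies live only in memory for the lifetime of the session."""

    def __init__(self):
        self._cookies = {}            # in-memory only; never written to disk

    def set_cookie(self, name, value):
        self._cookies[name] = value   # appears to succeed, as a site expects

    def get_cookie(self, name):
        return self._cookies.get(name)

    def close(self):
        self._cookies.clear()         # session ends: every cookie is gone

session = PrivateSession()
session.set_cookie("tracker_id", "abc123")
assert session.get_cookie("tracker_id") == "abc123"  # visible mid-session
session.close()
assert session.get_cookie("tracker_id") is None      # wiped at session end
```

Real browsers also isolate history, cache, and site data this way, which is exactly why private mode defeats the cross-session tracking described above.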

Those settings aren’t good enough, because they are all-or-nothing, and offer a poor balance between ease-of-use and security/privacy. The industry can and must do better. See why in my essay on NetworkWorld, “We need a better Private Browsing Mode.”

 


When meeting to exchange goods bought online, be safe and careful

Nothing is scarier than getting together with a buyer (or a seller) to exchange dollars for a product advertised on Craigslist, eBay or another online service… and then being mugged or robbed. There are plenty of news stories on this subject, and the danger continues.

Don’t be a victim! The Phoenix Police Department has released an advisory. It’s good advice. Follow it.

Phoenix Police Media Advisory:

Internet Exchange Related Crimes

The Phoenix Police Department has recently experienced reported crimes specific to the usage of internet exchange sites that allow sellers to advertise items for sale and then interact with buyers. Subsequent to the online interaction, the two parties usually meet and exchange money for goods in a private party transaction at an agreed-upon location. However, due to circumstances surrounding the nature of these interactions, many criminals are using them for their own purposes.

Specifically, the Phoenix Police Department has seen an increase in robberies of one of the involved parties by the other party during these exchanges. However, crimes as serious as homicide and kidnapping have been linked to these transactions. Although no strategy is 100% effective when trying to be safe, there are a number of steps one can take to ensure the transaction is done under the safest possible circumstances. The department is urging those involved in these private, internet-based sales transactions to consider the following while finalizing the deal and making safety their primary consideration:

  • If the deal seems too good to be true, it probably is.
  • The location of the exchange should be somewhere in public that has many people around like a mall, a well-traveled parking lot, or a public area. Do not agree to meet at someone’s house, a secluded place, a vacant house, or the like.
  • Try to schedule the transaction while it is still daylight, or at least in a place that is very well lit.
  • Ask why the person is selling the item and what type of payment they will accept. Be wary of agreeing to a cash payment and then travelling to the deal with a large sum of cash.
  • Bring a friend with you to the meet and let someone who isn’t going with you know where you are going and when you can be expected back.
  • Know the fair market value of the item you are purchasing.
  • Trust your instinct! If something seems suspicious, or you get a bad feeling, pass on the deal!

Other good advice that I’ve seen:

  • Never agree to move to a second location if, when you show up at the agreed-upon place, you receive a text message redirecting you somewhere else.
  • Never give the other party your home address. If you must do so (because they are picking up a large item from your house), bring the item outside; don’t let them into your house. Inform your neighbors what’s going on.
  • Call your local police department and ask if they can recommend an Internet Purchase Exchange Location, also known as a Safe Exchange Zone.

Be careful out there, my friends.


Securely disposing of computers with spinning or solid state drives

Can someone steal the data off your old computer? The short answer is yes. A determined criminal can grab the bits, including documents, images, spreadsheets, and even passwords.

If you donate, sell or recycle a computer, whoever gets hold of it can recover the information on its hard drive or solid-state drive (SSD). The platform doesn’t matter: whether it’s Windows or Linux or Mac OS, you can’t 100% eliminate sensitive data by, say, deleting user accounts or erasing files!

You can make the job harder by using the computer’s disk utilities to format the hard drive. Be aware, however, that formatting will thwart a casual thief, but not a determined hacker.
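One step up from formatting is overwriting the data before deleting it. Here’s a minimal Python sketch of the idea at the file level; the function name is my own, and note the caveat in the comments: on SSDs, wear-leveling can silently preserve old copies of the data, which is why physical destruction remains the only certain method:

```python
# Sketch: overwrite a file's contents with random bytes before deleting it.
# Stronger than a plain delete (which only unlinks the directory entry),
# but on SSDs wear-leveling may leave stale copies in other flash cells,
# so this is a deterrent, not a guarantee.
import os

def overwrite_and_delete(path, passes=3):
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))   # replace contents with random bytes
            f.flush()
            os.fsync(f.fileno())        # push each pass out to the device
    os.remove(path)

# Usage: create a throwaway file, scrub it, confirm it's gone.
with open("secret.txt", "wb") as f:
    f.write(b"account password: hunter2")
overwrite_and_delete("secret.txt")
assert not os.path.exists("secret.txt")
```

Dedicated tools (and whole-drive equivalents like ATA Secure Erase) work on the same overwrite principle, with the same SSD caveat.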

The only truly safe way to destroy the data is to physically destroy the storage media. For years, businesses have physically removed and destroyed the hard drives in desktops, servers and laptops. It used to be easy to remove the hard drive: take out a couple of screws, pop open a cover, unplug a cable, and lift the drive right out.

Once the hard drive is identified and removed, you can smash it with a hammer, drill holes in it, even take it apart (which is fun, albeit time-consuming). Some businesses will put the hard drive into an industrial shredder, which is a scaled-up version of an office paper shredder. Some also use magnetism to attempt to destroy the data. Not sure how effective that is, however, and magnets won’t work at all on SSDs.

It’s much harder to remove the storage from today’s ultra-thin, tightly sealed notebooks, such as a Microsoft Surface or Apple MacBook Air, or even from tablets. What if you want to destroy the storage in order to prevent hackers from gaining access? It’s a real challenge.

If you have access to an industrial shredder, an option is to shred the entire computer. It seems wasteful, and I can imagine that it’s not good to shred lithium-ion batteries – many of which are not easily removable, again, as in the Microsoft Surface or Apple MacBook Air. You don’t want those chemicals lying around. Still, that works, and works well.

Note that an industrial shredder is kinda big and expensive – you can see some from SSL World. However, if you live in any sort of medium-sized or larger urban area, you can probably find a shredding service that will destroy the computer right in front of you. I’ve found one such service here in Phoenix, Assured Document Destruction Inc., that claims to be compliant with industry regulations for privacy, such as HIPAA and Sarbanes-Oxley.

Don’t want to shred the whole computer? Let’s say the computer uses a standard hard drive, usually in a 3.5-inch form factor (desktops and servers) or 2.5-inch form factor (notebooks). If you have a set of small screwdrivers, you should be able to dismantle the computer, remove the storage device, and kill it – such as by smashing it with a maul, drilling holes in it, or taking it completely apart. Note that driving over it in your car, while satisfying, may not cause significant damage.

What about solid state storage? The same actually applies with SSDs, but it’s a bit trickier. Sometimes the drive still looks like a standard 2.5-inch hard drive. But sometimes the “solid state drive” is merely a few exposed chips on the motherboard or a smaller circuit board. You’ve got to smash that sucker. Remove it from the computer. Hulk Smash! Break up the circuit board, pulverize the chips. Only then will it be dead dead dead. (Though one could argue that government agencies like the NSA could still put Humpty Dumpty back together again.)

In short: Even if the computer itself seems totally worthless, its storage can be removed, connected to a working computer, and accessed by a skilled techie. If you want to ensure that your data remains private, you must destroy it.