It’s official: Internet service providers in the United States can continue to sell information about their customers’ Internet usage to marketers — and to anyone else who wants to use it. In 2016, during the Obama administration, the Federal Communications Commission (FCC) tried to require ISPs to get customer permission before using or sharing information about their web browsing. According to the FCC, the rule change, entitled, “Protecting the Privacy of Customers of Broadband and Other Telecommunications Services,” meant:

The rules implement the privacy requirements of Section 222 of the Communications Act for broadband ISPs, giving broadband customers the tools they need to make informed decisions about how their information is used and shared by their ISPs. To provide consumers more control over the use of their personal information, the rules establish a framework of customer consent required for ISPs to use and share their customers’ personal information that is calibrated to the sensitivity of the information. This approach is consistent with other privacy frameworks, including the Federal Trade Commission’s and the Administration’s Consumer Privacy Bill of Rights.

More specifically, the rules required customers to affirmatively agree (opt in) before their information could be used in that fashion. Previously, customers had to opt out. Again, according to the FCC,

Opt-in: ISPs are required to obtain affirmative “opt-in” consent from consumers to use and share sensitive information. The rules specify categories of information that are considered sensitive, which include precise geo-location, financial information, health information, children’s information, social security numbers, web browsing history, app usage history and the content of communications.

Opt-out: ISPs would be allowed to use and share non-sensitive information unless a customer “opts-out.” All other individually identifiable customer information – for example, email address or service tier information – would be considered non-sensitive and the use and sharing of that information would be subject to opt-out consent, consistent with consumer expectations.

Consumer Privacy Never Happened

That rule change, however, got bogged down in legal challenges and never took effect. In March 2017, both chambers of Congress voted to reverse that change. The resolution, passed by both the House and Senate, was simple:

Resolved by the Senate and House of Representatives of the United States of America in Congress assembled, That Congress disapproves the rule submitted by the Federal Communications Commission relating to “Protecting the Privacy of Customers of Broadband and Other Telecommunications Services,” and such rule shall have no force or effect.

What’s the net effect? In some ways, not much, despite all the hyperbole. The rule only applied to broadband providers. It didn’t apply to others who could tell what consumers were doing on the Internet, such as social media (think Facebook) or search engines (think Google) or e-commerce (think Amazon) or streaming media (think Netflix). Those other organizations could use or market their knowledge about consumers, bound only by the terms of their own privacy policy. Similarly, advertising networks and others who tracked browser activity via cookies could also use the information however they wanted.

What’s different about the FCC rule on broadband carriers, however, is that ISPs can see just about everything that a customer does: every website visited, every DNS address lookup, and every Internet query sent via other applications like email or messaging apps. Even if that traffic is end-to-end encrypted, the broadband carrier knows where the traffic is going or coming from – because, after all, it is delivering the packets. That makes the carriers’ metadata about customer traffic unique, and invaluable to marketers, government agencies, and anyone else who might wish to leverage it.
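To make that concrete, here is a minimal sketch in Python of what the carrier can observe even when a page travels over TLS. It is my illustration, using a placeholder hostname; the point is that the DNS answer and the destination of the connection are visible to whoever delivers the packets, while only the content is hidden.

    # A minimal sketch, assuming Python 3 and network access; the hostname is a
    # placeholder. The DNS lookup and the destination IP are visible to the
    # carrier delivering the packets; only the content of the TLS session is not.
    import socket
    import ssl

    host = "example.com"
    addr = socket.getaddrinfo(host, 443, proto=socket.IPPROTO_TCP)[0][4][0]
    print("DNS answer the carrier can observe:", host, "->", addr)

    ctx = ssl.create_default_context()
    with socket.create_connection((host, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            # The handshake itself carries the hostname (SNI) in the clear.
            print("Encrypted session established:", tls.version())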

Customers Can Shield — To Some Extent

Customers can attempt to shield their privacy. For example, many use end-to-end VPN services to route their Internet traffic to a single relay point, and then use that relay to anonymously surf the web. However, a privacy VPN is technically difficult for many consumers to set up. Plus, the service costs money. Also, for true privacy fanatics, that VPN service could also be a source of danger, since it could be compromised by an intelligence agency, or used for a man-in-the-middle attack.

So in the United States, the demise of the FCC ruling is bad news. Customers’ Internet usage data — including websites visited, phrases searched for, products purchased and movies watched — remains available for marketers and others who wish to study it and exploit it. However, in reality, such was always the case.

Can’t we fix injection already? It’s been nearly four years since the most recent iteration of the OWASP Top 10 came out — that’s June 12, 2013. The OWASP Top 10 are the most critical web application security flaws, as determined by a large group of experts. The list doesn’t change much, or change often, because the fundamentals of web application security are consistent.

The 2013 OWASP Top 10 were:

  1. Injection
  2. Broken Authentication and Session Management
  3. Cross-Site Scripting (XSS)
  4. Insecure Direct Object References
  5. Security Misconfiguration
  6. Sensitive Data Exposure
  7. Missing Function Level Access Control
  8. Cross-Site Request Forgery (CSRF)
  9. Using Components with Known Vulnerabilities
  10. Unvalidated Redirects and Forwards

The previous list came out on April 19, 2010:

  1. Injection
  2. Cross-Site Scripting (XSS)
  3. Broken Authentication and Session Management
  4. Insecure Direct Object References
  5. Cross-Site Request Forgery (CSRF)
  6. Security Misconfiguration
  7. Insecure Cryptographic Storage
  8. Failure to Restrict URL Access
  9. Insufficient Transport Layer Protection
  10. Unvalidated Redirects and Forwards

Looks pretty familiar. If you go back further, to the inaugural Open Web Application Security Project (OWASP) Top 10 from 2004 and then the 2007 list, the pattern of flaws stays the same. That’s because programmers, testers, and code-design tools keep making the same mistakes, over and over again.

Take #1, Injection (often written as SQL Injection, but it’s broader than simply SQL). It’s described as:

Injection flaws occur when an application sends untrusted data to an interpreter. Injection flaws are very prevalent, particularly in legacy code. They are often found in SQL, LDAP, Xpath, or NoSQL queries; OS commands; XML parsers, SMTP Headers, program arguments, etc. Injection flaws are easy to discover when examining code, but frequently hard to discover via testing. Scanners and fuzzers can help attackers find injection flaws.

The technical impact?

Injection can result in data loss or corruption, lack of accountability, or denial of access. Injection can sometimes lead to complete host takeover.

And the business impact?

Consider the business value of the affected data and the platform running the interpreter. All data could be stolen, modified, or deleted. Could your reputation be harmed?

Eliminating the vulnerability to injection attacks is not rocket science. OWASP summarizes three approaches:

Preventing injection requires keeping untrusted data separate from commands and queries.

The preferred option is to use a safe API which avoids the use of the interpreter entirely or provides a parameterized interface. Be careful with APIs, such as stored procedures, that are parameterized, but can still introduce injection under the hood.

If a parameterized API is not available, you should carefully escape special characters using the specific escape syntax for that interpreter. OWASP’s ESAPI provides many of these escaping routines.

Positive or “white list” input validation is also recommended, but is not a complete defense as many applications require special characters in their input. If special characters are required, only approaches 1. and 2. above will make their use safe. OWASP’s ESAPI has an extensible library of white list input validation routines.
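To make the first approach concrete, here is a minimal sketch in Python using the standard library’s sqlite3 module. This is my illustration, not OWASP’s ESAPI; the table and the hostile input are invented for the example.

    # A minimal sketch of approach 1: pass untrusted data as a parameter, never
    # by pasting it into the SQL string. Table and input are invented examples.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

    user_input = "alice' OR '1'='1"  # the kind of input a fuzzer or attacker sends

    # Vulnerable pattern (don't do this): the input becomes part of the command.
    #   "SELECT email FROM users WHERE name = '" + user_input + "'"

    # Parameterized query: the ? placeholder keeps the input as data only.
    rows = conn.execute(
        "SELECT email FROM users WHERE name = ?", (user_input,)
    ).fetchall()
    print(rows)  # [] -- the injection attempt matches no row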

Not rocket science, not brain surgery — and the same is true of the other vulnerabilities. There’s no excuse for still getting these wrong, folks. Cut down on these top 10, and our web applications will be much safer, and our organizational risk much reduced.

Do you know how often your web developers make the OWASP Top 10 mistakes? The answer should be “never.” They’ve had plenty of time to figure this out.

“Call with Alan.” That’s what the calendar event says, with a bridge line as the meeting location. That’s it. For the individual who sent me that invitation, that’s a meaningful description, I guess. For me… worthless! The invitation apparently went out (and I agreed to attend) at least three weeks ago. I have no recollection of what the meeting is about. Well, it’ll be an adventure! (Also: If I had to cancel or reschedule, I wouldn’t even know who to contact.)

When I send out calendar invites, I try hard to make the event name descriptive to everyone, not just me. Like “ClientCorp and Camden call re keynote topics” or “Suzie Q and Alan Z — XYZ donations.” Something! Give a hint, at least! After all, people who receive invitations can’t edit the names to make them more meaningful.

And then there’s time-zone ambiguity. Some calendar programs (like Google Calendar) do a good job of tracking the event’s time zone, and mapping it to mine. Others, and I’m thinking of Outlook 365, do a terrible job there, and make it difficult to specify the event in a different time zone.

For example, I’m in Phoenix, and often set up calls with clients on the East Coast or in the U.K. As a courtesy, I like to set up meetings using the client’s time zone. Easy when I use Google Calendar to set up the event. Not easy in Outlook 365, which I must use for some projects.

Similarly, some calendar programs do a good job mapping the event to each recipient’s time zone. Others don’t. The standards are crappy, and the implementations of the standards are worse.
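As a sketch of the mapping I wish every calendar client got right, here are a few lines of Python using the standard-library zoneinfo module (Python 3.9 or later); the date and cities are just an example, converting a 3pm London call into Phoenix time.

    # A minimal sketch, not any calendar program's actual logic: store the event
    # with an explicit time zone, then render it in each recipient's zone.
    from datetime import datetime
    from zoneinfo import ZoneInfo

    event = datetime(2017, 3, 20, 15, 0, tzinfo=ZoneInfo("Europe/London"))
    local = event.astimezone(ZoneInfo("America/Phoenix"))

    print(event.strftime("%Y-%m-%d %H:%M %Z"))  # 2017-03-20 15:00 GMT
    print(local.strftime("%Y-%m-%d %H:%M %Z"))  # 2017-03-20 08:00 MST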

There’s more wrong than just the bad time-zone mapping. Each Web-based, mobile, and desktop calendar app, even those that claim to conform to standards, has its own quirks, proprietary features, and incompatibilities. For example, repeating events aren’t handled consistently from calendar program to calendar program. It’s a real mess.

Here are a few simple do’s and don’ts for event creators. Or rather, don’ts and do’s.

  • DON’T just put the name of the person you are meeting with in the event name.
  • DO put your name and organization too, and include your contact information (phone, email, whatever) in the calendar invite itself. Having just a conference bridge or location of the coffee shop won’t do someone any good if they need to reach you before the meeting.
  • DON’T assume that everyone will remember what the meeting is about.
  • DO put the purpose of the meeting into the event title.
  • DON’T think that everyone’s calendar software works like yours or has the same features, vis-à-vis time zones, attachments, comments, and so-on.
  • DO consider putting the meeting time and time zone into the event name. It’s something I don’t do, but I have friends who do, like “ClientCorp and Camden call re keynote topics — 3pm Pacific.” Hmm, maybe I should do that?
  • DON’T expect that if you change the event time on your end, that change will percolate to all recipients. Again, this can be software-specific.
  • DO cancel the event if it’s necessary to reschedule, and set up a new one. Also send an email to all participants explaining what happened. I dislike getting calendar emails saying the meeting date/time has been changed — with no explanation.
  • DON’T assume that people will be able to process your software’s calendar invitations. Different calendar programs don’t play well with each other.
  • DO send a separate email with all the details, including the event name, start time, time zone, and list of participants, in addition to the calendar invite. Include the meeting location, or conference-call dial-in codes, in that email.
  • DON’T trust that everyone will use the “accept” button to indicate that they are attending. Most will not.
  • DO follow up with people who don’t “accept” to ask if they are coming.
  • DON’T assume that just because it’s on their calendar, people will remember to show up. I had one guy miss an early-morning call he “accepted” because it was early and he hadn’t checked his calendar yet. D’oh!
  • DO send a meeting confirmation email, one day before, if the event was scheduled more than a week in advance.

Have more do’s and don’ts? Please add them using the comments.

The word went out Wednesday, March 22, spreading from techie to techie. “Better change your iCloud password, and change it fast.” What’s going on? According to ZDNet, “Hackers are demanding Apple pay a ransom in bitcoin or they’ll blow the lid off millions of iCloud account credentials.”

A hacker group claims to have access to 250 million iCloud and other Apple accounts. They are threatening to reset all the passwords on those accounts – and then remotely wipe those phones using lost-phone capabilities — unless Apple pays up with untraceable bitcoins or Apple gift cards. The ransom is a laughably small $75,000.

What’s Happening at Apple?

According to various sources, at least some of the stolen account credentials appear to be legitimate. Whether that means all 250 million accounts are in peril, of course, is unknowable.

Apple seems to have acknowledged that there is a genuine problem. The company told CNET, “The alleged list of email addresses and passwords appears to have been obtained from previously compromised third-party services.” We obviously don’t know what Apple is going to do, or what Apple can do. It hasn’t put out a general call, at least as of Thursday, for users to change their passwords, which would seem to be prudent. It also hasn’t encouraged users to enable two-factor authentication, which should make it much more difficult for hackers to reset iCloud passwords without physical access to a user’s iPhone, iPad, or Mac.

Unless the hackers alter the demands, Apple has a two-week window to respond. From its end, it could temporarily disable password reset capabilities for iCloud accounts, or at least make the process difficult to automate, access programmatically, or even access more than once from a given IP address. So, it’s not “game over” for iCloud users and iPhone owners by any means.
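For illustration only, here is a minimal sketch in Python of the kind of throttle described above. It is purely hypothetical, not Apple’s implementation, and the window and limit are arbitrary.

    # A hypothetical per-IP throttle on password-reset attempts: once the limit
    # is reached within the window, further attempts from that IP are refused.
    import time
    from collections import defaultdict

    WINDOW_SECONDS = 3600   # arbitrary one-hour window
    MAX_ATTEMPTS = 1        # arbitrary limit per IP per window

    _attempts = defaultdict(list)  # ip -> timestamps of recent attempts

    def allow_password_reset(ip):
        now = time.time()
        recent = [t for t in _attempts[ip] if now - t < WINDOW_SECONDS]
        if len(recent) >= MAX_ATTEMPTS:
            _attempts[ip] = recent
            return False
        recent.append(now)
        _attempts[ip] = recent
        return True

    print(allow_password_reset("203.0.113.5"))  # True  -- first attempt allowed
    print(allow_password_reset("203.0.113.5"))  # False -- throttled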

It could be that the hackers are asking for such a low ransom because they know their attack is unlikely to succeed. They’re possibly hoping that Apple will figure it’s easier to pay a small amount than to take any real action. My guess is they are wrong, and Apple will lock them out before the April 7 deadline.

Where Did This Come From?

Too many criminal networks have access to too much data. Where are they getting it? Everywhere. The problem multiplies because people reuse usernames and passwords. For nearly every site nowadays, the username is the email address. That means if you know my email address (and it’s not hard to find), you know my username for Facebook, for iCloud, for Dropbox, for Salesforce.com, for Windows Live, for Yelp. Using the email address for the login is superficially good for consumers: They are unlikely to forget their login.

The bad news is that account access now depends on a single piece of hidden information: the password. And people reuse passwords and choose weak passwords. So if someone steals a database from a major retailer with a million account usernames (which are email addresses) and passwords, many of those will also be Facebook logins. And Twitter. And iCloud.

That’s how hackers can quietly accumulate what they claim are 250 million iCloud passwords. They probably have 250 million email address / password pairs amalgamated from various sources: A million from this retailer, ten million from that social network. It adds up. How many of those will work in iTunes? Unknown. Not 250 million. But maybe 10 million? Or 20 million? Either way, it’s a nightmare for customers and a disaster for Apple, if those accounts are locked, or if phones are bricked.

What’s the Answer?

As long as we use passwords, and users have the ability to reuse passwords, this problem will exist. Hackers are excellent at stealing data. Companies are bad at detecting breaches, and even worse about disclosing them unless legally obligated to do so.

Can Apple prevent those 250 million accounts from being seized? Probably. Will problems like this happen again and again and again? For sure, until we move away from any possibility of shared credentials. And that’s not happening any time soon.

Let’s talk about the practical application of artificial intelligence to cybersecurity. Or rather, let’s read about it. My friend Sean Martin has written a three-part series on the topic for ITSP Magazine, exploring AI, machine learning, and other related topics. I provided review and commentary for the series.

The first part, “It’s a Marketing Mess! Artificial Intelligence vs Machine Learning,” explores probably the biggest challenge about AI: Hyperbole. That, and inconsistency. Every lab, every vendor, every conference, every analyst defines even the most basic terminology differently — when they bother to define it at all. Vagueness begets vagueness, and so the terms “artificial intelligence” and “machine learning” are thrown around with wanton abandon. As Sean writes,

The latest marketing discovery of AI as a cybersecurity product term only exacerbates an already complex landscape of jingoisms with like muddled understanding. A raft of these associated terms, such as big data, smart data, heuristics (which can be a branch of AI), behavioral analytics, statistics, data science, machine learning and deep learning. Few experts agree on exactly what those terms mean, so how can consumers of the solutions that sport these fancy features properly understand what those things are?

“Machine Learning: The More Intelligent Artificial Intelligence,” the second installment, picks up by digging into pattern recognition. Specifically, the story is about when AI software can discern patterns based on its own examination of raw data. Sean also digs into deep learning:

Deep Learning (also known as deep structured learning, hierarchical learning or deep machine learning) is a branch of machine learning based on a set of algorithms that attempt to model high level abstractions in data by using a deep graph with multiple processing layers, composed of multiple linear and non-linear transformations.

In the conclusion, “The Actual Benefits of Artificial Intelligence and Machine Learning,” Sean brings it home to your business. How can you tell if an AI solution is real? How can you tell what it really does? That means going beyond the marketing material’s attempts to obfuscate:

The bottom line on AI-based technologies in the security world: Whether it’s called machine learning or some flavor of analytics, look beyond the terminology – and the oooh, ahhh hype of artificial intelligence – to see what the technology does. As the saying goes, pay for the steak – not the artificial intelligent marketing sizzle.

It was a pleasure working on this series with Sean, and we hope you enjoy reading it.

Was the Russian government behind the 2014 theft of data on about 500 million Yahoo subscribers? The U.S. Justice Department thinks so: It accused two Russian intelligence officers of directing the hacking efforts, and also named two hackers as being part of the conspiracy to steal the data.

According to Mary B. McCord, Acting Assistant Attorney General,

The defendants include two officers of the Russian Federal Security Service (FSB), an intelligence and law enforcement agency of the Russian Federation and two criminal hackers with whom they conspired to accomplish these intrusions. Dmitry Dokuchaev and Igor Sushchin, both FSB officers, protected, directed, facilitated and paid criminal hackers to collect information through computer intrusions in the United States and elsewhere.

Ms. McCord added that the scheme targeted Yahoo accounts of Russian and U.S. government officials, including security staff, diplomats and military personnel. “They also targeted Russian journalists; numerous employees of other providers whose networks the conspirators sought to exploit; and employees of financial services and other commercial entities,” she said.

From a technological perspective, the hackers first broke into computers of American companies providing email and internet-related services. From there, they harvested information, including information about individual users and the private contents of their accounts. The hackers, explained Ms. McCord, were hired to gather information for the FSB officers — classic espionage. However, they quietly went further, stealing financial information, such as gift card and credit card numbers, from users’ email accounts — and also using millions of stolen Yahoo accounts to set up an email spam scheme.

Was this state-sponsored cybertheft? Probably, but it’s not certain. What we have are serious allegations, but we don’t know if the FSB agents were working on orders from the Kremlin, or if they were running their own operation for their own private benefit. It’s simply too soon to tell.

The Turkish/Dutch Hacking Connection

Similarly, it’s too soon to know who is behind this week’s use of hijacked Twitter accounts to fling some nasty rhetoric against the Netherlands. This comes on the heels of the Dutch government’s efforts to block Turkish government ministers from traveling to the Netherlands to encourage Turkish ex-pats to vote in a Turkish referendum. At the same time, the Netherlands was holding an important election of its own, with one of the leading candidates offering an isolationist, anti-Muslim platform. According to Reuters,

A diplomatic spat between Turkey, the Netherlands and Germany spread online on Wednesday when a large number of Twitter accounts were hijacked and replaced with anti-Nazi messages in Turkish.

The attacks, using the hashtags #Nazialmanya (NaziGermany) or #Nazihollanda (NaziHolland), took over accounts of high-profile CEOs, publishers, government agencies, politicians and also some ordinary Twitter users.

The account hijackings took place as the Dutch began voting on Wednesday in a parliamentary election that is seen as a test of anti-establishment and anti-immigrant sentiment.

The hackers did a good job getting access to Twitter accounts. Reuters continued,

The hacked accounts featured tweets with Nazi symbols, a variety of hashtags and the phrase “See you on April 16”, the date of a planned referendum in Turkey on extending Erdogan’s presidential powers.

Among them were the accounts of the European Parliament and the personal profile of French conservative politician Alain Juppe.

They also included the UK Department of Health and BBC North America, along with the profile of Marcelo Claure, the chief executive of U.S. telecoms operator Sprint Corp.

Other accounts included publishing sites for Die Welt, Forbes and Reuters Japan and several non-profit agencies including Amnesty International and UNICEF USA, as well as Duke University in the United States.

How did the hackers get access to Twitter? In part by breaking into a Dutch audience analytics company, which would have had access to some or all of those accounts. As Reuters reported,

At least some of the hijacked tweets appear to have been delivered via Twitter Counter, a Netherlands-based Twitter audience analytics company. Twitter Counter Chief Executive Omer Ginor acknowledged via email that the service had been hacked.

Meanwhile in a separate action, Reuters said,

Last Saturday, denial of service attacks staged by a Turkish hacking group hit the websites of Rotterdam airport and anti-Islam firebrand Geert Wilders, whose Freedom Party is vying to form the biggest party in the Dutch parliament.

So – as with the Yahoo hack in 2014 – are these the work of state-sponsored hackers? Or of hackers who believe in a cause, and who are working on their own to support that cause? It’s too soon to tell, and in this case, we may never know; it’s unclear if any organizations as powerful as the U.S. Justice Department and FBI are investigating. What we do know, though, is that nearly everything is vulnerable. A reputable analytics service can be hacked in order to provide a backdoor means to take over Twitter accounts. Internet access companies can be subverted and used for espionage or for staging man-in-the-middle attacks.

How many more of these attacks will be unveiled in the weeks, months and years ahead? One safe prediction: There will be many more attacks — whether state sponsors are behind them or not.

Let’s take a chainsaw to content-free buzzwords favored by technology marketers and public relations professionals. Or even better, let’s applaud one PR agency’s campaign to do just that. Houston PR, based in the UK, has a fun website called “Buzzsaw” which removes those empty phrases from text, such as press releases. Says the agency:

This free tool automatically hacks PR buzzwords out of press releases to make life more bearable for Britain’s hard-working journalists.

The Buzzsaw can also be used for speeches, strategy documents, advertising copy or any other collections of words that need to be as clear as possible.

You’ll find that toe-curling terms like repurposing, solution, robust, best of breed, mission-critical, next-generation, web-enabled, leading, value-added, leverage, seamless, etc, are struck out by the Buzzsaw.

It also takes a scythe to cutesy Hipster-style words and phrases like “totes amazeballs”, “awesome” and “super excited”.

To compile the Buzzsaw database we asked thousands of journalists to supply examples of the PR terms that irritate them the most.

Here are some of the words and phrases that Buzzsaw looks for. Note that it does tend to be British-centric in terms of spelling.

“win rates”, “business development lifecycle”, “market-leading”, “global provider”, “simple mission”, “optimal opportunities”, “unmatched capabilities”, “big data”, “pace of investment”, “priority needs”, “Blue Sky Thinking”, “Descriptor”, “Packages”, “Manage expectations”, “collegiate approach”, “oxygenate the process”, “low hanging fruit”, “Happy Bunny”, “Robust procedures”, “keep across”, “Stewardship”, “Solutioning”, “net net”, “sub-ideal”, “action that solve”, “expidite the deliverables”, “park this issue”, “suite of offerings”, “sunset”, “horizon scan”, “110%”, “Socialise”, “Humble”, “Special someone”, “Super”, “Shiny”, “Taxing times”, “Do the math”, “Sharing”, “Nailed it”, “Bail in”, “Revert”, “Sense check”, “Snackable content”, “higher order thinking”, “Coopetition”, “Fulfilment issues”, “Cascade”, “Demising”, “Horizon scanning”, “Do-able”, “Yardstick”, “Milestone”, “Landmark achievement”, “Negativity”, “True story”, “So True”, “Next-generation”, “Voice to voice”, “So to speak”, “Step change”, “Edgy”

Check it out — and if you are a tech PR professional or marketeer, maybe try it on your own collateral.

As many of you know, I am co-founder and part owner of BZ Media LLC. Yes, I’m the “Z” of BZ Media. Here is exciting news released today about one of our flagship events, InterDrone.

MELVILLE, N.Y., March 13, 2017 – BZ Media LLC announced today that InterDrone™ The International Drone Conference & Exposition has been acquired by Emerald Expositions LLC, the largest producer of trade shows in North America. InterDrone 2016 drew 3,518 attendees from 54 different countries on 6 continents and the event featured 155 exhibitors and sponsors. The 2017 event will be managed and produced by BZ Media on behalf of Emerald.

Emerald Expositions is the largest operator of business-to-business trade shows in the United States, with their oldest trade shows dating back over 110 years. They currently operate more than 50 trade shows, including 31 of the top 250 trade shows in the country as ranked by TSNN, as well as numerous other events. Emerald events connect over 500,000 global attendees and exhibitors and occupy over 6.7 million NSF of exhibition space.

“We are very proud of InterDrone and how it has emerged so quickly to be the industry leading event for commercial UAV applications in North America,” said Ted Bahr, President of BZ Media. “We decided that to take the event to the next level required a company of scale and expertise like Emerald Expositions. We look forward to supporting Emerald through the 2017 and 2018 shows and working together to accelerate the show’s growth under their ownership over the coming years.”

InterDrone was just named to the Trade Show Executive magazine list of fastest-growing shows in 2016 and was one of only 14 shows in the country named in all three categories: fastest growth in exhibit space, growth in number of exhibitors, and growth in attendance. InterDrone was the only drone show named to the list.

InterDrone 2017 will take place September 6–8, 2017, at the Rio Hotel & Casino in Las Vegas, NV, and, in addition to a large exhibition floor, features three subconferences for attendees, making InterDrone the go-to destination for UAV educational content in North America. More than 120 classes, panels and keynotes are presented under Drone TechCon (for drone builders, engineers, OEMs and developers), Drone Enterprise (for enterprise UAV pilots, operators and drone service businesses) and Drone Cinema (for pilots engaged in aerial photography and videography).

“Congratulations to Ted Bahr and his team at BZ Media for successfully identifying this market opportunity and building a strong event that provides a platform for commercial interaction and education to this burgeoning industry”, said David Loechner, President and CEO of Emerald Expositions. “We have seen first-hand the emerging interest in drones in our two professional photography shows, and we are excited at the prospect of leveraging our scale, experience and expertise in trade shows and conferences to deliver even greater benefits to attendees, sponsors, exhibitors at InterDrone and to the entire UAV industry.”

To absolutely nobody’s surprise, the U.S. Central Intelligence Agency can spy on mobile phones, both Android and iPhone, and can also monitor the microphones on smart home devices like televisions.

This week’s disclosure of CIA programs by WikiLeaks has been billed as the largest-ever publication of confidential documents from the American spy agency. The document dump will appear in pieces; the first installment has 8,761 documents and files from the CIA’s Center for Cyber Intelligence, says WikiLeaks.

According to WikiLeaks, the CIA malware and hacking tools are built by EDG (Engineering Development Group), a software development group within the CIA’s Directorate for Digital Innovation. WikiLeaks says the EDG is responsible for the development, testing and operational support of all backdoors, exploits, malicious payloads, trojans, viruses and any other kind of malware used by the CIA.

Smart TV = Spy TV?

Another part of the covert program, code-named “Weeping Angel,” turns smart TVs into secret microphones. After infestation, Weeping Angel places the target TV in a ‘Fake-Off’ mode. The owner falsely believes the TV is off when it is on. In ‘Fake-Off’ mode the TV operates as a bug, recording conversations in the room and sending them over the Internet to a covert CIA server.

The New York Times reports that the CIA has refused to explicitly confirm the authenticity of the documents. However, the government strongly implied their authenticity when the agency put out a statement to defend its work and chastise WikiLeaks, saying the disclosures “equip our adversaries with tools and information to do us harm.”

The WikiLeaks data dump talked about efforts to infect and control non-mobile systems. That includes desktops, notebooks and servers running Windows, Linux, Mac OS and Unix. The malware is distributed in many ways, including website viruses, software on CDs or DVDs, and portable USB storage devices.

Going mobile with spyware

What about the iPhone? Again, according to WikiLeaks, the CIA produces malware to infest, control and exfiltrate data from Apple products running iOS, such as iPhones and iPads. Similarly, other programs target Android. Says WikiLeaks, “These techniques permit the CIA to bypass the encryption of WhatsApp, Signal, Telegram, Wiebo, Confide and Cloackman by hacking the smart phones that they run on and collecting audio and message traffic before encryption is applied.”

The tech industry is scrambling to patch the vulnerabilities revealed by the WikiLeaks data dump. For example, Apple said,

Apple is deeply committed to safeguarding our customers’ privacy and security. The technology built into today’s iPhone represents the best data security available to consumers, and we’re constantly working to keep it that way. Our products and software are designed to quickly get security updates into the hands of our customers, with nearly 80 percent of users running the latest version of our operating system. While our initial analysis indicates that many of the issues leaked today were already patched in the latest iOS, we will continue work to rapidly address any identified vulnerabilities. We always urge customers to download the latest iOS to make sure they have the most recent security updates.

Enterprises should expect patches to come from every major hardware and software vendor. IT must be vigilant about making those security updates. In addition, everyone should attempt to identify unpatched devices on the network, and deny those devices access to critical resources until they are properly patched and tested. We don’t want mobile devices to become spy devices.

Apple isn’t as friendly or as communicative as one would think. Earlier today, I received a panic call from someone trying to sync videos to her iPad from a Mac – and receiving a message that there was no suitable application on the iPad. Huh? That made no sense. The app for playing locally stored videos on an iPad is called Videos, and it’s a standard, built-in app. What’s the deal?

In short: With the iOS 10.2 operating system update, Apple renamed the Videos app to TV. And it has to be installed from the Apple App Store. It’s a free download, but who knew? Apparently not me. And not a lot of people who queried their favorite search engine with phrases like “ipad videos app missing.”

What’s worse, the change had the potential to delete locally stored video content. One dissatisfied user posted on an Apple discussion forum:

New TV App deleted home videos from iPad

I had a bunch of home videos on my iPad, and when I updated to iOS 10.2, the new TV App replaced videos. On my iPhone 6, this process went fine. I launched TV, and up popped the Library, and within it was a sub-menu for Home Videos. The one and only one I had on my iPhone is still there.

But I had dozens on my iPad and now they are all gone. Not only are they all gone, but there is no sub-menu for Home Videos AT ALL! I can probably replace them by synching to my laptop, but this is a time-consuming pain in the *$$, and why should I have to do this at all?

This change was unveiled in October 2016 with much fanfare, with Apple claiming:

Apple today introduced the new TV app, offering a unified experience for discovering and accessing TV shows and movies from multiple apps on Apple TV, iPhone and iPad. The TV app provides one place to access TV shows and movies, as well as a place to discover new content to watch. Apple also introduced a new Siri feature for Apple TV that lets viewers tune in directly to live news and sporting events across their apps. Watching TV shows and movies across Apple devices has never been easier.

The update appeared, for U.S. customers at least, on December 12, 2016. That’s when iOS 10.2 came out. Buh-bye, Videos app!

The change moved a piece of core functionality from iOS itself into an app. The benefits: The new TV app can be updated on its own schedule, not tied to iOS releases, and iOS releases themselves can be smaller. The drawback: Users must manually install the TV app.

Once the TV app is installed, the user can re-sync the videos from a Mac or Windows PC running iTunes. This should restore the missing content, assuming the content is on the desktop/notebook computer. How rude, Apple!

Let me add, snarkily, that the new name is stupid since there’s already a thing from Apple called TV – Apple TV.

What’s the Snapchat appeal? For now, it’s a red-hot initial public offering and the promise of more public offerings to come, after a period of slow tech movement on Wall Street.

The Snapchat social-media service is perplexing to nearly anyone born before 1990, myself included. That didn’t stop its debut on the New York Stock Exchange from ringing everyone’s bell. According to Fox News, Snapchat’s (SNAP) wildly successful trading debut bested Facebook’s (FB), Alibaba’s (BABA) and Google’s (GOOGL). At the outset of trading Thursday, the stock jumped more than 40 percent to $24 a share, no thanks to Main Street investors, who were largely left out of the action. Snapchat surged 44 percent Thursday, closing at $24.48, which put the social media company’s market cap at around $28.3 billion.

Not bad for a social media service whose appeal is that its messages, photos and videos only stick around for a little while, and then vanish forever. That places Snapchat in stark contrast with services like Facebook and Twitter, which save everything forever (unless the original poster goes back and deletes a specific post).

Snapchat’s hot IPO came on one of the biggest recent days on stock markets, with the FTSE 100 and Dow breaking records. As reported by the Telegraph:

“Animal spirits have taken over,” said Neil Wilson, of ETX Capital, as the FTSE 100 charged to a fresh intraday record high of 7,383.05. It closed at a new peak of 7,382.9, up 119.46 points, or 1.64pc, on the day, while the more domestically-focused FTSE 250 also hit an intraday high of 18,983.01.

and

On Wall Street, the Dow Jones crossed the 21,000 mark for the first time ever, as industrial and banking stocks rallied. Clocking in at 25 trading sessions, the rally from 20,000 to 21,000 is the Dow’s fastest move between thousand-point milestones since 1999.

Which 2017 IPOs will come next?

According to MarketWatch, possible hot IPOs to watch for in 2017 include:

  • Spotify: Spotify raised $1 billion in debt financing in March, according to The Wall Street Journal, with conditions that essentially force it to go public in 2017 or pay greater interest on its debt and increased discounts to its investors.
  • Palantir Technologies Inc.: Palantir sells its software to government agencies, including the Defense Intelligence Agency and other military branches, which could become more relevant under Trump’s administration.
  • Uber: In 2017, the ride-hailing startup is expected to face continuing battles over regulation as it fights with cities over self-driving cars as well as the operation of its driver-based business. Additionally, the company faces lawsuits over whether its drivers are employees or independent contractors, and a widespread ruling that finds the drivers are employees could have major implications for Uber’s business model.
  • Lyft Inc.: In 2016, Lyft was followed by rumors of a possible sale because it had hired investment bank Qatalyst Partners, which helps companies find a buyer. But John Green, Lyft’s co-founder, denied pursuing a sale in October and said the company could go public within a few years.
  • Airbnb Inc.: Like Uber and Lyft, Airbnb is also up against a bevy of regulations. But the company appears to be chipping away at short-term housing regulation city by city, with the most recent example coming in New York City. That will likely continue through 2017, Rao said, at least until the company can develop solid working relationships in major cities.
  • Dropbox Inc.: It feels like the file-storage company has been forever rumored to go public, but 2017 may finally be the year for Dropbox.

It will be a rollercoaster ride this year. Let’s hope the market exuberance doesn’t go the way of Snapchat’s messages: Poof!

You keep reading the same three names over and over again. Amazon Web Services. Google Cloud Platform. Microsoft Windows Azure. For the past several years, that’s been the top tier, with a wide gap between them and everyone else. Well, there’s a fourth player, the IBM cloud, with its SoftLayer acquisition. But still, it’s AWS in the lead when it comes to Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS), with many estimates showing about a 37-40% market share in early 2017. In second place, Azure, at around 28-31%. Third place, Google, at around 16-18%. Fourth place, IBM SoftLayer, at 3-5%.

Add that all up, and you get the big four (let’s count IBM) at between 84% and 94%. That doesn’t leave much room for everyone else, including companies like Rackspace, and all the cloud initiatives launched by major computer companies like Alibaba, Dell, HP Enterprise, Oracle, and all the telcos around the world.

Of course, IaaS and PaaS can’t account for all the cloud activity. In the Software-as-a-Service realm, companies like Salesforce.com and Oracle operate their own clouds, which are huge. And then there are the private clouds, operated by the likes of Apple and Facebook, which are immense, with data centers all around the world.

Still, it’s clear that when it comes to the public cloud, there are very few choices. That covers the clouds telcos want to monetize, and that enterprises need for hybrid clouds or full migrations. You can go with the big winner, which is Amazon. You can look to Azure (which is appealing, of course, to Microsoft shops) or Google. And then you can look at everyone else, including IBM SoftLayer, Rackspace, and, well, everyone else.

Amazon Web Services Inside?

Remember when computer makers were touting “Intel Inside”? In today’s world, many SaaS providers are basing their platforms on Amazon, Azure or Google. And many IaaS and PaaS players are doing the same — except in many cases, they’re not advertising it. Unlike many of the smaller PC companies, who wanted to hitch their star to Intel’s huge advertising budget, cloud software companies want to build out their own brands. In the international space, they also don’t want to be seen as fronting U.S.-based technology providers, but rather want to appeal as a local option.

Speaking of international, the dominance of the IaaS/PaaS market by three U.S. companies can create a bit of a conundrum for global tech providers. Many governments and global businesses are leery of letting their data touch U.S. servers, and in some cases, even if the Amazon/Azure/Google data center is based in Europe or Asia, there are legal minefields regarding U.S. courts and surveillance. Not only that, but across the globe, privacy laws are increasingly strict about where consumer information may be stored.

What does this add up to? Probably not much in the long run. There’s no reason to expect that the lineup of Amazon, Azure and Google will change much over the next year or two, or that they will lose market share to smaller players. In fact, to the contrary: The big players are getting bigger at the expense of the niche offerings. According to a recent report from Synergy Research Group:

New Q4 data from Synergy Research Group shows that Amazon Web Services (AWS) is maintaining its dominant share of the burgeoning public cloud services market at over 40%, while the three main chasing cloud providers – Microsoft, Google and IBM – are gaining ground but at the expense of smaller players in the market. In aggregate the three have increased their worldwide market share by almost five percentage points over the last year and together now account for 23% of the total public IaaS and PaaS market, helped by particularly strong growth at Microsoft and Google.

The bigger are getting bigger. The smaller are getting smaller. That’s the cloud market story, in a nutshell.

Modern medical devices increasingly leverage microprocessors and embedded software, as well as sophisticated communications connections, for life-saving functionality. Insulin pumps, for example, rely on a battery, pump mechanism, microprocessor, sensors, and embedded software. Pacemakers and cardiac monitors also contain batteries, sensors, and software. Many devices also have WiFi- or Bluetooth-based communications capabilities. Even hospital rooms with intravenous drug delivery systems are controlled by embedded microprocessors and software, which are frequently connected to the institution’s network. But these innovations also mean that a software defect can cause a critical failure or security vulnerability.

In 2007, former vice president Dick Cheney famously had the wireless capabilities of his pacemaker disabled. Why? He was concerned “about reports that attackers could hack the devices and kill their owners.” Since then, the vulnerabilities caused by the larger attack surface area on modern medical devices have gone from hypothetical to demonstrable, in part due to the complexity of the software, and in part due to the failure to properly harden the code.

In October 2011, The Register reported that “a security researcher has devised an attack that hijacks nearby insulin pumps, enabling him to surreptitiously deliver fatal doses to diabetic patients who rely on them.” The attack worked because the pump contained a short-range radio that allows patients and doctors to adjust its functions. The researcher showed that, by using a special antenna and custom-written software, he could locate and seize control of any such device within 300 feet.

A report published by Independent Security Evaluators (ISE) shows the danger. After examining 12 hospitals, the organization concluded “that remote adversaries can easily deploy attacks that manipulate records or devices in order to fully compromise patient health” (p. 25). Later in the report, the researchers show how they demonstrated the ability to manipulate the flow of medicine or blood samples within the hospital, resulting in the delivery of improper medication types and dosages (p. 37) – and to do all this from the hospital lobby. They were also able to hack into and remotely control patient monitors and breathing tubes – and trigger alarms that might cause doctors or nurses to administer unneeded medications.

Read more in my blog post for Parasoft, “What’s the Cure for Software Defects and Vulnerabilities in Medical Devices?”

Think about alarm systems in cars. By default, many automobiles don’t come with an alarm system installed from the factory. That’s for three main reasons: it lowers the base sticker price on the car, creates a lucrative up-sell opportunity, and allows for variations on alarms to suit local regulations.

My old 2004 BMW 3-series convertible (E46), for example, came pre-wired for an alarm. All the dealer had to do, upon request (and payment of $$$) was install a couple of sensors and activate the alarm in the car’s firmware. Voilà! Instant protection. Third-party auto supply houses and garages, too, were delighted that the car didn’t include the alarm, since that made it easier to sell one to worried customers, along with a great deal on a color-changing stereo head unit, megawatt amplifier and earth-shattering sub-woofer.

Let’s move from cars to cybersecurity. The dangers are real, and as an industry, it’s in our best interest to solve this problem, not by sticking our head in the sand, not by selling aftermarket products, but by a two-fold approach: 1) encouraging companies to make more secure products; and 2) encouraging customers to upgrade or replace vulnerable products — even if there’s not a dollar, pound, euro, yen or renminbi of profit in it for us:

  • If you’re a security hardware, software, or service company, the problem of malicious bits traveling over broadband, wireless and the Internet backbone is also not your problem. Rather, it’s an opportunity to sell products. Hurray for one-time sales, double hurray for recurring subscriptions.
  • If you’re a carrier, the argument goes, all you care about is the packets, and the reliability of your network. The service level agreement provided to consumers and enterprises talks about guaranteed bandwidth, up-time availability, and time to recover from failures; it certainly doesn’t promise that devices connected to your service will be free of malware or safe from hacking. Let customers buy firewalls and endpoint protection – and hey, if we offer that as a service, that’s a money-making opportunity.

Read more about this subject in my latest article for Pipeline Magazine, “An Advocate for Safer Things.”

What’s the biggest tool in the security industry’s toolkit? The patent application. Security thrives on innovation, and always has, because throughout recorded history, the bad guys have always had the good guys at a disadvantage. The only way to respond is to fight back smarter.

Sadly, fighting back smarter isn’t always what happens. At least, not judging by the vendor offerings at RSA Conference 2017, held mid-February in San Francisco. Some of the products and services wouldn’t have seemed out of place a decade ago. Oh, look, a firewall! Oh look, a hardware device that sits on the network and scans for intrusions! Oh, look, a service that trains employees not to click on phishing spam!

Fortunately, some companies and big thinkers are thinking anew about the types of attacks… and about the best ways to protect against them, to detect when those protections fail, to respond when attacks are detected, and to share information about those attacks.

The battle, after all, is asymmetric. Think about your typical target: It’s a business or a government organization or a military or a person. It is known. It can be identified. It can’t hide, or it can’t hide for long. Its defenses, or at least its outer perimeter, can be seen and tested. Its security secrets and vulnerabilities can be exposed by someone who spills them, whether through spying or social engineering.

Knowing the enemy

By contrast, while attackers know who the target is, the target doesn’t know who the attackers are. There may be many attackers, and they can shift targets on short notice, going after the biggest prize or the weakest prize. They can swamp the target with attacks. If one attacker is neutralized, the other attackers are still a threat. And in fact, even the attackers don’t know who the other attackers are. Their lack of coordination is a strength.

In cyberwarfare, as in real warfare, a single successful incursion can have incredible consequences. With one solid foothold in an endpoint – whether that endpoint is on a phone or a laptop, on a server or in the cloud – the bad guys are in a good position to gain more intelligence, seek out credentials, undermine defenses, and take over new footholds.

A Failed Approach

The posture of the cybersecurity industry – and of info sec professionals and the CISO – must shift. For years, the focus was almost exclusively on prevention. Install a firewall, and keep that firewall up to date! Install antivirus software, and keep adding signatures! Install intrusion detection systems, and then upgrade them to intrusion prevention systems!

That approach failed, just as an approach to medicine that focuses exclusively on wellness, healthy eating and taking vitamins will fail. The truth is that breaches happen, in part because organizations don’t do a perfect job with their prevention methods, and in part because bad guys find new weaknesses that nobody considered, from zero-day software vulnerabilities to clever new spearphishing techniques. A breach is inevitable, the industry has admitted. Now, the challenge is to detect that breach quickly, move swiftly to isolate the damage, and then identify root causes so that future attacks using that same vulnerability won’t succeed.

Meanwhile, threat intelligence tools allow businesses to share information, carefully and confidentially. When one company is attacked, others can learn how to guard against that same attack vector. Hey, criminals share information about vulnerabilities using the dark web – so let’s learn from their example.

At RSA Conference 2017, most of the messages were same-old, same-old. Not all, fortunately. I was delighted to see a renewed emphasis at some companies, and in some keynotes, on innovation. Not merely innovation to keep up with the competition or to gain a short-term advantage over cybercriminals, but rather continuous, long-term investment focused on the constantly changing nature of cybersecurity. Security thrives on innovation. Because the bad guys innovate too.

Everyone has received those crude emails claiming to be from your bank’s “Secuirty Team” that tell you that you need to click a link to “reset you account password.” It’s pretty easy to spot those emails, with all the misspellings, the terrible formatting, and the bizarre “reply to” email addresses at domains halfway around the world. Other emails of that sort ask you to review an unclothed photo of an A-list celebrity, or open up an attached document that tells you what you’ve won.

We can laugh. However, many people fall for those phishing scams — and willingly surrender their bank account numbers and passwords, or install malware, such as ransomware.

Less obvious, and more effective, are attacks that are carefully crafted to appeal to a high-value individual, such as a corporate executive or systems administrator. Despite their usual technological sophistication, anyone can be fooled, if the spearphishing email is good enough – spearphishing being the term for phishing emails designed specifically to entrap a certain person.

What’s the danger? Plenty. Spearphishing emails that pretend to be from the CEO can convince a corporate accounting manager to wire money to an overseas account. Called the “Wire Transfer Scam,” this has been around for several years and still works, costing hundreds of millions of dollars, said the FBI.

These types of scams can hurt individuals as well, getting access to their private financial information. In February 2017, about 7,700 employees of the Manatee School District in Florida had their taxpayer numbers stolen when a payroll employee responded to what she thought was a legitimate query from a district officer:

Forward all schools employees 2016 W2 forms to me attached and sent in PDF, I will like to have them as soon as possible for board review. Thanks.

It was a scam, and the scammers now have each employee’s W-2, a key tax document in the United States. The cost of the damage: Unknown at this point, but it’s bad for the school district and for the employees as well. Sadly, this is not a new threat: The U.S. Internal Revenue Service had warned about this exact phishing scam in March 2016.

The cybercriminals behind spearphishing are continuing to innovate. Fortunately, the industry is fighting back. Menlo Security, a leading security company, recently uncovered a sophisticated spearphishing attack at a well-known enterprise. While it’s understandable that the victim would decline to be identified, Menlo Security was able to provide some details on the scheme – which incorporated multiple scripts to truly customize the attack and trick the victim into disclosing key credentials.

  • The attackers performed various checks on the password entered by the victim and their IP address to determine whether it was a true compromise versus somebody who had figured out the attack.
  • The attackers supported various email providers. This was determined by the fact that they served custom pages based on the email domain. For example, a victim with a Gmail address would be served a page that looked like a Gmail login page.
  • The attackers exfiltrated the victim’s personally identifiable information (PII) to an attacker controlled account.
  • The attacker relied heavily on several key scripts to execute the phishing campaign, and to obtain the victim’s IP address in addition to the victim’s country and city.

Phishing and spearphishing have come a long way from those crude emails – which still work, believe it or not. We can’t count on spotting bad spelling and laughable return-address domains on emails to help identify the fraud, because the hackers will figure that out, and use native English speakers and spellcheck. The only solution will be, must be, a technological one.

What’s on the industry’s mind? Security and mobility are front-and-center of the cerebral cortex, as two of the year’s most important events prepare to kick off.

The Security Story

At the RSA Conference 2017 (February 13-17 in San Francisco), expect to see the best of the security industry, from solutions providers to technology firms to analysts. RSA can’t come too soon.

Ransomware, which exploded into the public’s mind last year with high-profile incidents, continues to run rampant. Attackers are turning to ever-bigger targets, with ever-bigger fallout. It’s not enough that hospitals are still being crippled (this was big in 2016), but hotel guests are locked out of their rooms, police departments are losing important crime evidence, and even CCTV footage has been locked away.

What makes ransomware work? Human weakness, for the most part. Many successful ransomware attacks begin with either generalized phishing or highly sophisticated and targeted spearphishing. Once the target user has clicked on a link in a malicious email or website, odds are good that his/her computer will be infected. From there, the malware can do more than encrypt data and request a payout. It can also spread to other computers on the network, install spyware, search for unpatched vulnerabilities and cause untold havoc.

Expect to hear a lot about increasingly sophisticated ransomware at RSA. We’ll also see solutions to help, ranging from ever-more-sophisticated email scanners and endpoint security tools to isolation platforms and tools that prevent malware from spreading beyond the initially affected machine.

Also expect to hear plenty about artificial intelligence as the key to preventing and detecting attacks that evade traditional technologies like signatures. AI has the ability to learn and respond in ways that go far beyond anything that humans can do – and when coupled with increasingly sophisticated threat intelligence systems, AI may be the future of computer security.

The Mobility Story

Halfway around the world, mobility is only part of the story at Mobile World Congress (February 27 – March 2 in Barcelona). There will be many sessions about 5G wireless, which can provision not only traditional mobile users, but also industrial controls and the Internet of Things. AT&T recently announced that it will launch 5G service (with peak speeds of 400Mbps or better) in two American cities, Austin and Indianapolis. While the standards are not yet complete, that’s not stopping carriers and the industry from moving ahead.

Also key to the success of all mobile platforms is cloud computing. Microsoft is moving more aggressively to the cloud, going beyond Azure and Office 365 with a new Windows 10 Cloud edition, a simplified experience designed to compete against Google’s Chrome platform.

The Internet of Things is also roaring to life, and it means a lot more than fitness bands and traffic sensors. IoT applications are showing up in everything from industrial controls to embedded medical devices to increasingly intelligent cars and trucks. What makes it work? Big batteries, reliable wireless, industry standards and strong security. Every type of security player is involved with IoT, from the cloud to wireless to endpoint protection. You’ll hear more about security at Mobile World Congress than in the past, because the threats are bigger than ever. And so are the solutions.

Want to open up your eyes, expand your horizons, and learn from really smart people? Attend a conference or trade show. Get out there. Meet people. Have conversations. Network. Be inspired by keynotes. Take notes in classes that are delivering great material, and walk out of boring sessions and find something better.

I wrote an article about the upcoming 2017 conferences and trade shows covering cloud computing and enterprise infrastructure. Think big and think outside the cubicle: Don’t go to only the events that are about the exact thing you do, and don’t attend only the sessions about the exact thing you do.

The list is organized alphabetically in “must attend,” “worth attending,” and “worthy mentions” sections. Those are my subjective labels (though based on experience, having attended many of these conferences in the past decades), so read the descriptions carefully and make your own decisions. If you don’t use Amazon Web Services, then AWS re:Invent simply isn’t right for you. However, if you use or might use the company’s cloud services, then, yes, it’s a must-attend.

And oh, a word about the differences between conferences and trade shows (also known as expos). These can be subtle, and reasonable people might disagree in some edge cases. However, a conference’s main purpose is education: The focus is on speakers, panels, classes, and other sessions. While there might be an exhibit floor for vendors, it’s probably small and not very useful. In contrast, a trade show is designed to expose you to the greatest number of exhibitors, including vendors and trade associations. The biggest value is in walking the floor; while the trade show may offer classes, they are secondary and often (but not always) vendor fluff sessions “awarded” to big advertisers in return for their gold sponsorships.

So if you want to learn from classes, panels, and workshops, you probably want a conference. If you want to talk to vendors, kick the tires on products, and decide which solutions to buy or recommend, you want a trade show or an expo.

And now, on with the list: the most important events in cloud computing and enterprise infrastructure, compiled at the very beginning of 2017. Note that events can change their dates or cities without notice, or even be cancelled, so keep an eye on the websites. You can read the list here.

Las Vegas, January 2017 — “Alexa, secure the enterprise against ransomware.” Artificial intelligence is making tremendous headway, as seen at this year’s huge Consumer Electronics Show (CES). We’re seeing advances that leverage AI in everything from speech recognition to the Internet of Things (IoT) to robotics to home entertainment.

Not sure what type of music to play? Don’t worry, the AI engine in your cloud-based music service knows your taste better than you do. Want to read a book whilst driving to the office? Self-driving cars are here today in limited applications, and we’ll see a lot more of them in 2017.

Want to make brushing your teeth more fun, all while promoting good dental health? The Ara is the “1st toothbrush with Artificial Intelligence,” claims Kolibree, a French company that introduced the product at CES 2017.

Gadgets dominate CES. While crowds line up to see the AI-powered televisions, cookers and robots, the real power of AI is hidden behind the scenes, outside the consumer context. Unknown to happy shoppers exploring AI-based barbecues, artificial intelligence is keeping our networks safe, detecting ransomware, improving the efficiency of advertising and marketing, streamlining business processes, diagnosing telecommunication faults in undersea cables, detecting fraud in banking and stock-market transactions, and even helping doctors track the spread of infectious diseases.

Medical applications capture the popular imagination because they’re so fast and effective. The IBM Watson AI-enabled supercomputer, for example, can read 200 million pages of text in three seconds — and understand what it reads. An oncology application running on Watson analyzes a patient’s medical records, and then combines attributes from the patient’s file with clinical expertise, external research, and data. Based on that information, Watson for Oncology identifies potential treatment plans for a patient. This means doctors can consider the treatment options provided by Watson when making decisions for individual patients. Watson even offers supporting evidence in the form of administration information, as well as warnings and toxicities for each drug.

Doctor AI Can Cure Cybersecurity Ills

Moving beyond medicine, AI is proving essential for protecting computer networks and their users against intrusion. Traditional non-AI-based anti-virus and anti-malware products can’t protect against advanced threats, and that’s where companies like Cylance come in. They use neural networks and other machine-learning techniques to study millions of malicious files, from executables to documents to PDFs to images. Using pattern recognition, Cylance has developed a machine-learning platform that can identify suspicious files seen on websites or as email attachments, even if it has never seen that particular type of malware before. Nothing but AI can get the job done, not in an era when more than a million new pieces of malware, ranging from phishing to ransomware, appear every single day.
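To make the general approach concrete, here is a toy sketch of machine-learning file classification. It is not Cylance’s model; the features (file size, byte entropy, count of suspicious imports) and the labels are invented purely for illustration, and it uses the off-the-shelf scikit-learn library.

    # Toy sketch of malware classification by machine learning; not any vendor's model.
    # Feature values and labels are invented for illustration.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Each row: [file size in KB, byte entropy, count of suspicious API imports]
    X_train = np.array([
        [120, 4.1, 0],    # benign samples
        [880, 5.0, 1],
        [240, 7.8, 9],    # malicious samples
        [310, 7.5, 12],
    ])
    y_train = np.array([0, 0, 1, 1])  # 0 = benign, 1 = malicious

    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

    # Score a file the model has never seen: high entropy plus many suspicious imports.
    unknown_file = np.array([[295, 7.6, 10]])
    print(model.predict_proba(unknown_file))  # probability the file is malicious

Production systems differ mainly in scale, with millions of samples and thousands of features, rather than in kind.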

Menlo Security is another network-protection company that leverages artificial intelligence. The Menlo Security Isolation Platform uses AI to prevent Internet-based malware from ever reaching an endpoint, such as a desktop or mobile device, because email and websites are accessed inside the cloud — not on the client’s computer. Only safe, malware-free rendering information is sent to the user’s endpoint, eliminating the possibility of malware reaching the user’s device. An artificial intelligence engine constantly scans the Internet session to provide protection against spear-phishing and other email attacks.

What if a machine does become compromised? It’s unlikely, but it can happen — and the price of a single breach can be enormous, especially if a hacker can take full control of the compromised device and use it to attack other assets within the enterprise, such as servers, routers or executives’ computers. If a breach does occur, that’s when the AI technology of Javelin Networks leaps into action, detecting that the attack is in progress, alerting security teams, and isolating the device from the network — while simultaneously tricking the attackers into believing they’ve succeeded, thereby keeping them “on the line” while real-time forensics gather the information needed to identify the attacker and help shut them down for good.

Socializing Artificial Intelligence

There’s a lot more to enterprise-scale AI than medicine and computer security, of course. QSocialNow, an innovative company in Argentina, uses AI-based Big Data and predictive analytics to watch an organization’s social media accounts — and empowers the organization not only to analyze trends, but to respond in mere seconds to an unexpected event, such as a rise in customer complaints, the emergence of a social protest, or even a physical disaster like an earthquake or tornado. Yes, humans can watch Twitter, Facebook and other networks, but they can’t act as fast as AI — or spot the subtle trends that only advanced machine learning can observe through mathematics.

Robots can be powerful helpers for humanity, and AI-based toothbrushes can help us and our kids keep our teeth healthy. While the jury may be out on the implications of self-driving cars on our city streets, there’s no doubt that AI is keeping us — and our businesses — safe and secure. Let’s celebrate the consumer devices unveiled at CES, and the artificial intelligence working behind the scenes, far from the Las Vegas Strip, for our own benefit.

According to a recent study, 46% of the top one million websites are considered risky. Why? Because the homepage or its background ad sites run software with known vulnerabilities, because the site has been categorized as a known-bad site for phishing or malware, or because the site had a security incident in the past year.

According to Menlo Security, in its “State of the Web 2016” report introduced mid-December 2016, “… nearly half (46%) of the top million websites are risky.” Indeed, Menlo says, “Primarily due to outdated software, cyber hackers now have their veritable pick of half the web to exploit. And exploitation is becoming more widespread and effective for three reasons: 1. Risky sites have never been easier to exploit; 2. Traditional security products fail to provide adequate protection; 3. Phishing attacks can now utilize legitimate sites.”
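One reason risky sites are so easy to find is that many of them announce outdated software in plain sight. Here is a hedged sketch of that idea; the “outdated” version strings below are placeholders rather than a real vulnerability feed, and example.com is just a stand-in URL.

    # Sketch: spot self-reported, outdated server software in HTTP response headers.
    # The OUTDATED set is a placeholder list, not an authoritative vulnerability feed.
    import requests

    OUTDATED = {"Apache/2.2", "nginx/1.4", "PHP/5.3"}  # illustrative only

    def risky_banners(url):
        resp = requests.get(url, timeout=10)
        banners = [resp.headers.get("Server", ""), resp.headers.get("X-Powered-By", "")]
        return [b for b in banners if any(b.startswith(v) for v in OUTDATED)]

    print(risky_banners("https://example.com"))  # e.g. ['Apache/2.2.15 (CentOS)']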

This has been a significant issue for years. However, the issue came to the forefront earlier this year when several well-known media sites were essentially hijacked by malicious ads. The New York Times, the BBC, MSN and AOL were hit by tainted advertising that installed ransomware, reports Ars Technica. From their March 15, 2016, article, “Big-name sites hit by rash of malicious ads spreading crypto ransomware”:

The new campaign started last week when ‘Angler,’ a toolkit that sells exploits for Adobe Flash, Microsoft Silverlight, and other widely used Internet software, started pushing laced banner ads through a compromised ad network.

The results of this attack, reported The Guardian at around the same time: 

When the infected adverts hit users, they redirect the page to servers hosting the malware, which includes the widely-used (amongst cybercriminals) Angler exploit kit. That kit then attempts to find any back door it can into the target’s computer, where it will install cryptolocker-style software, which encrypts the user’s hard drive and demands payment in bitcoin for the keys to unlock it.

If big-money trusted media sites can be hit, so can nearly any corporate site, e-commerce portal, or other website that uses third-party tools – or where there might be unpatched servers and software. That means just about anyone. After all, not all organizations are diligent about monitoring for common vulnerabilities and exposures (CVEs) on their on-premises servers. And when companies run their websites on multi-tenant hosting facilities, they don’t even have direct access to the operating system; they rely on the hosting company to install patches and fixes to Windows Server, Linux, Joomla, WordPress and so on.

A single unpatched operating system, web server platform, database or extension can introduce a vulnerability that can be scanned for. Once found, that CVE can be exploited by a talented hacker — or by a disgruntled teenager with a readily available web exploit kit.
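Finding those CVEs doesn’t require exotic tooling, either. As a sketch, assuming NVD’s public CVE API (the endpoint, parameters and response fields may differ from what’s shown, so treat this as illustrative), a few lines can pull recently published vulnerabilities for a given product keyword:

    # Sketch: look up published CVEs for a product by keyword.
    # Assumes NVD's public CVE API; verify the current endpoint, parameters and fields.
    import requests

    def cves_for(keyword, limit=5):
        resp = requests.get(
            "https://services.nvd.nist.gov/rest/json/cves/2.0",
            params={"keywordSearch": keyword, "resultsPerPage": limit},
            timeout=30,
        )
        resp.raise_for_status()
        return [item["cve"]["id"] for item in resp.json().get("vulnerabilities", [])]

    print(cves_for("WordPress"))  # e.g. a short list of CVE identifiers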

What can you do about it? Well, you can read my complete story on this subject, “Malware explosion: The web is risky,” published on ITProPortal.

“If you give your security team the work they hate to do day in and day out, you won’t be able to retain that team.” Eoin Keary should know. As founder, director and CTO of edgescan, a fast-growing managed security service provider (MSSP), his company frees up enterprise security teams to focus on the more strategic, more interesting, more business-critical aspects of InfoSec while his team handles what it knows and does best: the monotony of full-stack vulnerability management.

It’s a perfect match, Keary says. By using an MSSP, customers can focus on business-critical issues, save money, have better security—and not have to replace expensive, highly trained employees who quit after a few months out of boredom. “We are experts in vulnerability management, have built the technology and can deliver very efficiently.”

BCC Risk Advisory Ltd, edgescan’s parent company, based in Dublin, Ireland, was formed in 2011 with “me and a laptop,” explains Keary. He expects the company to end the 2016 fiscal year with seven-figure revenues and growth of roughly 400% over 2015. Its secret cyberweapon is a cloud-based SaaS called edgescan, which detects security weaknesses across the customer’s full stack of technology assets, from servers to networks, from websites to apps to mobile devices. It also provides continuous asset profiling and virtual patching, coupled with expert support.

edgescan assesses clients’ systems on a continuous basis. “We have a lot of intelligence and automation in the platform to determine what needs to be addressed,” explains Keary.

There’s a lot more to my interview with Eoin Keary — you can read the whole story, “Apparently We Love To Do What Companies Hate. Lucky You!” published in ITSP Magazine.

I was dismayed this morning to find an email from Pebble — the smart watch folks — essentially announcing their demise. The company is no longer a viable concern, says the message, and the assets of the company are being sold to Fitbit. Some of Pebble’s staff will go to Fitbit as well.

This is a real loss. The Pebble is an excellent watch. I purchased the original monochrome-screen model by signing onto their Kickstarter campaign, back in April 2012, for an investment of $125.

The Kickstarter watch’s screen became a little flaky after a few years. I purchased the Pebble Time – a much-improved color version – in May 2016, for the odd price of $121.94 through Amazon. You can see the original Pebble, with a dead battery, on the left, and the Pebble Time on the right. The watchface I’ve chosen isn’t colorful, so you can’t see that attribute.

I truly adore the Pebble Time. Why?

  • The battery life is a full week; I don’t travel with a charging cable unless it’s a long trip.
  • The watch does everything I want: The watch face I’ve chosen can be read quickly, and is always on.
  • The watch lets me know about incoming text messages. I can answer phone calls in the car (using speakerphone) by pressing a button on the watch.
  • Also in the car I can control my phone’s music playback from the watch.
  • It was inexpensive enough that if it gets lost, damaged or stolen, no big deal.

While I love the concept of the Apple Watch, it’s too complicated. The battery life is far too short. And I don’t need the extra functions. The Pebble Time is (or rather was) far less expensive.

Fortunately, my Pebble Time should keep running for a long, long time. Don’t know what will replace it, when the time comes. Hopefully something with at least a week of battery life.

Here’s the statement from Pebble:

Pebble is joining Fitbit

Fitbit has agreed to acquire key Pebble assets. Due to various factors, Pebble can no longer operate as an independent entity, and we have made the tough decision to shut down the company. The deal finalized today preserves as much of Pebble as possible.

Pebble is ceasing all hardware operations. We are no longer manufacturing, promoting, or selling any new products. Active Pebble models in the wild will continue to work.

Making Awesome Happen will live on at Fitbit. Much of our team and resources will join Fitbit to deliver new “moments of awesome” in future Fitbit products, developer tools, and experiences. As our transition progresses, we’ll have exciting new stories to tell and milestones to celebrate.

It’s no doubt a bittersweet time. We’ll miss what we’re leaving behind, but are excited for what the future holds. It will be important for Pebblers to extend a warm welcome to Fitbit—as fans and customers—sharing what they love about Pebble and what they’d like to see next.

From company-issued tablets to BYOD (bring your own device) smartphones, employees are making the case that mobile devices are essential for productivity, job satisfaction, and competitive advantage. Except in the most regulated industries, phones and tablets are part of the landscape, but their presence requires a strong security focus, especially in the era of non-stop malware, high-profile hacks, and new vulnerabilities found in popular mobile platforms. Here are four specific ways of examining this challenge that can help drive the choice of both policies and technologies for reducing mobile risk.

Protect the network: Letting any mobile device onto the business network is a risk, because if the device is compromised, the network (and all of its servers and other assets) may be compromised as well. Consider isolating internal WiFi links to secured network segments, and only permit external access via virtual private networks (VPNs). Install firewalls that guard the network by recognizing not only authorized devices, but also authorized users — and authorized applications. Be sure to keep careful tabs on which devices access the network, from where, and when.

Protect the device: A mobile device can be compromised in many ways: It might be stolen, or the user might install malware that provides a gateway for a hacker. Each mobile device should be protected by strong passwords, not only for the device but also for critical business apps. Don’t allow corporate data to be stored on the device itself. Ensure that there are remote-wipe capabilities in case the device is lost. And consider installing a Mobile Device Management (MDM) platform that can give IT full control over the mobile device – or at least over those portions of an employee-owned device that might ever be used for business purposes.

Protect the data: To be productive with their mobile devices, employees want access to important corporate assets, such as email, internal websites, ERP or CRM applications, document repositories, as well as cloud-based services. Ensure that permissions are granted specifically for needed services, and that all access is encrypted and logged. As mentioned above, never let corporate data – including documents, emails, chats, internal social media, contacts, and passwords – be stored or cached on the mobile device. Never allow co-mingling of personal and business data, such as email accounts. Yes, it’s a nuisance, but make the employee log into the network, and authenticate into enterprise-authorized applications, each and every time. MDM platforms can help enforce those policies as well.

Protect the business: The policies regarding mobile access should be worked out along with corporate counsel, and communicated clearly to all employees before they are given access to applications and data. The goal isn’t to be heavy-handed, but rather, to gain their support. If employees understand the stakes, they become allies in helping protect business interests. Mobile access is risky for enterprises, and with today’s aggressive malware, the potential for harm has never been higher. It’s not too soon to take it seriously.
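One way to make those four policies enforceable is a posture check: before a device gets network or application access, verify that it meets a minimum standard. The sketch below is a simplified illustration; the policy fields and device record are assumptions, and real MDM and network-access-control products are far more involved.

    # Simplified sketch of a device posture check before granting network access.
    # Policy fields and the device record are illustrative assumptions, not a real MDM API.
    MINIMUM_POLICY = {
        "encrypted": True,          # device storage must be encrypted
        "passcode_set": True,       # screen lock required
        "min_os_version": (10, 0),  # oldest OS version allowed
    }

    def meets_policy(device):
        return (
            device["encrypted"] == MINIMUM_POLICY["encrypted"]
            and device["passcode_set"] == MINIMUM_POLICY["passcode_set"]
            and tuple(device["os_version"]) >= MINIMUM_POLICY["min_os_version"]
        )

    byod_phone = {"encrypted": True, "passcode_set": False, "os_version": (11, 2)}
    print(meets_policy(byod_phone))  # False: no passcode, so deny VPN and app access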

When an employee account is compromised by malware, the malware establishes a foothold on the user’s computer – and immediately tries to gain access to additional resources. It turns out that with the right data gathering tools, and with the right Big Data analytics and machine-learning methodologies, the anomalous network traffic caused by this activity can be detected – and thwarted.

That’s the role played by Blindspotter, a new anti-malware system that seems like a specialized version of a network intrusion detection/prevention system (IDPS). Blindspotter can help against many types of malware attacks. Those include one of the most insidious and successful hack vectors today: spear phishing. That’s when a high-level target in your company is singled out for attack by malicious emails or by compromised websites. All the victim has to do is open an email, or click on a link, and wham – malware is quietly installed and operating. (High-level targets include top executives, financial staff and IT administrators.)

My colleague Wayne Rash recently wrote about this network monitoring solution and its creator, Balabit, for eWeek in “Blindspotter Uses Machine Learning to Find Suspicious Network Activity”:

The idea behind Balabit’s Blindspotter and Shell Control Box is that if you gather enough data and subject it to analysis comparing activity that’s expected with actual activity on an active network, it’s possible to tell if someone is using a person’s credentials who shouldn’t be or whether a privileged user is abusing their access rights.

 The Balabit Shell Control Box is an appliance that monitors all network activity and records the activity of users, including all privileged users, right down to every keystroke and mouse movement. Because privileged users such as network administrators are a key target for breaches it can pay special attention to them.

The Blindspotter software sifts through the data collected by the Shell Control Box and looks for anything out of the ordinary. In addition to spotting things like a user coming into the network from a strange IP address or at an unusual time of day—something that other security software can do—Blindspotter is able to analyze what’s happening with each user, but is able to spot what is not happening, in other words deviations from normal behavior.

Read the whole story here. Thank you, Wayne, for telling us about Blindspotter.
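The underlying idea, building a baseline of each user’s normal activity and flagging deviations from it, can be sketched in a few lines. This is a toy model using scikit-learn’s IsolationForest, not Balabit’s algorithm, and the login records are invented.

    # Toy sketch of behavioral anomaly detection: baseline a user's logins, flag outliers.
    # Not Balabit's algorithm; the login records are invented.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Each row: [hour of login (0-23), megabytes transferred in the session]
    normal_logins = np.array([[9, 40], [10, 55], [9, 35], [11, 60], [10, 48], [9, 52]])

    detector = IsolationForest(contamination=0.1, random_state=0).fit(normal_logins)

    # A 3 a.m. session that moves far more data than usual scores as anomalous (-1).
    print(detector.predict([[3, 900]]))   # [-1] -> alert the security team
    print(detector.predict([[10, 50]]))   # [ 1] -> consistent with the baseline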

For programmers, a language style guide is essential for learning a language’s standards. A style guide can also resolve potential ambiguities in syntax and usage. Interestingly, though, the official Code Conventions for the Java Programming Language guide has not been updated since April 20, 1999 – long before Oracle bought Sun Microsystems. In fact, the page is listed as being for “Archival Purposes Only.”

What’s up with that? I wrote to Andrew Binstock (@PlatypusGuy), the editor-in-chief of Oracle Java Magazine. In the November/December 2016 issue of the magazine, Andrew explained that according to the Java team, the Code Conventions guide was meant as an internal coding guide – not as an attempt to standardize the language.

Instead of Code Conventions, Mr. B recommends the Google Java Style Guide as a “full set of well-reasoned Java coding guidelines.” So there you have it: If you want good Java guidelines, look to Google — not to Oracle. Here’s the letter and the response.

Medical devices are incredibly vulnerable to hacking attacks. In some cases it’s because of software defects that allow for exploits, like buffer overflows, SQL injection or insecure direct object references. In other cases, you can blame misconfigurations, lack of encryption (or weak encryption), non-secure data/control networks, unfettered wireless access, and worse.

Why would hackers go after medical devices? Lots of reasons. To name but one: It’s a potential terrorist threat against real human beings. Remember that Dick Cheney famously disabled the wireless capabilities of his implanted heart monitor for fear of an assassination attack.

Certainly healthcare organizations are being targeted for everything from theft of medical records to ransomware. To quote the report “Hacking Healthcare IT in 2016,” from the Institute for Critical Infrastructure Technology (ICIT):

The Healthcare sector manages very sensitive and diverse data, which ranges from personal identifiable information (PII) to financial information. Data is increasingly stored digitally as electronic Protected Health Information (ePHI). Systems belonging to the Healthcare sector and the Federal Government have recently been targeted because they contain vast amounts of PII and financial data. Both sectors collect, store, and protect data concerning United States citizens and government employees. The government systems are considered more difficult to attack because the United States Government has been investing in cybersecurity for a (slightly) longer period. Healthcare systems attract more attackers because they contain a wider variety of information. An electronic health record (EHR) contains a patient’s personal identifiable information, their private health information, and their financial information.

EHR adoption has increased over the past few years under the Health Information Technology and Economics Clinical Health (HITECH) Act. Stan Wisseman [from Hewlett-Packard] comments, “EHRs enable greater access to patient records and facilitate sharing of information among providers, payers and patients themselves. However, with extensive access, more centralized data storage, and confidential information sent over networks, there is an increased risk of privacy breach through data leakage, theft, loss, or cyber-attack. A cautious approach to IT integration is warranted to ensure that patients’ sensitive information is protected.”

Let’s talk devices. Those could be everything from emergency-room monitors to pacemakers to insulin pumps to X-ray machines whose radiation settings might be changed or overridden by malware. The ICIT report says,

Mobile devices introduce new threat vectors to the organization. Employees and patients expand the attack surface by connecting smartphones, tablets, and computers to the network. Healthcare organizations can address the pervasiveness of mobile devices through an Acceptable Use policy and a Bring-Your-Own-Device policy. Acceptable Use policies govern what data can be accessed on what devices. BYOD policies benefit healthcare organizations by decreasing the cost of infrastructure and by increasing employee productivity. Mobile devices can be corrupted, lost, or stolen. The BYOD policy should address how the information security team will mitigate the risk of compromised devices. One solution is to install software to remotely wipe devices upon command or if they do not reconnect to the network after a fixed period. Another solution is to have mobile devices connect from a secured virtual private network to a virtual environment. The virtual machine should have data loss prevention software that restricts whether data can be accessed or transferred out of the environment.

The Internet of Things – and the increasing prevalence of medical devices connected to hospital or home networks – increases the risk. What can you do about it? The ICIT report says,

The best mitigation strategy to ensure trust in a network connected to the internet of things, and to mitigate future cyber events in general, begins with knowing what devices are connected to the network, why those devices are connected to the network, and how those devices are individually configured. Otherwise, attackers can conduct old and innovative attacks without the organization’s knowledge by compromising that one insecure system.

Given how common these devices are, keeping IT in the loop may seem impossible — but we must rise to the challenge, ICIT says:

If a cyber network is a castle, then every insecure device with a connection to the internet is a secret passage that the adversary can exploit to infiltrate the network. Security systems are reactive. They have to know about something before they can recognize it. Modern systems already have difficulty preventing intrusion by slight variations of known malware. Most commercial security solutions such as firewalls, IDS/ IPS, and behavioral analytic systems function by monitoring where the attacker could attack the network and protecting those weakened points. The tools cannot protect systems that IT and the information security team are not aware exist.
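Even a crude inventory check makes the report’s point concrete. The sketch below, with made-up MAC addresses, simply compares what is observed on the network against the approved asset list and flags anything unaccounted for; it is an illustration, not a substitute for real asset management.

    # Sketch: flag devices seen on the network that are missing from the approved inventory.
    # MAC addresses are made up for illustration.
    APPROVED = {
        "00:1a:2b:3c:4d:5e": "infusion pump, ward 3",
        "00:1a:2b:3c:4d:5f": "nurses' station PC",
    }

    def unknown_devices(observed_macs):
        """Return MAC addresses on the network that nobody has accounted for."""
        return sorted(set(observed_macs) - set(APPROVED))

    seen_on_network = ["00:1a:2b:3c:4d:5e", "66:77:88:99:aa:bb"]
    print(unknown_devices(seen_on_network))  # ['66:77:88:99:aa:bb'] -> investigate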

The home environment – or any use outside the hospital setting – is another huge concern, says the report:

Remote monitoring devices could enable attackers to track the activity and health information of individuals over time. This possibility could impose a chilling effect on some patients. While the effect may lessen over time as remote monitoring technologies become normal, it could alter patient behavior enough to cause alarm and panic.

Pain medicine pumps and other devices that distribute controlled substances are likely high value targets to some attackers. If compromise of a system is as simple as downloading free malware to a USB and plugging the USB into the pump, then average drug addicts can exploit homecare and other vulnerable patients by fooling the monitors. One of the simpler mitigation strategies would be to combine remote monitoring technologies with sensors that aggregate activity data to match a profile of expected user activity.

A major responsibility falls on the device makers – and the programmers who create the embedded software. For the most part, they are simply not up to the challenge of designing secure devices, and may not have the policies, practices and tools in place to get cybersecurity right. Regrettably, the ICIT report doesn’t go into much detail about the embedded software, but it does state,

Unlike cell phones and other trendy technologies, embedded devices require years of research and development; sadly, cybersecurity is a new concept to many healthcare manufacturers and it may be years before the next generation of embedded devices incorporates security into its architecture. In other sectors, if a vulnerability is discovered, then developers rush to create and issue a patch. In the healthcare and embedded device environment, this approach is infeasible. Developers must anticipate what the cyber landscape will look like years in advance if they hope to preempt attacks on their devices. This model is unattainable.

In November 2015, Bloomberg Businessweek published a chilling story, “It’s Way Too Easy to Hack the Hospital.” The authors, Monte Reel and Jordan Robertson, wrote about one hacker, Billy Rios:

Shortly after flying home from the Mayo gig, Rios ordered his first device—a Hospira Symbiq infusion pump. He wasn’t targeting that particular manufacturer or model to investigate; he simply happened to find one posted on EBay for about $100. It was an odd feeling, putting it in his online shopping cart. Was buying one of these without some sort of license even legal? he wondered. Is it OK to crack this open?

Infusion pumps can be found in almost every hospital room, usually affixed to a metal stand next to the patient’s bed, automatically delivering intravenous drips, injectable drugs, or other fluids into a patient’s bloodstream. Hospira, a company that was bought by Pfizer this year, is a leading manufacturer of the devices, with several different models on the market. On the company’s website, an article explains that “smart pumps” are designed to improve patient safety by automating intravenous drug delivery, which it says accounts for 56 percent of all medication errors.

Rios connected his pump to a computer network, just as a hospital would, and discovered it was possible to remotely take over the machine and “press” the buttons on the device’s touchscreen, as if someone were standing right in front of it. He found that he could set the machine to dump an entire vial of medication into a patient. A doctor or nurse standing in front of the machine might be able to spot such a manipulation and stop the infusion before the entire vial empties, but a hospital staff member keeping an eye on the pump from a centralized monitoring station wouldn’t notice a thing, he says.

 The 97-page ICIT report makes some recommendations, which I heartily agree with.

  • With each item connected to the internet of things there is a universe of vulnerabilities. Empirical evidence of aggressive penetration testing before and after a medical device is released to the public must be a manufacturer requirement.
  • Ongoing training must be paramount in any responsible healthcare organization. Adversarial initiatives typically start with targeting staff via spear phishing and watering hole attacks. The act of an ill-prepared executive clicking on a malicious link can trigger a hurricane of immediate and long term negative impact on the organization and innocent individuals whose records were exfiltrated or manipulated by bad actors.
  • A cybersecurity-centric culture must demand safer devices from manufacturers, privacy adherence by the healthcare sector as a whole and legislation that expedites the path to a more secure and technologically scalable future by policy makers.

This whole thing is scary. The healthcare industry needs to step up its game on cybersecurity.

Are you a coder? Architect? Database guru? Network engineer? Mobile developer? User-experience expert? If you have hands-on tech skills, get those hands dirty at a Hackathon.

Full disclosure: Years ago, I thought Hackathons were, well, silly. If you’ve got the skills and extra energy, put them to work for coding your own mobile apps. Do a startup! Make some dough! Contribute to an open-source project! Do something productive instead of taking part in coding contests!

Since then, I’ve seen the light, because it’s clear that Hackathons are a win-win-win.

  • They are a win for techies, because they get to hone their abilities, meet people, and learn stuff.
  • They are a win for Hackathon sponsors, because they often give the latest tools, platforms and APIs a real workout.
  • They are a win for the industry, because they help advance the creation and popularization of emerging standards.

One upcoming Hackathon that I’d like to call attention to: The MEF LSO Hackathon will be at the upcoming MEF16 Global Networking Conference, in Baltimore, Nov. 7-10. The work will support Third Network service projects that are built upon key OpenLSO scenarios and OpenCS use cases for constructing Layer 2 and Layer 3 services. You can read about a previous MEF LSO Hackathon here.

Build your skills! Advance the industry! Meet interesting people! Sign up for a Hackathon!

As Aesop wrote in his short fable, “The Donkey and His Purchaser,” you can quite accurately judge people by the company they keep.

I am “very liberal,” believes Facebook. If you know me, you are probably not surprised by that. However, I was: I usually think of myself as a small-l libertarian who caucuses with the Democrats on social issues. But Facebook, by looking at what I write, who I follow, and which pages I like, probably has a more accurate assessment.

The spark for this particular revelation is “Liberal, Moderate or Conservative? See How Facebook Labels You.” The article, by Jeremy Merrill, in today’s New York Times, explains how to see how Facebook categorizes you (presumably this is most appropriate for U.S. residents):

Try this (it works best on your desktop computer):

Go to facebook.com/ads/preferences on your browser. (You may have to log in to Facebook first.)

That will bring you to a page featuring your ad preferences. Under the “Interests” header, click the “Lifestyle and Culture” tab.

Then look for a box titled “US Politics.” In parentheses, it will describe how Facebook has categorized you, such as liberal, moderate or conservative.

(If the “US Politics” box does not show up, click the “See more” button under the grid of boxes.)

Part of the power of Big Data is that it can draw correlations based on vague inferences. So, yes, if you like Donald Trump’s page, but don’t like Hillary Clinton’s, you are probably conservative. What if you don’t follow either candidate? Jeremy writes,

Even if you do not like any candidates’ pages, if most of the people who like the same pages that you do — such as Ben and Jerry’s ice cream — identify as liberal, then Facebook might classify you as one, too.

This is about more than Facebook or political preferences. It’s how Big Data works in lots of instances where there is not only information about a particular person’s preference and actions, but a web of connections to other people and their preferences and actions. It’s certainly true about any social network where it’s easy to determine who you follow, and who follows you.

If most of your friends are Jewish, or Atheist, or Catholic, or Hindu, perhaps you are too, or have interests similar to theirs. If most of your friends are African-American or Italian-American, or simply Italian, perhaps you are too, or have interests similar to theirs. If many of your friends are seriously into car racing, book clubs, gardening, Game of Thrones, cruise ship vacations, or Elvis Presley, perhaps you are too.
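A toy version of that kind of inference, with invented data and nothing like Facebook’s actual models, shows how little is needed: score an unlabeled user by the labels of the people who like the same pages.

    # Toy sketch of label inference from shared page likes; all data is invented.
    from collections import Counter

    likes = {
        "alice": {"PageA", "PageB"},
        "bob":   {"PageA", "PageC"},
        "carol": {"PageB", "PageD"},
        "dave":  {"PageC"},          # dave has no label yet
    }
    labels = {"alice": "liberal", "bob": "liberal", "carol": "conservative"}

    def infer_label(user):
        votes = Counter()
        for other, label in labels.items():
            overlap = len(likes[user] & likes[other])
            if overlap:
                votes[label] += overlap  # weight by the number of shared pages
        return votes.most_common(1)[0][0] if votes else None

    print(infer_label("dave"))  # 'liberal' -- he shares a page only with bob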

Here is that Aesop fable, by the way:

The Donkey and his Purchaser

A man who wanted to buy a donkey went to market, and, coming across a likely-looking beast, arranged with the owner that he should be allowed to take him home on trial to see what he was like.

When he reached home, he put him into his stable along with the other donkeys. The newcomer took a look round, and immediately went and chose a place next to the laziest and greediest beast in the stable. When the master saw this he put a halter on him at once, and led him off and handed him over to his owner again.

The latter was a good deal surprised to see him back so soon, and said, “Why, do you mean to say you have tested him already?”

“I don’t want to put him through any more tests,” replied the other. “I could see what sort of beast he is from the companion he chose for himself.”

Moral: “A man is known by the company he keeps.”

Can someone steal the data off your old computer? The short answer is yes. A determined criminal can grab the bits, including documents, images, spreadsheets, and even passwords.

If you donate, sell or recycle a computer, whoever gets hold of it can recover the information on its hard drive or solid-state drive (SSD). The platform doesn’t matter: Whether it’s Windows or Linux or Mac OS, you can’t 100% eliminate sensitive data by, say, deleting user accounts or erasing files!

You can make the job harder by using the computer’s disk utilities to format the hard drive. Be aware, however, that formatting will thwart a casual thief, but not a determined hacker.
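If you want to go a step beyond formatting without (yet) destroying the drive, the usual software approach is overwriting. Below is a minimal sketch of overwriting one file before deleting it; the filename is hypothetical, a real wipe would target the whole device rather than a single file, and on SSDs wear leveling can leave stale copies of the data behind, which is exactly why physical destruction remains the gold standard.

    # Minimal sketch: overwrite a file with random bytes before deleting it.
    # Caveat: on SSDs, wear leveling may leave old copies of the data behind.
    import os

    def overwrite_and_delete(path, passes=3):
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            for _ in range(passes):
                f.seek(0)
                f.write(os.urandom(size))
                f.flush()
                os.fsync(f.fileno())
        os.remove(path)

    overwrite_and_delete("old-tax-return.pdf")  # hypothetical filename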

The only truly safe way to destroy the data is to physically destroy the storage media. For years, businesses have physically removed and destroyed the hard drives in desktops, servers and laptops. It used to be easy to remove the hard drive: take out a couple of screws, pop open a cover, unplug a cable, and lift the drive right out.

Once the hard drive is identified and removed, you can smash it with a hammer, drill holes in it, even take it apart (which is fun, albeit time-consuming). Some businesses will put the hard drive into an industrial shredder, which is a scaled-up version of an office paper shredder. Some also use magnetism to attempt to destroy the data. Not sure how effective that is, however, and magnets won’t work at all on SSDs.

It’s much harder to remove the storage from today’s ultra-thin, tightly sealed notebooks, such as a Microsoft Surface or Apple MacBook Air, or even from tablets. What if you want to destroy the storage in order to prevent hackers from gaining access? It’s a real challenge.

If you have access to an industrial shredder, an option is to shred the entire computer. It seems wasteful, and I can imagine that it’s not good to shred lithium-ion batteries – many of which are not easily removable, again, as in the Microsoft Surface or Apple MacBook Air. You don’t want those chemicals lying around. Still, that works, and works well.

Note that an industrial shredder is kinda big and expensive – you can see some from SSL World. However, if you live in any sort of medium-sized or larger urban area, you can probably find a shredding service that will destroy the computer right in front of you. I’ve found one such service here in Phoenix, Assured Document Destruction Inc., that claims to be compliant with industry regulations for privacy, such as HIPAA and Sarbanes-Oxley.

Don’t want to shred the whole computer? Let’s say the computer uses a standard hard drive, usually in a 3.5-inch form factor (desktops and servers) or 2.5-inch form factor (notebooks). If you have a set of small screwdrivers, you should be able to dismantle the computer, remove the storage device, and kill it – such as by smashing it with a maul, drilling holes in it, or taking it completely apart. Note that driving over it in your car, while satisfying, may not cause significant damage.

What about solid state storage? The same actually applies with SSDs, but it’s a bit trickier. Sometimes the drive still looks like a standard 2.5-inch hard drive. But sometimes the “solid state drive” is merely a few exposed chips on the motherboard or a smaller circuit board. You’ve got to smash that sucker. Remove it from the computer. Hulk Smash! Break up the circuit board, pulverize the chips. Only then will it be dead dead dead. (Though one could argue that government agencies like the NSA could still put Humpty Dumpty back together again.)

In short: Even if the computer itself seems totally worthless, its storage can be removed, connected to a working computer, and accessed by a skilled techie. If you want to ensure that your data remains private, you must destroy it.

What’s it going to mean for Java? When Oracle purchased Sun Microsystems, that was one of the biggest questions on the minds of many software developers, and indeed, the entire industry. In an April 2009 blog post, “Oracle, Sun, Winners, Losers,” written when the deal was announced (it closed in January 2010), I predicted,

Winner: Java. Java is very important to Sun. Expect a lot of investment — in the areas that are important to Oracle.

Loser: The Java Community Process. Oracle is not known for openness. Oracle is not known for embracing competitors, or for collaborating with them to create markets. Instead, Oracle is known to play hardball to dominate its markets.

Looks like I called that one correctly. While Oracle continues to invest in Java, it’s not big on true engagement with the community (aka, the Java Community Process). In a story in SD Times, “Java EE awaits its future,” published July 20, 2016, Alex Handy writes about what to expect at the forthcoming JavaOne conference, including about Java EE:

When Oracle purchased Sun Microsystems in 2010, the immediate worry in the marketplace was that the company would become a bad actor around Java. Six years later, it would seem that these fears have come true—at least in part. The biggest new platform for Java, Android, remains embroiled in ugly litigation between Google and Oracle.

Despite outward appearances of a danger for mainstream Java, however, it’s undeniable that the OpenJDK has continued along apace, almost at the same rate of change IT experienced at Sun. When Sun open-sourced the OpenJDK under the GPL before it was acquired by Oracle, it was, in a sense, ensuring that no single entity could control Java entirely, as with Linux.

Java EE, however, has lagged behind in its attention from Oracle. Java EE 7 arrived two years ago, and it’s already out of step with the new APIs introduced in OpenJDK 8. The executive committee at the Java Community Process is ready to move the enterprise platform along its road map. Yet something has stopped Java EE dead in its tracks at Oracle. JSR 366 laid out the foundations for this next revision of the platform in the fall of 2015. One would never know that, however, by looking at the Expert Committee mailing lists at the JCP: Those have been completely silent since 2014.

Alex continues,

One person who’s worried that JavaOne won’t reveal any amazing new developments in Java EE is Reza Rahman. He’s a former Java EE evangelist at Oracle, and is now one of the founders of the Java EE Guardians, a group dedicated to goading Oracle into action, or going around them entirely.

“Our principal goal is to move Java EE forward using community involvement. Our biggest concern now is if Oracle is even committed to delivering Java EE. There are various ways of solving it, but the best is for Oracle to commit to and deliver Java EE 8,” said Rahman.

His concerns come from the fact that the Java EE 8 specification has been, essentially, stalled by lack of action on Oracle’s part. The specification leads for the project are stuck in a sort of limbo, with their last chunk of work completed in December, followed by no indication of movement inside Oracle.

Alex quotes an executive at Red Hat, Craig Muzilla, who seems justifiably pessimistic:

The only thing standing in the way of evolving Java EE right now, said Muzilla, is Oracle. “Basically, what Oracle does is they hold the keys to the [Test Compatibility Kit] for certifying in EE, but in terms of creating other ways of using Java, other runtime environments, they don’t have anything other than their name on the language,” he said.

Java is still going strong. Oracle’s commitment to the community and the process – not so much. This is one “told you so” that I’m not proud of, not one bit.