
AI-driven network scanning is the secret to effective mobile security

The secret sauce is AI-based zero packet inspection. That’s how to secure mobile users, their personal data, and their employers’ data.

Let’s back up a step. Mobile devices are increasingly under attack, from malicious apps, from rogue emails, from adware, and from network traffic. Worse, that network traffic can come from any number of sources, including cellular data, WiFi, even Bluetooth. Users want their devices to be safe and secure. But how, if the network traffic can’t be trusted?

The best approach around is AI-based zero packet inspection (ZPI). It all starts with data. Tons of training data, used to train a machine learning algorithm to recognize patterns that indicate whether a device is performing normally – or if it’s under attack. Machine learning refers to a number of advanced AI algorithms that can study streams of data, rapidly and accurately detect patterns in that data, and from those patterns, sort the data into different categories.

The Zimperium z9 engine, as an example, uses machine learning to train against a number of test cases (on both iOS and Android devices) that represent known patterns of safe and not-safe traffic. We call this zero packet inspection because the objective is not to look at the contents of the network packets, but to scan the lower-level traffic patterns at the network level, such as IP, TCP, UDP and ARP scans.

If you’re not familiar with those terms, suffice it to say that at the network level, the traffic is focused on delivering data to a specific device, and then within that device, making sure it gets to the right application. Think of it as being like an envelope going to a big business – it has the business name, street address, and department/mail stop. The machine learning algorithms look at patterns at that level, rather than examining the contents of the envelope. This makes the scans very fast and accurate.
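
To make that concrete, here is a minimal sketch of the general technique (supervised classification over header-level traffic features) using scikit-learn. The feature set and training values are invented for illustration; this is emphatically not Zimperium’s z9 engine:

```python
# Illustrative only: classify traffic by header-level features, never payloads.
from sklearn.ensemble import RandomForestClassifier

# Hypothetical feature vectors per observation window:
# [packets/sec, unique destination ports, ARP requests/sec, TCP SYN ratio]
X_train = [
    [12.0,   3,  0.1, 0.05],   # normal browsing
    [900.0, 850,  0.2, 0.95],  # TCP SYN port scan
    [8.0,    2, 45.0, 0.02],   # ARP flood / spoofing attempt
]
y_train = ["normal", "port_scan", "arp_attack"]

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Classify a new window of traffic seen by the device.
print(model.predict([[650.0, 400, 0.1, 0.90]]))  # -> ['port_scan']
```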

Read more in my new essay for Security Brief Europe, “Opinion: Mobile security starts with a powerful AI-based scanning engine.”


Forget the IoT: It’s all about the Industrial IoT

Smart televisions, talking home assistants, consumer wearables – that’s not the real story of the Internet of Things. While those are fun and get great stories on blogs and morning news reports, the real IoT is the Industrial IoT. That’s where businesses will truly be transformed, with intelligent, connected devices working together to improve services, reduce friction, and disrupt everything. Everything.

According to Grand View Research, the Industrial IoT (IIoT) market will reach $933.62 billion by 2025. “The ability of IoT to reduce costs has been the prime factor for its adoption in the industrial sector. However, several significant investment incentives, such as increased productivity, process automation, and time-to-market, have also been boosting this adoption. The falling prices of sensors have reduced the overall cost associated with data collection and analytics,” says the report.

The report continues,

An emerging trend among enterprises worldwide is the transformation of technical focus to improving connectivity in order to undertake data collection with the right security measures in place and with improved connections to the cloud. The emergence of low-power hardware devices, cloud integration, big data analytics, robotics & automation, and smart sensors are also driving IIoT market growth.

Meanwhile, Markets & Markets predicts that IIoT will be worth $195.47 billion by 2022. The company says,

A key influencing factor for the growth of the IIoT market is the need to implement predictive maintenance techniques in industrial equipment to monitor their health and avoid unscheduled downtimes in the production cycle. Factors driving the IIoT market include technological advancements in the semiconductor and electronics industry and the evolution of cloud computing technologies.

The manufacturing vertical is witnessing a transformation through the implementation of the smart factory concept and factory automation technologies. Government initiatives such as Industrie 4.0 in Germany and Plan Industriel in France are expected to promote the implementation of IIoT solutions in Europe. Moreover, leading countries in the manufacturing vertical such as the U.S., China, and India are expected to further expand their manufacturing industries and deploy smart manufacturing technologies to increase the contribution of this vertical to their national GDPs.

Read more in my essay, “Forget Fitbit And Smart TVs: The Industrial IoT Is The Real Story.”


Why you should care about serverless computing

The bad news: There are servers used in serverless computing. Real servers, with whirring fans and lots of blinking lights, installed in racks inside data centers inside the enterprise or up in the cloud.

The good news: You don’t need to think about those servers in order to use their functionality to write and deploy enterprise software. Your IT administrators don’t need to provision or maintain those servers, or think about their processing power, memory, storage, or underlying software infrastructure. It’s all invisible, abstracted away.

The whole point of serverless computing is that there are small blocks of code that do one thing very efficiently. Those blocks of code are designed to run in containers so that they are scalable, easy to deploy, and can run in basically any computing environment. The open Docker platform has become the de facto industry standard for containers, and as a general rule, developers are seeing the benefits of writing code that targets Docker containers, instead of, say, Windows servers or Red Hat Linux servers or SuSE Linux servers, or any specific run-time environment. Docker can be hosted in a data center or in the cloud, and containers can be easily moved from one Docker host to another, adding to its appeal.

Currently, applications written for Docker containers still need to be managed by enterprise IT developers or administrators. That means deciding where to create the containers, ensuring that each container has sufficient resources (like memory and processing power) for the application, installing the application into the container, monitoring the application while it runs, and adding more resources if required. Kubernetes, an open container management and orchestration system for Docker, helps with all of that. So while containers greatly assist developers and admins in creating portable code, the containers themselves still need to be managed.

That’s where serverless comes in. Developers write their bits of code (such as to read or write from a database, or encrypt/decrypt data, or search the Internet, or authenticate users, or to format output) to run in a Docker container. However, instead of deploying directly to Docker, or using Kubernetes to handle deployment, they write their code as a function, and then deploy that function onto a serverless platform, like the new Fn project. Other applications can call that function (perhaps using a RESTful API) to do the required operation, and the serverless platform then takes care of everything else automatically behind the scenes, running the code when needed, idling it when not needed.
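
To give a feel for the developer experience, here is a minimal function sketch following the conventions of the Fn project’s Python FDK. The greeting logic is my own invented example; the point is that the handler expresses only the business logic, and the platform decides when and where it runs:

```python
import io
import json

from fdk import response  # the Fn project's Python Function Development Kit


def handler(ctx, data: io.BytesIO = None):
    """Invoked by the Fn platform on each call; idle the rest of the time."""
    name = "world"
    try:
        body = json.loads(data.getvalue())
        name = body.get("name", name)
    except (ValueError, AttributeError):
        pass  # no JSON body supplied; keep the default greeting
    return response.Response(
        ctx,
        response_data=json.dumps({"message": f"Hello, {name}"}),
        headers={"Content-Type": "application/json"},
    )
```

Another application can then invoke the deployed function through its HTTP endpoint (or a RESTful API in front of it), with no knowledge of where or how the code actually runs.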

Read my essay, “Serverless Computing: What It Is, Why You Should Care,” to find out more.


Too long: The delays between cyberattacks and their discovery and disclosure

Critical information about 46 million Malaysians was leaked onto the Dark Web. The stolen data included mobile phone numbers from telcos and mobile virtual network operators (MVNOs), prepaid phone numbers, customer details including physical addresses – and even the unique IMEI and IMSI registration numbers associated with SIM cards.

That’s pretty bad, right? The carriers included Altel, Celcom, DiGi, Enabling Asia, Friendimobile, Maxis, MerchantTradeAsia, PLDT, RedTone, TuneTalk, Umobile and XOX; news about the breach was first published 19 October 2017 by a Malaysian online community.

When did the breach occur? According to lowyat.net, “Time stamps on the files we downloaded indicate the leaked data was last updated between May and July 2014 between the various telcos.”

That’s more than three years between the theft of the information and its discovery. We have no idea whether the carriers had already discovered the losses and chose not to disclose the breaches.

A huge delay between a breach and its disclosure is not unusual. Perhaps things will change once the General Data Protection Regulation (GDPR) kicks in next year, when organizations must reveal a breach within three days of discovery. That still leaves the question of discovery. It simply takes too long!

Verizon’s Data Breach Investigations Report for 2017 has some depressing news: “Breach timelines continue to paint a rather dismal picture — with time-to-compromise being only seconds, time-to-exfiltration taking days, and times to discovery and containment staying firmly in the months camp. Not surprisingly, fraud detection was the most prominent discovery method, accounting for 85% of all breaches, followed by law enforcement which was seen in 4% of cases.”

Read more in my essay, “Months, Years Go By Before Cyberattacks Are Discovered And Revealed.”


Managing the impact of open source software on data centers

Open source software (OSS) offers many benefits for organizations large and small—not the least of which is the price tag, which is often zero. Zip. Nada. Free-as-in-beer. Beyond that compelling price tag, what you often get with OSS is a lack of a hidden agenda. You can see the project, you can see the source code, you can see the communications, you can see what’s going on in the support forums.

When OSS goes great, everyone is happy, from techies to accounting teams. Yes, the legal department may want to scrutinize the open source license to make sure your business is compliant, but in most well-performing scenarios, the lawyers are the only ones frowning. (But then again, the lawyers frown when scrutinizing commercial closed-source software license agreements too, so you can’t win.)

The challenge with OSS is that it can be hard to manage, especially when something goes wrong. Depending on the open source package, there can be a lot of mysteries, which can make ongoing support, including troubleshooting and performance tuning, a real challenge. That’s because OSS is complex.

It’s not like you can say, well, here’s my Linux distribution on my server. Oh, and here’s my open source application server, and my open source NoSQL database, and my open source log suite. In reality, those bits of OSS may be from separate OSS projects, which may (or may not) have been tested for how well they work together.

A separate challenge is that because OSS is often free-as-in-beer, the software may not be in the corporate inventory. That’s especially common if the OSS is in the form of a library or an API that might be built into other applications you’ve written yourself. The OSS might be invisible but with the potential to break or cause problems down the road.

You can’t manage what you don’t know about

When it comes to OSS, there may be a lot you don’t know about, such as those license terms or interoperability gotchas. Worse, there can be maintenance issues — and security issues. Ask yourself: Does your organization know all the OSS it has installed on servers on-prem or in the cloud? Coded into custom applications? Are you sure that all patches and fixes have been installed (and installed correctly), even on virtual machine templates, and that there are no security vulnerabilities?
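
Building that inventory does not have to start with an expensive tool. As a first pass, and assuming Debian-family Linux servers (equivalents exist for other package managers), even a small script can capture exactly which packages and versions are installed, ready to be checked against CVE databases:

```python
# Starting point for an OSS inventory on a Debian/Ubuntu host (assumption:
# dpkg is present). Writes name/version pairs to a CSV for CVE checking.
import csv
import subprocess

def installed_packages():
    out = subprocess.run(
        ["dpkg-query", "-W", "-f", "${Package}\t${Version}\n"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line.split("\t") for line in out.splitlines() if line]

with open("oss_inventory.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["package", "version"])
    writer.writerows(installed_packages())
```

Of course, this catches only what the package manager knows about; libraries compiled or copied into custom applications are precisely the invisible OSS that such an inventory still misses.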

In my essay “The six big gotchas: The impact of open source on data centers,” we’ll dig into the key topics: License management, security, patch management, maximizing uptime, maximizing performance, and supporting the OSS.


Learn data center principles from ISO 26262, the automotive safety engineering standard

Automotive ECU (engine control unit)

In my everyday life, I trust that if I make a panic stop, my car’s antilock brake system will work. The hardware, software, and servos will work together to ensure that my wheels don’t lock up—helping me avoid an accident. If that’s not sufficient, I trust that the impact sensors embedded behind the front bumper will fire the airbag actuators with the correct force to protect me from harm, even though they’ve never been tested. I trust that the bolts holding the seat in its proper place won’t shear. I trust the seat belts will hold me tight, and that cargo in the trunk won’t smash through the rear seats into the passenger cabin.

Engineers working on nearly every automobile sold worldwide ensure that their work practices conform to ISO 26262. That standard describes how to manage the functional safety of the electrical and electronic systems in passenger cars. A significant portion of ISO 26262 involves ensuring that software embedded into cars—whether in the emissions system, the antilock braking systems, the security systems, or the entertainment system—is architected, coded, and tested to be as reliable as possible.

I’ve worked with ISO 26262 and related standards on a variety of automotive software security projects. Don’t worry, we’re not going to get into the hairy bits of those standards because unless you are personally designing embedded real-time software for use in automobile components, they don’t really apply. Also, ISO 26262 is focused on the real-world safety of two-ton machines hurtling at 60-plus miles per hour—that is, things that will kill or hurt people if they don’t work as expected.

Instead, here are five IT systems management ideas that are inspired by ISO 26262. We’ll help you ensure your systems are designed to be Reliable, with a capital R, and Safe, with a capital S.

Read the list, and more, in my article for HP Enterprise Insights, “5 lessons for data center pros, inspired by automotive engineering standards.”


Business advice for chief information security officers (CISOs)

An organization’s Chief Information Security Officer’s job isn’t about ones and zeros. It’s not about unmasking cybercriminals. It’s about reducing risk for the organization, and enabling executives and line-of-business managers to innovate and compete safely and securely. While the CISO is often seen as the person who loves to say “No,” in reality, the CISO wants to say “Yes” — the job, after all, is to make the company thrive.

Meanwhile, the CISO has a small staff, tight budget, and the need to demonstrate performance metrics and ROI. What’s it like in the real world? What are the biggest challenges? We asked two former CISOs (it’s hard to get current CISOs to speak on the record), both of whom worked in the trenches and now advise CISOs on a daily basis.

To Jack Miller, a huge challenge is the speed of decision-making in today’s hypercompetitive world. Miller, currently Executive in Residence at Norwest Venture Partners, conducts due diligence and provides expertise on companies in the cyber security space. Most recently he served as chief security strategy officer at ZitoVault Software, a startup focused on safeguarding the Internet of Things.

Before his time at ZitoVault, Miller was the head of information protection for Auto Club Enterprises. That’s the largest AAA conglomerate with 15 million members in 22 states. Previously, he served as the CISO of the 5th and 11th largest counties in the United States, and as a security executive for Pacific Life Insurance.

“Big decisions are made in the blink of an eye,” says Miller. “Executives know security is important, but don’t understand how any business change can introduce security risks to the environment. As a CISO, you try to get in front of those changes – but more often, you have to clean up the mess afterwards.”

Another CISO, Ed Amoroso, is frustrated by the business challenge of justifying a security ROI. Amoroso is the CEO of TAG Cyber LLC, which provides advanced cybersecurity training and consulting for global enterprise and U.S. Federal government CISO teams. Previously, he was Senior Vice President and Chief Security Officer for AT&T, and managed computer and network security for AT&T Bell Laboratories. Amoroso is also an Adjunct Professor of Computer Science at the Stevens Institute of Technology.

Amoroso explains, “Security is an invisible thing. I say that I’m going to spend money to prevent something bad from happening. After spending the money, I say, ta-da, look, I prevented that bad thing from happening. There’s no demonstration. There’s no way to prove that the investment actually prevented anything. It’s like putting a ‘This House is Guarded by a Security Company’ sign in front of your house. Maybe a serial killer came up the street, saw the sign, and moved on. Maybe not. You can’t put in security and say, here’s what didn’t happen. If you ask, 10 out of 10 CISOs will say demonstrating ROI is a huge problem.”

Read more in my article for Global Banking & Finance Magazine, “Be Prepared to Get Fired! And Other Business Advice for CISOs.”


How to design software that gracefully handles poor Internet connectivity

“Someone is waiting just for you / Spinnin’ wheel, spinnin’ true.”

Those lyrics to a 1969 song by Blood, Sweat & Tears could also describe 2017 enterprise apps that time-out or fail because of dropped or poor connectivity. Wheels spin. Data is lost. Applications crash. Users are frustrated. Devices are thrown. Screens are smashed.

It doesn’t have to be that way. Always-on applications can continue to function even when the user loses an Internet or Wi-Fi connection. With proper design and testing, you won’t have to handle as many smartphone accidental-damage insurance claims.

Let’s start with the fundamentals. Many business applications are friendly front ends to remote services. The software may run on phones, tablets, or laptops, and the services may be in the cloud or in the on-premises data center.

When connectivity is strong, with sufficient bandwidth and low latency, the front-end software works fine. The user experience is excellent. Data sent to the back end is received and confirmed, and data served to the user front end is transmitted without delay. Joy!

When connectivity is non-existent or fails intermittently, when bandwidth is limited, and when there’s too much latency — which you can read as “Did the Internet connection go down again?!” — users immediately feel frustration. That’s bad news for the user experience, and also extremely bad in terms of saving and processing transactions. A user who taps a drop-down menu or presses “Enter” and sees nothing happen might progress to multiple mouse clicks, a force-reset of the application, or a reboot of the device, any of which could result in data loss. Submitted forms and uploads could be lost in a time-out. Sessions could halt. In some cases, the app could freeze (with or without a spinning indicator) or crash outright. Disaster!
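
Much of the cure is straightforward engineering. One common defensive pattern: retry with exponential backoff, and if connectivity never returns, persist the data locally so nothing the user submitted is lost. Here is an illustrative sketch; the endpoint URL is a placeholder, not a real API:

```python
# Sketch of retry-with-backoff plus a local durable queue (illustrative).
import json
import time
import urllib.request

PENDING = "pending_uploads.jsonl"  # local queue for unsent payloads

def submit(payload, url="https://api.example.com/forms", retries=4):
    data = json.dumps(payload).encode()
    for attempt in range(retries):
        try:
            req = urllib.request.Request(
                url, data=data, headers={"Content-Type": "application/json"})
            with urllib.request.urlopen(req, timeout=5) as resp:
                return resp.status  # delivered and confirmed
        except OSError:
            time.sleep(2 ** attempt)  # back off: 1s, 2s, 4s, 8s
    # Still offline: save the payload locally and sync later, don't lose it.
    with open(PENDING, "a") as f:
        f.write(json.dumps(payload) + "\n")
    return None
```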

What can you do about it? Easy: Read my article for HP Enterprise Insights, “How to design software that doesn’t crash when the Internet connection fails.”


Open up the network, that’s how you enable innovation

I have a new research paper in Elsevier’s technical journal, Network Security. Here’s the abstract:

Lock it down! Button it up tight! That’s the default reaction of many computer security professionals to anything and everything that’s perceived as introducing risk. Given the rapid growth of cybercrime such as ransomware and the non-stop media coverage of data theft of everything from customer payment card information through pre-release movies to sensitive political email databases, this is hardly surprising.


In attempting to lower risk, however, they also exclude technologies and approaches that could contribute significantly to the profitability and agility of the organisation. Alan Zeichick of Camden Associates explains how to make the most of technology by opening up networks and embracing innovation – but safely.

You can read the whole article, “Enabling innovation by opening up the network,” here.


What to do about credentials theft – the scourge of cybersecurity

Cybercriminals want your credentials and your employees’ credentials. When those hackers succeed in stealing that information, it can be bad for individuals – and even worse for corporations and other organizations. It’s a scourge, and it isn’t going away.

Credentials come in two types. There are personal credentials, such as the logins and passwords for email accounts, bank and retirement accounts, credit cards, airline loyalty programs, online shopping and social media. When hackers manage to obtain those credentials, such as through phishing, they can steal money, order goods and services, and engage in identity theft. This can be extremely costly and inconvenient for victims, but the damage is generally contained to that one unfortunate individual.

Corporate digital credentials, on the other hand, are the keys to an organization’s network. Consider a manager, executive or information-technology worker within a typical medium-size or larger-size business. Somewhere in the organization is a database that describes that employee – and describes which digital assets that employee is authorized to use. If cybercriminals manage to steal the employee’s corporate digital credentials, the criminals can then access those same assets, without setting off any alarm bells. Why? Because they have valid credentials.

What might those assets be? Depending on the employee:

  • They might include file servers that contain intellectual property, such as pricing sheets, product blueprints, or patent applications.
  • It might include email archives that describe business plans. Or accounting servers that contain important financial information that could help competitors or allow for “insider trading.”
  • It might be human resources data that can help the hackers attack other individuals. Or engage in identity theft or even blackmail.

What if the stolen credentials are for individuals in the IT or information security department? The hackers can learn a great deal about the company’s technology infrastructure, perhaps including passwords to make changes to configurations, open up backdoors, or even disable security systems.

Read my whole story about this —including what to do about it — in Telecom Times, “The CyberSecurity Scourge of Credentials Theft.”


The three big cloud providers keep getting bigger!

You keep reading the same three names over and over again. Amazon Web Services. Google Cloud Platform. Microsoft Azure. For the past several years, that’s been the top tier, with a wide gap between them and everyone else. Well, there’s a fourth player, the IBM cloud, based on its SoftLayer acquisition. But still, it’s AWS in the lead when it comes to Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS), with many estimates showing about a 37-40% market share in early 2017. In second place, Azure, at around 28-31%. In third place, Google, at around 16-18%. In fourth place, IBM SoftLayer, at 3-5%.

Add that all up, and you get the big four at between 84% and 94%. That doesn’t leave much room for everyone else, including companies like Rackspace, and all the cloud initiatives launched by major computer companies like Alibaba, Dell and HP, and all the telcos around the world.

It’s clear that when it comes to the public cloud, the sort of cloud that telcos want to monetize, and enterprises want to use for hybrid clouds or full migrations, there are very few choices. You can go with the big winner, which is Amazon. You can look to Azure (which is appealing, of course, to Microsoft shops) or Google. And then you can look at everyone else, including IBM SoftLayer, Rackspace, and, well, everyone else.

Read more in my blog post for Zonic News, “For Cloud IaaS and PaaS Providers, There Are the Big Three – and That’s How It’s Going to Stay (for Now).” That post also covers the international angle when all the big cloud providers are in the U.S. — and that’s a real issue.


There are two types of cloud firewalls: Vanilla and Strawberry

Cloud-based firewalls come in two delicious flavors: vanilla and strawberry. Both flavors are software that checks incoming and outgoing packets to filter against access policies and block malicious traffic. Yet they are also quite different. Think of them as two essential network security tools: Both are designed to protect you, your network, and your real and virtual assets, but in different contexts.

Disclosure: I made up the terms “vanilla firewall” and “strawberry firewall” for this discussion. Hopefully they help us differentiate between the two models as we dig deeper.

Let’s start with a quick overview:

  • Vanilla firewalls are usually stand-alone products or services designed to protect an enterprise network and its users — like an on-premises firewall appliance, except that it’s in the cloud. Service providers call this a software-as-a-service (SaaS) firewall, security as a service (SECaaS), or even firewall as a service (FaaS).
  • Strawberry firewalls are cloud-based services that are designed to run in a virtual data center using your own servers in a platform-as-a-service (PaaS) or infrastructure-as-a-service (IaaS) model. In these cases, the firewall application runs on the virtual servers and protects traffic going to, from, and between applications in the cloud. The industry sometimes calls these next-generation firewalls, though the term is inconsistently applied and sometimes refers to any advanced firewall system running on-prem or in the cloud.

So why do we need these new firewalls? Why not stick a 1U firewall appliance into a rack, connect it up to the router, and call it good? Easy: Because the definition of the network perimeter has changed. Firewalls used to be like guards at the entrance to a secured facility. Only authorized people could enter that facility, and packages were searched as they entered and left the building. Moreover, your users worked inside the facility, and the data center and its servers were also inside. Thus, securing the perimeter was fairly easy. Everything inside was secure, everything outside was not secure, and the only way in and out was through the guard station.
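
Underneath either flavor, the core job is identical: evaluate traffic metadata against an ordered access policy. This toy sketch shows that core logic in a few lines; real cloud firewalls add state tracking, deep inspection, and threat intelligence on top of it:

```python
# Toy policy evaluation (illustrative; not any vendor's implementation).
import ipaddress
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rule:
    action: str              # "allow" or "deny"
    src_cidr: str            # e.g. "10.0.0.0/8"
    dst_port: Optional[int]  # None matches any port

# First match wins; the last rule is the default deny.
POLICY = [
    Rule("allow", "10.0.0.0/8", 443),  # internal clients to HTTPS
    Rule("deny",  "0.0.0.0/0",  23),   # block telnet from anywhere
    Rule("deny",  "0.0.0.0/0",  None),
]

def evaluate(src_ip: str, dst_port: int) -> str:
    src = ipaddress.ip_address(src_ip)
    for rule in POLICY:
        if src in ipaddress.ip_network(rule.src_cidr) and \
                rule.dst_port in (None, dst_port):
            return rule.action
    return "deny"

print(evaluate("10.1.2.3", 443))    # -> allow
print(evaluate("203.0.113.9", 23))  # -> deny
```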

Intrigued? Hungry? Both? Please read the rest of my story, called “Understanding cloud-based firewalls,” published on Enterprise.nxt.


Thinking new about cyberattacks — and fighting back smarter

What’s the biggest tool in the security industry’s toolkit? The patent application. Security thrives on innovation, and always has, because throughout recorded history, the bad guys have always had the good guys at a disadvantage. The only way to respond is to fight back smarter.

Sadly, fighting back smarter isn’t always what happens. At least, not judging from the vendor offerings at RSA 2017, held mid-February in San Francisco. Some of the products and services wouldn’t have seemed out of place a decade ago. Oh, look, a firewall! Oh, look, a hardware device that sits on the network and scans for intrusions! Oh, look, a service that trains employees not to click on phishing spam!

Fortunately, some companies and big thinkers are thinking new about the types of attacks… and about the best ways to protect against them, detect when those protections fail, respond when attacks are detected, and share information about those attacks.

Read more about this in my latest story for Zonic News, “InfoSec Requires Innovation.”


The top cloud and infrastructure conferences of 2017

Want to open up your eyes, expand your horizons, and learn from really smart people? Attend a conference or trade show. Get out there. Meet people. Have conversations. Network. Be inspired by keynotes. Take notes in classes that are delivering great material, and walk out of boring sessions and find something better.

I wrote an article about the upcoming 2017 conferences and trade shows covering cloud computing and enterprise infrastructure. Think big and think outside the cubicle: Don’t go to only the events that are about the exact thing you do, and don’t attend only the sessions about the exact thing you do.

The list is organized alphabetically in “must attend,” “worth attending,” and “worthy mentions” sections. Those are my subjective labels (though based on experience, having attended many of these conferences in the past decades), so read the descriptions carefully and make your own decisions. If you don’t use Amazon Web Services, then AWS re:Invent simply isn’t right for you. However, if you use or might use the company’s cloud services, then, yes, it’s a must-attend.

And oh, a word about the differences between conferences and trade shows (also known as expos). These can be subtle, and reasonable people might disagree in some edge cases. However, a conference’s main purpose is education: The focus is on speakers, panels, classes, and other sessions. While there might be an exhibit floor for vendors, it’s probably small and not very useful. In contrast, a trade show is designed to expose you to the greatest number of exhibitors, including vendors and trade associations. The biggest value is in walking the floor; while the trade show may offer classes, they are secondary and often (but not always) vendor fluff sessions “awarded” to big advertisers in return for their gold sponsorships.

So if you want to learn from classes, panels, and workshops, you probably want a conference. If you want to talk to vendors, kick the tires on products, and decide which solutions to buy or recommend, you want a trade show or an expo.

And now, on with the list: the most important events in cloud computing and enterprise infrastructure, compiled at the very beginning of 2017. Note that events can change their dates or cities without notice, or even be cancelled, so keep an eye on the websites. You can read the list here.


Cybersecurity alert: Trusted websites can harbor malware, thanks to savvy hackers

According to a recent study, 46% of the top one million websites are considered risky. Why? Because the homepage or background ad sites are running software with known vulnerabilities, because the site has been categorized as known-bad for phishing or malware, or because the site had a security incident in the past year.

According to Menlo Security, in its “State of the Web 2016” report released in mid-December 2016, “… nearly half (46%) of the top million websites are risky.” Indeed, Menlo says, “Primarily due to outdated software, cyber hackers now have their veritable pick of half the web to exploit. And exploitation is becoming more widespread and effective for three reasons: 1. Risky sites have never been easier to exploit; 2. Traditional security products fail to provide adequate protection; 3. Phishing attacks can now utilize legitimate sites.”

This has been a significant issue for years. However, the issue came to the forefront earlier this year when several well-known media sites were essentially hijacked by malicious ads. The New York Times, the BBC, MSN and AOL were hit by tainted advertising that installed ransomware, reports Ars Technica. From their March 15, 2016, article, “Big-name sites hit by rash of malicious ads spreading crypto ransomware”:

The new campaign started last week when ‘Angler,’ a toolkit that sells exploits for Adobe Flash, Microsoft Silverlight, and other widely used Internet software, started pushing laced banner ads through a compromised ad network.

The results of this attack, reported The Guardian at around the same time: 

When the infected adverts hit users, they redirect the page to servers hosting the malware, which includes the widely-used (amongst cybercriminals) Angler exploit kit. That kit then attempts to find any back door it can into the target’s computer, where it will install cryptolocker-style software, which encrypts the user’s hard drive and demands payment in bitcoin for the keys to unlock it.

If big-money trusted media sites can be hit, so can nearly any corporate site, e-commerce portal, or any website that uses third-party tools – or where there might be the possibility of unpatched servers and software. That means just about anyone. After all, not all organizations are diligent about monitoring for common vulnerabilities and exposures (CVEs) on their on-premises servers. When companies run their websites on multi-tenant hosting facilities, they don’t even have access to the operating system directly, but rely upon the hosting company to install patches and fixes to Windows Server, Linux, Joomla, WordPress and so on.

A single unpatched operating system, web server platform, database or extension can introduce a vulnerability that can be scanned for. Once found, that CVE can be exploited by a talented hacker — or by a disgruntled teenager with a readily available web exploit kit.
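
A low-effort first check, sketched below, is simply to look at the version banners your own site advertises, since outdated banners are exactly what exploit kits scan for. The URL is a placeholder; this is a starting point, not a vulnerability scanner:

```python
# Check what software versions a site advertises (illustrative starting point).
import urllib.request

def server_banners(url="https://www.example.com"):
    with urllib.request.urlopen(url, timeout=10) as resp:
        return {name: value for name, value in resp.getheaders()
                if name.lower() in ("server", "x-powered-by")}

# e.g. {'Server': 'Apache/2.2.15'} would indicate a long-outdated web server.
print(server_banners())
```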

What can you do about it? Well, you can read my complete story on this subject, “Malware explosion: The web is risky,” published on ITProPortal.


No honor: You can’t trust cybercriminals any more

Once upon a time, goes the story, there was honor between thieves and victims. They held a member of your family for ransom; you paid the ransom; they left you alone. The local mob boss demanded protection money; if you didn’t pay, your business burned down, but if you did pay and didn’t rat him out to the police, he and his gang honestly tried to protect you. And hackers may have been operating outside legal boundaries, but for the most part, they were explorers and do-gooders intending to shine a bright light on the darkness of the Internet – not vandals, miscreants, hooligans and ne’er-do-wells.

That’s not true any more, perhaps. As I write in IT Pro Portal, “Faceless and faithless: A true depiction of today’s cyber-criminals?”

Not that long ago, hackers emerged as modern-day Robin Hoods, digital heroes who relentlessly uncovered weaknesses in applications and networks to reveal the dangers of using technology carelessly. They are curious, provocative; love to know how things work and how they can be improved.

Today, however, there is blight on their good name. Hackers have been maligned by those who do not have our best interests at heart, but are instead motivated by money – attackers who steal our assets and hold organisations such as banks and hospitals to ransom.

(My apologies for the British spelling – it’s a British website and they’re funny that way.)

It’s hard to lose faith in hackers, but perhaps we need to. Sure, not all are cybercriminals, but with the rise of ransomware, nasty actions by state actors, and some pretty vicious attacks like the new single-pixel malvertising exploit written up yesterday by Dan Goodin in Ars Technica (which came out after I wrote this story), it’s hard to trust that most hackers secretly have our best interests at heart.

This reminds me of Ranscam. In a blog post, “When Paying Out Doesn’t Pay Off,” Talos reports that:

Ranscam is one of these new ransomware variants. It lacks complexity and also tries to use various scare tactics to entice the user to paying, one such method used by Ranscam is to inform the user they will delete their files during every unverified payment click, which turns out to be a lie. There is no longer honor amongst thieves. Similar to threats like AnonPop, Ranscam simply delete victims’ files, and provides yet another example of why threat actors cannot always be trusted to recover a victim’s files, even if the victim complies with the ransomware author’s demands. With some organizations likely choosing to pay the ransomware author following an infection,  Ranscam further justifies the importance of ensuring that you have a sound, offline backup strategy in place rather than a sound ransom payout strategy. Not only does having a good backup strategy in place help ensure that systems can be restored, it also ensures that attackers are no longer able to collect revenue that they can then reinvest into the future development of their criminal enterprise.

Scary — and it shows that often, crime does pay. No honor among thieves indeed.


Blindspotter: Big Data and machine learning can help detect early-stage hack attacks

When an employee account is compromised by malware, the malware establishes a foothold on the user’s computer – and immediately tries to gain access to additional resources. It turns out that with the right data gathering tools, and with the right Big Data analytics and machine-learning methodologies, the anomalous network traffic caused by this activity can be detected – and thwarted.

That’s the role played by Blindspotter, a new anti-malware system that seems like a specialized version of a network intrusion detection/prevention system (IDPS). Blindspotter can help against many types of malware attacks. Those include one of the most insidious and successful hack vectors today: spear phishing. That’s when a high-level target in your company is singled out for attack by malicious emails or by compromised websites. All the victim has to do is open an email, or click on a link, and wham – malware is quietly installed and operating. (High-level targets include top executives, financial staff and IT administrators.)

My colleague Wayne Rash recently wrote about this network monitoring solution and its creator, Balabit, for eWeek in “Blindspotter Uses Machine Learning to Find Suspicious Network Activity”:

The idea behind Balabit’s Blindspotter and Shell Control Box is that if you gather enough data and subject it to analysis comparing activity that’s expected with actual activity on an active network, it’s possible to tell if someone is using a person’s credentials who shouldn’t be or whether a privileged user is abusing their access rights.

 The Balabit Shell Control Box is an appliance that monitors all network activity and records the activity of users, including all privileged users, right down to every keystroke and mouse movement. Because privileged users such as network administrators are a key target for breaches it can pay special attention to them.

The Blindspotter software sifts through the data collected by the Shell Control Box and looks for anything out of the ordinary. In addition to spotting things like a user coming into the network from a strange IP address or at an unusual time of day—something that other security software can do—Blindspotter is able to analyze what’s happening with each user, but is able to spot what is not happening, in other words deviations from normal behavior.
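
Stripped of the product specifics, the underlying technique is behavioral anomaly detection: learn a baseline of each user’s normal activity, then flag sessions that deviate from it. Here is a toy sketch of that general idea using scikit-learn’s IsolationForest, with invented session features; it is not Balabit’s implementation:

```python
# Illustrative anomaly detection over per-session activity features.
from sklearn.ensemble import IsolationForest

# Hypothetical features per session:
# [login hour, MB transferred, distinct hosts touched, privileged commands run]
baseline = [
    [9, 120, 4, 2], [10, 95, 3, 1], [14, 150, 5, 3],
    [11, 110, 4, 2], [16, 80, 3, 1], [9, 130, 5, 2],
]
detector = IsolationForest(contamination=0.1, random_state=0).fit(baseline)

# A 3 a.m. session with a huge transfer, many hosts, heavy privileged use.
print(detector.predict([[3, 4000, 60, 25]]))  # -> [-1], i.e. anomalous
```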

Read the whole story here. Thank you, Wayne, for telling us about Blindspotter.


Medical devices – the wild west for cybersecurity vulnerabilities and savvy hackers

Medical devices are incredibly vulnerable to hacking attacks. In some cases it’s because of software defects that allow for exploits, like buffer overflows, SQL injection or insecure direct object references. In other cases, you can blame misconfigurations, lack of encryption (or weak encryption), non-secure data/control networks, unfettered wireless access, and worse.

Why would hackers go after medical devices? Lots of reasons. To name but one: It’s a potential terrorist threat against real human beings. Remember that Dick Cheney famously disabled the wireless capabilities of his implanted heart monitor for fear of an assassination attempt.

Certainly healthcare organizations are being targeted for everything from theft of medical records to ransomware. To quote the report “Hacking Healthcare IT in 2016,” from the Institute for Critical Infrastructure Technology (ICIT):

The Healthcare sector manages very sensitive and diverse data, which ranges from personal identifiable information (PII) to financial information. Data is increasingly stored digitally as electronic Protected Health Information (ePHI). Systems belonging to the Healthcare sector and the Federal Government have recently been targeted because they contain vast amounts of PII and financial data. Both sectors collect, store, and protect data concerning United States citizens and government employees. The government systems are considered more difficult to attack because the United States Government has been investing in cybersecurity for a (slightly) longer period. Healthcare systems attract more attackers because they contain a wider variety of information. An electronic health record (EHR) contains a patient’s personal identifiable information, their private health information, and their financial information.

EHR adoption has increased over the past few years under the Health Information Technology and Economics Clinical Health (HITECH) Act. Stan Wisseman [from Hewlett-Packard] comments, “EHRs enable greater access to patient records and facilitate sharing of information among providers, payers and patients themselves. However, with extensive access, more centralized data storage, and confidential information sent over networks, there is an increased risk of privacy breach through data leakage, theft, loss, or cyber-attack. A cautious approach to IT integration is warranted to ensure that patients’ sensitive information is protected.”

Let’s talk devices. Those could be everything from emergency-room monitors to pacemakers to insulin pumps to X-ray machines whose radiation settings might be changed or overridden by malware. The ICIT report says,

Mobile devices introduce new threat vectors to the organization. Employees and patients expand the attack surface by connecting smartphones, tablets, and computers to the network. Healthcare organizations can address the pervasiveness of mobile devices through an Acceptable Use policy and a Bring-Your-Own-Device policy. Acceptable Use policies govern what data can be accessed on what devices. BYOD policies benefit healthcare organizations by decreasing the cost of infrastructure and by increasing employee productivity. Mobile devices can be corrupted, lost, or stolen. The BYOD policy should address how the information security team will mitigate the risk of compromised devices. One solution is to install software to remotely wipe devices upon command or if they do not reconnect to the network after a fixed period. Another solution is to have mobile devices connect from a secured virtual private network to a virtual environment. The virtual machine should have data loss prevention software that restricts whether data can be accessed or transferred out of the environment.

The Internet of Things – and the increased prevalence of medical devices connected to hospital or home networks – increases the risk. What can you do about it? The ICIT report says,

The best mitigation strategy to ensure trust in a network connected to the internet of things, and to mitigate future cyber events in general, begins with knowing what devices are connected to the network, why those devices are connected to the network, and how those devices are individually configured. Otherwise, attackers can conduct old and innovative attacks without the organization’s knowledge by compromising that one insecure system.

Given how common these devices are, keeping IT in the loop may seem impossible — but we must rise to the challenge, ICIT says:

If a cyber network is a castle, then every insecure device with a connection to the internet is a secret passage that the adversary can exploit to infiltrate the network. Security systems are reactive. They have to know about something before they can recognize it. Modern systems already have difficulty preventing intrusion by slight variations of known malware. Most commercial security solutions such as firewalls, IDS/ IPS, and behavioral analytic systems function by monitoring where the attacker could attack the network and protecting those weakened points. The tools cannot protect systems that IT and the information security team are not aware exist.

The home environment – or any use outside the hospital setting – is another huge concern, says the report:

Remote monitoring devices could enable attackers to track the activity and health information of individuals over time. This possibility could impose a chilling effect on some patients. While the effect may lessen over time as remote monitoring technologies become normal, it could alter patient behavior enough to cause alarm and panic.

Pain medicine pumps and other devices that distribute controlled substances are likely high value targets to some attackers. If compromise of a system is as simple as downloading free malware to a USB and plugging the USB into the pump, then average drug addicts can exploit homecare and other vulnerable patients by fooling the monitors. One of the simpler mitigation strategies would be to combine remote monitoring technologies with sensors that aggregate activity data to match a profile of expected user activity.

A major responsibility falls onto the device makers – and the programmers who create the embedded software. For the most part, they are simply not up to the challenge of designing secure devices, and may not have the policies, practices and tools in place to get cybersecurity right. Regrettably, the ICIT report doesn’t go into much detail about the embedded software, but does state,

Unlike cell phones and other trendy technologies, embedded devices require years of research and development; sadly, cybersecurity is a new concept to many healthcare manufacturers and it may be years before the next generation of embedded devices incorporates security into its architecture. In other sectors, if a vulnerability is discovered, then developers rush to create and issue a patch. In the healthcare and embedded device environment, this approach is infeasible. Developers must anticipate what the cyber landscape will look like years in advance if they hope to preempt attacks on their devices. This model is unattainable.

In November 2015, Bloomberg Businessweek published a chilling story, “It’s Way too Easy to Hack the Hospital.” The authors, Monte Reel and Jordon Robertson, wrote about one hacker, Billy Rios:

Shortly after flying home from the Mayo gig, Rios ordered his first device—a Hospira Symbiq infusion pump. He wasn’t targeting that particular manufacturer or model to investigate; he simply happened to find one posted on EBay for about $100. It was an odd feeling, putting it in his online shopping cart. Was buying one of these without some sort of license even legal? he wondered. Is it OK to crack this open?

Infusion pumps can be found in almost every hospital room, usually affixed to a metal stand next to the patient’s bed, automatically delivering intravenous drips, injectable drugs, or other fluids into a patient’s bloodstream. Hospira, a company that was bought by Pfizer this year, is a leading manufacturer of the devices, with several different models on the market. On the company’s website, an article explains that “smart pumps” are designed to improve patient safety by automating intravenous drug delivery, which it says accounts for 56 percent of all medication errors.

Rios connected his pump to a computer network, just as a hospital would, and discovered it was possible to remotely take over the machine and “press” the buttons on the device’s touchscreen, as if someone were standing right in front of it. He found that he could set the machine to dump an entire vial of medication into a patient. A doctor or nurse standing in front of the machine might be able to spot such a manipulation and stop the infusion before the entire vial empties, but a hospital staff member keeping an eye on the pump from a centralized monitoring station wouldn’t notice a thing, he says.

The 97-page ICIT report makes some recommendations, which I heartily agree with.

  • With each item connected to the internet of things there is a universe of vulnerabilities. Empirical evidence of aggressive penetration testing before and after a medical device is released to the public must be a manufacturer requirement.
  • Ongoing training must be paramount in any responsible healthcare organization. Adversarial initiatives typically start with targeting staff via spear phishing and watering hole attacks. The act of an ill-prepared executive clicking on a malicious link can trigger a hurricane of immediate and long term negative impact on the organization and innocent individuals whose records were exfiltrated or manipulated by bad actors.
  • A cybersecurity-centric culture must demand safer devices from manufacturers, privacy adherence by the healthcare sector as a whole and legislation that expedites the path to a more secure and technologically scalable future by policy makers.

This whole thing is scary. The healthcare industry needs to step up its game on cybersecurity.


Hackathons are great for learning — and great for the industry too

Are you a coder? Architect? Database guru? Network engineer? Mobile developer? User-experience expert? If you have hands-on tech skills, get those hands dirty at a Hackathon.

Full disclosure: Years ago, I thought Hackathons were, well, silly. If you’ve got the skills and extra energy, put them to work coding your own mobile apps. Do a startup! Make some dough! Contribute to an open-source project! Do something productive instead of taking part in coding contests!

Since then, I’ve seen the light, because it’s clear that Hackathons are a win-win-win.

  • They are a win for techies, because they get to hone their abilities, meet people, and learn stuff.
  • They are a win for Hackathon sponsors, because they often give the latest tools, platforms and APIs a real workout.
  • They are a win for the industry, because they help advance the creation and popularization of emerging standards.

One upcoming Hackathon that I’d like to call attention to: The MEF LSO Hackathon will be at the upcoming MEF16 Global Networking Conference, in Baltimore, Nov. 7-10. The work will support Third Network service projects that are built upon key OpenLSO scenarios and OpenCS use cases for constructing Layer 2 and Layer 3 services. You can read about a previous MEF LSO Hackathon here.

Build your skills! Advance the industry! Meet interesting people! Sign up for a Hackathon!


NetGear blinked – will continue VueZone video cloud service

Thank you, NetGear, for taking care of your valued customers. On July 1, the company announced that it would be shutting down the proprietary back-end cloud services required for its VueZone cameras to work – turning them into expensive camera-shaped paperweights. See “Throwing our IoT investment in the trash thanks to NetGear.”

The next day, I was contacted by the company’s global communications manager. He defended the policy, arguing that NetGear was not only giving 18 months’ notice of the shutdown, but they are “doing our best to help VueZone customers migrate to the Arlo platform by offering significant discounts, exclusive to our VueZone customers.” See “A response from NetGear regarding the VueZone IoT trashcan story.”

And now, the company has done a 180° turn. NetGear will not turn off the service, at least not at this time. Well done. Here’s the email that came a few minutes ago. The good news for VueZone customers is that they can continue using their cameras. On the other hand, let’s not party too heartily. The danger posed by proprietary cloud services driving IoT devices remains. When the vendor decides to turn one off, all you have is recycle-ware and, potentially, one heck of a migration issue.

Subject: VueZone Services to Continue Beyond January 1, 2018

Dear valued VueZone customer,

On July 1, 2016, NETGEAR announced the planned discontinuation of services for the VueZone video monitoring product line, which was scheduled to begin as of January 1, 2018.

Since the announcement, we have received overwhelming feedback from our VueZone customers expressing a desire for continued services and support for the VueZone camera system. We have heard your passionate response and have decided to extend service for the VueZone product line. Although NETGEAR no longer manufactures or sells VueZone hardware, NETGEAR will continue to support existing VueZone customers beyond January 1, 2018.

We truly appreciate the loyalty of our customers and we will continue our commitment of delivering the highest quality and most innovative solutions for consumers and businesses. Thank you for choosing us.

Best regards,

The NETGEAR VueZone Team

July 19, 2016


The Birth of the Internet Plaque at Stanford University

In the “you learn something every day” department: Discovered today that there’s a plaque at Stanford honoring the birth of the Internet. The plaque was dedicated on July 28, 2005, and is in the Gates Computer Science Building.

You can read all about the plaque, and see it more clearly, on J. Noel Chiappa’s website. His name is on the plaque.

Here’s what the plaque says. Must check it out during my next trip to Palo Alto.


BIRTH OF THE INTERNET

THE ARCHITECTURE OF THE INTERNET AND THE DESIGN OF THE CORE NETWORKING PROTOCOL TCP (WHICH LATER BECAME TCP/IP) WERE CONCEIVED BY VINTON G. CERF AND ROBERT E. KAHN DURING 1973 WHILE CERF WAS AT STANFORD’S DIGITAL SYSTEMS LABORATORY AND KAHN WAS AT ARPA (LATER DARPA). IN THE SUMMER OF 1976, CERF LEFT STANFORD TO MANAGE THE PROGRAM WITH KAHN AT ARPA.

THEIR WORK BECAME KNOWN IN SEPTEMBER 1973 AT A NETWORKING CONFERENCE IN ENGLAND. CERF AND KAHN’S SEMINAL PAPER WAS PUBLISHED IN MAY 1974.

CERF, YOGEN K. DALAL, AND CARL SUNSHINE WROTE THE FIRST FULL TCP SPECIFICATION IN DECEMBER 1974. WITH THE SUPPORT OF DARPA, EARLY IMPLEMENTATIONS OF TCP (AND IP LATER) WERE TESTED BY BOLT BERANEK AND NEWMAN (BBN), STANFORD, AND UNIVERSITY COLLEGE LONDON DURING 1975.

BBN BUILT THE FIRST INTERNET GATEWAY, NOW KNOWN AS A ROUTER, TO LINK NETWORKS TOGETHER. IN SUBSEQUENT YEARS, RESEARCHERS AT MIT AND USC-ISI, AMONG MANY OTHERS, PLAYED KEY ROLES IN THE DEVELOPMENT OF THE SET OF INTERNET PROTOCOLS.

KEY STANFORD RESEARCH ASSOCIATES AND FOREIGN VISITORS

  • VINTON CERF
  • DAG BELSNES
  • RONALD CRANE
  • BOB METCALFE
  • YOGEN DALAL
  • JUDITH ESTRIN
  • RICHARD KARP
  • GERARD LE LANN
  • JAMES MATHIS
  • DARRYL RUBIN
  • JOHN SHOCH
  • CARL SUNSHINE
  • KUNINOBU TANNO

DARPA

  • ROBERT KAHN

COLLABORATING GROUPS

BOLT BERANEK AND NEWMAN

  • WILLIAM PLUMMER
  • GINNY STRAZISAR
  • RAY TOMLINSON

MIT

  • NOEL CHIAPPA
  • DAVID CLARK
  • STEPHEN KENT
  • DAVID P. REED

NDRE

  • YNGVAR LUNDH
  • PAAL SPILLING

UNIVERSITY COLLEGE LONDON

  • FRANK DEIGNAN
  • MARTINE GALLAND
  • PETER HIGGINSON
  • ANDREW HINCHLEY
  • PETER KIRSTEIN
  • ADRIAN STOKES

USC-ISI

  • ROBERT BRADEN
  • DANNY COHEN
  • DANIEL LYNCH
  • JON POSTEL

ULTIMATELY, THOUSANDS IF NOT TENS TO HUNDREDS OF THOUSANDS HAVE CONTRIBUTED THEIR EXPERTISE TO THE EVOLUTION OF THE INTERNET.

DEDICATED JULY 28, 2005


Internet over Carrier Pigeon? There’s a standard for that

There are standards for everything, it seems. And those of us who work on Internet things are often amused (or bemused) by what comes out of the Internet Engineering Task Force (IETF). An oldie but a goodie is a document from 1999, RFC-2549, “IP over Avian Carriers with Quality of Service.”

An RFC, or Request for Comment, is what the IETF calls a standards document. (And yes, I’m browsing my favorite IETF pages during a break from doing “real” work. It’s that kind of day.)

RFC-2549 updates RFC-1149, “A Standard for the Transmission of IP Datagrams on Avian Carriers.” That older standard did not address Quality of Service. I’ll leave you to enjoy both of those documents, but let me share this part of RFC-2549:

Overview and Rational

The following quality of service levels are available: Concorde, First, Business, and Coach. Concorde class offers expedited data delivery. One major benefit to using Avian Carriers is that this is the only networking technology that earns frequent flyer miles, plus the Concorde and First classes of service earn 50% bonus miles per packet. Ostriches are an alternate carrier that have much greater bulk transfer capability but provide slower delivery, and require the use of bridges between domains.

The service level is indicated on a per-carrier basis by bar-code markings on the wing. One implementation strategy is for a bar-code reader to scan each carrier as it enters the router and then enqueue it in the proper queue, gated to prevent exit until the proper time. The carriers may sleep while enqueued.
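
In the same tongue-in-cheek spirit, the RFC’s scheduling scheme boils down to a priority queue keyed on the service class scanned from the wing bar code. A few lines of Python capture it (strictly for amusement):

```python
# Avian Carrier QoS enqueueing, per RFC-2549 (tongue firmly in cheek).
import heapq

PRIORITY = {"Concorde": 0, "First": 1, "Business": 2, "Coach": 3}

arrivals = [("pigeon-7", "Coach"), ("pigeon-2", "Concorde"),
            ("ostrich-1", "Coach")]  # ostriches: bulk transfer, slow delivery

queue = []
for seq, (carrier, svc) in enumerate(arrivals):
    heapq.heappush(queue, (PRIORITY[svc], seq, carrier))  # scan wing bar code

while queue:
    _, _, carrier = heapq.heappop(queue)
    print("releasing", carrier)  # Concorde class departs first
```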

Most years, the IETF publishes so-called April Fool’s RFCs. The best list of them I’ve seen is on Wikipedia. If you’re looking to take a work break, give ’em a read. Many of them are quite clever! However, I still like RFC-2549 the best.

A prized part of my library is “The Complete April Fools’ Day RFCs,” compiled by Thomas Limoncelli and Peter Salus. Sadly, this collection stops at 2007. Still, it’s a great coffee table book to leave lying around for when people like Bob Metcalfe, Tim Berners-Lee, or Al Gore come by to visit.


MEF LSO Hackathon at Euro16 brings together open source, open standards

The MEF recently conducted its second LSO Hackathon at a Rome event called Euro16. You can read my story about it here in DiarioTi: LSO Hackathons Bring Together Open Standards, Open Source.

Alas, my coding skills are too rusty for a Hackathon, unless the objective is to develop a fully buzzword-compliant implementation of “Hello World.” Fortunately, there are others with better skills, as well as a broader understanding of today’s toughest problems.

Many Hackathons are thinly veiled marketing exercises by companies, designed to find ways to get programmers hooked on their tools, platforms, APIs, etc. Not all! One of the most interesting Hackathons comes from the MEF, an industry group that drives communications interoperability. As a standards-defining organization (SDO), the MEF wants to help carriers and equipment vendors design products/services ready for the next generation of connectivity. That means building on a foundation of SDN (software-defined networking), NFV (network functions virtualization), LSO (lifecycle service orchestration) and CE2.0 (Carrier Ethernet 2.0).

To make all this happen:

  • What the MEF does: Create open standards for architectures and specifications.
  • What vendors, carriers and open source projects do: Write software to those specifications.
  • What the Hackathon does: Give everyone a chance to work together, make sure the code is truly interoperable, and find areas where the written specs might be ambiguous.

Thus, the MEF LSO Hackathons. They bring together a wide swath of the industry to move beyond the standards documents and actually write and test code that implements those specs.
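To give a flavor of what writing code to a spec looks like in practice, here is a minimal sketch of the sort of interoperability check a Hackathon team might run. To be clear, everything here (the endpoint URL, the payload fields, the service type) is invented for illustration; this is not an actual MEF LSO API.

```python
import requests

# Hypothetical orchestrator endpoint -- invented, not a real MEF LSO API.
ORCHESTRATOR = "https://lso-orchestrator.example.com/api/service-orders"

def order_connectivity_service(a_end, z_end, bandwidth_mbps):
    """Submit a service order and confirm the orchestrator accepts it."""
    payload = {
        "serviceType": "carrier-ethernet-evc",  # field names are made up
        "aEnd": a_end,
        "zEnd": z_end,
        "bandwidthMbps": bandwidth_mbps,
    }
    resp = requests.post(ORCHESTRATOR, json=payload, timeout=10)
    resp.raise_for_status()  # a 4xx here means the two sides read the spec differently
    return resp.json()["orderId"]

order_id = order_connectivity_service("site-rome", "site-dallas", 100)
print("Accepted as order", order_id)
```

Two independently written implementations interoperate only if both sides parse that request identically; the rejected orders are exactly where a Hackathon finds the ambiguous sentences in the written spec.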

As mentioned above, the MEF just completed its second Hackathon at Euro16. The first LSO Hackathon was at last year’s MEF GEN15 annual conference in Dallas. Here’s my story about it in Telecom Ramblings: The MEF LSO Hackathon: Building Community, Swatting Bugs, Writing Code.

The third LSO Hackathon will be at this year’s MEF annual conference, MEF16, in Baltimore, Nov. 7-10. I will be there as an observer – alas, without the up-to-date, practical skills to be a coding participant.


Happy World WiFi Day!

WiFi is the present and future of local area networking. Forget about families getting rid of the home phone. The real cable-cutters are dropping Cat-5 Ethernet in favor of IEEE 802.11 Wireless Local Area Networks, generally known as WiFi. Let’s celebrate World WiFi Day!

There are no Cat-5 cables connected in my house and home office. Not one. And no Ethernet jacks either. (By contrast, when we moved into our house in the Bay Area in the early 1990s, I wired nearly every room with Ethernet jacks.) There’s a box of Ethernet cables, but I haven’t touched them in years. Instead, it’s all WiFi. (Technically, WiFi refers to industry products that are compatible with the IEEE 802.11 specification, but for our casual purposes here, it’s all the same thing.)

My 21” iMac (circa 2011) has an Ethernet port. I’ve never used it. My MacBook Air (also circa 2011) doesn’t have an Ethernet port at all; I used to carry a USB-to-Ethernet dongle, but it disappeared a long time ago. It’s not missed. My tablets (iOS, Android and Kindle) are WiFi-only for connectivity. Life is good.

The first-ever World WiFi Day is today — June 20, 2016. It was declared by the Wireless Broadband Alliance to

be a global platform to recognize and celebrate the significant role Wi-Fi is playing in getting cities and communities around the world connected. It will champion exciting and innovative solutions to help bridge the digital divide, with Connected City initiatives and new service launches at its core.

Sadly, the World WiFi Day initiative is not about the wire-free convenience of Alan’s home office and personal life. Rather, it’s about bringing Internet connectivity to third-world, rural, poor, or connectivity-disadvantaged areas. According to the organization, here are eight completed projects:

  • KT – KT Giga Island – connecting islands to the mainland through advanced networks
  • MallorcaWiFi – City of Palma – Wi-Fi on the beach
  • VENIAM – Connected Port @ Leixões Porto, Portugal
  • ISOCEL – Isospot – Building a Wi-Fi hotspot network in Benin
  • VENIAM – Smart City @ Porto, Portugal
  • Benu Networks – Carrier Wi-Fi Business Case
  • MCI – Free Wi-Fi for Arbaeen
  • Fon – After the wave: Japan and Fon’s disaster support procedure

It’s a worthy cause. Happy World WiFi Day, everyone!


Blast from the past: Facebook’s tech infrastructure from 2008

Fire up the WABAC Machine, Mr. Peabody: In June 2008, I wrote a piece for MIT Technology Review explaining “How Facebook Works.”

The story started with this:

Facebook is a wonderful example of the network effect, in which the value of a network to a user is exponentially proportional to the number of other users that network has.

Facebook’s power derives from what Jeff Rothschild, its vice president of technology, calls the “social graph”–the sum of the wildly various connections between the site’s users and their friends; between people and events; between events and photos; between photos and people; and between a huge number of discrete objects linked by metadata describing them and their connections.

Facebook maintains data centers in Santa Clara, CA; San Francisco; and Northern Virginia. The centers are built on the backs of three tiers of x86 servers loaded up with open-source software, some that Facebook has created itself.

Let’s look at the main facility, in Santa Clara, and then show how it interacts with its siblings.
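To make Rothschild’s “social graph” concrete, here is a toy model of my own devising (nothing like Facebook’s actual code): typed objects connected by labeled edges, which is all a social graph is at its core.

```python
# Toy social graph: typed nodes connected by labeled, bidirectional edges.
class Node:
    def __init__(self, kind, name):
        self.kind = kind      # "person", "event", "photo", ...
        self.name = name
        self.edges = []       # list of (relationship, other_node) pairs

    def link(self, relationship, other):
        # The metadata describing the connection lives in the edge label.
        self.edges.append((relationship, other))
        other.edges.append((relationship, self))

alice = Node("person", "Alice")
bob = Node("person", "Bob")
party = Node("event", "Launch party")
photo = Node("photo", "IMG_0042")

alice.link("friend_of", bob)
alice.link("attended", party)
photo.link("taken_at", party)
photo.link("tagged", bob)

# Traverse everything one hop away from Alice.
for rel, node in alice.edges:
    print(f"Alice --{rel}--> {node.kind}: {node.name}")
```

The value of the graph comes from traversing those edges: friends of friends, people tagged in photos taken at events you attended, and so on.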

Read the whole story here… and check out Facebook’s current Open Source project pages too.


The glacial pace of cellular security standards: From 3GPP to 5G

Security standards for cellular communications are pretty much invisible. Created by groups like the 3GPP, they play out behind the scenes, embedded into broader cellular protocols like 3G, 4G, LTE and the oft-discussed forthcoming 5G. By their nature, these and other cellular specs evolve very slowly and deliberately; it’s a snail-like pace compared to, say, WiFi or Bluetooth.

Why the glacial pace? One reason is that cellular standards of all sorts must be carefully designed and tested in order to work in a transparent global marketplace. There are also a huge number of participants in the value chain, from handset makers to handset firmware makers to radio manufacturers to tower equipment vendors to carriers… the list goes on and on.

Another reason cellular software, including security protocols and algorithms, evolves slowly is that it’s all bound up in large platform versions. The current cellular security system is unlikely to change significantly before the roll-out of 5G… and even then, older devices will continue to use the security protocols embedded in their platform, unless a bug forces a software patch. Those security protocols cover everything from authentication of the cellular device to the tower, to authentication of the tower to the device, to encryption of voice and data traffic.
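For a sense of what those authentication protocols actually do, here is my own highly simplified sketch of the challenge-response idea at the heart of cellular authentication and key agreement (AKA): the SIM and the carrier share a long-term secret key, and the network verifies the device by checking its answer to a random challenge. The real 3GPP protocol uses a specific algorithm set, authenticates the network back to the device, and derives session keys; treat this as a cartoon, not the standard.

```python
import hashlib
import hmac
import os

# Long-term secret shared by the SIM and the carrier's home network.
K = os.urandom(16)

def network_challenge():
    """Carrier side: issue a random challenge and precompute the expected answer."""
    rand = os.urandom(16)
    xres = hmac.new(K, rand, hashlib.sha256).digest()  # expected response
    return rand, xres

def device_response(rand):
    """SIM side: derive the response from the shared key and the challenge."""
    return hmac.new(K, rand, hashlib.sha256).digest()

rand, xres = network_challenge()
res = device_response(rand)
assert hmac.compare_digest(res, xres)  # device proves it holds K
# Real AKA would also derive ciphering and integrity keys at this step.
```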

We can only hope that end users will move swiftly to 5G. Why? Because 4G and older platforms aren’t especially secure. Sure, they are good enough today, but that’s only “good enough.” The downside is that everything is pretty fuzzy when it comes to what 5G will actually offer… or even how many 5G standards there will be.

Read more in my story in Pipeline Magazine, “Wireless Security Standards.”


Open source is eating carrier OSS and BSS stacks, and that’s a good thing

Forget vendor lock-in: Carrier operations support systems (OSS) and business support systems (BSS) are going open source. And so are many of the other parts of the software stack that drive the end-to-end services within and between carrier networks.

That’s the message from TM Forum Live, one of the most important conferences for the telecommunications carrier industry.

Held in Nice, France, from May 9-12, 2016, TM Forum Live is produced by TM Forum, a key organization in the carrier universe.

TM Forum works closely with other industry groups, like the MEF, OpenDaylight and OPNFV. I am impressed by how so many open-source projects, standards-defining bodies and vendor consortia are collaborating at a very detailed level to improve interoperability at many, many levels. The key to making that work: Open source.

You can read more about open source and collaboration between these organizations in my NetworkWorld column, “Open source networking: The time is now.”

While I’m talking about TM Forum Live, let me give a public shout-out to:

Pipeline Magazine – this is the best publication, bar none, for the OSS, BSS, digital transformation and telecommunications service provider space. At TM Forum Live, I attended their annual Innovation Awards, which is the best-prepared, best-vetted awards program I’ve ever seen.

Netcracker Technology — arguably the top vendor in providing software tools for telecommunications and cable companies. They are leading the charge for the agile reinvention of a traditionally slow-moving industry. I’d like to thank them for hosting a delicious press-and-analyst dinner at the historic Hotel Negresco – wow.

Looking forward to next year’s TM Forum Live, May 15-18, 2017.


Dell’s EMC Deal is a Good Move for Customers, Industry

Get used to new names. Instead of Dell the computer company, think Dell Technologies. Instead of EMC, think Dell EMC. So far, it seems that VMware won’t be renamed Dell VMware, but one never can tell. (They’ve come a long way since PC’s Limited.)

What’s in a name? Not much. What’s in an acquisition of this magnitude (US$67 billion)? In this case, lots of synergies.

Within the Dell corporate structure, EMC will find a stable, predictable management team. Michael Dell is a thoughtful leader, and is unlikely to do anything stupid with EMC’s technology, products, branding and customer base. The industry shouldn’t expect bet-the-business moonshots. Satisfied customers should expect more-of-the-same, but with Dell’s deep pockets to fuel innovation.

Dell’s private ownership is another asset. Without the distraction of stock prices and quarterly reporting, managers don’t have to worry about beating the Street. They can focus on beating competitors.

EMC and Dell partnered to develop technology and products beginning in 2001. While that partnership dissolved in 2011, the synergies remained… and now will be locked in, obviously, by the acquisition. That means new products for physical data centers, the cloud, and hybrid environments, all boosted by Dell’s resources. Similarly, there are tons of professional services opportunities, and the Dell relationship will only expand them.

Nearly everyone will be a winner… everyone, that is, except for Dell/EMC’s biggest competitors, like HPE and IBM. They must be quaking in their boots.


A Man, a Plan, a Canal – Panama Papers and Shadow IT

The Panama Papers should be a wake-up call to every CEO, COO, CTO and CIO in every company.

Yes, it’s good that alleged malfeasance by governments and big institutions came to light. However, it’s also clear that many companies simply take for granted that their confidential information will remain confidential. This includes data that’s shared within the company, as well as information that’s shared with trusted external partners, such as law firms, financial advisors and consultants. We’re talking everything from instant messages to emails, from documents to databases, from passwords to billing records.

Clients of Mossack Fonseca, the hacked Panamanian law firm, erroneously thought their documents were well protected. How well protected are the documents and IP held by your company’s law firms and other partners? It’s a good question, and shadow IT makes the problem worse. Much worse.

Read why in my column in NetworkWorld: Fight corporate data loss with secure, easy-to-use collaboration tools.


Sauron hacks the Internet of Rings as a state sponsor of cyberterrorism

Barcelona, Mobile World Congress 2016—IoT success isn’t about device features, like long-life batteries, factory-floor sensors and snazzy designer wristbands. The real power, the real value, of the Internet of Things is in the data being transmitted from devices to remote servers, and from those remote servers back to the devices.

“Is it secret? Is it safe?” Gandalf asks Frodo in the “Lord of the Rings” movies about the seductive One Ring to Rule Them All. He knows that the One Ring is the ultimate IoT wearable: Sure, the wearer is uniquely invisible, but he’s also vulnerable because the ring’s communications can be tracked and hijacked by the malicious Nazgûl and their nation/state sponsor of terrorism.

Wearables, sensors, batteries, cool apps, great wristbands. Sure, those are necessary for IoT success, but the real trick is to provision reliable, secure and private communications that Black Riders and hordes of nasty Orcs can’t intercept. Read all about it in my NetworkWorld column, “We need secure network infrastructure – not shiny rings – to keep data safe.”
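What does provisioning secure, private communications boil down to in practice? At a bare minimum, the device verifies the server’s identity and encrypts everything it sends, so eavesdroppers on the path (winged or otherwise) see only ciphertext. A minimal Python sketch follows; the host name and the payload are invented for illustration.

```python
import json
import socket
import ssl

# Hypothetical telemetry endpoint -- invented for illustration.
HOST, PORT = "telemetry.example.com", 8883

def send_reading(reading):
    """Send one sensor reading over a server-authenticated, encrypted channel."""
    context = ssl.create_default_context()  # verifies the server's certificate
    with socket.create_connection((HOST, PORT)) as sock:
        with context.wrap_socket(sock, server_hostname=HOST) as tls:
            tls.sendall(json.dumps(reading).encode())

send_reading({"device_id": "ring-001", "bearer": "frodo", "temp_c": 21.5})
```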