In Australia, at 8 a.m. on ‘Results Day,’ thousands of South Australian year 12 students receive their ATAR (Australian Tertiary Admission Rank)—the all-important standardized score used to gain admission to universities across Australia. The frustrating challenge: many are eligible to add as many as nine school- and subject-specific bonus points to their ATAR, which can improve their chances of gaining admission to tertiary institutions like the University of Adelaide. To find out about those bonuses, or their adjusted ATAR, they must talk to university staff.

Thousands of students. All receiving their ATAR at the same time. All desperate to know about their bonus points. That very moment. They’re all phoning the university, wanting a 5- or 10-minute call to answer a few questions and learn about their adjusted score. This past year, 2,100 of those students skipped what in the past could be an hours-long phone queue to talk to university staff. Instead, they used Facebook Messenger to converse with a chatbot, answering questions about their bonus eligibility and learning their adjusted ATAR in about three minutes.

“It’s always been really difficult for us to support the adjusted ATAR calls,” says Catherine Cherry, director of prospect management at University of Adelaide. “There are only so many people we can bring in on that busy day, and only so many phone calls that the staff can take at any given time.” Without the chatbot option, even when the prospective student is able to reach university staff, the staff can’t afford to stay on the phone to answer all that student’s questions, which can create a potentially bad first experience with the university. “The staff who are working that day really feel compelled to hurry the student off the phone because we can see the queue of 15, 20 people waiting, and we can see that they’ve been waiting for a long time,” Cherry says.

Enter the chatbot: Three minutes on Facebook Messenger and students had their adjusted ATAR. Read about the technology behind this chatbot application in my story in Forbes, “University of Adelaide Builds A Chatbot To Solve One Very Hard Problem.”

“All aboooooaaaaard!” Whether you love to watch the big freight engines rumble by, or you just ride a commuter train to work, the safety rules around trains are pretty simple for most of us: look both ways before crossing the track, and never try to beat a train, for example. If you’re a rail operator, however, safety is a much more complicated challenge—such as making sure you always have the right people in the right positions, and ensuring that the crew is properly trained and rested and holds up-to-date safety certifications.

Helping rail operators tackle that huge challenge is CrewPro, the railroad crew management software from PS Technology, a wholly owned subsidiary of the Union Pacific Railroad. The original versions of this package run on mainframes and are still used by railroads ranging from the largest Class I freight operators to local rail-based passenger transit systems in major US cities.

Those railroad operators use CrewPro to handle complex staffing issues on the engines and on the ground. The demands include scheduling based on availability and seniority; tracking mandatory rest status; and managing certifications and qualifications, including pending certification expirations.

Smaller railroads, though, don’t have the sophisticated IT departments needed to stand up this fully automated crew management system. That’s why PS Technology launched a cloud version that saw its first railroad customer online in April. “There are more than 600 short line railroads, and that is our growth area,” says Seenu Chundru, president of PS Technology. “They don’t want to host this type of software on premises.”

Learn more about this in my story for Forbes, “Railroads Roll Ahead With Cloud-Based Crew Management.”

Users care passionately about their software being fast and responsive. You need to give your applications both 0-60 speed and the strongest long-term endurance. Here are 14 guidelines for choosing a deployment platform to optimize performance, whether your application runs in the data center or the cloud.

Faster! Faster! Faster! That killer app won’t earn your company a fortune if the software is slow as molasses. Sure, your development team did the best it could to write server software that offers the maximum performance, but that doesn’t mean diddly if those bits end up on a pokey old computer that’s gathering cobwebs in the server closet.

Users don’t care where it runs as long as it runs fast. Your job, in IT, is to make the best choices possible to enhance application speed, including deciding if it’s best to deploy the software in-house or host it in the cloud.

When choosing an application’s deployment platform, there are 14 things you can do to maximize overall performance. But first, let’s make two assumptions:

  • These guidelines apply only to choosing the best data center or cloud-based platform, not to choosing the application’s software architecture. The job today is simply to find the best place to run the software.
  • I presume that if you are talking about a cloud deployment, you are choosing infrastructure as a service (IaaS) instead of platform as a service (PaaS). What’s the difference? In PaaS, the host provides the platform: the operating system (such as Windows or Linux) and runtimes such as .NET or Java; all you provide is the application. In IaaS, you provide, install, and configure the operating system yourself, giving you more control over the installation.

Here’s the checklist:

  1. Run the latest software. Whether in your data center or in the IaaS cloud, install the latest version of your preferred operating system, the latest core libraries, and the latest application stack. (That’s one reason to go with IaaS, since you can control updates.) If you can’t control this yourself, because you’re assigned a server in the data center, pick the server that has the latest software foundation.
  2. Run the latest hardware. Assuming we’re talking about the x86 architecture, look for the latest Intel Xeon processors, whether in the data center or in the cloud. As of mid-2018, I’d want servers running the Xeon E5 v3 or later, or E7 v4 or later. If you use anything older than that, you’re not getting the most out of the applications or taking advantage of the hardware chipset. For example, some E7 v4 chips have significantly improved instructions-per-CPU-cycle processing, which is a huge benefit. Similarly, if you choose AMD or another processor, look for the latest chip architectures.
  3. If you are using virtualization, make sure the server has the best and latest hypervisor. The hypervisor is key to a virtual machine’s (VM) performance—but not all hypervisors are created equal. Many of the top hypervisors have multiple product lines as well as configuration settings that affect performance (and security). There’s no way to know which hypervisor is best for any particular application. So, assuming your organization lets you make the choice, test, test, test. However, in the not-unlikely event you are required to go with the company’s standard hypervisor, make sure it’s the latest version.
  4. Take Spectre and Meltdown into account. The patches for Spectre and Meltdown slow down servers, but the extent of the performance hit depends on the server, the server’s firmware, the hypervisor, the operating system, and your application. It would be nice to give an overall number, such as expect a 15 percent hit (a number that’s been bandied about, though some dispute its accuracy). However, there’s no way to know except by testing. Thus, it’s important to know if your server has been patched. If it hasn’t been yet, expect application performance to drop when the patch is installed. (If it’s not going to be patched, find a different host server!)
  5. Base the number of CPUs and cores and the clock speed on the application requirements. If your application and its core dependencies (such as the LAMP stack or the .NET infrastructure) are heavily threaded, the software will likely perform best on servers with multiple CPUs, each equipped with the greatest number of cores—think 24 cores. However, if the application is not particularly threaded or runs in a not-so-well-threaded environment, you’ll get the biggest bang with the absolute top clock speeds on an 8-core server.
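Points 2 and 4 can be spot-checked directly on a Linux server. Here’s a minimal Python sketch, assuming a Linux host with the standard procfs/sysfs interfaces; it reports the CPU model and the kernel’s own Spectre/Meltdown mitigation status. (On kernels that predate the vulnerabilities interface, the report is simply empty, which itself tells you the box is overdue for an update.)

```python
# Quick Linux audit of checklist points 2 and 4: report the CPU model
# and any kernel-reported Spectre/Meltdown mitigation status.
# These are standard Linux procfs/sysfs paths; on other platforms or
# very old kernels they may be absent, so missing files are tolerated.
from pathlib import Path

def cpu_model() -> str:
    """Return the CPU model string from /proc/cpuinfo, or 'unknown'."""
    try:
        for line in Path("/proc/cpuinfo").read_text().splitlines():
            if line.lower().startswith("model name"):
                return line.split(":", 1)[1].strip()
    except OSError:
        pass
    return "unknown"

def vulnerability_report() -> dict:
    """Map each kernel-known CPU vulnerability to its mitigation status."""
    vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")
    if not vuln_dir.is_dir():
        return {}  # kernel predates the vulnerabilities interface
    return {f.name: f.read_text().strip() for f in vuln_dir.iterdir()}

if __name__ == "__main__":
    print("CPU:", cpu_model())
    for name, status in sorted(vulnerability_report().items()):
        print(f"  {name}: {status}")
```

Entries reading “Vulnerable” rather than “Mitigation: …” mean the patches discussed in point 4 haven’t landed yet, so expect a performance change when they do.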

But wait, there’s more!

Read the full list of 14 recommendations in my story for HPE Enterprise.nxt, “Checklist: Optimizing application performance at deployment.”

Ransomware rules the cybercrime world – perhaps because ransomware attacks are often successful and lucrative for criminals. Ransomware features prominently in Verizon’s fresh-off-the-press 2018 Data Breach Investigations Report (DBIR). As the report says, although ransomware is still a relatively new type of attack, it’s growing fast:

Ransomware was first mentioned in the 2013 DBIR and we referenced that these schemes could “blossom as an effective tool of choice for online criminals”. And blossom they did! Now we have seen this style of malware overtake all others to be the most prevalent variety of malicious code for this year’s dataset. Ransomware is an interesting phenomenon that, when viewed through the mind of an attacker, makes perfect sense.

The DBIR explains that ransomware can be attempted with little risk or cost to the attacker. It can be successful because the attacker doesn’t need to monetize stolen data, only ransom its return, and it can be deployed across numerous devices in an organization to inflict more damage and potentially justify bigger ransoms.

Botnets Are Also Hot

Ransomware wasn’t the only prominent attack; the 2018 DBIR also talks extensively about botnet-based infections. Verizon cites more than 43,000 breaches using customer credentials stolen from botnet-infected clients. It’s a global problem, says the DBIR, and can affect organizations in two primary ways:

In the first, you never even see the bot. Your users download the bot, it steals their credentials, and then it uses them to log in to your systems. This attack primarily targeted banking organizations (91%), though Information (5%) and Professional Services (2%) organizations were victims as well.

The second way organizations are affected involves compromised hosts within your network acting as foot soldiers in a botnet. The data shows that most organizations clear most bots in the first month (give or take a couple of days).

However, the report says, some bots may be missed during the disinfection process. This could result in a re-infection later.

Insiders Are Still Significant Threats

Overall, says Verizon, outsiders perpetrated most breaches: 73%. But don’t get too complacent about employees or contractors: many breaches, 28%, involved internal actors. Yes, the numbers add to more than 100% because some outside attacks had inside help. Here’s who Verizon says is behind breaches:

  • 73% perpetrated by outsiders
  • 28% involved internal actors
  • 2% involved partners
  • 2% featured multiple parties
  • 50% of breaches were carried out by organized criminal groups
  • 12% of breaches involved actors identified as nation-state or state-affiliated

Email is still the delivery vector of choice for malware and other attacks. Many of those attacks were financially motivated, says the DBIR. Most worrying, a significant number of breaches took a long time to discover.

  • 49% of non-point-of-sale malware was installed via malicious email
  • 76% of breaches were financially motivated
  • 13% of breaches were motivated by the gain of strategic advantage (espionage)
  • 68% of breaches took months or longer to discover

Taking Months to Discover the Breach

To that previous point: Attackers can move fast, but defenders can take a while. To use a terrible analogy: If someone breaks into your car and steals your designer sunglasses, the time from their initial penetration (picking the lock or smashing the window) to compromising the asset (grabbing the glasses) might be a minute or less. The time to discovery (when you see the broken window or realize your glasses are gone) could be minutes if you parked at the mall – or days, if the car was left at the airport parking garage. The DBIR makes the same point about enterprise data breaches:

When breaches are successful, the time to compromise continues to be very short. While we cannot determine how much time is spent in intelligence gathering or other adversary preparations, the time from first action in an event chain to initial compromise of an asset is most often measured in seconds or minutes. The discovery time is likelier to be weeks or months. The discovery time is also very dependent on the type of attack, with payment card compromises often discovered based on the fraudulent use of the stolen data (typically weeks or months) as opposed to a stolen laptop which is discovered when the victim realizes they have been burglarized.

Good News, Bad News on Phishing

Let’s end on a positive note, or a sort of positive note. The 2018 DBIR notes that most people never click phishing emails: “When analyzing results from phishing simulations the data showed that in the normal (median) organization, 78% of people don’t click a single phish all year.”

The less good news: “On average 4% of people in any given phishing campaign will click it.” The DBIR notes that the more phishing emails someone has clicked, the more they are likely to click on phishing emails in the future. The report’s advice: “Part of your overall strategy to combat phishing could be that you can try and find those 4% of people ahead of time and plan for them to click.”

Good luck with that.

Endpoints everywhere! That’s the future, driven by the Internet of Things. When IoT devices are deployed in their billions, network traffic patterns won’t look at all like today’s patterns. Sure, enterprises have a few employees working at home, or use technologies like MPLS (Multi-Protocol Label Switching) or even SD-WAN (Software Defined Wide-Area Networks) to connect branch offices. For the most part, though, internal traffic remains within the enterprise LAN, and external traffic is driven by end users accessing websites from browsers.

The IoT will change all of that, predicts IHS Markit, one of the industry’s most astute analyst firms. In particular, the IoT will accelerate the growth of colo facilities, because it will be to everyone’s benefit to place servers closer to the network edge, avoiding the last mile.

To set the stage, IHS Markit forecasts that the number of Internet-connectable devices will grow from 27.5 billion in 2017 to 45.4 billion in 2021. That’s a 65% increase in four short years. How will that affect colos? “Data center growth is correlated with general data growth. The more data transmitted via connected devices; the more data centers are needed to store, transfer, and analyze this data.” The analysts say:

In the specific case of the Internet of Things, there’s a need for geographically distributed data centers that can provide low-latency connections to certain connected devices. There are applications, like autonomous vehicles or virtual reality, which are going to require local data centers to manage much of the data processing required to operate.

However, most enterprises will not have the means or the business case to build new data centers everywhere. “They will need to turn to colocations to provide quickly scalable, low capital-intensive options for geographically distributed data centers.”

Another trend IHS Markit points to: more local processing, rather than relying on servers in a colo facility, at a cloud provider, or in the enterprise’s own data center. “And thanks to local analytics on devices, and the use of machine learning, a lot of data will never need to leave the device. Which is good news for the network infrastructure of the world that is not yet capable of handling a 65% increase in data traffic, given the inevitable proliferation of devices.”

Four Key Drivers of IoT This Year

The folks at IHS Markit have pointed out four key drivers of IoT growth. They paint a compelling picture, which we can summarize here:

  • Innovation and competitiveness. There are many new wireless models and solutions being released, which means lots of possibility for the future, but confusion in the short term. Companies are also seeing that the location of data is increasingly relevant to competition, and this will drive both on-premises data centers and cloud computing.
  • Business models. As 5G rolls out, it will improve the economies of scale of machine-to-machine communications. This will create new business opportunities for the industry, as well as new security products and services.
  • Standardization and security. Speaking of which, IoT must be secure from the beginning, not only for business reasons, but also for compliance reasons. Soon there will be more IoT devices out there than traditional computing devices, which changes the security equation.
  • Wireless technology innovation. IHS Markit says there are currently more than 400 IoT platform providers, and vendors are working hard to integrate the platforms so that the data can be accessed by app developers. “A key inflection point for the IoT will be the gradual shift from the current ‘Intranets of Things’ deployment model to one where data can be exposed, discovered, entitled and shared with third-party IoT application developers,” says IHS Markit.

The IoT is not new. However, “what is new is it’s now working hand in hand with other transformative technologies like artificial intelligence and the cloud,” said Jenalea Howell, research director for IoT connectivity and smart cities at IHS Markit. “This is fueling the convergence of verticals such as industrial IoT, smart cities and buildings, and the connected home, and it’s increasing competitiveness.”


The VPN model of extending security through enterprise firewalls is dead, and the future now belongs to the Software Defined Perimeter (SDP). Firewalls imply that there’s an inside to the enterprise, a place where devices can communicate in a trusted manner. This being so, there must also be an outside where communications aren’t trusted. Between the two sits the firewall, which decides, after deep inspection based on scans and policies, which traffic may leave and which may enter.

What about trusted applications requiring direct access to corporate resources from outside the firewall? That’s where Virtual Private Networks came in, offering a way to punch a hole in the firewall. VPNs are a complex mechanism for using encryption and secure tunnels to bridge multiple networks, such as a head-office and regional-office network. They can also temporarily allow remote users to become part of the network.

VPNs are well established but perceived as difficult to configure on the endpoints, hard for IT to manage, and challenging to scale for large deployments. There are also issues of software compatibility: not everything works through a VPN. Putting it bluntly, almost nobody likes VPNs, and there is now a better way to securely connect mobile applications and Industrial Internet of Things (IIoT) devices into the world of data center servers and cloud-based applications.

Authenticate Then Connect

The Software Defined Perimeter depends on a rigorous process of identity verification of both client and server using a secure control channel, thereby replacing the VPN. The negotiation for trustworthy identification is based on cryptographic protocols like Transport Layer Security (TLS), which succeeds the old Secure Sockets Layer (SSL).

With identification and trust established by both parties, a secure data channel can be provisioned with specified bandwidth and quality. For example, the data channel might require very low latency and minimal jitter for voice messaging or it might need high bandwidth for streaming video, or alternatively be low-bandwidth and low-cost for data backups.

On the client side, the trust negotiation and data channel can be tied to a specific mobile application on, say, an employee’s phone or tablet. The corporate customer account management app needs trusted access to the corporate database server, but no other app on the phone should be granted access.

SDP is based on the notion of authenticate-before-connect, which reminds me of the reverse-charge phone calls of the distant past. Bob would ask the operator to place a reverse-charge call to his aunt Sally at a specified number. The operator would chat with Sally over the equivalent of the control channel. Only if the operator believed she was talking to Sally, and provided Sally accepted the charges, would the operator establish the Bob-to-Sally connection, the equivalent of the SDP data channel.
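In code terms, that control-channel negotiation maps onto mutual TLS. Here’s a minimal sketch using Python’s ssl module, assuming hypothetical certificate file paths; a real SDP controller would layer policy and per-application entitlements on top of this handshake.

```python
# Sketch of SDP-style "authenticate then connect": a client TLS context
# that refuses to exchange any application data until the server proves
# its identity, and that presents a client certificate so the server can
# verify us in return (mutual TLS).
import ssl

def make_client_context(ca_file=None, client_cert=None, client_key=None):
    """Build a mutual-TLS client context. The file paths are optional
    only so this sketch runs standalone; a real deployment always loads
    a CA bundle (who we trust) and a client identity (who we are)."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy SSL / early TLS
    # PROTOCOL_TLS_CLIENT defaults to CERT_REQUIRED plus hostname checking:
    # the server must authenticate before any data channel is provisioned.
    if ca_file:
        ctx.load_verify_locations(cafile=ca_file)
    if client_cert:
        ctx.load_cert_chain(certfile=client_cert, keyfile=client_key)
    return ctx

ctx = make_client_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED, ctx.check_hostname)  # True True
```

Once both sides pass this verification, the authenticated session can carry the data channel, provisioned with whatever bandwidth and latency characteristics the application needs.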

Read more in my essay for Network Computing, “Forget VPNs: the future is SDP.”

We all have heard the usual bold predictions for technology in 2018: Lots of cloud computing, self-driving cars, digital cryptocurrencies, 200-inch flat-screen televisions, and versions of Amazon’s Alexa smart speaker everywhere on the planet. Those types of predictions, however, are low-hanging fruit. They’re not bold. One might as well predict that there will be some sunshine, some rainy days, a big cyber-bank heist, and at least one smartphone catching fire.

Let’s dig for insights beyond the blindingly obvious. I talked to several tech leaders, deep-thinking individuals in California’s Silicon Valley, asking them for their predictions, their idea of new trends, and disruptions in the tech industry. Let’s see what caught their eye.

Gary Singh, VP of marketing, OnDot Systems, believes that 2018 will be the year when mobile banking will transform into digital banking — which is more disruptive than one would expect. “The difference between digital and mobile banking is that mobile banking is informational. You get information about your accounts,” he said. Singh continues, “But in terms of digital banking, it’s really about actionable insights, about how do you basically use your funds in the most appropriate way to get the best value for your dollar or your pound in terms of how you want to use your monies. So that’s one big shift that we would see start to happen from mobile to digital.”

Tom Burns, Vice President and General Manager of Dell EMC Networking, has been following Software-Defined Wide Area Networks. SD-WAN is a technology that allows enterprise WANs to thrive over the public Internet, replacing expensive fixed-point connections provisioned by carriers using technologies like MPLS. “The traditional way of connecting branches in office buildings and providing services to those particular branches is going to change,” Burns observed. “If you look at the traditional router, a proprietary architecture, dedicated lines. SD-WAN is offering a much lower cost but same level of service opportunity for customers to have that data center interconnectivity or branch connectivity providing some of the services, maybe a full even office in the box, but security services, segmentation services, at a much lower cost basis.”

NetFoundry’s co-founder, Mike Hallett, sees a bright future for Application Specific Networks, which link applications directly to cloud or data center applications. The focus is on the application, not on the device. “For 2018, when you think of the enterprise and the way they have to be more agile, flexible and faster to move to markets, particularly going from what I would call channel marketing to, say, direct marketing, they are going to need application-specific networking technologies.” Hallett explains that Application Specific Networks offer the ability to be able to connect from an application, from a cloud, from a device, from a thing, to any application or other device or thing quickly and with agility. Indeed, those connections, which are created using software, not hardware, could be created “within minutes, not within the weeks or months it might take, to bring up a very specific private network, being able to do that. So the year of 2018 will see enterprises move towards software-only networking.”

Mansour Karam, CEO and founder of Apstra, also sees software taking over the network. “I really see massive software-driven automation as a major trend. We saw technologies like intent-based networking emerge in 2017, and in 2018, they’re going to go mainstream,” he said.

There’s more

There are predictions around open networking, augmented reality, artificial intelligence – and more. See my full story in Upgrade Magazine, “From SD-WAN to automation to white-box switching: Five tech predictions for 2018.”

Wireless Ethernet connections aren’t necessarily secure. The authentication methods used to permit access between a device and a wireless router aren’t very strong. The encryption methods used to handle that authentication, and then the data traffic after authorization, aren’t very strong. And the rules that enforce the use of authorization and encryption aren’t always enabled, especially on public hotspots like those in hotels, airports, and coffee shops, where authentication is handled by a web browser application, not by the Wi-Fi protocols embedded in a local router.

Helping to solve those problems will be WPA3, an update to the WPA2 wireless security protocol, which dates back to 2004. Announced by the Wi-Fi Alliance at CES in January 2018, here’s what the new standard promises:

Four new capabilities for personal and enterprise Wi-Fi networks will emerge in 2018 as part of Wi-Fi CERTIFIED WPA3™. Two of the features will deliver robust protections even when users choose passwords that fall short of typical complexity recommendations, and will simplify the process of configuring security for devices that have limited or no display interface. Another feature will strengthen user privacy in open networks through individualized data encryption. Finally, a 192-bit security suite, aligned with the Commercial National Security Algorithm (CNSA) Suite from the Committee on National Security Systems, will further protect Wi-Fi networks with higher security requirements such as government, defense, and industrial.

This is all good news. According to Zack Whittaker writing for ZDNet,

One of the key improvements in WPA3 will aim to solve a common security problem: open Wi-Fi networks. Seen in coffee shops and airports, open Wi-Fi networks are convenient but unencrypted, allowing anyone on the same network to intercept data sent from other devices.

WPA3 employs individualized data encryption, which scrambles the connection between each device on the network and the router, ensuring secrets are kept safe and sites that you visit haven’t been manipulated.

Another key improvement in WPA3 will protect against brute-force dictionary attacks, making it tougher for attackers near your Wi-Fi network to guess a list of possible passwords.

The new wireless security protocol will also block an attacker after too many failed password guesses.

What About KRACK?

A challenge for the use of WPA2 is that a defect, called KRACK, was discovered and published in October 2017. To quote my dear friend John Romkey, founder of FTP Software:

The KRACK vulnerability allows malicious actors to access a Wi-Fi network without the password or key, observe what connected devices are doing, modify the traffic amongst them, and tamper with the responses the network’s users receive. Everyone and anything using Wi-Fi is at risk. Computers, phones, tablets, gadgets, things. All of it. This isn’t just a flaw in the way vendors have implemented Wi-Fi. No. It’s a bug in the specification itself.

The timing of the WPA3 release couldn’t be better. But what about older devices? I have no idea how many of my devices — including desktops, phones, tablets, and routers — will be able to run WPA3. I don’t know whether firmware updates will be applied automatically, or whether I will need to seek them out.

What’s more, what about the millions of devices out there? Presumably new hotspots will downgrade to WPA2 if a device can’t support WPA3. (And the other way around: A new mobile device will downgrade to talk to an older or unpatched hotel room’s Wi-Fi router.) It could take ages before we reach a critical mass of new devices that can handle WPA3 end-to-end.

The Wi-Fi Alliance says that it “will continue enhancing WPA2 to ensure it delivers strong security protections to Wi-Fi users as the security landscape evolves.” Let’s hope that is indeed the case, and that those enhancements can be pushed down to existing devices. If not, well, the huge installed base of existing Wi-Fi devices will continue to lack real security for years to come.

Software can affect the performance of hardware. Under the right (or wrong) circumstances, malware can cause hardware to become physically damaged – as the cyberattack on Iran’s centrifuges proved in 2010, and as an errant coin-mining malware is demonstrating right now. Will intentional or unintentional damage to IoT devices be next?

Back in late 2009 and early 2010, a computer worm labeled Stuxnet targeted the centrifuges used by Iran to refine low-grade nuclear material into weapons-class materials. The Stuxnet worm, which affected more than 200,000 machines, was estimated to have physically damaged 1,000 centrifuges.

How did it work? The Stuxnet worm checked to see whether it was running on the right type of machine (i.e., a centrifuge of the specific type used by Iran), and if so, says Wikipedia:

The worm worked by first causing an infected Iranian IR-1 centrifuge to increase from its normal operating speed of 1,064 hertz to 1,410 hertz for 15 minutes before returning to its normal frequency. Twenty-seven days later, the worm went back into action, slowing the infected centrifuges down to a few hundred hertz for a full 50 minutes. The stresses from the excessive, then slower, speeds caused the aluminum centrifugal tubes to expand, often forcing parts of the centrifuges into sufficient contact with each other to destroy the machine.

From centrifuges to coin mining

The Stuxnet attacks were subtle, specific, and intentional. By contrast, the Loapi malware, which appeared in December 2017, appears to cause its damage inadvertently. Loapi, discovered by Kaspersky Labs, installs itself on Android devices using administrator privileges, and then does several nasty things, including displaying ads, acting as a zombie for distributed denial-of-service (DDoS) attacks, and mining Monero crypto-coin tokens.

The problem is that Loapi is a little too enthusiastic. When mining coins, Loapi works so hard that the phone overheats – and cooks the device. Whoops. Says Neowin.net:

In its test, the firm found that after just two days, the constant load from mining caused its test phone’s battery to bulge, which also deformed the phone’s outer shell. This last detail is quite alarming, as it has the potential to cause serious physical harm to affected handset owners.

Damaging the Internet of Things

If malware gets onto an IoT device… who knows what it could do? Depending on the processor, memory, and network connectivity, some IoT devices could be turned into effective DDoS zombies or digital coin miners. Network security cameras have already been infected by spyware, so why not zombieware or miningware? This could be a significant threat for plug-in devices that are not monitored closely, and which contain considerable CPU power. Imagine a point-of-sale kiosk that also mined Bitcoin.

The possibility of damage is a reality, as is shown by Loapi. It’s possible that malware could somehow damage the device inadvertently, perhaps by messing up the firmware and bricking the machine, or by overloading the processor and memory to the point where it overwhelms on-board cooling mechanisms.

Then there’s the potential for intentional damage to IoT devices, either at large scale or targeting a specific organization. This could be leveraged for extortion by criminal gangs, or for the destruction of public infrastructure or private enterprise by cyberterrorists or state-sponsored actors. If the creators of Stuxnet could damage centrifuges nearly a decade ago, it’s a sure bet that researchers are working on other attacks of that sort. It’s a sobering thought.

The secret sauce is AI-based zero packet inspection. That’s how to secure mobile users, their personal data, and their employers’ data.

Let’s back up a step. Mobile devices are increasingly under attack, from malicious apps, from rogue emails, from adware, and from network traffic. Worse, that network traffic can come from any number of sources, including cellular data, WiFi, even Bluetooth. Users want their devices to be safe and secure. But how, if the network traffic can’t be trusted?

The best approach around is AI-based zero packet inspection (ZPI). It all starts with data. Tons of training data, used to train a machine learning algorithm to recognize patterns that indicate whether a device is performing normally – or if it’s under attack. Machine learning refers to a number of advanced AI algorithms that can study streams of data, rapidly and accurately detect patterns in that data, and from those patterns, sort the data into different categories.

The Zimperium z9 engine, as an example, uses machine learning to train against a number of test cases (on both iOS and Android devices) that represent known patterns of safe and not-safe traffic. The approach is called zero packet inspection because the objective is not to look at the contents of the network packets, but to scan the lower-level traffic patterns at the network layer, such as IP, TCP, UDP, and ARP scans.

If you’re not familiar with those terms, suffice it to say that at the network level, the traffic is focused on delivering data to a specific device, and then within that device, making sure it gets to the right application. Think of it as being like an envelope going to a big business – it has the business name, street address, and department/mail stop. The machine learning algorithms look at patterns at that level, rather than examining the contents of the envelope. This makes the scans very fast and accurate.
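To make that classification idea concrete, here’s a toy sketch in Python. Everything here is made up for illustration – the feature choices and values are hypothetical, and this is emphatically not Zimperium’s actual engine – but it shows the basic shape: summarize a traffic window by header-level features only, then assign it to the nearest learned centroid.

```python
from statistics import mean

def centroid(vectors):
    # Component-wise mean of a set of feature vectors
    return tuple(mean(col) for col in zip(*vectors))

def classify(vec, centroids):
    # Assign to the nearest centroid by squared Euclidean distance
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(vec, centroids[label]))

# Each traffic window is summarized by header-level features only:
# (average packet size in bytes, packets per second, fraction of scan-like packets)
normal_windows = [(800, 40, 0.01), (750, 35, 0.02), (820, 50, 0.00)]
attack_windows = [(90, 900, 0.40), (120, 1100, 0.35), (100, 950, 0.50)]

centroids = {"normal": centroid(normal_windows), "attack": centroid(attack_windows)}

print(classify((110, 1000, 0.45), centroids))  # prints "attack"
print(classify((790, 42, 0.01), centroids))    # prints "normal"
```

Note that nothing in the feature vector looks inside a packet – it’s all “envelope” information, which is what keeps this style of scan fast.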

Read more in my new essay for Security Brief Europe, “Opinion: Mobile security starts with a powerful AI-based scanning engine.”

Smart televisions, talking home assistants, consumer wearables – that’s not the real story of the Internet of Things. While those are fun and get great stories on blogs and morning news reports, the real IoT is the Industrial IoT. That’s where businesses will truly be transformed, with intelligent, connected devices working together to improve services, reduce friction, and disrupt everything. Everything.

According to Grand View Research, the Industrial IoT (IIoT) market will be $933.62 billion by 2025. “The ability of IoT to reduce costs has been the prime factor for its adoption in the industrial sector. However, several significant investment incentives, such as increased productivity, process automation, and time-to-market, have also been boosting this adoption. The falling prices of sensors have reduced the overall cost associated with data collection and analytics,” says the report.

The report continues,

An emerging trend among enterprises worldwide is the transformation of technical focus to improving connectivity in order to undertake data collection with the right security measures in place and with improved connections to the cloud. The emergence of low-power hardware devices, cloud integration, big data analytics, robotics & automation, and smart sensors are also driving IIoT market growth.

Markets and Markets

Markets & Markets predicts that IIoT will be worth $195.47 billion by 2022. The company says,

A key influencing factor for the growth of the IIoT market is the need to implement predictive maintenance techniques in industrial equipment to monitor their health and avoid unscheduled downtimes in the production cycle. Factors driving the IIoT market include technological advancements in the semiconductor and electronics industry and the evolution of cloud computing technologies.

The manufacturing vertical is witnessing a transformation through the implementation of the smart factory concept and factory automation technologies. Government initiatives such as Industrie 4.0 in Germany and Plan Industriel in France are expected to promote the implementation of IIoT solutions in Europe. Moreover, leading countries in the manufacturing vertical such as the U.S., China, and India are expected to further expand their manufacturing industries and deploy smart manufacturing technologies to increase the contribution of this vertical to their national GDPs.

The IIoT market for camera systems is expected to grow at the highest rate between 2016 and 2022. Camera systems are mainly used in the retail and transportation verticals. The need for security and surveillance in these sectors is the key reason for the high growth rate of the market for camera systems. In the retail sector, camera systems are used for capturing customer behavior, movement tracking, people counting, and heat mapping. The benefits of installing surveillance systems include workplace safety and the prevention of theft and other losses, sweethearting, and other retail crimes. Video analytics plays a vital role for security purposes in various areas of the transportation sector, including airports, railway stations, and large public places. Intelligent camera systems are also used for traffic monitoring, and incident detection and reporting.

Accenture

The huge research firm Accenture says that the IIoT will add $14.2 trillion to the global economy by 2030. That’s not talking about the size of the market, but the overall lift that IIoT will have. By any measure, that’s staggering. Accenture reports,

Today, the IIoT is helping to improve productivity, reduce operating costs and enhance worker safety. For example, in the petroleum industry, wearable devices sense dangerous chemicals and unmanned aerial vehicles can inspect remote pipelines.

However, the longer-term economic and employment potential will require companies to establish entirely new product and service hybrids that disrupt their own markets and generate fresh revenue streams. Many of these will underpin the emergence of the “outcome economy,” where organizations shift from selling products to delivering measurable outcomes. These may range from guaranteed energy savings in commercial buildings to guaranteed crop yields in a specific parcel of farmland.

IIoT Is a Work in Progress

The IIoT is going to have huge impact. But it hasn’t yet, not on any large scale. As Accenture says,

When Accenture surveyed more than 1,400 C-suite decision makers—including 736 CEOs—from some of the world’s largest companies, the vast majority (84 percent) believe their organizations have the capability to create new, service-based income streams from the IIoT.

But scratch beneath the surface and the gloss comes off. Seventy-three percent confess that their companies have yet to make any concrete progress. Just 7 percent have developed a comprehensive strategy with investments to match.

Challenge and opportunity: That’s the Industrial Internet of Things. Watch this space.

Still no pastrami sandwich. Still no guinea pig. What’s the deal with the cigarette?

I installed iOS 11.1 yesterday, tantalized by Apple’s boasting of tons of new emoji. Confession: Emoji are great fun. Guess what I looked for right after the completed software install?

Many of the 190 new emoji are skin-tone variations on new or existing people or body parts. That’s good: Not everyone is yellow, like the Simpsons. (If you don’t count the different skin-tone versions, there are about 70 new graphics.)

New emoji that I like:

  • Steak. Yum!
  • Shushing finger face. Shhhh!
  • Cute hedgehog. Awww!
  • Scottish flag. Och aye!

What’s still stupidly missing:

  • Pastrami sandwich. Sure, there’s a new sandwich emoji, but it’s not a pastrami sandwich. Boo.
  • There’s a cheeseburger (don’t get me started on the cheese top/bottom debate), but nothing for those who don’t put cheese on their burgers at all. Grrrr.
  • Onion rings. They’ve got fries, but no rings. Waah.
  • Coffee with creamer. I don’t drink my coffee black. Bleh.
  • Guinea pig. That’s our favorite pet, but no cute little Caviidae in the emoji. Wheek!

I still don’t like the cigarette emoji, but I guess once they added it in 2015, they couldn’t delete it.

Here is a complete list of all the emoji, according to PopSugar. What else is missing?

I am unapologetically mocking this company’s name. Agylytyx emailed me this press release today, and only the name captured my attention. Plus, their obvious love of the ™ symbol — even people they quote use the ™. Amazing!

Beyond that, I’ve never talked to the company or used its products, and have no opinion about them. (My guess is that it’s supposed to be pronounced as “Agil-lytics.”)

Agylytyx Announces Availability of New IOT Data Analysis Application

SUNNYVALE, Calif., June 30, 2017 /PRNewswire/ — Agylytyx, a leading cloud-based analytic software vendor, today announced a new platform for analyzing IoT data. The Agylytyx Generator™ IoT platform represents an application of the vendor’s novel Construct Library™ approach to the IoT marketplace. For the first time, companies can both explore their IoT data and make it actionable much more quickly than previously thought possible.

From PLC data streams archived as tags in traditional historians to time series data streaming from sensors attached to devices, the Agylytyx Generator™ aggregates and presents IoT data in a decision-ready format. The company’s unique Construct Library™ (“building block”) approach allows decision makers to create and explore aggregated data such as pressure, temperature, output productivity, worker status, waste removal, fuel consumption, heat transfer, conductivity, condensation or just about any “care abouts.” This data can be instantly explored visually at any level such as region, plant, line, work cell or even device. Best of all, the company’s approach eliminates the need to build charts or write queries.

One of the company’s long-time advisors, John West of Clean Tech Open, noticed the Agylytyx Generator™ potential from the outset. West’s wide angle on data analysis led him to stress the product’s broad applicability. West said “Even as the company was building the initial product, I advised the team that I thought there was strong applicability of the platform to operational data. The idea of applying Constructs to a received data set has broad usage. Their evolution of the Agylytyx Generator™ platform to IoT data is a very natural one.”

The company’s focus on industrial process data was the brainchild of one the company’s investors, Jim Smith. Jim is a chemical engineer with extensive experience working with plant floor data. Smith stated “I recognized the potential in the company’s approach for analyzing process data. Throughout the brainstorming process, we all gradually realized we were on to something groundbreaking.”

This unique approach to analytics attracted the attention of PrecyseTech, a pioneer of Industrial IoT (IIoT) Systems providing end-to-end management of high-value physical assets and personnel. Paul B. Silverman, the CEO of PrecyseTech, has had a longstanding relationship with the company. Silverman noted: “The ability of the Agylytyx Generator™ to address cloud-based IoT data analytic solutions is a good fit with PrecyseTech’s strategy. Agylytyx is working with the PrecyseTech team to develop our inPALMSM Solutions IoT applications, and we are working collaboratively to identify and develop IoT data opportunities targeting PrecyseTech’s clients. Our plans are to integrate the Agylytyx Generator™ within our inPALMSM Solutions product portfolio and also to offer users access to the Agylytyx Generator™ via subscription.”

Creating this IoT focus made the ideal use of the Agylytyx Generator™. Mark Chang, a data scientist for Agylytyx, noted: “All of our previous implementations – financial, entertainment, legal, customer service – had data models with common ‘units of measure’ – projects, media, timekeepers, support cases, etc. IoT data is dissimilar in that there is no common ‘unit of measure’ across devices. This dissimilarity is exactly what makes our Construct Library™ approach so useful to IoT data. The logical next step for us will be to apply machine learning and cluster inference to enable optimization of resource deployment and predictive analytics like predictive maintenance.”

About Agylytyx

Agylytyx provides cloud-based enterprise business analytic software. The company’s flagship product, the Agylytyx Generator™, frees up analyst time and results in better decision making across corporations. Agylytyx is based in Sunnyvale, California, and has locations in Philadelphia and Chicago, IL. For more information about Agylytyx visit www.agylytyx.com.

An organization’s Chief Information Security Officer’s job isn’t ones and zeros. It’s not about unmasking cybercriminals. It’s about reducing risk for the organization, enabling executives and line-of-business managers to innovate and compete safely and securely. While the CISO is often seen as the person who loves to say “No,” in reality, the CISO wants to say “Yes” — the job, after all, is to make the company thrive.

Meanwhile, the CISO has a small staff, tight budget, and the need to demonstrate performance metrics and ROI. What’s it like in the real world? What are the biggest challenges? We asked two former CISOs (it’s hard to get current CISOs to speak on the record), both of whom worked in the trenches and now advise CISOs on a daily basis.

To Jack Miller, a huge challenge is the speed of decision-making in today’s hypercompetitive world. Miller, currently Executive in Residence at Norwest Venture Partners, conducts due diligence and provides expertise on companies in the cyber security space. Most recently he served as chief security strategy officer at ZitoVault Software, a startup focused on safeguarding the Internet of Things.

Before his time at ZitoVault, Miller was the head of information protection for Auto Club Enterprises. That’s the largest AAA conglomerate with 15 million members in 22 states. Previously, he served as the CISO of the 5th and 11th largest counties in the United States, and as a security executive for Pacific Life Insurance.

“Big decisions are made in the blink of an eye,” says Miller. “Executives know security is important, but don’t understand how any business change can introduce security risks to the environment. As a CISO, you try to get in front of those changes – but more often, you have to clean up the mess afterwards.”

Another CISO, Ed Amoroso, is frustrated by the business challenge of justifying a security ROI. Amoroso is the CEO of TAG Cyber LLC, which provides advanced cybersecurity training and consulting for global enterprise and U.S. Federal government CISO teams. Previously, he was Senior Vice President and Chief Security Officer for AT&T, and managed computer and network security for AT&T Bell Laboratories. Amoroso is also an Adjunct Professor of Computer Science at the Stevens Institute of Technology.

Amoroso explains, “Security is an invisible thing. I say that I’m going to spend money to prevent something bad from happening. After spending the money, I say, ta-da, look, I prevented that bad thing from happening. There’s no demonstration. There’s no way to prove that the investment actually prevented anything. It’s like putting a ‘This House is Guarded by a Security Company’ sign in front of your house. Maybe a serial killer came up the street, saw the sign, and moved on. Maybe not. You can’t put in security and say, here’s what didn’t happen. If you ask, 10 out of 10 CISOs will say demonstrating ROI is a huge problem.”

Read more in my article for Global Banking & Finance Magazine, “Be Prepared to Get Fired! And Other Business Advice for CISOs.”

“Someone is waiting just for you / Spinnin’ wheel, spinnin’ true.”

Those lyrics to a 1969 song by Blood, Sweat & Tears could also describe 2017 enterprise apps that time-out or fail because of dropped or poor connectivity. Wheels spin. Data is lost. Applications crash. Users are frustrated. Devices are thrown. Screens are smashed.

It doesn’t have to be that way. Always-on applications can continue to function even when the user loses an Internet or Wi-Fi connection. With proper design and testing, you won’t have to handle as many smartphone accidental-damage insurance claims.

Let’s start with the fundamentals. Many business applications are friendly front ends to remote services. The software may run on phones, tablets, or laptops, and the services may be in the cloud or in the on-premises data center.

When connectivity is strong, with sufficient bandwidth and low latency, the front-end software works fine. The user experience is excellent. Data sent to the back end is received and confirmed, and data served to the user front end is transmitted without delay. Joy!

When connectivity is non-existent or fails intermittently, when bandwidth is limited, and when there’s too much latency — which you can read as “Did the Internet connection go down again?!” — users immediately feel frustration. That’s bad news for the user experience, and also extremely bad in terms of saving and processing transactions. A user who taps a drop-down menu or presses “Enter” and sees nothing happen might progress to multiple mouse clicks, a force-reset of the application, or a reboot of the device, any of which could result in data loss. Submitted forms and uploads could be lost in a time-out. Sessions could halt. In some cases, the app could freeze (with or without a spinning indicator) or crash outright. Disaster!
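One common pattern for surviving flaky connectivity is to treat the network as optional: enqueue every write locally, confirm delivery before discarding anything, and retry with backoff. Here’s a minimal Python sketch of that idea – the `send` transport is a placeholder for whatever your back end uses, not any particular framework, and real apps would also persist the queue to disk.

```python
import time
from collections import deque

class OfflineQueue:
    """Buffer outbound writes locally; retry them when connectivity returns.

    `send` is any callable that raises ConnectionError on network failure.
    The transport and persistence details are assumptions of this sketch.
    """
    def __init__(self, send, base_backoff=1.0, max_backoff=60.0):
        self.send = send
        self.pending = deque()
        self.base_backoff = base_backoff
        self.max_backoff = max_backoff

    def submit(self, item):
        # Enqueue first, so a timeout or crash never loses the user's data
        self.pending.append(item)
        self.flush()

    def flush(self, max_attempts=3):
        backoff = self.base_backoff
        attempts = 0
        while self.pending and attempts < max_attempts:
            try:
                self.send(self.pending[0])
                self.pending.popleft()     # drop only after a confirmed send
                attempts = 0
                backoff = self.base_backoff
            except ConnectionError:
                attempts += 1
                time.sleep(backoff)        # back off, then try again;
                backoff = min(backoff * 2, self.max_backoff)
                                           # after max_attempts, items stay
                                           # queued for a later flush()
```

The key design choice is that the UI never blocks on `send` succeeding: the user’s form data is safe in `pending` the instant they press Enter, which is exactly the moment the spinning wheel would otherwise appear.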

What can you do about it? Easy: Read my article for HP Enterprise Insights, “How to design software that doesn’t crash when the Internet connection fails.”


Movie subtitles — those are the latest attack vector for malware. According to Check Point Software, by crafting malicious subtitle files, which are then downloaded by a victim’s media player, attackers can take complete control over any type of device via vulnerabilities found in many popular streaming platforms. Those media players include VLC, Kodi (XBMC), Popcorn-Time and strem.io.

I was surprised to see that this would work, because I thought that text subtitles were just that – text. Silly me. Subtitles embedded into media files (like mp4 movies) can be encoded in dozens of different formats, each with unique features, capabilities, metadata, and payloads. The data and metadata in those subtitles can be hard to analyze, in part because of the many ways the subtitles are stored in a repository. To quote Check Point:

These subtitles repositories are, in practice, treated as a trusted source by the user or media player; our research also reveals that those repositories can be manipulated and be made to award the attacker’s malicious subtitles a high score, which results in those specific subtitles being served to the user. This method requires little or no deliberate action on the part of the user, making it all the more dangerous.

Unlike traditional attack vectors, which security firms and users are widely aware of, movie subtitles are perceived as nothing more than benign text files. This means users, Anti-Virus software, and other security solutions vet them without trying to assess their real nature, leaving millions of users exposed to this risk.

According to Check Point, more than 200 million users (or devices) are potentially vulnerable to this exploit. The risk?

Damage: By conducting attacks through subtitles, hackers can take complete control over any device running them. From this point on, the attacker can do whatever he wants with the victim’s machine, whether it is a PC, a smart TV, or a mobile device. The potential damage the attacker can inflict is endless, ranging anywhere from stealing sensitive information, installing ransomware, mass Denial of Service attacks, and much more.

Here’s an infographic from Check Point:

This type of vulnerability is reminiscent of steganography, where secret data is hidden inside image files. We have all become familiar with malicious macros, such as those hidden inside Microsoft Word .doc/.docx or Microsoft Excel .xls/.xlsx files. Those continue to become more sophisticated, even as antivirus and anti-malware scanners become more adept at detecting them. Similarly, executables and other malware can be hidden inside Adobe .pdf documents, or even inside image files.

Interestingly, sometimes that malware can be manually destroyed by format conversions. For example, you can turn a metadata-rich format into a dumb format. Turn a Word doc into rich text or plain text, and good-bye, malicious macro. Similarly, converting a malicious JPEG into a bitmap could wipe out any malware in the JPEG file’s header or footer. Of course, you’d lose other benefits as well, especially if there are benign or useful macros or metadata. That’s just how it goes.
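As a concrete illustration of that “dumb format” trick: a .docx file is just a zip archive, and pulling out only the visible text leaves any macro payload (which lives in separate archive parts such as word/vbaProject.bin) behind. Here’s a rough Python sketch, using a toy in-memory document rather than a real Word file, so the whole thing is self-contained:

```python
import io
import zipfile
import xml.etree.ElementTree as ET

W = "{http://schemas.openxmlformats.org/wordprocessingml/2006/main}"

def docx_to_text(data: bytes) -> str:
    """Extract only the visible text from a .docx, ignoring everything else.
    Macros live in other zip parts (e.g. word/vbaProject.bin), which are
    simply never read -- that's the whole 'conversion destroys malware' idea."""
    with zipfile.ZipFile(io.BytesIO(data)) as z:
        xml = z.read("word/document.xml")
    root = ET.fromstring(xml)
    return "".join(node.text or "" for node in root.iter(f"{W}t"))

# Build a toy .docx in memory: one text run plus a stand-in "macro" part
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr(
        "word/document.xml",
        '<w:document xmlns:w="http://schemas.openxmlformats.org/'
        'wordprocessingml/2006/main">'
        '<w:body><w:p><w:r><w:t>Quarterly report</w:t></w:r></w:p>'
        '</w:body></w:document>')
    z.writestr("word/vbaProject.bin", b"\x00fake-macro-bytes")

print(docx_to_text(buf.getvalue()))  # prints "Quarterly report"
```

A real converter would preserve formatting too, but the principle is the same: the text survives the trip, the executable payload doesn’t.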

See you at the movies!

From eWeek’s story, “Proposed Laptop Travel Ban Would Wreak Havoc on Business Travelers,” by Wayne Rash:

A current proposal from the Department of Homeland Security to mandate that large electronic devices be relegated to checked luggage is facing stiff resistance from airlines and business travelers.

Under the proposal, travelers with electronic devices larger than a cell phone would be required to carry them as checked luggage. Depending on the airline, those devices may either be placed in each passenger’s luggage, or the airline may offer secure containers at the gate.

While the proposed ban is still in the proposal stage, it could go into effect at any time. U.S. officials have begun meeting with European Union representatives in Brussels on May 17, and will continue their meetings in Washington the following week.

The proposed ban is similar to one that began in March that prohibited laptops and other large electronics from passenger cabins between certain airports in the Middle East and North Africa.

That ban has resulted in a significant reduction in travel between those countries and the U.S., according to a report by Emirates Airlines. That airline has already cut back on its flights to the U.S. because of the laptop ban.

The new laptop ban would work like the current one from the Middle East, except that it would affect all flights from Europe to the U.S.

The ban raises a series of concerns that so far have not been addressed by the Department of Homeland Security, most notably large lithium-ion batteries that are currently not allowed in cargo holds by many airlines because of their propensity to catch fire.

The story continues going into detail about the pros and cons – and includes some thoughtful analysis by yours truly.

Did you know that last year, 75% of data breaches were perpetrated by outsiders, and fully 25% involved internal actors? Did you know that 18% were conducted by state-affiliated actors, and 51% involved organized criminal groups?

That’s according to the newly released 2017 Data Breach Investigations Report from Verizon. It’s the 10th edition of the DBIR, and as always, it’s fascinating – and frightening at the same time.

The most successful tactic, if you want to call it that, used by hackers: stolen or weak (i.e., easily guessed) passwords. They were used in 81% of breaches. The report says that 62% of breaches featured hacking of some sort, and 51% involved malware.

More disturbing is that fully 66% of malware was installed by malicious email attachments. This means we’re doing a poor job of training our employees not to click links and open documents. We teach, we train, we test, we yell, we scream, and workers open documents anyway. Sigh. According to the report,

People are still falling for phishing—yes still. This year’s DBIR found that around 1 in 14 users were tricked into following a link or opening an attachment — and a quarter of those went on to be duped more than once. Where phishing successfully opened the door, malware was then typically put to work to capture and export data—or take control of systems.

Ransomware is big

We should not be surprised that the DBIR fingers ransomware as a major tool in the hacker’s toolbox:

Ransomware is the latest scourge of the internet, extorting millions of dollars from people and organizations after infecting and encrypting their systems. It has moved from the 22nd most common variety of malware in the 2014 DBIR to the fifth most common in this year’s data.

The Verizon report spends a lot of time on ransomware, saying,

Encouraged by the profitability of ransomware, criminals began offering ransomware-as-a-service, enabling anyone to extort their favorite targets, while taking a cut of the action. This approach was followed by a variety of experiments in ransom demands. Criminals introduced time limits after which files would be deleted, ransoms that increased over time, ransoms calculated based on the estimated sensitivity of filenames, and even options to decrypt files for free if the victims became attackers themselves and infected two or more other people. Multi-level marketing at its finest!

And this, showing another alarming year-on-year increase:

Perhaps the most significant change to ransomware in 2016 was the swing away from infecting individual consumer systems toward targeting vulnerable organizations. Overall, ransomware is still very opportunistic, relying on infected websites and traditional malware delivery for most attacks. Looking again through the lens of DBIR data, web drive-by downloads were the number one malware vector in the 2016 report, but were supplanted by email this year. Social actions, notably phishing, were found in 21% of incidents, up from just 8% in the 2016 DBIR. These emails are often targeted at specific job functions, such as HR and accounting—whose employees are most likely to open attachments or click on links—or even specific individuals.

Read the report

The DBIR covers everything from cyber-espionage to the dangers caused by failing to keep up with patches, fixes, and updates. There are also industry-specific breakouts, covering healthcare, finance, and so-on. It’s a big report, but worth reading. And sharing.

The word went out Wednesday, March 22, spreading from techie to techie. “Better change your iCloud password, and change it fast.” What’s going on? According to ZDNet, “Hackers are demanding Apple pay a ransom in bitcoin or they’ll blow the lid off millions of iCloud account credentials.”

A hacker group claims to have access to 250 million iCloud and other Apple accounts. They are threatening to reset all the passwords on those accounts – and then remotely wipe those phones using lost-phone capabilities — unless Apple pays up with untraceable bitcoins or Apple gift cards. The ransom is a laughably small $75,000.

What’s Happening at Apple?

According to various sources, at least some of the stolen account credentials appear to be legitimate. Whether that means all 250 million accounts are in peril, of course, is unknowable.

Apple seems to have acknowledged that there is a genuine problem. The company told CNET, “The alleged list of email addresses and passwords appears to have been obtained from previously compromised third-party services.” We obviously don’t know what Apple is going to do, or what Apple can do. It hasn’t put out a general call, at least as of Thursday, for users to change their passwords, which would seem to be prudent. It also hasn’t encouraged users to enable two-factor authentication, which should make it much more difficult for hackers to reset iCloud passwords without physical access to a user’s iPhone, iPad, or Mac.

Unless the hackers alter the demands, Apple has a two-week window to respond. From its end, it could temporarily disable password reset capabilities for iCloud accounts, or at least make the process difficult to automate, access programmatically, or even access more than once from a given IP address. So, it’s not “game over” for iCloud users and iPhone owners by any means.
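A throttle like that is simple to sketch. Here is an illustrative per-IP sliding-window rate limiter in Python (purely hypothetical, and certainly not Apple’s actual defense): after a few reset attempts from one address inside the window, further attempts are refused until old ones age out.

```python
import time
from collections import defaultdict, deque

class ResetThrottle:
    """Allow at most `limit` password-reset attempts per IP per `window` seconds.
    The `clock` parameter exists so the behavior is testable; it defaults to
    real time. Illustrative only."""
    def __init__(self, limit=3, window=3600.0, clock=time.monotonic):
        self.limit, self.window, self.clock = limit, window, clock
        self.hits = defaultdict(deque)   # ip -> timestamps of recent attempts

    def allow(self, ip):
        now = self.clock()
        q = self.hits[ip]
        while q and now - q[0] > self.window:
            q.popleft()                  # drop attempts outside the window
        if len(q) >= self.limit:
            return False                 # over the limit: refuse this attempt
        q.append(now)
        return True
```

An automated attack that burns through millions of credentials from a handful of addresses trips this immediately; a legitimate user who forgot a password never notices it.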

It could be that the hackers are asking for such a low ransom because they know their attack is unlikely to succeed. They’re possibly hoping that Apple will figure it’s easier to pay a small amount than to take any real action. My guess is they are wrong, and Apple will lock them out before the April 7 deadline.

Where Did This Come From?

Too many criminal networks have access to too much data. Where are they getting it? Everywhere. The problem multiplies because people reuse usernames and passwords. For nearly every site nowadays, the username is the email address. That means if you know my email address (and it’s not hard to find), you know my username for Facebook, for iCloud, for Dropbox, for Salesforce.com, for Windows Live, for Yelp. Using the email address for the login is superficially good for consumers: They are unlikely to forget their login.

The bad news is that account access now depends on a single piece of hidden information: the password. And people reuse passwords and choose weak passwords. So if someone steals a database from a major retailer with a million account usernames (which are email addresses) and passwords, many of those will also be Facebook logins. And Twitter. And iCloud.

That’s how hackers can quietly accumulate what they claim are 250 million iCloud passwords. They probably have 250 million email address / password pairs amalgamated from various sources: A million from this retailer, ten million from that social network. It adds up. How many of those will work in iTunes? Unknown. Not 250 million. But maybe 10 million? Or 20 million? Either way, it’s a nightmare for customers and a disaster for Apple, if those accounts are locked, or if phones are bricked.
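The amalgamation process described above is depressingly simple. Here’s a sketch in Python, with entirely made-up data, showing how separate breach dumps collapse into one big email-to-passwords map that can then be replayed against any service that uses email addresses as usernames:

```python
def merge_dumps(*dumps):
    """Amalgamate breach dumps into one email -> {passwords} map.
    Each dump is a list of (email, password) pairs; all data here is fictional."""
    combined = {}
    for dump in dumps:
        for email, password in dump:
            combined.setdefault(email.lower(), set()).add(password)
    return combined

# A million from this retailer, ten million from that social network...
retailer = [("alice@example.com", "hunter2"), ("bob@example.com", "qwerty")]
social   = [("alice@example.com", "hunter2"), ("carol@example.com", "letmein")]

creds = merge_dumps(retailer, social)

# Every pair in `creds` is a candidate login for a third service -- password
# reuse means some unknowable fraction will work there too.
print(len(creds))                  # prints 3 (distinct email addresses)
print(creds["alice@example.com"])  # prints {'hunter2'} -- reused in both dumps
```

That reused “hunter2” entry is the whole story: one retailer’s breach becomes a working credential everywhere Alice reused it.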

What’s the Answer?

As long as we use passwords, and users have the ability to reuse passwords, this problem will exist. Hackers are excellent at stealing data. Companies are bad at detecting breaches, and even worse about disclosing them unless legally obligated to do so.

Can Apple prevent those 250 million accounts from being seized? Probably. Will problems like this happen again and again and again? For sure, until we move away from any possibility of shared credentials. And that’s not happening any time soon.

To absolutely nobody’s surprise, the U.S. Central Intelligence Agency can spy on mobile phones, both Android and iPhone. It can also monitor the microphones on smart home devices like televisions.

This week’s disclosure of CIA programs by WikiLeaks has been billed as the largest-ever publication of confidential documents from the American spy agency. The document dump will appear in pieces; the first installment has 8,761 documents and files from the CIA’s Center for Cyber Intelligence, says WikiLeaks.

According to WikiLeaks, the CIA malware and hacking tools are built by EDG (Engineering Development Group), a software development group within the CIA’s Directorate for Digital Innovation. WikiLeaks says the EDG is responsible for the development, testing and operational support of all backdoors, exploits, malicious payloads, trojans, viruses and any other kind of malware used by the CIA.

Smart TV = Spy TV?

Another part of the covert program, code-named “Weeping Angel,” turns smart TVs into secret microphones. After infestation, Weeping Angel places the target TV in a ‘Fake-Off’ mode. The owner falsely believes the TV is off when it is on. In ‘Fake-Off’ mode the TV operates as a bug, recording conversations in the room and sending them over the Internet to a covert CIA server.

The New York Times reports the CIA has refused to explicitly confirm the authenticity of the documents. However, the government strongly implied their authenticity when the agency put out a statement to defend its work and chastise WikiLeaks, saying the disclosures “equip our adversaries with tools and information to do us harm.”

The WikiLeaks data dump talked about efforts to infect and control non-mobile systems. That includes desktops, notebooks and servers running Windows, Linux, Mac OS and Unix. The malware is distributed in many ways, including website viruses, software on CDs or DVDs, and portable USB storage devices.

Going mobile with spyware

What about the iPhone? Again, according to WikiLeaks, the CIA produces malware to infest, control and exfiltrate data from Apple products running iOS, such as iPhones and iPads. Similarly, other programs target Android. Says WikiLeaks, “These techniques permit the CIA to bypass the encryption of WhatsApp, Signal, Telegram, Wiebo, Confide and Cloackman by hacking the smart phones that they run on and collecting audio and message traffic before encryption is applied.”

The tech industry is scrambling to patch the vulnerabilities revealed by the WikiLeaks data dump. For example, Apple said,

Apple is deeply committed to safeguarding our customers’ privacy and security. The technology built into today’s iPhone represents the best data security available to consumers, and we’re constantly working to keep it that way. Our products and software are designed to quickly get security updates into the hands of our customers, with nearly 80 percent of users running the latest version of our operating system. While our initial analysis indicates that many of the issues leaked today were already patched in the latest iOS, we will continue work to rapidly address any identified vulnerabilities. We always urge customers to download the latest iOS to make sure they have the most recent security updates.

Enterprises should expect patches to come from every major hardware and software vendor. IT must be vigilant about applying those security updates. In addition, everyone should attempt to identify unpatched devices on the network, and deny those devices access to critical resources until they are properly patched and tested. We don’t want mobile devices to become spy devices.
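The “identify unpatched devices” step can be sketched simply: compare each device’s reported OS version against a patched baseline and flag stragglers. The inventory records and baseline version here are hypothetical examples, not any real MDM API:

```python
# Minimal sketch: flag devices running an OS version older than a
# patched baseline, so they can be denied access until updated.
# Device records and the baseline version are hypothetical.

def parse_version(v: str) -> tuple:
    """Turn '10.2.1' into (10, 2, 1) so versions compare numerically."""
    return tuple(int(part) for part in v.split("."))

def unpatched(devices: list, baseline: str) -> list:
    """Return the IDs of devices whose OS version is below the baseline."""
    floor = parse_version(baseline)
    return [d["id"] for d in devices if parse_version(d["os"]) < floor]

inventory = [
    {"id": "phone-001", "os": "10.2.1"},
    {"id": "phone-002", "os": "9.3.5"},   # behind the baseline
    {"id": "tablet-01", "os": "10.3"},
]
print(unpatched(inventory, "10.2.1"))  # ['phone-002']
```

In practice the inventory would come from an MDM platform or network scan, but the comparison logic is the same.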

“You walked 713 steps today. Good news is the sky’s the limit!”

Thank you, Pebble, for that encouragement yesterday.

The problem with fitness apps in smartwatches is that you have to wear the watch for them to work. When I am at home, I never wear a watch. Since I work from home, that means that I usually don’t have a watch on my wrist. And when I go out, sometimes I wear the Pebble, sometimes something else. For a recent three-day weekend trip away with my wife, for example, I carried the pocket watch she bought me for our 15th anniversary. So, it’s hard for the Pebble app to get an accurate read on my activity.

Yesterday, I only wore this watch for a brief period of time. The day before, not at all. That’s why Pebble thought that 713 steps was a great accomplishment.

(Too bad Pebble is out of business. I like this watch.)

Cybercriminals want your credentials and your employees’ credentials. When those hackers succeed in stealing that information, it can be bad for individuals – and even worse for corporations and other organizations. Credential theft is a scourge, and it isn’t going away.

Credentials come in two types. There are personal credentials, such as the login and password for an email account, bank and retirement accounts, credit-card numbers, airline membership program, online shopping and social media. When hackers manage to obtain those credentials, such as through phishing, they can steal money, order goods and services, and engage in identity theft. This can be extremely costly and inconvenient for victims, but the damage is generally contained to that one unfortunate individual.
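One way individuals and IT teams can check whether a password has already appeared in a breach, without ever transmitting the password itself, is the k-anonymity scheme used by lookup services such as Have I Been Pwned’s Pwned Passwords API: only the first five hex characters of the password’s SHA-1 hash leave your machine. A sketch of the client-side portion (the network call is omitted):

```python
# Sketch of the k-anonymity breach check: hash the password locally,
# send only the 5-character hash prefix to the lookup service, then
# search the returned suffix list locally. Nothing sensitive leaves
# your machine.
import hashlib

def sha1_prefix_suffix(password: str):
    """Split the uppercase SHA-1 hex digest into a 5-char prefix
    (safe to transmit) and a 35-char suffix (kept local)."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = sha1_prefix_suffix("password")
# Query e.g. https://api.pwnedpasswords.com/range/<prefix> and search
# the response for <suffix> locally.
print(prefix)  # 5BAA6
```

If the suffix appears in the service’s response, that password is known to attackers and should never be reused.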

Corporate digital credentials, on the other hand, are the keys to an organization’s network. Consider a manager, executive or information-technology worker within a typical medium-size or larger-size business. Somewhere in the organization is a database that describes that employee – and describes which digital assets that employee is authorized to use. If cybercriminals manage to steal the employee’s corporate digital credentials, the criminals can then access those same assets, without setting off any alarm bells. Why? Because they have valid credentials.

What might those assets be? Depending on the employee:

  • It might include file servers that contain intellectual property, such as pricing sheets, product blueprints, or patent applications.
  • It might include email archives that describe business plans. Or accounting servers that contain important financial information that could help competitors or allow for “insider trading.”
  • It might be human resources data that can help the hackers attack other individuals. Or engage in identity theft or even blackmail.

What if the stolen credentials are for individuals in the IT or information security department? The hackers can learn a great deal about the company’s technology infrastructure, perhaps including passwords to make changes to configurations, open up backdoors, or even disable security systems.

Read my whole story about this —including what to do about it — in Telecom Times, “The CyberSecurity Scourge of Credentials Theft.”

Apple isn’t as friendly or as communicative as one would think. Earlier today, I received a panic call from someone trying to sync videos to her iPad from a Mac – and receiving a message that there was no suitable application on the iPad. Huh? That made no sense. The app for playing locally stored videos on an iPad is called Videos, and it’s a standard, built-in app. What’s the deal?

In short: With the iOS 10.2 operating system update, Apple renamed the Videos app to TV. And it has to be installed from the Apple App Store. It’s a free download, but who knew? Apparently not me. And not a lot of people who queried their favorite search engine with phrases like “ipad videos app missing.”

What’s worse, the change had the potential to delete locally stored video content. One dissatisfied user posted on an Apple discussion forum:

New TV App deleted home videos from iPad

I had a bunch of home videos on my iPad, and when I updated to iOS 10.2, the new TV App replaced videos. On my iPhone 6, this process went fine. I launched TV, and up popped the Library, and within it was a sub-menu for Home Videos. The one and only one I had on my iPhone is still there.

But I had dozens on my iPad and now they are all gone. Not only are they all gone, but there is no sub-menu for Home Videos AT ALL! I can probably replace them by synching to my laptop, but this is a time-consuming pain in the *$$, and why should I have to do this at all?

This change was unveiled in October 2016 with much fanfare; Apple’s announcement claimed:

Apple today introduced the new TV app, offering a unified experience for discovering and accessing TV shows and movies from multiple apps on Apple TV, iPhone and iPad. The TV app provides one place to access TV shows and movies, as well as a place to discover new content to watch. Apple also introduced a new Siri feature for Apple TV that lets viewers tune in directly to live news and sporting events across their apps. Watching TV shows and movies across Apple devices has never been easier.

The update appeared, for U.S. customers at least, on December 12, 2016. That’s when iOS 10.2 came out. Buh-bye, Videos app!

The change moved a piece of core functionality from iOS itself into an app. The benefits: The new TV app can be updated on its own schedule, not tied to iOS releases, and iOS releases themselves can be smaller. The drawback: Users must manually install the TV app.

Once the TV app is installed, the user can re-sync the videos from a Mac or Windows PC running iTunes. This should restore the missing content, assuming the content is on the desktop/notebook computer. How rude, Apple!

Let me add, snarkily, that the new name is stupid since there’s already a thing from Apple called TV – Apple TV.

I can’t trust the Internet of Things. Neither can you. There are too many players and too many suppliers of the technology that can introduce vulnerabilities in our homes, our networks – or elsewhere. It’s dangerous, my friends. Quite dangerous. In fact, it can be thought of as a sort of Fifth Column, but not in the way many of us expected.

Merriam-Webster defines a Fifth Column as “a group of secret sympathizers or supporters of an enemy that engage in espionage or sabotage within defense lines or national borders.” In today’s politics, there’s a lot of talk about secret sympathizers sneaking across national borders, such as terrorists posing as students or refugees. Such “bad actors” are generally part of an organization, recruited by state actors, and embedded into enemy countries for long-term penetration of society.

There have been many real-life Fifth Column activists in recent global history. Think about Kim Philby and Anthony Blunt, part of the “Cambridge Five” who worked for spy agencies in the United Kingdom in the post-World War II era, but who themselves turned out to be double agents working for the Soviet Union. Fiction, too, is replete with Fifth Column spies. They’re everywhere in James Bond movies and John le Carré novels.

Am I too paranoid?

Let’s bring our paranoia (or at least, my paranoia) to the Internet of Things, and start by way of the late 1990s and early 2000s. I remember quite clearly the introduction of telco and network routers by Huawei, and concerns that the Chinese government may have embedded software into those routers in order to surreptitiously listen to telecom networks and network traffic, to steal intellectual property, or to do other mischief like disable networks in the event of a conflict. (This was before the term “cyberwarfare” was widely used.)

Recall that Huawei was founded by a former engineer in the Chinese People’s Liberation Army, and the company was heavily supported by Beijing. There were also lawsuits alleging that Huawei infringed on Cisco’s intellectual property – i.e., stole its source code. Thus, there was lots of concern surrounding the company and its products.

Read my full story about this, published in Pipeline Magazine, “The Surprising and Dangerous Fifth Column Hiding Within the Internet of Things.”

What’s on the industry’s mind? Security and mobility are front-and-center of the cerebral cortex, as two of the year’s most important events prepare to kick off.

The Security Story

At the RSA Conference 2017 (February 13-17 in San Francisco), expect to see the best of the security industry, from solutions providers to technology firms to analysts. RSA can’t come too soon.

Ransomware, which exploded into the public’s mind last year with high-profile incidents, continues to run rampant. Attackers are turning to ever-bigger targets, with ever-bigger fallout. Hospitals are still being crippled (this was big in 2016), but now hotel guests are being locked out of their rooms, police departments are losing important crime evidence, and even CCTV footage has been locked away.

What makes ransomware work? Human weakness, for the most part. Many successful ransomware attacks begin with either generalized phishing or highly sophisticated and targeted spearphishing. Once the target user has clicked on a link in a malicious email or website, odds are good that his/her computer will be infected. From there, the malware can do more than encrypt data and request a payout. It can also spread to other computers on the network, install spyware, search for unpatched vulnerabilities and cause untold havoc.

Expect to hear a lot about increasingly sophisticated ransomware at RSA. We’ll see solutions to help, ranging from ever-more-sophisticated email scanners and endpoint security tools to isolation platforms and tools that prevent malware from spreading beyond the initially affected machine.

Also expect to hear plenty about artificial intelligence as the key to preventing and detecting attacks that evade traditional technologies like signatures. AI has the ability to learn and respond in ways that go far beyond anything that humans can do – and when coupled with increasingly sophisticated threat intelligence systems, AI may be the future of computer security.

The Mobility Story

Halfway around the world, mobility is only part of the story at Mobile World Congress (February 27 – March 2 in Barcelona). There will be many sessions about 5G wireless, which can provision not only traditional mobile users, but also industrial controls and the Internet of Things. AT&T recently announced that it will launch 5G service (with peak speeds of 400Mbps or better) in two American cities, Austin and Indianapolis. While the standards are not yet complete, that’s not stopping carriers and the industry from moving ahead.

Also key to the success of all mobile platforms is cloud computing. Microsoft is moving more aggressively to the cloud, going beyond Azure and Office 365 with a new Windows 10 Cloud edition, a simplified experience designed to compete against Google’s Chrome platform.

The Internet of Things is also roaring to life, and it means a lot more than fitness bands and traffic sensors. IoT applications are showing up in everything from industrial controls to embedded medical devices to increasingly intelligent cars and trucks. What makes it work? Big batteries, reliable wireless, industry standards and strong security. Every type of security player is involved with IoT, from the cloud to wireless to endpoint protection. You’ll hear more about security at Mobile World Congress than in the past, because the threats are bigger than ever. And so are the solutions.

I was dismayed this morning to find an email from Pebble — the smart watch folks — essentially announcing their demise. The company is no longer a viable concern, says the message, and the assets of the company are being sold to Fitbit. Some of Pebble’s staff will go to Fitbit as well.

This is a real loss. The Pebble is an excellent watch. I purchased the original monochrome-screen model by signing onto their Kickstarter campaign, back in April 2012, for an investment of $125.

The Kickstarter watch’s screen became a little flakey after a few years. I purchased the Pebble Time – a much-improved color version – in May 2016, for the odd price of $121.94 through Amazon. You can see the original Pebble, with a dead battery, on the left, and the Pebble Time on the right. The watchface I’ve chosen isn’t colorful, so you can’t see that attribute.

I truly adore the Pebble Time. Why?

  • The battery life is a full week; I don’t travel with a charging cable unless it’s a long trip.
  • The watch does everything I want: The watch face I’ve chosen can be read quickly, and is always on.
  • The watch lets me know about incoming text messages. I can answer phone calls in the car (using the speakerphone) by pressing a button on the watch.
  • Also in the car I can control my phone’s music playback from the watch.
  • It was inexpensive enough that if it gets lost, damaged or stolen, no big deal.

While I love the concept of the Apple Watch, it’s too complicated. The battery life is far too short. And I don’t need the extra functions. The Pebble Time is (or rather was) far less expensive.

Fortunately, my Pebble Time should keep running for a long, long time. Don’t know what will replace it, when the time comes. Hopefully something with at least a week of battery life.

Here’s the statement from Pebble:

Pebble is joining Fitbit

Fitbit has agreed to acquire key Pebble assets. Due to various factors, Pebble can no longer operate as an independent entity, and we have made the tough decision to shut down the company. The deal finalized today preserves as much of Pebble as possible.

Pebble is ceasing all hardware operations. We are no longer manufacturing, promoting, or selling any new products. Active Pebble models in the wild will continue to work.

Making Awesome Happen will live on at Fitbit. Much of our team and resources will join Fitbit to deliver new “moments of awesome” in future Fitbit products, developer tools, and experiences. As our transition progresses, we’ll have exciting new stories to tell and milestones to celebrate.

It’s no doubt a bittersweet time. We’ll miss what we’re leaving behind, but are excited for what the future holds. It will be important for Pebblers to extend a warm welcome to Fitbit—as fans and customers—sharing what they love about Pebble and what they’d like to see next.

From company-issued tablets to BYOD (bring your own device) smartphones, employees are making the case that mobile devices are essential for productivity, job satisfaction, and competitive advantage. Except in the most regulated industries, phones and tablets are part of the landscape, but their presence requires a strong security focus, especially in the era of non-stop malware, high-profile hacks, and new vulnerabilities found in popular mobile platforms. Here are four specific ways of examining this challenge that can help drive the choice of both policies and technologies for reducing mobile risk.

Protect the network: Letting any mobile device on the business network is a risk, because if the device is compromised, the network (and all of its servers and other assets) may be compromised as well. Consider isolating internal WiFi links to secured network segments, and only permit external access via virtual private networks (VPNs). Install firewalls that guard the network by recognizing not only authorized devices, but also authorized users — and authorized applications. Be sure to keep careful tabs on devices accessing the network, from where, and when.
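The “authorized devices, authorized users, and authorized applications” idea above can be sketched as a joint allow-list check: a request passes only when all three match together. The entries below are hypothetical examples, not any real firewall configuration:

```python
# Minimal sketch of a device + user + application access check.
# A request is permitted only when the full triple is authorized;
# a valid device with the wrong user (or app) is still denied.

ALLOWED = {
    ("laptop-042", "alice", "crm"),
    ("laptop-042", "alice", "email"),
    ("phone-007",  "bob",   "email"),
}

def permit(device: str, user: str, app: str) -> bool:
    """True only when device, user, and application are jointly authorized."""
    return (device, user, app) in ALLOWED

print(permit("laptop-042", "alice", "crm"))  # True
print(permit("phone-007",  "bob",   "crm"))  # False
```

The design point: checking the three attributes jointly, rather than independently, means a stolen device or a compromised account alone is not enough to reach a protected application.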

Protect the device: A mobile device can be compromised in many ways: It might be stolen, or the user might install malware that provides a gateway for a hacker. Each mobile device should be protected by strong passwords, not only for the device but also for critical business apps. Don’t allow corporate data to be stored on the device itself. Ensure that there are remote-wipe capabilities if the device is lost. And consider installing a Mobile Device Management (MDM) platform that can give IT full control over the mobile device – or at least over those portions of an employee-owned device that might ever be used for business purposes.

Protect the data: To be productive with their mobile devices, employees want access to important corporate assets, such as email, internal websites, ERP or CRM applications, document repositories, as well as cloud-based services. Ensure that permissions are granted specifically for needed services, and that all access is encrypted and logged. As mentioned above, never let corporate data – including documents, emails, chats, internal social media, contacts, and passwords – be stored or cached on the mobile device. Never allow co-mingling of personal and business data, such as email accounts. Yes, it’s a nuisance, but make the employee log into the network, and authenticate into enterprise-authorized applications, each and every time. MDM platforms can help enforce those policies as well.

Protect the business: The policies regarding mobile access should be worked out along with corporate counsel, and communicated clearly to all employees before they are given access to applications and data. The goal isn’t to be heavy-handed, but rather, to gain their support. If employees understand the stakes, they become allies in helping protect business interests. Mobile access is risky for enterprises, and with today’s aggressive malware, the potential for harm has never been higher. It’s not too soon to take it seriously.

Are you a coder? Architect? Database guru? Network engineer? Mobile developer? User-experience expert? If you have hands-on tech skills, get those hands dirty at a Hackathon.

Full disclosure: Years ago, I thought Hackathons were, well, silly. If you’ve got the skills and extra energy, put them to work for coding your own mobile apps. Do a startup! Make some dough! Contribute to an open-source project! Do something productive instead of taking part in coding contests!

Since then, I’ve seen the light, because it’s clear that Hackathons are a win-win-win.

  • They are a win for techies, because they get to hone their abilities, meet people, and learn stuff.
  • They are a win for Hackathon sponsors, because they often give the latest tools, platforms and APIs a real workout.
  • They are a win for the industry, because they help advance the creation and popularization of emerging standards.

One upcoming Hackathon that I’d like to call attention to: The MEF LSO Hackathon will be at the upcoming MEF16 Global Networking Conference, in Baltimore, Nov. 7-10. The work will support Third Network service projects that are built upon key OpenLSO scenarios and OpenCS use cases for constructing Layer 2 and Layer 3 services. You can read about a previous MEF LSO Hackathon here.

Build your skills! Advance the industry! Meet interesting people! Sign up for a Hackathon!

Web filtering. The phrase connotes keeping employees from spending too much time monitoring Beanie Baby auctions on eBay, and stopping school children from encountering (accidentally or deliberately) naughty images on the internet. Were it that simple — but nowadays, web filtering goes far beyond monitoring staff productivity and maintaining the innocence of childhood. For nearly every organization today, web filtering should be considered an absolute necessity. Small business, K-12 school district, Fortune 500, non-profit or government… it doesn’t matter. The unfiltered internet is not your friend, and legally, it’s a liability: a lawsuit waiting to happen.

Web filtering means blocking internet applications – including browsers – from contacting or retrieving content from websites that violate an Acceptable Use Policy (AUP). The policy might set rules blocking some specific websites (like a competitor’s website). It might block some types of content (like pornography), or detected malware, or even access to external email systems via browser or dedicated clients. In some cases, the AUP might include what we might call government-mandated restrictions (like certain websites in hostile countries, or specific news sources).
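A minimal sketch of how such an AUP check might be implemented, assuming a hypothetical blocklist of specific domains plus a category database (real web filters use far larger, vendor-maintained category feeds):

```python
# Sketch of AUP-driven web filtering: a request is blocked when its
# host matches a blocked domain, or when the host's category matches
# a blocked category. All domains and categories are hypothetical.
from urllib.parse import urlparse

BLOCKED_DOMAINS = {"competitor.example", "webmail.example"}
BLOCKED_CATEGORIES = {"adult", "malware"}
CATEGORY_DB = {"badsite.example": "malware", "news.example": "news"}

def allowed(url: str) -> bool:
    """Return True if the AUP permits fetching this URL."""
    host = urlparse(url).hostname or ""
    if host in BLOCKED_DOMAINS:
        return False
    return CATEGORY_DB.get(host, "uncategorized") not in BLOCKED_CATEGORIES

print(allowed("https://news.example/story"))       # True
print(allowed("http://badsite.example/download"))  # False
```

Note the policy choice buried in the last line: uncategorized sites are allowed by default here; a stricter AUP might deny them instead.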

Unacceptable use in the AUP

The specifics of the AUP might be up to the organization to define entirely on its own; that would be the case for a small business, perhaps. Government organizations, such as schools or military contractors, might have specific AUP requirements placed on them by funders or government regulators, thereby becoming a compliance/governance issue as well. And of course, legal counsel should be sought when creating policies that balance an employee’s ability to access content of his/her choice, against the company’s obligations to protect the employee (or the company) from unwanted content.

It sounds easy – the organization sets an AUP, consulting legal, IT and the executive suite. The IT department implements the AUP through web filtering, perhaps with software installed and configured on devices; perhaps through firewall settings at the network level; and perhaps through filters managed by the internet service provider. It’s not simple, however. The internet is constantly changing, employees are adept at finding ways around web filters; and besides, it’s tricky to translate policies written in English (as in the legal policy document) into technological actions. We’ll get into that a bit more shortly. First, let’s look more closely at why organizations need those Acceptable Use Policies, and what should be in them.

  • Improving employee productivity. This is the low-hanging fruit. You may not want employees spending too much time on Facebook on their company computers. (Of course, if they are permitted to bring mobile devices into the office, they can still access social media via cellular.) That’s a policy consideration, though the jury is out on whether a blanket block is the best way to improve productivity.
  • Preserving bandwidth. For technical reasons, you may not want employees streaming Netflix movies or Hulu-hosted classic TV shows across the business network. Seinfeld is fun, but not on company bandwidth. As with social media, this is truly up to the organization to decide.
  • Blocking email access. Many organizations do not want their employees accessing external email services from the business computers. That’s not only for productivity purposes, but also makes it difficult to engage in unapproved communications – such as emailing confidential documents to yourself. Merely configuring your corporate email server to block the exfiltration of intellectual property is not enough if users can access personal gmail.com or hushmail.com accounts. Blocking external email requires filtering multiple protocols as well as specific email hosts, and may be required to protect not only your IP, but also customers’ data, in addition to complying with regulations from organizations like the U.S. Securities and Exchange Commission.
  • Blocking access to pornography and NSFW content. It’s not that you are being a stick-in-the-mud prude, or protecting children. The initials NSFW (not safe for work) are often used as a joke, but in reality, some content can be construed as contributing to a hostile work environment. Just like the need to maintain a physically safe work environment – no blocked fire exits, for example – so too must you maintain a safe internet environment. If users can be unwillingly subjected to offensive content by other employees, there may be significant legal, financial and even public-relations consequences if it’s seen as harassment.
  • Blocking access to malware. A senior manager receives a spear-phishing email that looks legit. He clicks the link and, wham; ransomware is on his computer. Or spyware, like a keylogger. Or perhaps a back-door that allows other access by hackers. You can train employees over and over, and they will still click on unsafe email links or on web pages. Anti-malware software on the computer can help, but web filtering is part of a layered approach to anti-malware protection. This applies to trackers as well: As part of the AUP, the web filters may be configured to block ad networks, behavior trackers and other web services that attempt to glean information about your company and its workers.
  • Blocking access to specific internet applications. Whether you consider it Shadow IT or simply an individual’s personal preference, it’s up to an AUP to decide which online services should be accessible; either through an installed application or via a web interface. Think about online storage repositories such as Microsoft OneDrive, Google Drive, Dropbox or Box: Personal accounts can be high-bandwidth conduits for exfiltration of vast quantities of valuable IP. Web filtering can help manage the situation.
  • Compliance with government regulations. Whether it’s a military base commander making a ruling, or a government restricting access to news sites out-of-favor with the current regime; those are rules that often must be followed without question. It’s not my purpose here to discuss whether this is “censorship,” though in some cases it certainly is. However, the laws of the United States do not apply outside the United States, and blocking some internet sites or types of web content may be part of the requirements for doing business in some countries or with some governments. What’s important here is to ensure that you have effective controls and technology in place to implement the AUP – but don’t go broadly beyond it.
  • Compliance with industry requirements. Let’s use the example of the requirements that schools or public libraries must protect students (and the general public) from content deemed to be unacceptable in that environment. After all, just because a patron is an adult doesn’t mean he/she is allowed to watch pornography on one of the library’s publicly accessible computers, or even on his/her computer on the library’s Wi-Fi network.

What about children?

A key ingredient in creating an AUP for schools and libraries in the United States is the Children’s Internet Protection Act (CIPA). In order to receive government subsidies or discounts, schools and libraries must comply with these regulations. (Other countries may have an equivalent to these policies.)

Learn more about how the CIPA should drive the AUP for any organization where minors can be found, and how best to implement an AUP for secure protection. That’s all covered in my article for Upgrade Magazine, “Web filtering for business: Keep your secrets safe, and keep your employees happy.”

Thank you, NetGear, for the response to my July 11 opinion essay for NetworkWorld, “Throwing our IoT investment in the trash thanks to NetGear.” In that story, I used the example of our soon-to-be-obsolete VueZone home video monitoring system: At the end of 2017, NetGear is turning off the back-end servers that make VueZone work – and so all the hardware will become fancy camera-shaped paperweights.

The broader message of the story is that every IoT device tied into a proprietary back-end service will be turned into recycleware if (or when) the service provider chooses to turn that service off. My friend Jason Perlow picked up this theme in his story published on July 12 on ZDNet, “All your IoT devices are doomed,” and included a nice link to my NetworkWorld story. As Jason wrote,

First, it was Aether’s smart speaker, the Cone. Then, it was the Revolv smart hub. Now, it appears NetGear’s connected home wireless security cameras, VueZone, is next on the list.

I’m sure I’ve left out more than a few others that have slipped under the radar. It seems like every month an Internet of Things (IoT) device becomes abandonware after its cloud service is discontinued.

Many of these devices once disconnected from the cloud become useless. They can’t be remotely managed, and some of them stop functioning as standalone (or were never capable of it in the first place). Are these products going end-of-life too soon? What are we to do about this endless pile of e-waste that seems to be the inevitable casualty of the connected-device age?

I would like to publicly acknowledge NetGear for sending a quick response to my story. Apparently — and contrary to what I wrote — the company did offer a migration path for existing VueZone customers. I can’t find the message anywhere, but can’t ignore the possibility that it was sucked into the spamverse.

Here is the full response from Nathan Papadopulos, Global Communications & Strategic Marketing for NetGear:

Hello Alan,

I am writing in response to your recent article about disposing of IoT products. As you may know, the VueZone product line came to Netgear as part of our acquisition of Avaak, Inc. back in 2012, and is the predecessor of the current Arlo security system. Although we wanted to avoid interruptions of the VueZone services as much as possible, we are now faced with the need to discontinue support for the camera line. VueZone was built on technologies which are now outdated and a platform which is not scalable. Netgear has since shifted our resources to building better, more robust products which are the Arlo system of security cameras. Netgear is doing our best to help VueZone customers migrate to the Arlo platform by offering significant discounts, exclusive to our VueZone customers.

1. On July 1, 2016, Netgear officially announced the discontinuation of VueZone services to VueZone customers. Netgear has sent out an email notification to the entire VueZone customer base with the content in the “Official End-of-Services Announcement.” Netgear is providing the VueZone customers with an 18-month notice, which means that the actual effective date of this discontinuation of services will be on January 1, 2018.

2. Between July 2 and July 6, 26,000+ customers who currently have an active VueZone base station have received an email with an offer to purchase an Arlo 4-camera kit. There will be two options for them to choose from:

a. Standard Arlo 4-camera kit for $299.99

b. Refurbished Arlo 4-camera kit for $149.99

Both refurbished and new Arlo systems come with the NETGEAR limited 1-year hardware warranty. The promotion will run until the end of July 31, 2016.

It appears NetGear is trying to do the right thing, though they lose points for offering the discounted migration path for less than one month. Still, the fact remains that obsolescence of service-dependent IoT devices is a big problem. Some costly devices will cease functioning if the service goes down; others will lose significant functionality.

And thank you, Jason, for the new word: Abandonware.