Can you name that Top 40 pop song in 10 seconds? Sure, that sounds easy. Can you name that pop song—even if it’s played slightly out of tune? Uh oh, that’s a lot harder. However, if you can guess 10 in a row, you might share in a cash prize.

That’s the point of “Out of Tune,” an online music trivia game where players mostly in their teens and 20s compete to win small cash prizes – just enough to make the game more fun. And fun is the point of “Out of Tune,” launched in August by FTW Studios, a startup based in New York. What’s different about “Out of Tune” is that it’s designed for group play in real time. The intent is that players will get together in groups, and play together using their Android or Apple iOS phones.

Unlike in first-person shooter games, or other activities where a game player is interacting with the game’s internal logic, “Out of Tune” emphasizes the human-to-human aspect. Each game is broadcast live from New York — sometimes from FTW Studios’ facilities, sometimes from a live venue. Each game is hosted by a DJ, and is enjoyed through streaming video. “We’re not in the game show business or the music business,” says Avner Ronen, FTW Studios’ founder and CEO. “We’re in the shared experiences business.”

Because of all that human interaction, game players should feel like they’re part of something big, part of a group. “It’s social,” says Ronen, noting that 70% of today’s participants are female. “The audience is younger, and people play with their friends.”

How does the game work? Twice a day, at 8 p.m. and 11 p.m. Eastern time, a DJ launches the game live from New York City. The game consists of 10 pop songs played slightly out of tune—and players, using a mobile app on their phones, have 10 seconds to guess the song. Players who guess all the songs correctly share in that event’s prize money.
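The payout rule above is simple enough to sketch. Here is a minimal, hypothetical model of the logic (FTW Studios hasn’t published its implementation; the function, data shapes, and integer-split behavior are all assumptions):

```python
# Hypothetical sketch of "Out of Tune" scoring: players who name all 10
# songs correctly, each within the 10-second window, split the prize pot.
SONGS_PER_GAME = 10
SECONDS_PER_GUESS = 10

def winners(answers, pot_cents):
    """answers maps player -> list of (correct: bool, seconds: float).
    Returns {player: share_in_cents} for players who got all 10 right in time."""
    perfect = [
        player for player, guesses in answers.items()
        if len(guesses) == SONGS_PER_GAME
        and all(ok and t <= SECONDS_PER_GUESS for ok, t in guesses)
    ]
    if not perfect:
        return {}
    share = pot_cents // len(perfect)  # integer split; any remainder stays with the house
    return {player: share for player in perfect}
```

If nobody runs the table, the sketch returns an empty dict; a real game would presumably roll the pot over or keep it.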

Learn more about FTW Studios – and how the software works – in my story in Forbes, “This Online Game Features Out-Of-Tune Pop Songs. The End Game Is About Much More.”

Knowledge is power—and knowledge with the right context at the right moment is the most powerful of all. Emerging technologies will leverage the power of context to help people become more efficient, and one of the first to do so is a new generation of business-oriented digital assistants.

Let’s start by distinguishing a business digital assistant from consumer products such as Apple’s Siri, Amazon’s Echo, and Google’s Home. Those cloud-based technologies have proved themselves at tasks like information retrieval (“How long is my commute today?”) and personal organization (“Add diapers to my shopping list”). Those services have some limited context about you, like your address book, calendar, music library, and shopping cart. What they don’t have is deep knowledge about your job, your employer, and your customers.

In contrast, a business digital assistant needs much richer context to handle the kind of complex tasks we do at work, says Amit Zavery, executive vice president of product development at Oracle. Which sorts of business tasks? How about asking a digital assistant to summarize the recent orders from a company’s three biggest customers in Dallas; set up a conference call with everyone involved with a particular client account; create a report of all employees who haven’t completed information security training; figure out the impact of a canceled meeting on a travel plan; or pull reports on accounts receivable deviations from expected norms?

Those are usually tasks for human associates—often a tech-savvy person in supply chain, sales, finance, or human resources. That’s because so many business tasks require context about the employee making the request and about the organization itself, Zavery says. A digital assistant’s goal should be to reduce the amount of mental energy and physical steps needed to perform such tasks.
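To see why such requests need organizational context, consider how one of them (“summarize the recent orders from our three biggest customers in Dallas”) decomposes into ordinary data operations once an assistant can reach customer and order records. This is purely illustrative; the record fields and function are hypothetical, not any vendor’s API:

```python
# Illustrative decomposition of "recent orders from our three biggest
# customers in Dallas" into plain data operations, given business context
# (customer records and order history). All field names are hypothetical.
def top_customers_orders(customers, orders, city, top_n=3, since="2018-01-01"):
    local = [c for c in customers if c["city"] == city]
    biggest = sorted(local, key=lambda c: c["annual_revenue"], reverse=True)[:top_n]
    ids = {c["id"] for c in biggest}
    # ISO date strings compare correctly as plain strings.
    return [o for o in orders if o["customer_id"] in ids and o["date"] >= since]
```

The hard part for an assistant isn’t the query itself; it’s knowing which records exist, what “biggest” means to this business, and that the requester is allowed to see them.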

Learn more in my article for Forbes, “The One Thing Digital Assistants Need To Become Useful At Work: Context.”

“What type of dog are you?” “I scored 9 out of 10 on this vocabulary test! Can you beat me? Take the quiz!” “Are you a true New Yorker?”

If you use Facebook (or other social media sites) you undoubtedly see quizzes like this nearly every day. Sometimes the quizzes appear in Facebook advertisements. Sometimes they appear because one of your friends took the quiz, and the quiz appeared as a post by your friend.

Is it safe to take those quizzes? As with many security topics, the answer is a somewhat vague “yes and no.” There are two areas to think about. The first is privacy – are you giving away information that should be kept confidential? The second is, by interacting with the quiz, are you giving permission for future interactions? Let’s talk about both those aspects, and then you can make an informed decision.

Bear in mind, however, that quizzes like this were likely used by Cambridge Analytica to harvest personal details about millions of Facebook users. Those details were allegedly used to build voter profiles for targeted political advertising.

Personal Dossier

Let’s start with content. When you take a quiz, you may not realize the extent of the personal information you are providing. Does the quiz ask you for your favorite color? For the year you graduated secondary school? For the type of car you drive? All of that information could potentially be aggregated into a profile. That’s especially true if you take multiple quizzes from the same company.

You don’t know, and you can’t realistically learn, if the organization behind the quiz is storing the information — and what it’s doing with it. Certainly, they can tag you as someone who likes quizzes, and show you more of them. However, are they using that information to profile you for their advertisements? Are they depositing cookies or other tracking mechanisms on your computer? Are they selling that information to other organizations?
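To make the aggregation risk concrete, here is a minimal sketch of how a quiz operator could merge answers from separate quizzes into one dossier per respondent (the field names are hypothetical):

```python
# Sketch: a quiz operator accumulating answers from multiple quizzes
# into a single profile per respondent. Field names are hypothetical.
from collections import defaultdict

profiles = defaultdict(dict)

def record_quiz(user_id, answers):
    """Merge one quiz's answers into the user's accumulated dossier."""
    profiles[user_id].update(answers)

record_quiz("u42", {"favorite_color": "blue"})
record_quiz("u42", {"graduation_year": 1999, "car": "Subaru"})
# After two "harmless" quizzes, the operator holds a combined profile
# containing a favorite color, a graduation year, and a car model.
```

Each quiz in isolation looks trivial; the dossier that accumulates across them is what has marketing (and identity-theft) value.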

A quiz about your favorite color is probably benign. A quiz about “What type of dog are you?” might indicate that you are a dog owner. It’s likely that ads for dog food might be in your future!

Be wary of quizzes that ask for any information that might be used for identity theft, like your home town or the year you were born. While you might sometimes post information like that on Facebook, that information may not be readily accessible to third parties, like the company that offers up those fun quizzes. If you provide such info to the quiz company, you are handing it to them on a silver platter.

Consider the “Is My Dog Fat Quiz,” hosted on the site GoToQuiz. It asks for your age range and your gender – which is totally unnecessary for asking about your dog’s weight and dietary habits. (You can see the lack of professionalism in misspellings like, “How much excersize does your dog get?”) This quiz isn’t about you or your dog; it’s about gathering information for Internet marketers.

Permission Granted

Second, you’re giving implicit permission for future interactions. Sometimes when you click on a Facebook quiz, you take the quiz right inside Facebook. When you do so, you are interacting with the quiz giver – which means that future posts or quizzes by that quiz giver will show up in your news feed. You may be totally fine with that; it’s not particularly harmful. However, you should be aware that this is the case. (Those posts and quizzes may show up in your friends’ news feeds as well, spreading the marketer’s reach.)

What concerns me more is when clicking the quiz opens up an external website. When you are on an external website, whatever happens is outside of Facebook’s privacy protections and security protocols. You have no idea what the quiz site will do with your information.

Well, perhaps now you do.

Go ahead, blame the user. You can’t expect end users to protect their Internet of Things devices from hacks or breaches. They can’t. They won’t. Security must be baked in. Security must be totally automatic. And security shouldn’t allow end users to mess anything up, especially if the device has some sort of Web browser.

Case in point: medical devices that have some sort of network connection, and thus qualify as IoT. In some cases, those connections might be very busy, connecting to a cloud service to report back telemetry and diagnostics, with the ability for a doctor to adjust functionality. In other cases, the connections might be quiet, used only for firmware updates. In either case, though, any connection might lead to a vulnerability.

According to the Annual Threat Report: Connected Medical Devices, from Zingbox, the most common IoT devices are infusion pumps, followed by imaging systems. Despite their #2 status, the study says that those imaging systems have the most security issues:

They account for 51% of all security issues across tens of thousands devices included in this study. Several characteristics of imaging systems attribute to it being the most risky device in an organization’s inventory. Imaging systems are often designed on commercial-off-the-shelf (COTS) OS, they are expected to have long lifespan (15-20 years), very expensive to replace, and often outlive the service agreement from the vendors as well as the COTS provider.

This is not good. For all devices, the study says that, “Most notably, user practice issues make up 41% of all security issues. The user practice issues consist of rogue applications and browser usage including risky internet sites.” In addition, Zingbox says, “Unfortunately, outdated OS/SW (representing 33% of security issues) is the reality of connected medical devices. Legacy OS, obsolete applications, and unpatched firmware makes up one-third of all security issues.”

Need to Restrict IoT Device Access to Websites

Many devices contain embedded web browsers. Not infusion pumps, of course, but other devices, such as those imaging systems. Network access for such devices should be severely restricted – the embedded browser on a medical device shouldn’t be able to access eBay or Amazon or the New York Times – or anything other than the device’s approved services. As the study explains, “Context-aware policy enforcement should be put in place to restrict download of rogue applications and enable URL access specific to the operation of the device.”
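Such context-aware restriction can be approximated with an egress allowlist: permit the device’s approved service endpoints and deny everything else. A minimal sketch, with hypothetical domain names:

```python
# Sketch of egress filtering for an embedded device browser: allow only
# the device's approved service endpoints. Domain names are hypothetical.
from urllib.parse import urlparse

APPROVED_HOSTS = {"telemetry.vendor.example", "updates.vendor.example"}

def allow_request(url):
    """Permit a request only to an approved host or one of its subdomains."""
    host = urlparse(url).hostname or ""
    return any(host == h or host.endswith("." + h) for h in APPROVED_HOSTS)
```

In practice this policy would live in a network gateway or proxy, not on the device, so a compromised device can’t bypass it.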

Even if the device operator’s intentions are good, you don’t want the device used to access, say, Gmail. And then get a virus. Remember, many of the larger IoT medical devices run Windows, and may not have up-to-date malware protection. Or any malware protection whatsoever.

When planning out IoT security, the device must be protected from the user, as well as from hackers. “IoT Security: How To Make The World Safe When Everything’s Connected,” published in Forbes, quoted Gerry Kane, Cyber Security Segment Director for Risk Engineering at The Zurich Services Corporation:

Information security must evolve with the times, Kane believes. “It’s not just about data anymore,” he said. “It’s an accumulation of the bad things that could happen when there’s a security breach. And consider the number of threat vectors that are brought into play by the Internet of Things.”

Human error poses another risk. Although these devices are supposed to operate on their own, they still need to receive instructions from people. The wrong commands could result in mistakes.

“Human error is always a big part of security breaches, even if it’s not always done with malicious intent,” Kane said.

Indeed, the IoT world is pretty dangerous… thanks to those darned end users.

New phones are arriving nearly every day. Samsung unveiled its latest Galaxy S9 flagship. Google is selling lots of its Pixel 2 handset. Apple continues to push its iPhone X. The Vivo Apex concept phone, out of China, has a pop-up selfie camera. And Nokia has reintroduced its famous 8110 model – the slide-down keyboard model featured in the 1999 movie, “The Matrix.”

Yet there is a slowdown happening. Hard to say whether it’s merely seasonal, or an indication that despite the latest and newest features, it’s getting harder to distinguish a new phone from its predecessors.

According to the 451 report, “Consumer Smartphones: 90 Day Outlook: Smartphone Buying Slows but Apple and Samsung Demand Strong,” released February 2018: “Demand for smartphones is showing a seasonal downtick, with 12.7% of respondents from 451 Research’s Leading Indicator panel saying they plan on buying a smartphone in the next 90 days.” However, “Despite a larger than expected drop from the September survey, next 90 day smartphone demand is at its highest December level in three years.”

451 reports that over the next 90 days,

Apple (58%) leads in planned smartphone buying but is down 11 points. Samsung (15%) is up 2 points, as consumer excitement builds around next-gen Galaxy S9 and S9+ devices, scheduled to be released in March. Google (3%) is showing a slight improvement, buoyed by the October release of its Pixel 2 and 2 XL handsets. Apple’s latest releases are the most in-demand among planned iPhone buyers: iPhone X (37%; down 6 points), iPhone 8 (21%; up 5 points) and iPhone 8 Plus (18%; up 4 points).

Interestingly, Apple’s famous brand loyalty may be slipping. Says 451, “Google leads in customer satisfaction with 61% of owners saying they’re Very Satisfied. Apple is close behind, with 59% of iPhone owners saying they’re Very Satisfied. That said, it’s important to keep in mind that iPhone owners comprise 57% of smartphone owners in this survey vs. 2% who own a Google Pixel smartphone.”

Everyone Loves the Galaxy S9

Cnet was positively gushing over the new Samsung phone, writing,

A bold new camera, cutting-edge processor and a fix to a galling ergonomic pitfall — all in a body that looks nearly identical to last year’s model. That, in a nutshell, is the Samsung Galaxy S9 (with a 5.8-inch screen) and its larger step-up model, the Galaxy S9 Plus, which sports an even bigger 6.2-inch screen.

Cnet calls out two features. First, a camera upgrade that includes variable aperture designed to capture better low-light images – which is where most phones really fall down.

The other? “The second improvement is more of a fix. Samsung moved the fingerprint reader from the side of the rear camera to the center of the phone’s back, fixing what was without a doubt the Galaxy S8’s most maddening design flaw. Last year’s model made you stretch your finger awkwardly to hit the fingerprint target. No more.”

The Verge agrees with that assessment:

… the Galaxy S9 is actually a pretty simple device to explain. In essence, it’s the Galaxy S8, with a couple of tweaks (like moving the fingerprint sensor to a more sensible location), and all the specs jacked up to the absolute max for the most powerful device on the market — at least, on paper.

Pop Goes the Camera

The Vivo concept phone, the Apex, has a little pop-up front-facing camera designed for selfies. Says TechCrunch, this is part of a trend:

With shrinking bezels, gadget makers have to look for new solutions like the iPhone X notch. Others still, like Vivo and Huawei, are looking at more elegant solutions than carving out a bit of the screen.

For Huawei, this means using a false key within the keyboard to house a hidden camera. Press the key and it pops up like a trapdoor. We tried it out and though the housing is clever, the placement makes for awkward photos — just make sure you trim those nose hairs before starting your conference call.

Vivo has a similar take to Huawei, though the camera is embedded on a sliding tray that pops up out of the top of the phone.

So, there’s still room for innovation. A little room. Beyond cameras, and some minor ergonomic improvements, it’s getting harder and harder to differentiate one phone from another – and possibly, to convince buyers to shell out for upgrades. At least, that is, until 5G handsets hit the market.

You are not the user. If you are the CEO, CTO, chief network architect, or software developer, you aren’t the user of the software or systems that you are building – or at least, you aren’t the primary user. What you are looking for isn’t what your customer or employee is looking for. And the vocabulary you use isn’t the vocabulary your customer uses, and may not match your partners’ vocabulary either.

Two trivial examples:

  1. I recently had my hair cut, and the stylist asked me, “Do you need any product?” Well, I don’t use product. I use shampoo. “Product” is stylist-speak, not customer-speak.
  2. For lunch one day, I stopped at a fast-food chain. Yes, yes, I know, not the healthiest. When my meal was ready, I heard over the speaker, “Order 143, your order is up.” Hmm. Up? In customer-speak, it should have been, “Your order is ready.”

In the essay, “You Are Not the User: The False-Consensus Effect,” Raluca Budiu observes:

While many people who earn a living from developing software will write tons of programs to make their own life easier, much, if not most, of their output will in fact be intended for other people — people who are not working in a cubicle nearby, or not even in the same building. These “users” are usually very different than those who write the code, even in the rare case where they are developers: they have different backgrounds, different experiences with user interfaces, different mindsets, different mental models, and different goals. They are not us.

Budiu defines the false-consensus effect as “people’s tendency to assume that others share their beliefs and will behave similarly in a given context.” Avoiding that trap takes more than designing cool software: good design requires observing real-life situations with real-life customers or end users.

The way I navigate a grocery store is not the way that the store’s designer, or the store’s manager, navigates it. It’s certainly not the way its chief risk officer does. That’s why grocery stores spend a fortune observing shoppers and testing different layouts, not only to maximize sales and profitability but also to maximize shopper satisfaction. A good design often requires a balance between the needs of the designer and the needs of the users.

My wife was recently frustrated when navigating an insurance company’s website. It was clearly not designed for her use case. Frankly, it’s hard to imagine anyone being satisfied with that website. And how about the process of logging into a WiFi network in a hotel, airport, or coffee shop? Could it be more difficult?

Focus on the User Experience

The Nielsen Norman Group, experts in usability, have offered a list of “10 Usability Heuristics of User Interface Design.” While Jakob Nielsen is focused on the software user experience, these are rules that we should follow in many other situations. Consider this point:

Match between system and the real world: The system should speak the users’ language, with words, phrases and concepts familiar to the user, rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order.

Yes, and how about

Help users recognize, diagnose, and recover from errors: Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution.

That’s so familiar. How many of us have been frustrated by dialog boxes, not knowing exactly what will happen if we press “Cancel” or “Okay”?

Design Thinking

The article “Design Thinking” from Sarah Gibbons talks about what we should do when designing systems. That means getting them in front of real people:

Prototype: Build real, tactile representations for a subset of your ideas. The goal of this phase is to understand what components of your ideas work, and which do not. In this phase you begin to weigh the impact vs. feasibility of your ideas through feedback on your prototypes.

Test: Return to your users for feedback. Ask yourself ‘Does this solution meet users’ needs?’ and ‘Has it improved how they feel, think, or do their tasks?’

Put your prototype in front of real customers and verify that it achieves your goals. Has the users’ perspective during onboarding improved? Does the new landing page increase time or money spent on your site? As you are executing your vision, continue to test along the way.

Never forget, you are not the user.

From January 1, 2005 through December 27, 2017, the Identity Theft Resource Center (ITRC) reported 8,190 breaches, with 1,057,771,011 records exposed. That’s more than a billion records. Billion with a B. That’s not a problem. That’s an epidemic.

That horrendous number compiles data breaches in the United States confirmed by media sources or government agencies. Breaches may have exposed information that could potentially lead to identity theft, including Social Security numbers, financial account information, medical information, and even email addresses and passwords.

Of course, some people may be included in multiple breaches; in today’s highly interconnected world, that’s very likely. There’s no good way to know how many individuals were affected.

What defines a breach? The organization says,

Security breaches can be broken down into a number of additional sub-categories by what happened and what information (data) was exposed. What they all have in common is they usually contain personal identifying information (PII) in a format easily read by thieves, in other words, not encrypted.

The ITRC tracks seven categories of breaches:

  • Insider Theft
  • Hacking / Computer Intrusion (includes Phishing, Ransomware/Malware and Skimming)
  • Data on the Move
  • Physical Theft
  • Employee Error / Negligence / Improper Disposal / Lost
  • Accidental Web/Internet Exposure
  • Unauthorized Access

As we’ve seen, data loss has occurred when employees store data files on a cloud service without encryption, without passwords, without access controls. It’s like leaving a luxury car unlocked, windows down, keys on the seat: If someone sees this and steals the car, it’s theft – but it was easily preventable theft abetted by negligence.

The rate of breaches is increasing, says the ITRC. The number of U.S. data breach incidents tracked in 2017 hit a record high of 1,579 breaches exposing 178,955,069 records. This is a 44.7% increase over the record high figures reported for 2016, says the ITRC.
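Those two figures imply the 2016 baseline, which is a quick way to sanity-check the report’s numbers:

```python
# 1,579 breaches in 2017 was a 44.7% increase over 2016,
# so the implied 2016 count is 1,579 / 1.447.
breaches_2017 = 1579
increase = 0.447
implied_2016 = breaches_2017 / (1 + increase)
print(round(implied_2016))  # about 1,091 breaches in 2016
```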

It’s mostly but not entirely about hacking. The ITRC says in its “2017 Annual Data Breach Year-End Review,”

Hacking continues to rank highest in the type of attack, at 59.4% of the breaches, an increase of 3.2 percent over 2016 figures: Of the 940 breaches attributed to hacking, 21.4% involved phishing and 12.4% involved ransomware/malware.

In addition,

Nearly 20% of breaches included credit and debit card information, a nearly 6% increase from last year. The actual number of records included in these breaches grew by a dramatic 88% over the figures we reported in 2016. Despite efforts from all stakeholders to lessen the value of compromised credit/debit credentials, this information continues to be attractive and lucrative to thieves and hackers.

Data theft truly is becoming epidemic. And it’s getting worse.

Wireless Ethernet connections aren’t necessarily secure. The authentication methods used to permit access between a device and a wireless router aren’t very strong. The encryption methods used to handle that authentication, and then the data traffic after authorization, aren’t very strong. And the rules that enforce the use of authentication and encryption aren’t always enabled, especially on public hotspots in hotels, airports, and coffee shops, where authentication is handled by a web browser application, not by the Wi-Fi protocols embedded in a local router.

Helping to solve those problems will be WPA3, an update to decades-old wireless security protocols. Announced by the Wi-Fi Alliance at CES in January 2018, the new standard promises:

Four new capabilities for personal and enterprise Wi-Fi networks will emerge in 2018 as part of Wi-Fi CERTIFIED WPA3™. Two of the features will deliver robust protections even when users choose passwords that fall short of typical complexity recommendations, and will simplify the process of configuring security for devices that have limited or no display interface. Another feature will strengthen user privacy in open networks through individualized data encryption. Finally, a 192-bit security suite, aligned with the Commercial National Security Algorithm (CNSA) Suite from the Committee on National Security Systems, will further protect Wi-Fi networks with higher security requirements such as government, defense, and industrial.

This is all good news. According to Zack Whittaker writing for ZDNet,

One of the key improvements in WPA3 will aim to solve a common security problem: open Wi-Fi networks. Seen in coffee shops and airports, open Wi-Fi networks are convenient but unencrypted, allowing anyone on the same network to intercept data sent from other devices.

WPA3 employs individualized data encryption, which scramble the connection between each device on the network and the router, ensuring secrets are kept safe and sites that you visit haven’t been manipulated.

Another key improvement in WPA3 will protect against brute-force dictionary attacks, making it tougher for attackers near your Wi-Fi network to guess a list of possible passwords.

The new wireless security protocol will also block an attacker after too many failed password guesses.
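WPA3’s actual anti-dictionary defense is the SAE handshake, which limits an attacker to one password guess per on-air exchange; the lockout behavior described above can be modeled loosely like this (a sketch of the policy, not the protocol itself):

```python
# Loose illustration of a lockout policy: block an authenticator after
# too many failed attempts. This models only the "block after N failures"
# behavior, not WPA3's cryptographic handshake.
MAX_FAILURES = 5

class Authenticator:
    def __init__(self, password):
        self._password = password
        self._failures = 0

    def try_password(self, guess):
        if self._failures >= MAX_FAILURES:
            return "locked out"          # too many misses: refuse even correct guesses
        if guess == self._password:
            self._failures = 0           # success resets the counter
            return "accepted"
        self._failures += 1
        return "rejected"
```

The point of such a policy is to make brute-force guessing uneconomical: five tries, then the attacker must wait or move on.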

What About KRACK?

A challenge for the use of WPA2 is that a defect, called KRACK, was discovered and published in October 2017. To quote my dear friend John Romkey, founder of FTP Software:

The KRACK vulnerability allows malicious actors to access a Wi-Fi network without the password or key, observe what connected devices are doing, modify the traffic amongst them, and tamper with the responses the network’s users receive. Everyone and anything using Wi-Fi is at risk. Computers, phones, tablets, gadgets, things. All of it. This isn’t just a flaw in the way vendors have implemented Wi-Fi. No. It’s a bug in the specification itself.

The timing of the WPA3 release couldn’t be better. But what about older devices? I have no idea how many of my devices — including desktops, phones, tablets, and routers — will be able to run WPA3. I don’t know whether firmware updates will be applied automatically, or whether I’ll need to search them out.

What’s more, what about the millions of devices out there? Presumably new hotspots will downgrade to WPA2 if a device can’t support WPA3. (And the other way around: A new mobile device will downgrade to talk to an older or unpatched hotel room’s Wi-Fi router.) It could take ages before we reach a critical mass of new devices that can handle WPA3 end-to-end.

The Wi-Fi Alliance says that it “will continue enhancing WPA2 to ensure it delivers strong security protections to Wi-Fi users as the security landscape evolves.” Let’s hope that is indeed the case, and that those enhancements can be pushed down to existing devices. If not, well, the huge installed base of existing Wi-Fi devices will continue to lack real security for years to come.

Amazon says that a cloud-connected speaker/microphone was at the top of the charts: “This holiday season was better than ever for the family of Echo products. The Echo Dot was the #1 selling Amazon Device this holiday season, and the best-selling product from any manufacturer in any category across all of Amazon, with millions sold.”

The Echo products are an ever-expanding family of inexpensive consumer electronics from Amazon, which connect to a cloud-based service called Alexa. The devices are always listening for spoken commands, and will respond through conversation, playing music, turning on/off lights and other connected gadgets, making phone calls, and even by showing videos.

While Amazon doesn’t release sales figures for its Echo products, it’s clear that consumers love them. In fact, Echo is about to hit the road, as BMW will integrate the Echo technology (and Alexa cloud service) into some cars beginning this year. Expect other automakers to follow.

Why the Echo – and Apple’s Siri and Google’s Home? Speech.

The traditional way of “talking” to computers has been through the keyboard, augmented with a mouse used to select commands or input areas. Computers initially responded only to typed instructions using a command-line interface (CLI); this was replaced in the era of the Apple Macintosh and the first iterations of Microsoft Windows with windows, icons, menus, and pointing devices (WIMP). Some refer to the modern interface used on standard computers as a graphical user interface (GUI); embedded devices, such as network routers, might be controlled by either a GUI or a CLI.

Smartphones, tablets, and some computers (notably running Windows) also include touchscreens. While touchscreens have been around for decades, it’s only in the past few years they’ve gone mainstream. Even so, the primary way to input data was through a keyboard – even if it’s a “soft” keyboard implemented on a touchscreen, as on a smartphone.

Talk to me!

Enter speech. Sometimes it’s easier to talk, simply talk, to a device than to use a physical interface. Speech can be used for commands (“Alexa, turn up the thermostat” or “Hey Google, turn off the kitchen lights”) or for dictation.

Speech recognition is not easy for computers; in fact, it’s pretty difficult. However, improved microphones and powerful artificial-intelligence algorithms make speech recognition a lot easier. Helping the process: Cloud computing, which can throw nearly unlimited resources at speech recognition, including predictive analytics. Another helper: Constrained inputs, which means that when it comes to understanding commands, there are only so many words for the speech recognition system to decode. (Free-form dictation, like writing an essay using speech recognition, is a far harder problem.)


It’s a big market

Speech recognition is only going to get better – and bigger. According to one report, “The speech and voice recognition market is expected to be valued at USD 6.19 billion in 2017 and is likely to reach USD 18.30 billion by 2023, at a CAGR of 19.80% between 2017 and 2023. The growing impact of artificial intelligence (AI) on the accuracy of speech and voice recognition and the increased demand for multifactor authentication are driving the market growth.” The report continues:

“The speech recognition technology is expected to hold the largest share of the market during the forecast period due to its growing use in multiple applications owing to the continuously decreasing word error rate (WER) of speech recognition algorithm with the developments in natural language processing and neural network technology. The speech recognition technology finds applications mainly across healthcare and consumer electronics sectors to produce health data records and develop intelligent virtual assistant devices, respectively.

“The market for the consumer vertical is expected to grow at the highest CAGR during the forecast period. The key factor contributing to this growth is the ability to integrate speech and voice recognition technologies into other consumer devices, such as refrigerators, ovens, mixers, and thermostats, with the growth of Internet of Things.”

Right now, many of us are talking to Alexa, talking to Siri, and talking to Google Home. Back in 2009, I owned a Ford car that had a primitive (and laughably inaccurate) infotainment system – today, a new car might do a lot better, perhaps due to embedded Alexa. Will we soon be talking to our ovens, to our laser printers and photocopiers, to our medical implants, to our assembly-line equipment, and to our network infrastructure? It wouldn’t surprise Alexa in the least.

Criminals steal money from banks. Nothing new there: As Willie Sutton famously said, “I rob banks because that’s where the money is.”

Criminals steal money from other places too. While many cybercriminals target banks, the reality is that there are better places to steal money, or at least, steal information that can be used to steal money. That’s because banks are generally well-protected – and gas stations, convenience stores, smaller on-line retailers, and even payment processors are likely to have inadequate defenses — or make stupid mistakes that aren’t caught by security professionals.

Take TIO Networks, a bill-payment service purchased by PayPal for US$233 million in July 2017. TIO processed more than $7 billion in bill payments last year, serving more than 10,000 vendors and 16 million consumers.

Hackers now know critical information about all 16 million TIO customers. According to PYMNTS.com, “… the data that may have been impacted included names, addresses, bank account details, Social Security numbers and login information. How much of those details fell into the hands of cybercriminals depends on how many of TIO’s services the consumers used.”

PayPal has said,

“The ongoing investigation has uncovered evidence of unauthorized access to TIO’s network, including locations that stored personal information of some of TIO’s customers and customers of TIO billers. TIO has begun working with the companies it services to notify potentially affected individuals. We are working with a consumer credit reporting agency to provide free credit monitoring memberships. Individuals who are affected will be contacted directly and receive instructions to sign up for monitoring.”

Card Skimmers and EMV Chips

Another common place where money changes hands: The point-of-purchase device. Consider payment-card skimmers – that is, a hardware device secretly installed into a retail location’s card reader, often at an unattended location like a gasoline pump.

The amount of fraud caused by skimmers copying information on payment cards is expected to rise from $3.1 billion in 2015 to $6.4 billion in 2018, affecting about 16 million cardholders. Those are for payment cards that don’t have the integrated EMV chip, or for transactions that don’t use the EMV system.

EMV chips, also known as chip-and-PIN or chip-and-signature, are named for the three companies behind the technology standards – Europay, MasterCard, and Visa. Chip technology, which is seen as a nuisance by consumers, has dramatically reduced the amount of fraud by generating a unique, non-repeatable transaction code for each purchase.
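The protective idea behind the chip — a unique, non-repeatable code for each purchase — can be sketched with a keyed hash. This is an illustration only: real EMV cryptograms (ARQCs) are generated by the chip with issuer-derived keys under the EMV specifications, not with the hypothetical key and fields shown here.

```python
import hmac
import hashlib

def transaction_code(card_key, pan, counter, amount_cents):
    """Illustrative per-transaction code: a keyed hash over the card number,
    a monotonically increasing transaction counter, and the amount.
    (Real EMV ARQCs are derived differently, with issuer-managed keys.)"""
    msg = f"{pan}|{counter}|{amount_cents}".encode()
    return hmac.new(card_key, msg, hashlib.sha256).hexdigest()[:16]

key = b"issuer-shared-secret"   # hypothetical key for the sketch
c1 = transaction_code(key, "4111111111111111", 41, 2599)
c2 = transaction_code(key, "4111111111111111", 42, 2599)
```

Because the counter changes on every purchase, a code captured by a skimmer is useless for a second transaction — which is exactly what a copied mag stripe is not.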

The rollout of EMV, especially in the United States, is painfully slow. Many merchants still haven’t upgraded to the new card-reader devices or back-end financial services to handle those transactions. For example, very few fuel stations use chips to validate transactions, and so pay-at-the-pump in the U.S. still universally depends on the mag-stripe reader. That presents numerous opportunities for thieves to install skimmers on that stripe reader and steal payment card information.

For an excellent, well-illustrated primer on skimmers and skimmer-related fraud at gas stations, see “As gas station skimmer card fraud increases, here’s how to cut your risk.” Theft at the point of purchase, or at payment processors, will continue as long as companies fail to execute solid security practices – and continue to accept non-EMV payment card transactions, including allowing customers to type their credit- or debit-card numbers onto websites. Those are both threats for the foreseeable future, especially since desktops, notebooks, and mobile devices don’t have built-in EMV chip readers.

Crooks are clever, and are everywhere. They always have been. Money theft and fraud – no matter how secure the banks are, it’s not going away any time soon.

I unlock my smartphone with a fingerprint, which is pretty secure. Owners of the new Apple iPhone X unlock theirs with their faces – which is reported to be hackable with a mask. My tablet is unlocked with a six-digit numerical code, which is better than four digits or a pattern. I log into my laptop with an alphanumeric password. Many online services, including banks and SaaS applications, require their own passwords.

It’s a mess! Not the least because lazy humans tend to reuse passwords, so that if a username and password for one service is stolen, criminals can try using that same combination on other services. Hackers steal your email and password from some insecure e-commerce site’s breach? They’ll try that same ID and password on Facebook, LinkedIn, eBay, Amazon, Walmart.com, Gmail, Office 365, Citibank, Fidelity, Schwab… you get the idea.

Two more weaknesses: Most people don’t change their passwords frequently, and the passwords that they choose are barely more secure than ABCD?1234. And while biometrics are good, they’re not always sufficient. Yes, my smartphone has a fingerprint sensor, but my laptop doesn’t. Sure, companies can add on such technology, but it’s a kludge. It’s not a standard, and certainly I can’t log into my Amazon.com account with a fingerprint swipe.

Passwords Spell Out Trouble

The 2017 Verizon Data Breach Report reports that 81% of hacking-related breaches leverage either stolen or weak passwords. That’s the single biggest tactic used in breaches – followed by actual hacking, at 62%, and malware, at 51%.

To quote from the report: “… if you are relying on username/email address and password, you are rolling the dice as far as password re-usage from other breaches or malware on your customers’ devices are concerned.” About retailers specifically — which is where we see a lot of breaches — Verizon writes: “Their business is their web presence and thus the web application is the prime target of compromise to harvest data, frequently some combination of usernames, passwords (sometimes encrypted, sometimes not), and email addresses.”

By the way, I am dismayed by the common use of a person’s email address instead of a unique login name by many retailers and online services. That reduces the bits of data that hackers or criminals need. It’s pretty easy to figure out my email address, which means that to get into my bank account, all you need is to guess or steal my password. But if my login name was a separate thing, like WeinerDogFancier, you’d have to know that and find my password. On the other hand, using the email address makes things easier for programmers, and presumably for users as well. As usual, convenience beats security.

Too Much Hanging on a Single Identity

The Deloitte breach, which was discovered in March 2017, succeeded because an administrator account had basically unfettered access to everything. And that account wasn’t secured by two-factor authentication. There was apparently no secondary password protecting critical assets, even from an authenticated user.

As the Guardian wrote in “Deloitte hit by cyber-attack revealing clients’ secret emails,”

The hacker compromised the firm’s global email server through an “administrator’s account” that, in theory, gave them privileged, unrestricted “access to all areas”. The account required only a single password and did not have “two-step” verification, sources said. Emails to and from Deloitte’s 244,000 staff were stored in the Azure cloud service, which was provided by Microsoft. This is Microsoft’s equivalent to Amazon Web Service and Google’s Cloud Platform. In addition to emails, the Guardian understands the hackers had potential access to usernames, passwords, IP addresses, architectural diagrams for businesses and health information. Some emails had attachments with sensitive security and design details.

There are no universal solutions to the password scourge. However, there are some best practices:

  • Don’t trust any common single-factor authentication scheme completely; they can all be bypassed or hacked.
  • Require two-factor authentication from any new device, for access outside of normal working hours or geographies, or potentially even a new IP address.
  • Look into schemes that require removable hardware, such as a USB dongle, as a third factor.
  • Secure valuable assets, such as identity databases, with additional protections. They should be encrypted and blocked from download.
  • Consider disabling remote access to such assets, and certainly disable the ability to download the results of identity or customer database searches.
  • If it’s possible to use biometrics or other hardware-based authentication, do so.
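Several of these practices boil down to risk-based authentication: challenge the user with a second factor whenever a login looks unusual. Here's a minimal sketch, with illustrative signals and thresholds rather than any standard:

```python
from datetime import datetime

def requires_second_factor(known_device, ip_seen_before,
                           login_time, usual_country, login_country):
    """Risk-based check along the lines of the practices above: challenge
    any login from a new device, a new IP address, an unusual country,
    or outside normal working hours. Signals and the 7am-7pm window
    are illustrative choices, not a standard."""
    off_hours = login_time.hour < 7 or login_time.hour >= 19
    return (not known_device
            or not ip_seen_before
            or login_country != usual_country
            or off_hours)
```

A real deployment would feed these decisions from session and device-fingerprint data, but the shape of the logic is the same: default to stronger authentication whenever any risk signal fires.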

Passwords are B.S.

You might enjoy this riff on passwords by Jeff Atwood in his blog, Coding Horror. Be sure to read the comments.

Let’s talk about hackers, not through the eyes of the tech industry but through the eyes of current and former U.S. law enforcement officials. It’s their job to run those people down and throw them in jail.

The Federal Bureau of Investigation

MK Palmore is an Information Security Risk Management Executive with the FBI’s Cyber Branch in San Francisco. He runs the cyber-security teams assigned to the San Francisco division of the FBI. “My teams here in San Francisco typically play some part in the investigations, where our role is to identify, define attribution, and get those folks into the U.S. Justice system.”

“The FBI is 35,000-plus personnel, U.S.-based, and part of the Federal law enforcement community,” says Palmore. “There are 56 different field offices throughout the United States of America, but we also have an international presence in more than 62 cities throughout the world. A large majority of those cities contain personnel that are assigned there specifically for responsibilities in the cyber-security realm, and often-times are there to establish relationships with our counterparts in those countries, but also to establish relationships with some of the international companies, and folks that are raising their profile as it relates to international cyber-security issues.”

The U.S. Secret Service

It’s not really a secret: In 1865, the Secret Service was created by Congress to primarily suppress counterfeit currency. “Counterfeit currency represented greater than 50% of all the currency in the United States at that time, and that was why the Agency was created,” explained Dr. Ronald Layton, Deputy Assistant Director U.S. Secret Service. “The Secret Service has gone from suppressing counterfeit currency, or economic, or what we used to refer to as paper crimes, to plastic, meaning credit cards. So, we’ve had a progression, from paper, to plastic, to digital crimes, which is where we are today,” he continued.

Protecting Data, Personal and Business

“I found a giant hole in the way that private sector businesses are handling their security,” said Michael Levin. “They forgot one very important thing. They forgot to train their people what to do. I work with organizations to try to educate people — we’re not doing a very good job of protecting ourselves.”

A leading expert in cyber-security, Levin is Former Deputy Director, U.S. Department of Homeland Security’s National Cyber-Security Division. He retired from the government a few years ago, and is now CEO & Founder of the Center for Information Security Awareness.

“When I retired from the government, I discovered something,” he continued. “We’re not protecting our own personal data – so, everybody has a role to play in protecting their personal data, and their family’s data. We’re not protecting our business data. Then, we’re not protecting our country’s data, and there’s nation states, and organized crime groups, and activists, that are coming after us on a daily basis.”

The Modern Hacker: Who They Are, What They Want

There are essentially four groups of cyber-threat activists that we need to be concerned with, explained the FBI’s Palmore. “I break them down as financially-motivated criminal intrusion threat actors, nation states, hacktivists, and then those security incidents caused by what we call the insider threat. The most prevalent of the four groups, and the most impactful, typically, are those motivated by financial concerns.”

“We’re talking about a global landscape, and the barrier to entry for most financially-motivated cyber-threat actors is extremely low,” Palmore continued. “In terms of looking at who these folks are, and in terms of who’s on the other end of the keyboard, we’re typically talking about mostly male threat actors, sometimes between the ages of, say, 14 and 32 years old. We’ve seen them as young as 14.”

Criminals? Nation states? Hacktivists? Insiders? While that matters to law enforcement, it shouldn’t to individuals and enterprises, said CFISA’s Levin. “For most people, they don’t care if it’s a nation state. They just want to stop the bleeding. They don’t care if it’s a hacktivist, they just want to get their site back up. They don’t care who it is. They just start trying to fix the problem, because it means their business is being attacked, or they’re having some sort of a failure, or they’re losing data. They’re worried about it. So, from a private sector company’s business, they may not care.”

However, “Law enforcement cares, because they want to try to catch the bad guy. But for the private sector, the goal is to harden the target,” points out Levin. “Many of these attacks are, you know, no different from a car break-in. A guy breaking into cars is going to try the handle first before he breaks the window, and that’s what we see with a lot of these hackers. Doesn’t matter if they’re nation states, it doesn’t matter if they’re script kiddies. It doesn’t matter to what level of the sophistication. They’re going to look for the open doors first.”

The Secret Service focuses almost exclusively on folks trying to steal money. “Several decades ago, there was a famous United States bank robber named Willie Sutton,” said Layton. “Willie Sutton was asked, why do you rob banks? ‘Because that’s where the money is.’ Those are the people that we deal with.”

Layton explained that the Secret Service has about a 25-year history of investigating electronic crimes. The first electronic crimes taskforce was established in New York City 25 years ago. “What has changed in the last five or 10 years? The groups worked in isolation. What’s different? It’s one thing: They all know each other. They all are collaborative. They all use Russian as a communications modality to talk to one another in an encrypted fashion. That’s what’s different, and that represents a challenge for all of us.”

Work with Law Enforcement

Palmore, Levin, and Layton have excellent, practical advice on how businesses and individuals can protect themselves from cybercrime. They also explain how law enforcement can help. Read more in my article for Upgrade Magazine, “The new hacker — Who are they, what they want, how to defeat them.”

AOL Instant Messenger will be dead before the end of 2017. Yet, instant messages have succeeded far beyond what anyone could have envisioned for either SMS (Short Message Service, carried by the phone company) or AOL, which arguably brought instant messaging to regular computers starting in 1997.

It would be wonderful to claim that there’s some great significance in the passing of AIM. However, my guess is that there simply wasn’t any business benefit to maintaining a service that nearly nobody used. The AIM service was said to carry far less than 1% of all instant messages across the Internet… and that was in 2011.

I have an AIM account, and although it’s linked into my Apple Messages client, I had completely forgotten about it. Yes, there was a little flurry of news back in March 2017, when AOL began closing APIs and shutting down some third-party AIM applications. However, that didn’t resonate. Then, on Oct. 6, came the email from AOL’s new corporate overlord, Oath, a subsidiary of Verizon:

Dear AIM user,

We see that you’ve used AOL Instant Messenger (AIM) in the past, so we wanted to let you know that AIM will be discontinued and will no longer work as of December 15, 2017.

Before December 15, you can continue to use the service. After December 15, you will no longer have access to AIM and your data will be deleted. If you use an @aim.com email address, your email account will not be affected and you will still be able to send and receive email as usual.

We’ve loved working on AIM for you. From setting the perfect away message to that familiar ring of an incoming chat, AIM will always have a special place in our hearts. As we move forward, all of us at AOL (now Oath) are excited to continue building the next generation of iconic brands and life-changing products for users around the world.

You can visit our FAQ to learn more. Thank you for being an AIM user.

Sincerely,

The AOL Instant Messenger team

Interestingly, my wife, who also has an AIM account but never uses it, thought that the message above was a phishing scam of some sort. So, AIM is dead. But not instant messaging, which is popular with both consumers and business users, on desktops/notebooks and smartphones. There are many clients that consumers can use; Statista’s ranking of the leaders as of January 2017, measured in millions of monthly active users, didn’t include AIM.

Then there are the corporate instant message platforms, Slack, Lync, and Symphony. And we’re not even talking social media, like Twitter, Google+, Kik, and Instagram. So – Instant messaging is alive and well. AIM was the pioneer, but it ceased being relevant a long, long time ago.

My Benchmade Bugout Axis knife arrived last week. I’ve been using it as an everyday carry (EDC) knife, instead of my usual Benchmade Griptilian or Mini Griptilian.

Summary: The Bugout is very nice and light, with an excellent blade. The handle’s too thin for a sturdy grip, so I wouldn’t want it in a knife fight. It could be easily knocked out of my hand. Easier to drop, I think, than the Griptilian or Mini Grip. Still, the Bugout is nice and practical for a pocket knife, and the Axis is my favorite locking mechanism.

Benchmade describes the Bugout as “designed for the modern outdoor adventurer, incorporating the lightest, best performing materials in an extremely slim yet ergonomic package.” Well, that’s not me: I’m an urban work-at-home adventurer who likes having a knife in my pocket whenever I go out, whether it’s to the store, a technical conference, or for a walk around the neighborhood. (Sadly, I can’t take a knife when I fly. Sniff.)

What’s good about the Bugout: It’s light (1.85 ounces, says Benchmade), with a 3.24” blade of S30V steel, a pretty blue handle, and a thin profile (0.42” handle, 0.09” blade).

Compare to the Griptilian, seen here with a black handle and silver blade. Slightly longer and thicker blade than the Bugout (3.45” and 0.11”), much thicker handle (0.64”) and twice the weight (3.79 ounces). Many choices of steel.

Compare to the Mini Grip, seen here with a black handle and black blade. Shorter but thicker blade compared to the Bugout, (2.91” and 0.10”), thicker handle (0.51”), and greater weight (2.68 ounces). Many choices of steel.

What’s not so good about the Bugout: Beyond the slightly hard-to-grasp handle, it’s the lack of essential options. With the Griptilian and Mini Grip, you can choose the steel. You can choose the blade shape. You can choose the colors. Not so with the Bugout, at least not yet, so I’m stuck with the drop-point and blue.

With the Grip and Mini Grip, I’ve chosen knives with the sheepsfoot point. I like the flip-out hole, even though it makes the knives bulkier. The only real option on the Bugout, at least at present, is a plain or serrated drop-point blade. (I would buy another Bugout if it came with sheepsfoot, and give this one to my son.)

Oh, you can do custom engraving on the Bugout blades. Nice if you’re giving one as a gift.

Bottom line: The Bugout is a very nice, very civilized EDC. I’m happy to wear it with nice trousers, or at any time where slimness or light weight are paramount. (Those are the scenarios that Benchmade touts, especially for packing into a backpack or other “bugout” gear.) The big loser here is the Mini Grip, which has been supplanted by a lighter knife with a longer blade.

Go ahead, bring on the apple, bring on the wrapped package, bring on the rope/cord. The Bugout has it covered.

That said: For going out on walks, or other outings with jeans or cargo pants, when weight is not an issue, the Griptilian will still be my #1 EDC.

At the current rate of rainfall, when will your local reservoir overflow its banks? If you shoot a rocket at an angle of 60 degrees into a headwind, how far will it fly with 40 pounds of propellant and a 5-pound payload? Assuming a 100-month loan for $75,000 at 5.11 percent, what will the payoff balance be after four years? If a lab culture is doubling every 14 hours, how many viruses will there be in a week?

Those sorts of questions aren’t asked by mathematicians, who are the people who derive equations to solve problems in a general way. Rather, they are asked by working engineers, technicians, military ballistics officers, and financiers, all of whom need an actual number: Given this set of inputs, tell me the answer.

Before the modern era (say, the 1970s), these problems could be hard to solve. They required a lot of pencils and paper, a book of tables, or a slide rule. Mathematicians never carried slide rules, but astronauts did, as their backup computers.

However, slide rules had limitations. They were good to about three digits of accuracy, no more, in the hands of a skilled operator. Three digits was fine for real-world engineering, but not enough for finance. With slide rules, you had to keep track of the decimal point yourself: The slide rule might tell you the answer is 641, but you had to know if that was 64.1 or 0.641 or 641.0. And if you were chaining calculations (needed in all but the simplest problems), accuracy dropped with each successive operation.
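The accuracy loss from chaining can be simulated by rounding to three significant digits after every operation, the way reading a slide-rule scale forces you to:

```python
from math import floor, log10

def slide_rule(x, sig=3):
    """Round to `sig` significant digits, like reading a slide-rule scale."""
    return round(x, -int(floor(log10(abs(x)))) + (sig - 1))

# Chain three operations, rounding after each step as a slide rule would:
exact = 2.54 * 3.78 / 1.19 * 6.02
approx = slide_rule(slide_rule(slide_rule(2.54 * 3.78) / 1.19) * 6.02)
rel_error = abs(approx - exact) / exact
```

Even in this short chain the rounded result drifts from the exact answer, and every additional rounded step gives the error another chance to grow.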

Everything the slide rule could do, a so-called slide-rule calculator could do better—and more accurately. Slide rules are really good at a few things. Multiplication and division? Easy. Exponents, like 6¹³? Easy. Doing trig, like sines, cosines, and tangents? Easy. Logarithms? Easy.

Hewlett-Packard unleashed a monster when it created the HP-9100A desktop calculator, released in 1968 at a price of about $5,000. The HP-9100A did everything a slide rule could do, and more—such as trig, polar/rectangular conversions, and exponents and roots. However, it was big and it was expensive—about $35,900 in 2017 dollars, or the price of a nice car! HP had a market for the HP-9100A, since it already sold test equipment into many labs. However, something better was needed, something affordable, something that could become a mass-market item. And that became the pocket slide-rule calculator revolution, starting off with the amazing HP-35.

If you look at the HP-35 today, it seems laughably simplistic. The calculator app in your smartphone is much more powerful. However, back in 1972, and at a price of only $395 ($2,350 in 2017 dollars), the HP-35 changed the world. Companies like General Electric ordered tens of thousands of units. It was crazy, especially for a device that had a few minor math bugs in its first shipping batch (HP gave everyone a free replacement).
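The loan question at the top of this story is exactly the kind of computation these calculators made trivial. The standard level-payment amortization formulas, sketched here, give the answer to far more digits than any slide rule could:

```python
def payoff_balance(principal, annual_rate, n_months, months_paid):
    """Remaining balance on a level-payment amortized loan."""
    r = annual_rate / 12.0                            # monthly interest rate
    payment = principal * r / (1 - (1 + r) ** -n_months)
    growth = (1 + r) ** months_paid
    return principal * growth - payment * (growth - 1) / r

# 100-month loan for $75,000 at 5.11%, payoff after four years (48 payments):
balance = payoff_balance(75_000, 0.0511, 100, 48)     # roughly $43,000 left
```

A financier with a slide rule could get perhaps three digits of this; the calculator gives it exactly, with no decimal-point bookkeeping.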

Read more about early slide-rule calculators — and the more advanced card-programmable models like the HP-65 and HP-67, in my story, “The early history of HP calculators.”


Queue Magazine has a fascinating new article, “No more queuing at the ladies’ room.” You’ll want to read the whole thing, because it has some delightful mathematics (this is a scientific article, not a sociological one). Here’s a teaser:

Although it’s a well-documented fact that women have to wait longer at the bathroom stall, so far the mathematical perspective seems to be lacking in literature. This is in spite of the decennia-long existence of the field of queuing theory, which has traditionally been applied most to problems of technology and decent people, rather than to such inescapable habits as the act of excreting.

Nevertheless, mathematics is what you need to analyze queues because of the inherent random nature of queuing phenomena, turning simple lines of people into complex nonlinear systems with numerous parameters, whereby a small deviation can lead to excessive additional waiting. This is as opposed to good old linear systems, which see linear changes of parameters translated in proportional variations at their output.

Nonlinear systems are common in everyday life and nature. A virus for example will result in a pandemic much faster if it is just slightly more infectious. And just a few extra cars make for a traffic jam appearing out of thin air. Similarly, toilet queues, or any queue for that matter, pose nonlinear problems in which the fragile balance between capacity and demand can be disrupted by subtle tweaks.

A first factor explaining why women wait longer is that the net number of toilets for women is smaller than that for men. The toilet sections for men and women are often of equal size, as is the surface dedicated to each of them. What appears to be “fair” at first sight, is quite unreasonable knowing that a toilet cabin inevitably takes up more space than a urinal. Overall, an average toilet area can accommodate 20 to 30% more toilets for men (urinals + cabins) than for women.

The major impact of the number of toilets on the average waiting time can be understood from the Erlang-C queuing model. This model allows you to calculate the average waiting time when the number of available toilets, the average time spent on the toilet, and the average arrival intensity are known. Where λ stands for the average arrival intensity expressed in number of arrivals per minute, μ for the inverse of the average time spent on the toilet, and t for the number of toilets, the average waiting time is obtained from the following formulas:
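The formulas themselves didn't survive the excerpting, but the standard Erlang-C result the paragraph describes can be sketched in code (using the usual Erlang-B recursion, which is the numerically stable way to get the Erlang-C waiting probability):

```python
def erlang_c_wait(lam, mu, t):
    """Mean waiting time from the Erlang-C model: arrival intensity `lam`,
    service rate `mu` (the inverse of average time on the toilet), and
    `t` toilets. Requires lam < t * mu for the queue to be stable."""
    a = lam / mu                        # offered load
    b = 1.0                             # Erlang-B via the standard recursion
    for k in range(1, t + 1):
        b = a * b / (k + a * b)
    c = t * b / (t - a * (1 - b))       # probability an arrival must wait
    return c / (t * mu - lam)           # mean wait, in the rates' time unit
```

For example, with one arrival per minute (λ = 1), an average of one minute per visit (μ = 1), and t = 2 toilets, the mean wait works out to 20 seconds; a third toilet cuts it to under 3 seconds — the nonlinear payoff the article describes.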

Read the whole article — and there’s no waiting, whether you are male or female.

Did they tell their customers that data was stolen? No, not right away. When AA — a large automobile club and insurer in the United Kingdom — was hacked in April, the company was completely mum for months, in part because it didn’t believe the stolen data was sensitive. AA’s customers only learned about it when information about the breach was publicly disclosed in late June.

There are no global laws that require companies to disclose information about data thefts to customers. There are similarly no global laws that require companies to disclose defects in their software or hardware products, including those that might introduce security vulnerabilities.

It’s obvious why companies wouldn’t want to disclose problems with their products (such as bugs or vulnerabilities) or with their back-end operations (such as system breaches or data exfiltration). If customers think you’re insecure, they’ll leave. If investors think you’re insecure, they’ll leave. If competitors think you’re insecure, they’ll pounce on it. And if lawyers or regulators think you’re insecure, they might file lawsuits.

No matter how you slice it, disclosures about problems are not good for business. Far better to share information about new products, exciting features, customer wins, market share increases, additional platforms, and pricing promotions.

It’s Not Always Hidden

That’s not to say that all companies hide bad news. Microsoft, for example, is considered to be very proactive on disclosing flaws in its products and platforms, including those that affect security. When Microsoft learned about the Server Message Block (SMB) flaw that enabled malware like WannaCry and Petya in March, it quickly issued a Security Bulletin that explained the problem — and supplied the necessary patches. If customers had read the bulletin and applied the patches, those ransomware outbreaks wouldn’t have occurred.

When you get outside the domain of large software companies, such disclosures are rare. Automobile manufacturers do share information about vehicle defects with regulators, as per national laws, but resist recalls because of the expense and bad publicity. Beyond that, companies share information about problems with products, services, and operations unwillingly – and with delays.

In the AA case, as SC Magazine wrote,

The leaky database was first discovered by the AA on April 22 and fixed by April 25. In the time that it had been exposed, it had reportedly been accessed by several unauthorised parties. An investigation by the AA deemed the leaky data to be not sensitive, meaning that the organisation did not feel it necessary to tell customers.

Yet the breach contained over 13 gigabytes of data with information about 100,000 customers. Not sensitive? Well, the stolen information included email addresses along with names, IP addresses, and credit card details. That data seems sensitive to me!

Everything Will Change Under GDPR

The European Union’s new General Data Protection Regulation (GDPR) goes into effect in May 2018. GDPR will for the first time require companies to tell customers and regulators about data breaches in a timely manner. Explains the U.K. Information Commissioner’s Office,

The GDPR will introduce a duty on all organisations to report certain types of data breach to the relevant supervisory authority, and in some cases to the individuals affected.

What is a personal data breach?

A personal data breach means a breach of security leading to the destruction, loss, alteration, unauthorised disclosure of, or access to, personal data. This means that a breach is more than just losing personal data.

Example

A hospital could be responsible for a personal data breach if a patient’s health record is inappropriately accessed due to a lack of appropriate internal controls.

When do individuals have to be notified?

Where a breach is likely to result in a high risk to the rights and freedoms of individuals, you must notify those concerned directly.

A ‘high risk’ means the threshold for notifying individuals is higher than for notifying the relevant supervisory authority.

What information must a breach notification contain?

  • The nature of the personal data breach including, where possible:
      • the categories and approximate number of individuals concerned; and
      • the categories and approximate number of personal data records concerned;
  • The name and contact details of the data protection officer (if your organisation has one) or other contact point where more information can be obtained;
  • A description of the likely consequences of the personal data breach; and
  • A description of the measures taken, or proposed to be taken, to deal with the personal data breach and, where appropriate, of the measures taken to mitigate any possible adverse effects.

Also, says the regulation,

If the breach is sufficiently serious to warrant notification to the public, the organisation responsible must do so without undue delay. Failing to notify a breach when required to do so can result in a significant fine up to 10 million Euros or 2 per cent of your global turnover.

Bottom line: Next year, companies in the E.U. must do a better job of disclosing data breaches that affect their customers. Let’s hope this practice extends to more of the world.

Virtual reality and augmented reality are the darlings of the tech industry. Seemingly every company is interested, even though one of the most interesting AR products, Google Glass, crashed and burned a few years ago.

What’s the difference?

  • Virtual reality (VR) is when you are totally immersed in a virtual world. You only see (and hear) what’s presented to you as part of that virtual world, generated by software and displayed in stereo goggles and headphones. The goggles can detect motion, and can let you move around in the virtual world. Games and simulations take place in VR.
  • Augmented reality (AR) means visual overlays. You see the real world, with digital information superimposed on it. Google Glass was AR. So, too, are apps where you aim your smartphone’s camera at the sky, and the AR software overlays the constellations on top of the stars, and shows where Saturn is right now. AR also can guide a doctor to a blood clot, or an emergency worker away from a hot wire, or a game player to a Pokemon character in a local park.

Both AR and VR have been around for decades, although the technology has become smaller and less expensive. There are consumer-oriented devices, such as the Oculus, and many professional systems. Drivers for the success of AR and VR are more powerful computing devices (such as smartphones and game consoles), and advances in both high-resolution displays and motion sensors for goggles.

That doesn’t mean that AR/VR are the next Facebook or Instagram, though both those companies are looking at AR/VR. According to a study, “VR/AR Innovation Report,” presented by the UBM Game Network, VR’s biggest failures include a lack of subsidized hardware, enterprise applications, and native VR experiences. The gear is too expensive, developers say, and manufacturers are perceived to have failed in marketing VR systems and software.

Keep that airsick bag handy

It’s well known that the VR hardware must work exactly right: if image motion is not properly synchronized to head motion, many VR users experience nausea. That’s not good. To quote from the UBM study:

Notably, we saw that many still feel like VR’s greatest unsolved problem is the high risk of causing nausea and physical discomfort.

“The biggest issue is definitely the lack of available ‘simulator sickness’ mitigation techniques,” opined one respondent. “Since each VR application offers a unique user experience, no one mitigation technique can service all applications. Future designs must consider the medium/genre they are developing for and continue to investigate new mitigation techniques to ensure optimal user enjoyment.”

Lots of good applications

That doesn’t mean that VR and AR are worthless. Pokemon Go, which was a hit a few summers ago, demonstrated that AR can engage consumers without stereo goggles. Google Earth VR provides immersive mapping experiences.

The hardware is also moving forward. A startup in Helsinki, called Varjo, made a breakthrough in optimizing goggles for AR and VR. They are addressing the challenge that if you make the resolution low on the goggles so that you can refresh the image quickly, it doesn’t look realistic. But if you increase the resolution to match that of the human eye, it’s harder to drive the image seamlessly in real time.

Varjo’s answer is to see where the eye is looking – using a technology called gaze tracking – and seamlessly drive that part of the display in super-high resolution. Where you’re not looking? That can be at a lower resolution, to provide context. Varjo says they can shift the high-resolution spot as fast as you can move your eye – and by tracking the gaze on both eyes, they can see if you are looking at virtual objects “close” or “far away.” The result, Varjo claims, is a display that’s about 35x higher resolution than other commercial systems, without nausea.

Varjo is focusing on the professional market, with headsets that will cost thousands (not hundreds) of dollars when they ship at the end of 2017. However, it shows the promise of realistic, affordable AR/VR technology. Augmented reality and virtual reality are becoming more real every day.

The folks at Varjo think they’ve made a breakthrough in how goggles for virtual reality and augmented reality work. They are onto something.

Most VR/AR goggles have two displays, one for each eye, and they strive to drive those displays at the highest resolution possible. Their hardware and software take into account that as the goggles move, the viewpoint has to move in a seamless way, without delay. If there’s delay, the “willing suspension of disbelief” required to make VR work fails, and in some cases, the user experiences nausea and disorientation. Not good.

The challenge comes from making the display sufficiently high resolution that objects look photorealistic. That lets users manipulate virtual machine controls, operate flight simulators, read virtual text, and so on. Most AR/VR systems try to make the display uniformly high resolution, so that no matter where the user looks, the resolution is there.

Varjo, based in Finland, has a different approach. They take advantage of the fact that the human eye sees in high resolution only in the spot that the eye’s fovea is pointing at – and in much lower resolution elsewhere. So while the whole display is capable of high resolution, Varjo uses fovea detectors to do “gaze tracking” to see what the user is looking at, and makes that area super high resolution. When the fovea moves to another spot, that area is almost instantly bumped up to super high resolution, while the original area is downgraded to a reduced resolution.
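The core idea – render at full resolution only near the tracked gaze point – can be sketched in a few lines of Python. This is an illustrative model of foveated rendering in general, not Varjo’s actual implementation; the radius, scale factors, and abrupt cutoff are all invented (a real system would blend smoothly):

```python
def resolution_for_tile(tile_center, gaze, fovea_radius=0.1,
                        hi_res=1.0, lo_res=0.25):
    """Return a render-scale factor for a screen tile.

    Tiles within `fovea_radius` (normalized screen units) of the
    tracked gaze point render at full resolution; everything else
    renders at a fraction. Purely illustrative numbers.
    """
    dx = tile_center[0] - gaze[0]
    dy = tile_center[1] - gaze[1]
    dist = (dx * dx + dy * dy) ** 0.5
    return hi_res if dist <= fovea_radius else lo_res

# Suppose the gaze tracker reports the user looking near the upper left:
gaze = (0.2, 0.8)
print(resolution_for_tile((0.22, 0.78), gaze))  # tile under the fovea -> 1.0
print(resolution_for_tile((0.9, 0.1), gaze))    # peripheral tile -> 0.25
```

When the tracker reports a new gaze point, the same function reassigns the high-resolution budget on the next frame – which is why the tracker has to be faster than the eye.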

Sound simple? It’s not, and that’s why the initial Varjo technology will be targeted at professional applications, like doctors, computer-aided design workers, or remote instrument operators. Prototypes of the goggles will be available this year to software developers, and the first products should ship to customers at the end of 2018. The price of the goggles is said to be “thousands, not tens of thousands” of dollars, according to Urho Konttori, the company’s founder. We talked by phone; he was in the U.S. doing demos in San Francisco and New York, but unfortunately, I wasn’t able to attend one of them.

Now, Varjo isn’t the first to use gaze tracking technology to try to optimize the image. According to Konttori, other vendors use medium resolution where the eye is pointing, and low resolution elsewhere, just enough to establish context. By contrast, he says that Varjo uses super high resolution where the user looks, and high resolution elsewhere. Because each eye’s motion is tracked separately, the system can also tell when the user is looking at objects close to user (because the eyes are at a more converged angle) or farther away (the eyes are at a more parallel angle).
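The close/far trick is simple triangulation: given the distance between the eyes and how far each eye rotates inward from parallel, you can estimate the fixation distance. A back-of-the-envelope sketch with illustrative numbers (not Varjo’s):

```python
import math

def fixation_distance(ipd_m=0.063, inward_angle_deg=1.0):
    """Estimate how far away the user is looking from eye convergence.

    Each eye rotates inward by `inward_angle_deg` from parallel to
    fixate a point straight ahead; triangulating from half the
    interpupillary distance (here 63 mm, a typical adult value)
    gives the distance to that point.
    """
    half_ipd = ipd_m / 2.0
    return half_ipd / math.tan(math.radians(inward_angle_deg))

print(round(fixation_distance(inward_angle_deg=4.0), 2))  # converged: about 0.45 m
print(round(fixation_distance(inward_angle_deg=0.5), 2))  # near-parallel: about 3.61 m
```

A few degrees of convergence means the user is looking at something at arm’s length; fractions of a degree mean they’re looking across the room.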

“In our prototype, wherever you are looking, that’s the center of the high resolution display,” he said. “The whole image looks to be in focus, no matter where you look. Even in our prototype, we can move the display projection ten times faster than the human eye.”

Konttori says that the effective resolution of the product, called 20/20, is 70 megapixels, updated in real time based on head motion and gaze tracking. That compares to fewer than 2 megapixels for Oculus, Vive, HoloLens and Magic Leap. (This graphic from Varjo compared their display to an unnamed competitor.) What’s more, he said the CPU/GPU power needed to drive this display isn’t huge. “The total pixel count is less than in a single 4K monitor. You need roughly 2x the GPU compared to a conventional VR set for the same scene.”
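The arithmetic behind those claims is easy to check. A 4K monitor is 3840 × 2160 pixels, so the physically driven pixel count stays in the single-digit megapixels even while the effective resolution is claimed at 70 megapixels – and 70 versus 2 megapixels is exactly the “35x” figure Varjo cites elsewhere:

```python
# Rough numbers to put Konttori's claims in context (illustrative only).
four_k_pixels = 3840 * 2160        # pixels in a single 4K monitor
effective_megapixels = 70          # Varjo's claimed effective resolution
conventional_megapixels = 2        # Oculus/Vive/HoloLens class, per Konttori

print(four_k_pixels / 1e6)         # 8.2944 -- driven pixels stay below this
print(effective_megapixels / conventional_megapixels)  # 35.0 -- the "35x" claim
```

That gap between pixels driven and resolution perceived is the whole payoff of gaze tracking.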

The current prototypes use two video connectors and two USB connectors. Konttori says that this will drop to one video connector and one USB connector shortly, so that the device can be driven by smaller professional-grade computers, such as a gaming laptop, though he expects most will be connected to workstations.

Konttori will be back in the U.S. later this year. I’m looking forward to getting my hands (and eyes) on a Varjo prototype. Will report back when I’ve actually seen it.

Twenty years ago, my friend Philippe Kahn introduced the first camera-phone. You may know Philippe as the founder of Borland, and as an entrepreneur who has started many companies, and who has accomplished many things. He’s also a sailor, jazz musician, and, well, a fun guy to hang out with.

About camera phones: At first, I was a skeptic. Twenty years ago I was still shooting film, and then made the transition to digital SLR platforms. Today, I shoot with big Canon DSLRs for birding and general stuff, Leica digital rangefinders when I want to be artistic, and with pocket-sized digital cameras when I travel. Yet most of my pictures, especially those posted to social media, come from the built-in camera in my smartphone.

Philippe has blogged about this special anniversary – which also marks the birth of his daughter Sophie. To excerpt from his post, The Creation of the Camera-Phone and Instant-Picture-Mail:

Twenty years ago on June 11th 1997, I shared instantly the first camera-phone photo of the birth of my daughter Sophie. Today she is a university student and over 2 trillion photos will be instantly shared this year alone. Every smartphone is a camera-phone. Here is how it all happened in 1997, when the web was only 4 years old and cellular phones were analog with ultra limited wireless bandwidth.

First step 1996/1997: Building the server service infrastructure: For a whole year before June 1997 I had been working on a web/notification system that was capable of uploading a picture and text annotations securely and reliably and sending link-backs through email notifications to a stored list on a server and allowing list members to comment.

Remember it was 1996/97, the web was very young and nothing like this existed. The server architecture that I had designed and deployed is in general the blueprint for all social media today: Store once, broadcast notifications and let people link back on demand and comment. That’s how Instagram, Twitter, Facebook, LinkedIn and many others function. In 1997 this architecture was key to scalability because bandwidth was limited and it was prohibitive, for example, to send the same picture to 500 friends. Today the same architecture is essential because while there is bandwidth, we are working with millions of views and potential viral phenomena. Therefore the same smart “frugal architecture” makes sense. I called this “Instant-Picture-Mail” at the time.
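The “store once, broadcast notifications, link back” pattern Kahn describes can be sketched in a few lines of Python. This is my own toy model with hypothetical names, not his 1997 code:

```python
class InstantPictureMail:
    """Toy model of the store-once / notify / link-back pattern.
    All names are hypothetical; no real networking or storage."""

    def __init__(self):
        self.store = {}      # photo_id -> (image_bytes, annotation)
        self.comments = {}   # photo_id -> list of comment strings
        self.next_id = 0

    def upload(self, image_bytes, annotation, notify_list):
        photo_id = self.next_id
        self.next_id += 1
        self.store[photo_id] = (image_bytes, annotation)  # stored exactly once
        self.comments[photo_id] = []
        # Each subscriber gets only a tiny notification with a link back,
        # not a copy of the picture -- the bandwidth-frugal part.
        notifications = [(addr, f"/photos/{photo_id}") for addr in notify_list]
        return photo_id, notifications

    def comment(self, photo_id, text):
        self.comments[photo_id].append(text)

server = InstantPictureMail()
pid, sent = server.upload(b"...jpeg...", "Sophie, June 11 1997",
                          ["friend@example.com", "family@example.com"])
print(len(sent))  # 2 notifications went out, but the image is stored once
```

Swap “email notification” for “push notification” and this is still roughly the shape of every photo-sharing feed today.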

He adds:

What about other claims of inventions: Many companies put photo-sensors in phones or wireless modules in cameras, including Kodak, Polaroid, Motorola. None of them understood that the success of the camera-phone is all about instantly sharing pictures with the cloud-based Instant-Picture-Mail software/server/service-infrastructure. In fact, it’s even amusing to think that none of these projects was interesting enough that anyone has kept shared pictures. You’d think that if you’d created something new and exciting like the camera-phone you’d share a picture or two or at least keep some!

Read more about the fascinating story here — he goes into a lot of technical detail. Thank you, Philippe, for your amazing invention!

Many IT professionals were caught by surprise by last week’s huge cyberattack. Why? They didn’t expect ransomware to spread across their networks on its own.

The reports came swiftly on Friday morning, May 12. The first I saw were that dozens of hospitals in England were affected by ransomware, denying physicians access to patient medical records and causing surgery and other treatments to be delayed.

The infections spread quickly, reportedly hitting as many as 100 countries, with Russian systems affected apparently more than others. What was going on? The details came out quickly: This was a relatively unknown ransomware variant, dubbed WannaCry or WCry. WannaCry had been “discovered” by hackers who stole information from the U.S. National Security Agency (NSA); affected machines were Windows desktops, notebooks and servers that were not up to date on security patches.

Most alarming, WannaCry did not spread across networks in the usual way, through people clicking on email attachments. Rather, once one Windows system was affected on a Windows network, WannaCry managed to propagate itself and infect other unpatched machines without any human interaction. The industry term for this type of super-vigorous ransomware: Ransomworm.
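The difference between ordinary ransomware and a ransomworm is easy to see in a toy model: once one machine is compromised, every reachable unpatched neighbor falls, with nobody clicking anything. Here’s a harmless simulation sketch with hypothetical hosts; it models only the propagation pattern, nothing about the actual exploit:

```python
from collections import deque

def simulate_spread(network, patched, first_infected):
    """Toy breadth-first model of worm-style propagation.

    `network` maps each host to the hosts it can reach. Unpatched
    neighbors of an infected machine become infected with no user
    action -- the property that distinguishes a ransomworm from
    attachment-borne ransomware. Purely illustrative.
    """
    infected = {first_infected}
    queue = deque([first_infected])
    while queue:
        host = queue.popleft()
        for neighbor in network[host]:
            if neighbor not in infected and neighbor not in patched:
                infected.add(neighbor)
                queue.append(neighbor)
    return infected

network = {
    "pc1": ["pc2", "pc3"],
    "pc2": ["pc1", "server"],
    "pc3": ["pc1", "backup"],
    "server": ["pc2"],
    "backup": ["pc3"],
}
# One bad click on pc1, and everything unpatched -- including the
# network-reachable backup -- is encrypted. Only the patched server survives.
print(sorted(simulate_spread(network, patched={"server"}, first_infected="pc1")))
```

Note what the model implies: patching shrinks the blast radius, and backups reachable from the flat network are just more victims.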

I turned to one of the experts on malware that can spread across Windows networks, Roi Abutbul. A former cybersecurity researcher with the Israeli Air Force’s famous OFEK Unit, he is founder and CEO of Javelin Networks, a security company that uses artificial intelligence to fight against malware.

Abutbul told me, “The WannaCry/Wcry ransomware—the largest ransomware infection in history—is a next-gen ransomware. Opposed to the regular ransomware that encrypts just the local machine it lands on, this type spreads throughout the organization’s network from within, without having users open an email or malicious attachment. This is why they call it ransomworm.”

He continued, “This ransomworm moves laterally inside the network and encrypts every PC and server, including the organization’s backup.” Read more about this, and my suggestions for coping with the situation, in my story for Network World, “Self-propagating ransomware: What the WannaCry ransomworm means for you.”

To those who run or serve on corporate, local government or non-profit boards:

Your board members are at risk, and this places your organizations at risk. Your board members could be targeted by spearphishing (that is, directed personalized attacks) or other hacking because

  • They are often not technologically sophisticated
  • They have access to valuable information
  • If they are breached, you may not know
  • Their email accounts and devices are not locked down using the enterprise-grade cybersecurity technology used to protect employees

In other words, they have a lot of the same information and access as executive employees, but don’t share in their protections. Even if you give them a corporate email address, their laptops, desktops, phones, and tablets are not covered by your IT cybersecurity systems.

Here’s an overview article I read today. It’s a bit vague but it does raise the alarm (and prompted this post). For the sake of the organization, it might be worth spending a little time at a board meeting on this topic, to raise the issue. But that’s not enough.

What can you do, beyond raising the issue?

  • Provide offline resources and training to board members about how to protect themselves from spearphishing
  • Teach them to use unique strong passwords on all their devices
  • Encourage them to use anti-malware solutions on their devices
  • Provide resources for them to call if they suspect they’ve been hacked

Perhaps your IT provider can prepare a presentation, and make themselves available to assist. Consider this issue in the same light as board liability insurance: Protecting your board members is good for the organization.

“Alexa! Unlock the front door!” No, that won’t work, even if you have an intelligent lock designed to work with the Amazon Echo. That’s because Amazon is smart enough to know that someone could shout those five words into an open window, and gain entry to your house.

Presumably Amazon doesn’t allow voice control of “Alexa! Turn off the security system!” but that’s purely conjecture. It’s not something I’ve tried. And certainly it’s possible to use programming or a clever work-around to enable voice-activated door unlocking or force-field deactivation. That’s why, while our home contains a fair amount of cutting-edge AI-based automation, perimeter security is not hooked up to any of it. We’ll rely upon old-fashioned locks and keys and alarm keypads, thank you very much.

And sorry, no voice-enabled safes for me either. It didn’t work so well to protect the CIA against Jason Bourne, did it?

Unlike the fictional CIA safe and the equally fictional computer on the Starship Enterprise, Echo, Google Home, Siri, Android, and their friends can’t identify specific voices with any degree of accuracy. In most cases, they can’t do so at all. So, don’t look to be able to train Alexa to set up access control lists (ACLs) based on voiceprints. That’ll have to wait for the 23rd century, or at least for another couple of years.

The inability of today’s AI-based assistants to discriminate allows for some foolishness – and some shenanigans. We have an Echo in our family room, and every so often, while watching a movie, Alexa will suddenly proclaim, “Sorry, I didn’t understand that command,” or some such. What set the system off? No idea. But it’s amusing.

Less amusing was Burger King’s advertising prank which intentionally tried to get Google Home to help sell more hamburgers. As Fast Company explains:

A new Whopper ad from Burger King turns Google’s voice-activated speaker into an unwitting shill. In the 15-second spot, a store employee utters the words “OK Google, what is the Whopper burger?” This should wake up any Google Home speakers present, and trigger a partial readout of the Whopper’s Wikipedia page. (Android phones also support “OK Google” commands, but use voice training to block out unauthorized speakers.)

Fortunately, Google was as annoyed as everyone else, and took swift action, said the story:

Update: Google has stopped the commercial from working – presumably by blacklisting the specific audio clip from the ad – though Google Home users can still inquire about the Whopper in their own words.

Burger King wasn’t the first to try this stunt. Other similar tricks have succeeded against Home and Echo, and sometimes, the devices are activated accidentally by TV shows and news reports. Look forward to more of this.

It reminds me of the very first time I saw a prototype Echo. What did I say? “Alexa, Format See Colon.” Darn. It didn’t erase anything. But at least it’s better than a cat running around on your laptop keyboard, erasing your term paper. Or a TV show unlocking your doors. Right?

No, no, no, no, no!

The email client updates in the 10.12.4 update to macOS Sierra are everything that’s wrong with operating systems today. And so is the planned inclusion of an innovative, fun-sounding 3D painter as part of next week’s Windows 10 Creators Update.

Repeat after me: Applications do not belong in operating systems. Diagnostics, yes. Shared libraries, yes. Essential device drivers, yes. Hardware abstraction layers, yes. File systems, yes. Program loaders and tools, yes. A network stack, yes. A graphical user interface, yes. A scripting/job control language, yes. A basic web browser, yes.

Applications? No, no, no!

Why not?

Applications bloat up the operating system release. What if you don’t need a 3D paint program? What if you don’t want to use the built-in mail client? The binaries are there anyway taking up storage. Whenever the operating system is updated, the binaries are updated, eating up bandwidth and CPU time.

If you do want those applications, bug fixes are tied to OS updates. The Sierra 10.12.4 update fixes a bug in Mail. Why must that be tied to an OS update? The update supports more digital camera RAW formats. Why are they tied to the operating system, and not released as they become available? The 10.12.4 update also fixes a Siri issue regarding cricket scores in the IPL. Why, for heaven’s sake, is that functionality tied to an operating system update?? That’s simply insane.

An operating system is easier for the developer to test and verify if it’s smaller. The more things in your OS update release train, the more things can go wrong, whether it’s in the installation process or in the code itself. A smaller OS means less regression testing and fewer bugs.

An operating system is easier for the client to test and verify if it’s smaller. Take corporate clients: if they are evaluating macOS Sierra 10.12.4 or Windows 10 Creators Update prior to roll-out, less stuff there makes the validation process easier.

Performance and memory utilization are better if it’s smaller. The microkernel concept says that the OS should be as small as possible – if something doesn’t have to be in the OS, leave it out. Well, that’s not the case any more, at least in terms of the software release trains.

This isn’t new

No, Alan isn’t off his rocker, at least not more than usual. Operating system releases, especially those for consumers, have been bloated up with applications and junk for decades. I know that. Nothing will change.

Yes, it would be better if productivity applications and games were distributed and installed separately. Maybe as free downloads, as optional components on the release CD/DVD, or even as a separate SKU. Remember Microsoft Plus and Windows Ultimate Extras? Yeah, those were mainly games and garbage. Never mind.

Still, seeing the macOS Sierra Update release notes today inspired this missive. I hope you enjoyed it. </rant>

Prepare to wait. And wait. Many Windows 10 users are getting ready for the Creators Update, due April 11. We know lots of things about it: there will be new tools for 3D design, support for 4K-resolution gaming, improvements to the Edge browser, and claimed improvements to security and privacy protections.

We also know that it will take forever to install. Not literally forever. Still, a long time.

This came to mind when my friend Steven J. Vaughan-Nichols shared this amusing image:

Who could be surprised, when the installation estimation times for software are always ludicrously inaccurate? That’s especially true with Windows, which routinely requires multiple waves of download – update – reboot – download – update – reboot – download – update – reboot – rinse and repeat. Even more so if you haven’t updated for a while. It goes on and on and on.

This came to the fore about three weeks ago, when I decided to wipe a Windows 10 laptop in preparation for donating it to a nonprofit. It’s a beautiful machine — a Dell Inspiron 17 — which we purchased for a specific client project. The machine was not needed afterwards, and well, it was time to move it along. (My personal Windows 10 machine is a Microsoft Surface Pro.)

The first task was to restore the laptop to its factory installation. This was accomplished using the disk image stored on a hidden partition, which was pretty easy; Dell has good tools. It didn’t take long for Windows 10 to boot up, nice and pristine.

That’s when the fun began: Installing Windows updates. Download – update – reboot – download – update – rinse – repeat. For two days. TWO DAYS. And that’s for a bare machine without any applications or other software.

Thus, my belief in two things: First, Windows saying 256% done is entirely plausible. Second, it’s going to take forever to install Windows 10 Creators Update on my Surface Pro.

Good luck, and let me know how it goes for you.

It’s official: Internet service providers in the United States can continue to sell information about their customers’ Internet usage to marketers — and to anyone else who wants to use it. In 2016, during the Obama administration, the Federal Communications Commission (FCC) tried to require ISPs to get customer permission before using or sharing information about their web browsing. According to the FCC, the rule change, entitled, “Protecting the Privacy of Customers of Broadband and Other Telecommunications Services,” meant:

The rules implement the privacy requirements of Section 222 of the Communications Act for broadband ISPs, giving broadband customers the tools they need to make informed decisions about how their information is used and shared by their ISPs. To provide consumers more control over the use of their personal information, the rules establish a framework of customer consent required for ISPs to use and share their customers’ personal information that is calibrated to the sensitivity of the information. This approach is consistent with other privacy frameworks, including the Federal Trade Commission’s and the Administration’s Consumer Privacy Bill of Rights.

More specifically, the rules required that customers had to positively agree to have their information used in that fashion. Previously, customers had to opt-out. Again, according to the FCC,

Opt-in: ISPs are required to obtain affirmative “opt-in” consent from consumers to use and share sensitive information. The rules specify categories of information that are considered sensitive, which include precise geo-location, financial information, health information, children’s information, social security numbers, web browsing history, app usage history and the content of communications.

Opt-out: ISPs would be allowed to use and share non-sensitive information unless a customer “opts-out.” All other individually identifiable customer information – for example, email address or service tier information – would be considered non-sensitive and the use and sharing of that information would be subject to opt-out consent, consistent with consumer expectations.

Consumer Privacy Never Happened

That rule change, however, ended up mired in legal challenges and never took effect. In March 2017, Congress voted to reverse it. The resolution, passed by both the House and Senate, was simple:

Resolved by the Senate and House of Representatives of the United States of America in Congress assembled, That Congress disapproves the rule submitted by the Federal Communications Commission relating to “Protecting the Privacy of Customers of Broadband and Other Telecommunications Services,” and such rule shall have no force or effect.

What’s the net effect? In some ways, not much, despite all the hyperbole. The rule only applied to broadband providers. It didn’t apply to others who could tell what consumers were doing on the Internet, such as social media (think Facebook) or search engines (think Google) or e-commerce (think Amazon) or streaming media (think Netflix). Those other organizations could use or market their knowledge about consumers, bound only by the terms of their own privacy policy. Similarly, advertising networks and others who tracked browser activity via cookies could also use the information however they wanted.

What’s different about the FCC rule on broadband carriers, however, is that ISPs can see just about everything that a customer does. Every website visited, every DNS address lookup, and every Internet query sent via other applications like email or messaging apps. Even if that traffic is end-to-end encrypted, the broadband carrier knows where the traffic is going or coming from – because, after all, it is delivering the packets. That makes the carriers’ metadata information about customer traffic unique, and invaluable, to marketers, government agencies, and to others who might wish to leverage it.
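A toy example makes the point: even if an ISP can’t read a single byte of the traffic, the connection metadata alone is revealing. The log entries below are invented, but they’re the kind of record a carrier could keep while delivering fully encrypted sessions:

```python
from collections import Counter

# Hypothetical per-subscriber connection log. Every entry is metadata
# the carrier sees simply by delivering the packets -- no decryption.
log = [
    ("2017-04-03 08:01", "dns",   "webmd.com"),
    ("2017-04-03 08:01", "https", "webmd.com"),
    ("2017-04-03 08:05", "dns",   "divorce-lawyers.example"),
    ("2017-04-03 08:05", "https", "divorce-lawyers.example"),
    ("2017-04-03 21:00", "https", "netflix.com"),
]

# Tally destinations to build a simple interest profile.
profile = Counter(host for _, _, host in log)
print(profile.most_common(2))
# Without reading any content, the destinations alone sketch a
# marketable (and sensitive) picture of the household.
```

That’s why carrier metadata is so valuable: it’s complete, it’s tied to a billing identity, and encryption doesn’t hide it.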

Customers Can Shield — To Some Extent

Customers can attempt to shield their privacy. For example, many use end-to-end VPN services to route their Internet traffic to a single relay point, and then use that relay to anonymously surf the web. However, a privacy VPN is technically difficult for many consumers to set up. Plus, the service costs money. Also, for true privacy fanatics, that VPN service could also be a source of danger, since it could be compromised by an intelligence agency, or used for a man-in-the-middle attack.

So in the United States, the demise of the FCC ruling is bad news. Customers’ Internet usage data — including websites visited, phrases searched for, products purchased and movies watched — remains available for marketers and others who want to study and exploit it. However, in reality, such was always the case.

“Call with Alan.” That’s what the calendar event says, with a bridge line as the meeting location. That’s it. For the individual who sent me that invitation, that’s a meaningful description, I guess. For me… worthless! This meeting was apparently sent out (and I agreed to attend) at least three weeks ago. I have no recollection about what this meeting is about. Well, it’ll be an adventure! (Also: If I had to cancel or reschedule, I wouldn’t even know who to contact.)

When I send out calendar invites, I try hard to make the event name descriptive to everyone, not just me. Like “ClientCorp and Camden call re keynote topics” or “Suzie Q and Alan Z — XYZ donations.” Something! Give a hint, at least! After all, people who receive invitations can’t edit the names to make them more meaningful.

And then there’s time-zone ambiguity. Some calendar programs (like Google Calendar) do a good job of tracking the event’s time zone, and mapping it to mine. Others, and I’m thinking of Outlook 365, do a terrible job there, and make it difficult to specify the event in a different time zone.

For example, I’m in Phoenix, and often set up calls with clients on the East Coast or in the U.K. As a courtesy, I like to set up meetings using the client’s time zone. Easy when I use Google Calendar to set up the event. Not easy in Outlook 365, which I must use for some projects.

Similarly, some calendar programs do a good job mapping the event to each recipient’s time zone. Others don’t. The standards are crappy, and the implementations of the standards are worse.
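When the software does handle zones correctly, the mapping itself is straightforward. Here’s the Phoenix example done explicitly in Python, using the standard-library zoneinfo module (which postdates this post; purely illustrative, with a made-up date):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# A call set up for 3pm in the client's zone (U.S. East Coast, on DST)...
event = datetime(2017, 4, 10, 15, 0, tzinfo=ZoneInfo("America/New_York"))

# ...should land on my calendar in Phoenix time. Arizona doesn't
# observe daylight saving, so the offset is three hours in April.
mine = event.astimezone(ZoneInfo("America/Phoenix"))
print(mine.strftime("%Y-%m-%d %H:%M %Z"))  # 2017-04-10 12:00 MST
```

The conversion is one function call; the mess comes from calendar programs that store the wall-clock time without the zone, or apply the recipient’s zone to the wrong field.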

There’s more than the bad time-zone mappings. Each Web-based, mobile, and desktop calendar app, even those that claim to conform to standards, has its own quirks, proprietary features, and incompatibilities. For example, repeating events aren’t handled consistently from calendar program to calendar program. It’s a real mess.

Here are a few simple do’s and don’ts for event creators. Or rather, don’ts and do’s.

  • DON’T just put the name of the person you are meeting with in the event name.
  • DO put your name and organization too, and include your contact information (phone, email, whatever) in the calendar invite itself. Having just a conference bridge or location of the coffee shop won’t do someone any good if they need to reach you before the meeting.
  • DON’T assume that everyone will remember what the meeting is about.
  • DO put the purpose of the meeting into the event title.
  • DON’T think that everyone’s calendar software works like yours or has the same features, vis-à-vis time zones, attachments, comments, and so-on.
  • DO consider putting the meeting time and time zone into the event name. It’s something I don’t do, but I have friends who do, like “ClientCorp and Camden call re keynote topics — 3pm Pacific.” Hmm, maybe I should do that?
  • DON’T expect that if you change the event time on your end, that change will percolate to all recipients. Again, this can be software-specific.
  • DO cancel the event if it’s necessary to reschedule, and set up a new one. Also send an email to all participants explaining what happened. I dislike getting calendar emails saying the meeting date/time has been changed — with no explanation.
  • DON’T assume that people will be able to process your software’s calendar invitations. Different calendar programs don’t play well with each other.
  • DO send a separate email with all the details, including the event name, start time, time zone, and list of participants, in addition to the calendar invite. Include the meeting location, or conference-call dial-in codes, in that email.
  • DON’T trust that everyone will use the “accept” button to indicate that they are attending. Most will not.
  • DO follow up with people who don’t “accept” to ask if they are coming.
  • DON’T assume that just because it’s on their calendar, people will remember to show up. I had one guy miss an early-morning call he “accepted” because it was early and he hadn’t checked his calendar yet. D’oh!
  • DO send a meeting confirmation email, one day before, if the event was scheduled more than a week in advance.

Have more do’s and don’ts? Please add them using the comments.

The word went out Wednesday, March 22, spreading from techie to techie. “Better change your iCloud password, and change it fast.” What’s going on? According to ZDNet, “Hackers are demanding Apple pay a ransom in bitcoin or they’ll blow the lid off millions of iCloud account credentials.”

A hacker group claims to have access to 250 million iCloud and other Apple accounts. They are threatening to reset all the passwords on those accounts – and then remotely wipe those phones using lost-phone capabilities — unless Apple pays up with untraceable bitcoins or Apple gift cards. The ransom is a laughably small $75,000.

What’s Happening at Apple?

According to various sources, at least some of the stolen account credentials appear to be legitimate. Whether that means all 250 million accounts are in peril, of course, is unknowable.

Apple seems to have acknowledged that there is a genuine problem. The company told CNET, “The alleged list of email addresses and passwords appears to have been obtained from previously compromised third-party services.” We obviously don’t know what Apple is going to do, or what Apple can do. It hasn’t put out a general call, at least as of Thursday, for users to change their passwords, which would seem to be prudent. It also hasn’t encouraged users to enable two-factor authentication, which should make it much more difficult for hackers to reset iCloud passwords without physical access to a user’s iPhone, iPad, or Mac.

Unless the hackers alter the demands, Apple has a two-week window to respond. From its end, it could temporarily disable password reset capabilities for iCloud accounts, or at least make the process difficult to automate, access programmatically, or even access more than once from a given IP address. So, it’s not “game over” for iCloud users and iPhone owners by any means.

It could be that the hackers are asking for such a low ransom because they know their attack is unlikely to succeed. They’re possibly hoping that Apple will figure it’s easier to pay a small amount than to take any real action. My guess is they are wrong, and Apple will lock them out before the April 7 deadline.

Where Did This Come From?

Too many criminal networks have access to too much data. Where are they getting it? Everywhere. The problem multiplies because people reuse usernames and passwords. For nearly every site nowadays, the username is the email address. That means if you know my email address (and it’s not hard to find), you know my username for Facebook, for iCloud, for Dropbox, for Salesforce.com, for Windows Live, for Yelp. Using the email address for the login is superficially good for consumers: They are unlikely to forget their login.

The bad news is that account access now depends on a single piece of hidden information: the password. And people reuse passwords and choose weak passwords. So if someone steals a database from a major retailer with a million account usernames (which are email addresses) and passwords, many of those will also be Facebook logins. And Twitter. And iCloud.

That’s how hackers can quietly accumulate what they claim are 250 million iCloud passwords. They probably have 250 million email address / password pairs amalgamated from various sources: A million from this retailer, ten million from that social network. It adds up. How many of those will work in iTunes? Unknown. Not 250 million. But maybe 10 million? Or 20 million? Either way, it’s a nightmare for customers and a disaster for Apple, if those accounts are locked, or if phones are bricked.
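The mechanics of that amalgamation are simple enough to sketch in a few lines of Python. Everything below is invented data; the point is only to show why password reuse is the multiplier. Merge breach dumps keyed by email address, then count how many of the merged pairs also happen to unlock an unrelated service.

```python
# Sketch of how leaked credential sets accumulate (all data here is invented).
# Each breach yields (email, password) pairs; attackers merge them and then
# try the pairs against a service that was never itself breached.

retailer_leak = {
    "alice@example.com": "hunter2",
    "bob@example.com": "letmein",
}
social_leak = {
    "bob@example.com": "letmein",      # Bob reused his password
    "carol@example.com": "qwerty123",
}

# Merging: later leaks overwrite earlier entries for the same address.
combined = {**retailer_leak, **social_leak}

# Suppose these are the credentials actually valid on some other service.
other_service = {
    "bob@example.com": "letmein",
    "carol@example.com": "s3cret!",    # Carol did NOT reuse hers
}

# "Credential stuffing": count how many merged pairs unlock the service.
hits = sum(
    1 for email, pw in combined.items()
    if other_service.get(email) == pw
)
print(hits)  # Bob's reused password works; Alice's and Carol's don't
```

Scale the toy numbers up to hundreds of millions of pairs and even a few percent of reuse yields millions of working logins.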

What’s the Answer?

As long as we use passwords, and users have the ability to reuse passwords, this problem will exist. Hackers are excellent at stealing data. Companies are bad at detecting breaches, and even worse about disclosing them unless legally obligated to do so.

Can Apple prevent those 250 million accounts from being seized? Probably. Will problems like this happen again and again and again? For sure, until we move away from any possibility of shared credentials. And that’s not happening any time soon.

The U.S. and U.K. are banning larger electronic items, like tablets, notebooks and DSLRs, from being carried onboard flights from a small number of countries. If that ban spreads to include more international or even domestic flights, this will result in several nasty consequences:

1. Business travelers may be unable to bring computers on trips at all. Some airlines ban checking luggage containing lithium-ion batteries into the cargo hold. Nearly all of these devices use lithium-ion batteries. If you can’t carry them onboard, and you can’t check them, they must stay home, or be overnighted to the destination. Shipping those devices may work for some people, but it’s a sucky solution.

2. Even if you can check them, there may be a surge of thefts of these costly electronic goodies from checked baggage. I always carry my expensive pro-grade DSLR and lenses onboard, and never check them. Why? I’m worried about theft and about breakage — that stuff is fragile. If I had to check my camera gear, it’d stay home. Same with my notebook and tablets. There is too much opportunity for stuff to disappear, especially when anyone can easily obtain a universal key for those silly TSA locks. Yes, a family member lost a DSLR from checked luggage.

3. This messes up the plans of airlines that are moving to a BYOD-centric entertainment model. Forget the drop-down TV screens playing one movie. Forget the individual seat-back TV screens offering a choice of movies, TV shows and video games. Airlines are saving money, saving weight, and making customers happy by ditching the electronics and using onboard WiFi to stream entertainment to passengers’ phones, tablets, or laptops. (And they get to charge for air-to-ground WiFi.) According to the Economist, 90% of passengers bring a suitable device. Everyone wins, unless devices are banned. No tablets? No laptops? No onboard entertainment.

The answer to terrorist threats isn’t security theater. Address the risks in an intelligent way, yes. Institute stupid rules that affect all travelers, no. One guy tries to light his shoe on fire, and now you have to take off your shoes to go through airport screening. And now there’s a “threat” and so here’s a new limitation on people making international flights.

That’s how the terrorists win and win and win.

Today’s calculation device is this lovely vintage HP-28S “advanced scientific” calculator from the late 1980s.

As a working calculator, it’s not my favorite. HP gets points for creativity, but the clamshell design makes for an awkward user experience. I’m finding it frustrating to use because each line on the display is hard to read, there are too many keys, and the visual cues are subtle. It is also hard to pry the clamshell open.

The keys do have a nice clickiness to them. If you are doing basic math, you can fold the alphanumeric left part of the clamshell behind the right part.

Functionally, the HP-28 series is also innovative, as it’s where HP first exposed RPL to the user. RPL is Reverse Polish Lisp, a next-generation RPN, or Reverse Polish Notation, designed to handle complex algebraic expressions.
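RPL itself is proprietary to HP, but the stack discipline underneath it is easy to illustrate. Here's a minimal postfix (RPN) evaluator in Python, a sketch of the core idea rather than anything resembling real RPL: operands push onto a stack, and each operator pops its two arguments and pushes the result.

```python
# Minimal RPN (postfix) evaluator -- a sketch of the stack discipline that
# HP calculators are built on. Not actual RPL, which adds much more.

def eval_rpn(tokens):
    ops = {
        "+": lambda a, b: a + b,
        "-": lambda a, b: a - b,
        "*": lambda a, b: a * b,
        "/": lambda a, b: a / b,
    }
    stack = []
    for tok in tokens:
        if tok in ops:
            b = stack.pop()   # top of stack is the second operand
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    return stack.pop()

# (3 + 4) * 2 in algebraic notation becomes "3 4 + 2 *" in RPN:
print(eval_rpn("3 4 + 2 *".split()))  # 14.0
```

Because the stack makes evaluation order explicit, RPN needs no parentheses at all, which is exactly why engineers loved these machines.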

Were I doing that sort of equation-solving or scientific work this afternoon, the HP-28S would be ideal. Today’s project, though, is simple arithmetic related to tracking video editing timings. (Last time I did this, I used an HP-32S II, which has a simpler interface and much larger numbers on the one-line display.)
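For the curious, the arithmetic in question is nothing fancier than summing clip durations in minutes and seconds. A quick Python sketch, with invented clip lengths:

```python
# Summing video clip durations given as "minutes:seconds" strings
# (the clip lengths below are made up for illustration).

def to_seconds(mmss):
    minutes, seconds = mmss.split(":")
    return int(minutes) * 60 + int(seconds)

def to_mmss(total):
    return f"{total // 60}:{total % 60:02d}"

clips = ["2:45", "0:58", "4:07"]   # hypothetical clip lengths
total = sum(to_seconds(c) for c in clips)
print(to_mmss(total))  # 7:50
```

A calculator does the same thing, just with the base-60 carrying done in your head.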

While I don’t use it often, the HP-28S is a prized member of my extensive collection of vintage calculators. My goal is to keep using all the devices (well, at least, the ones that still function) because it’s more fun than simply looking at them.