
Varjo offers a new type of high-def VR/AR display tuned to the user’s eye motions

The folks at Varjo think they’ve made a breakthrough in how goggles for virtual reality and augmented reality work. They are onto something.

Most VR/AR goggles have two displays, one for each eye, and they strive to drive those displays at the highest resolution possible. Their hardware and software take into account that as the goggles move, the viewpoint has to move seamlessly, without delay. If there’s delay, the “willing suspension of disbelief” required to make VR work fails, and in some cases the user experiences nausea and disorientation. Not good.

The challenge comes from making the display high-resolution enough that objects look photorealistic. That lets users manipulate virtual machine controls, operate flight simulators, read virtual text, and so on. Most AR/VR systems try to make the display uniformly high resolution, so that no matter where the user looks, the resolution is there.

Varjo, based in Finland, has a different approach. They take advantage of the fact that the human eye sees in high resolution only in the small spot the fovea is pointed at, where the cones are densely packed – and in much lower resolution everywhere else. So while the whole display is capable of high resolution, Varjo uses eye trackers to do “gaze tracking,” detecting exactly what the user is looking at, and makes that area super high resolution. When the gaze moves to another spot, that area is almost instantly bumped up to super high resolution, while the original area is downgraded to a reduced resolution.
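The general technique is usually called foveated rendering, and the core idea is simple to sketch even if the engineering is not: tiles near the gaze point get full resolution, everything else gets less. The snippet below is purely illustrative and assumes nothing about Varjo’s actual implementation.

```python
# Hypothetical sketch of foveated rendering: choose a render scale per screen
# tile based on how far the tile sits from the tracked gaze point.
# Illustrative only -- not Varjo's implementation.
from math import hypot

def tile_render_scale(tile_center, gaze_point, fovea_radius=0.1):
    """Return a resolution scale for a tile; coordinates are normalized [0, 1]."""
    distance = hypot(tile_center[0] - gaze_point[0],
                     tile_center[1] - gaze_point[1])
    if distance <= fovea_radius:
        return 1.0    # full resolution where the fovea is pointed
    if distance <= 3 * fovea_radius:
        return 0.5    # medium resolution in the near periphery
    return 0.25       # low resolution in the far periphery

# Example: the user looks slightly left of center; the far corner gets 1/4 scale.
gaze = (0.4, 0.5)
for tile in [(0.4, 0.5), (0.6, 0.5), (0.95, 0.95)]:
    print(tile, tile_render_scale(tile, gaze))
```

When the gaze tracker reports a new fixation point, the same function simply reassigns the scales, which is the “almost instantly bumped up” behavior described above.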

Sound simple? It’s not, and that’s why the initial Varjo technology will be targeted at professional users, such as doctors, computer-aided designers, and remote instrument operators. Prototypes of the goggles will be available this year to software developers, and the first products should ship to customers at the end of 2018. The price of the goggles is said to be “thousands, not tens of thousands” of dollars, according to Urho Konttori, the company’s founder. We talked by phone; he was in the U.S. doing demos in San Francisco and New York, but unfortunately, I wasn’t able to attend one of them.

Now, Varjo isn’t the first to use gaze tracking to try to optimize the image. According to Konttori, other vendors use medium resolution where the eye is pointing, and low resolution elsewhere, just enough to establish context. By contrast, he says that Varjo uses super high resolution where the user looks, and high resolution elsewhere. Because each eye’s motion is tracked separately, the system can also tell when the user is looking at objects close to the user (because the eyes are at a more converged angle) or farther away (the eyes are at a more nearly parallel angle).
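That depth cue from convergence is basic trigonometry: the two gaze rays and the line between the pupils form a triangle. Here’s a rough, hypothetical illustration; real eye trackers are certainly more sophisticated than this.

```python
# Rough trigonometric illustration of estimating fixation distance from eye
# vergence. Textbook geometry, not any vendor's actual algorithm.
import math

def fixation_distance_mm(ipd_mm, vergence_angle_deg):
    """Distance to the point both eyes converge on.

    ipd_mm: interpupillary distance, typically around 63 mm.
    vergence_angle_deg: total angle between the two eyes' gaze rays; a tiny
    angle (nearly parallel rays) means the user is looking far away.
    """
    half_angle = math.radians(vergence_angle_deg) / 2.0
    return (ipd_mm / 2.0) / math.tan(half_angle)

print(round(fixation_distance_mm(63, 7.2)))   # ~500 mm: something close by
print(round(fixation_distance_mm(63, 0.9)))   # ~4,000 mm: across the room
```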

“In our prototype, wherever you are looking, that’s the center of the high resolution display,” he said. “The whole image looks to be in focus, no matter where you look. Even in our prototype, we can move the display projection ten times faster than the human eye.”

Konttori says that the effective resolution of the product, called 20/20, is 70 megapixels, updated in real time based on head motion and gaze tracking. That compares to fewer than 2 megapixels for Oculus, Vive, HoloLens and Magic Leap. (This graphic from Varjo compared their display to an unnamed competitor.) What’s more, he said the CPU/GPU power needed to drive this display isn’t huge. “The total pixel count is less than in a single 4K monitor. You need roughly 2x the GPU compared to a conventional VR set for the same scene.”
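The arithmetic behind that last claim is worth spelling out. “Effective resolution” is what the eye perceives across the whole field of view; the number of pixels actually rendered each frame stays modest because only the foveal region is dense. A back-of-the-envelope check, using my own assumed panel figures rather than anything supplied by Varjo:

```python
# Back-of-the-envelope pixel math. The panel figures are my assumptions for
# illustration, not numbers supplied by Varjo or Konttori.
four_k_monitor = 3840 * 2160
print(four_k_monitor)        # 8,294,400 pixels, about 8.3 megapixels

first_gen_vr_per_eye = 1080 * 1200
print(first_gen_vr_per_eye)  # 1,296,000 pixels, under 2 megapixels per eye

# Konttori's point: render fewer total pixels than one 4K monitor, put the
# dense ones exactly where the fovea is looking, and the scene can feel like
# a 70-megapixel display for roughly 2x the GPU load of conventional VR.
```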

The current prototypes use two video connectors and two USB connectors. Konttori says that this will drop to one video connector and one USB connector shortly, so that the device can be driven by smaller professional-grade computers, such as a gaming laptop, though he expects most will be connected to workstations.

Konttori will be back in the U.S. later this year. I’m looking forward to getting my hands (and eyes) on a Varjo prototype. Will report back when I’ve actually seen it.


The good and bad of press relations – a view from four editors

What do PR people do right? What do they do wrong? Khali Henderson, a senior partner in BuzzTheory Strategies, recently interviewed me (and a few other technology editors) about “Things Editors Hate (and Like) About Your Press Relations.”

She started the story with,

I asked these veteran editors what they think about interfacing with business executives and/or their PR representatives in various ways – from press releases to pitches to interviews.

The results are a set of guidelines on what to do and, more importantly, what NOT to do when interfacing with media.

If you’re new to media relations, this advice will start you off on the right track.

Even if you’ve been around the press pool a lap or two, you may learn something new.

After that, Khali asked a number of practical questions, including:

  • When you receive a press release, what makes you most likely to follow up?
  • What makes you skip a press release and go to the next one?
  • When a company executive pitches you a story, what makes you take notice?
  • What makes you pass on a story pitch?
  • When you are reporting on a story, what are you looking for in a source?
  • What do you wish business executives and/or their PR representatives knew about your job?

Read and enjoy the story, and my answers to Khali’s questions!


Lordy, I hope there are tapes

I received this awesome tech spam message today from LaserVault. (It’s spam because it went to my company’s info@ address).

There’s only one thought: “Lordy, I hope there are backup tapes.”

Free White Paper: Is A Tape-Related Data Disaster In Your Future?

Is a tape-related data disaster in your future? It may be if you currently use tape for your backup and recovery.

This paper discusses the many risks you take by using tape and relying on it to keep your data safe in case of a disaster.

Read how you can better protect your data from the all too common dangers that threaten your business, and learn about using D2D technology, specifically tape emulation, instead of tape for iSeries, AIX, UNIX, and Windows.

This white paper should be required reading for anyone involved in overseeing their company’s tape backup operations.

Don’t be caught short when the need to recover your data is most critical. Download the free white paper now.

Ha ha ha ha ha. I slay me.


Running old software? It’s dangerous. Update or replace!

The WannaCry (WannaCrypt) malware attack spread through unpatched old software. Old software is the bane of the tech industry. Software vendors hate old software for many reasons. One, of course, is that old software has vulnerabilities that must be patched. Another is that the support costs for older software keep going and growing. Plus, newer software has new features that can generate business. Meanwhile, customers running old software aren’t generating much revenue.

Enterprises, too, hate old software. They don’t like the support costs, either, or the security vulnerabilities. However, there are huge costs in licensing and installing new software – which might require training users and IT staff, buying new hardware, updating templates, adjusting integrations, and so on. Plus, old software has been tested and certified, and better the risk you know than the risk you don’t. So, they keep using old software.

Think about a family that’s torn between keeping a paid-for 13-year-old car, like my 2004 BMW, and leasing a newer, safer, more reliable model. The decision about whether or not to upgrade is complicated. There’s no good answer, and in case of doubt, the best decision is to simply wait until next year’s budget.

However: What about a family that decides to go car-shopping after paying for a scary breakdown or an unexpectedly large repair bill? Similarly, companies are inspired to upgrade critical software after suffering a data breach or learning about irreparable vulnerabilities in the old code.

WannaCry might be that call to action for some organizations. Take Windows, for example – but let me be quick to stress that this issue isn’t entirely about Microsoft products. Smartphones running old versions of Android or Apple’s iOS, or old Mac laptops that can’t be moved to the latest edition of OS X, are just as vulnerable.

Okay, back to Windows and WannaCry. In its critical March 14, 2017, security update, Microsoft identified and fixed a flaw in its Server Message Block (SMB) code that could be exploited; the flaw was later disclosed in documents stolen by hackers from U.S. security agencies. Given the severity of that flaw, Microsoft offered patches for older software, including Windows Server 2008 and Windows Vista.

It’s important to note that customers who applied those patches were not affected by WannaCry. Microsoft fixed it. Many customers didn’t install the fix because they didn’t know about it, couldn’t find the IT staff resources, or simply thought the vulnerability was no big deal. Well, some made the wrong bet, and paid for it.

What can you do?

Read more about this in my latest for Zonic News, “Old Software is Bad, Unsafe, Insecure Software.”


Streamlining the cybersecurity insurance application process

Have you ever suffered through the application process for cybersecurity insurance? You know that “suffered” is the right word because of a triple whammy.

  • First, the general risk factors involved in cybersecurity are constantly changing. Consider the rapid rise in ransomware, for example.
  • Second, it is extremely labor-intensive for businesses to document how “safe” they are, in terms of their security maturity, policies, practices and technology.
  • Third, it’s hard for insurers (the underwriters and their actuaries) to feel confident that they truly understand how risky a potential customer is — information and knowledge that’s required for quoting a policy that offers sufficient coverage at reasonable rates.

That is, of course, assuming that everyone is on the same page and agrees that cybersecurity insurance is important to consider for the organization. Is cybersecurity insurance a necessary evil for every company to consider? Or, is it only a viable option for a small few? That’s a topic for a separate conversation. For now, let’s assume that you’re applying for insurance.

For their part, insurance carriers aren’t equipped to go into your business and examine your IT infrastructure. They won’t examine firewall settings or audit your employee anti-phishing training materials. Instead, they rely upon your answers to questionnaires developed and interpreted by their own engineers. Unfortunately, those questionnaires may not get into the nuances, especially if you’re in a vertical where the risks are especially high, and so are the rewards for successful hackers.

According to InformationAge, 77% of ransomware attacks hit just four industries: business & professional services (28%), government (19%), healthcare (15%) and retail (15%). In 2016 and 2017, healthcare organizations like hospitals and medical practices were repeatedly hit by ransomware. Give that data to the actuaries, and they might well ask those types of organizations to fill out even more questionnaires.

About those questionnaires? “Applications tend to have a lot of yes/no answers… so that doesn’t give the entire picture of what the IT framework actually looks like,” says Michelle Chia, Vice President, Zurich North America. She explained that an insurance company’s internal assessment engineers have to dig deeper to understand what is really going on: “They interview the more complex clients to get a robust picture of what the combination of processes and controls actually looks like and how secure the network and the IT infrastructure are.”

Read more in my latest for ITSP Magazine, “How to Streamline the Cybersecurity Insurance Process.”


A phone that takes pictures? Smartphone cameras turn 20 years old

Twenty years ago, my friend Philippe Kahn introduced the first camera-phone. You may know Philippe as the founder of Borland, and as an entrepreneur who has started many companies, and who has accomplished many things. He’s also a sailor, jazz musician, and, well, a fun guy to hang out with.

About camera phones: At first, I was a skeptic. Twenty years ago I was still shooting film, and then made the transition to digital SLR platforms. Today, I shoot with big Canon DSLRs for birding and general stuff, Leica digital rangefinders when I want to be artistic, and pocket-sized digital cameras when I travel. Yet most of my pictures, especially those posted to social media, come from the built-in camera in my smartphone.

Philippe has blogged about this special anniversary – which also marks the birth of his daughter Sophie. To excerpt from his post, The Creation of the Camera-Phone and Instant-Picture-Mail:

Twenty years ago on June 11th 1997, I shared instantly the first camera-phone photo of the birth of my daughter Sophie. Today she is a university student and over 2 trillion photos will be instantly shared this year alone. Every smartphone is a camera-phone. Here is how it all happened in 1997, when the web was only 4 years old and cellular phones were analog with ultra limited wireless bandwidth.

First step 1996/1997: Building the server service infrastructure: For a whole year before June 1997 I had been working on a web/notification system that was capable of uploading a picture and text annotations securely and reliably and sending link-backs through email notifications to a stored list on a server and allowing list members to comment.

Remember it was 1996/97, the web was very young and nothing like this existed. The server architecture that I had designed and deployed is in general the blueprint for all social media today: Store once, broadcast notifications and let people link back on demand and comment. That’s how Instagram, Twitter, Facebook, LinkedIn and many others function. In 1997 this architecture was key to scalability because bandwidth was limited and it was prohibitive, for example, to send the same picture to 500 friends. Today the same architecture is essential because while there is bandwidth, we are working with millions of views and potential viral phenomena. Therefore the same smart “frugal architecture” makes sense. I called this “Instant-Picture-Mail” at the time.

He adds:

What about other claims of inventions: Many companies put photo-sensors in phones or wireless modules in cameras, including Kodak, Polaroid, Motorola. None of them understood that the success of the camera-phone is all about instantly sharing pictures with the cloud-based Instant-Picture-Mail software/server/service-infrastructure. In fact, it’s even amusing to think that none of these projects was interesting enough that anyone has kept shared pictures. You’d think that if you’d created something new and exciting like the camera-phone you’d share a picture or two or at least keep some!
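The “store once, broadcast notifications, link back on demand” pattern Philippe describes is easy to sketch in a few lines. Here’s a toy, hypothetical version (obviously nothing like his 1997 code); it shows why the bandwidth cost scales with the viewers who actually click through, not with the size of the notification list.

```python
# Toy sketch of the "store once, notify, link back" pattern described above.
# Purely illustrative -- not Philippe Kahn's 1997 implementation.

class PictureServer:
    def __init__(self):
        self.pictures = {}     # picture_id -> (image_bytes, caption)
        self.comments = {}     # picture_id -> list of comment strings
        self.next_id = 1

    def upload(self, image_bytes, caption, notify_list):
        """Store the picture once, then send lightweight link-back notifications."""
        picture_id = self.next_id
        self.next_id += 1
        self.pictures[picture_id] = (image_bytes, caption)
        self.comments[picture_id] = []
        link = f"https://example.invalid/pictures/{picture_id}"
        for address in notify_list:
            self.send_notification(address, f"New picture: {caption} -- {link}")
        return picture_id

    def send_notification(self, address, body):
        # Stand-in for a mail gateway: only a short link goes out, so notifying
        # 500 friends costs 500 tiny messages, not 500 copies of the photo.
        print(f"To {address}: {body}")

    def view(self, picture_id):
        return self.pictures[picture_id]   # fetched on demand, per viewer

    def comment(self, picture_id, text):
        self.comments[picture_id].append(text)

server = PictureServer()
pid = server.upload(b"...jpeg bytes...", "Sophie, June 11, 1997",
                    ["grandma@example.invalid", "friend@example.invalid"])
server.comment(pid, "Congratulations!")
```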

Read more about the fascinating story here — he goes into a lot of technical detail. Thank you, Philippe, for your amazing invention!


It’s suddenly harder to do tech business in China

Doing business in China is always a rollercoaster. For Internet businesses, the ride just became more thrilling.

The Chinese government has rolled out new cybersecurity laws, which begin affecting foreign companies today, June 1, 2017. The new rules give the Chinese government more control over Internet companies. The government says that the rules are designed to help address threats caused by terrorists and hackers – but the terms are broad enough to confuse anyone doing business in China.

Two of the biggest requirements of the new legislation:

  • Companies that do business in China must store all data related to that business, including customer data, within China.
  • Consumers must register with their real names on retail sites, community sites, news sites, and social media, including messaging services.

According to many accounts, the wording of the new law is too ambiguous to assure compliance. Perhaps the drafters were careless, or lacked an understanding of the technical issues. However, it’s possible that the ambiguity is intentional, to give Chinese regulators room to selectively apply the new laws based on political or business objectives. To quote coverage in The New York Times,

One instance cited by Mats Harborn, president of the European Union Chamber of Commerce in China, in a round-table discussion with journalists, was that the government said it wanted to regulate “critical information infrastructure,” but had not defined what that meant.

“The way it’s enforced and implemented today and the way it might be enforced and implemented in a year is a big question mark,” added Lance Noble, the chamber’s policy and communications manager. He warned that uncertainty surrounding the law could make foreign technology firms reluctant to bring their best innovations to China.

Learn more about the new rules facing tech companies in “The New Cybersecurity Requirement of Doing Business in China,” published today on Zonic News.


Malware in movie subtitles is coming to a mobile near you

Movie subtitles — those are the latest attack vector for malware. According to Check Point Software, by crafting malicious subtitle files, which are then downloaded by a victim’s media player, attackers can take complete control over any type of device via vulnerabilities found in many popular streaming platforms. Those media players include VLC, Kodi (XBMC), Popcorn-Time and strem.io.

I was surprised to see that this would work, because I thought that text subtitles were just that – text. Silly me. Subtitles embedded into media files (like mp4 movies) can be encoded in dozens of different formats, each with unique features, capabilities, metadata, and payloads. The data and metadata in those subtitles can be hard to analyze, in part because of the many ways the subtitles are stored in a repository. To quote Check Point:

These subtitles repositories are, in practice, treated as a trusted source by the user or media player; our research also reveals that those repositories can be manipulated and be made to award the attacker’s malicious subtitles a high score, which results in those specific subtitles being served to the user. This method requires little or no deliberate action on the part of the user, making it all the more dangerous.

Unlike traditional attack vectors, which security firms and users are widely aware of, movie subtitles are perceived as nothing more than benign text files. This means users, Anti-Virus software, and other security solutions vet them without trying to assess their real nature, leaving millions of users exposed to this risk.

According to Check Point, more than 200 million users (or devices) are potentially vulnerable to this exploit. The risk?

Damage: By conducting attacks through subtitles, hackers can take complete control over any device running them. From this point on, the attacker can do whatever he wants with the victim’s machine, whether it is a PC, a smart TV, or a mobile device. The potential damage the attacker can inflict is endless, ranging anywhere from stealing sensitive information, installing ransomware, mass Denial of Service attacks, and much more.

Here’s an infographic from Check Point:

Read more about this vulnerability in my latest for Zonic News, “Malware Hides in Plain Sight on the Small Screen.”


My article on digital watermarks cited in a U.S. government paper

March 2003: The U.S. International Trade Commission released a 32-page paper called, “Protecting U.S. Intellectual Property Rights and the Challenge of Digital Piracy.” The authors, Christopher Johnson and Daniel J. Walworth, cited an article I wrote for the Red Herring in 1999.

Here’s the abstract of the ITC’s paper:

ABSTRACT: According to U.S. industry and government officials, intellectual property rights (IPR) infringement has reached critical levels in the United States as well as abroad. The speed and ease with which the duplication of products protected by IPR can occur has created an urgent need for industries and governments alike to address the protection of IPR in order to keep markets open to trade in the affected goods. Copyrighted products such as software, movies, music and video recordings, and other media products have been particularly affected by inadequate IPR protection. New tools, such as writable compact discs (CDs) and, of course, the Internet have made duplication not only effortless and low-cost, but anonymous as well. This paper discusses the merits of IPR protection and its importance to the U.S. economy. It then provides background on various technical, legal, and trade policy methods that have been employed to control the infringement of IPR domestically and internationally. This is followed by an analysis of current and future challenges facing U.S. industry with regard to IPR protection, particularly the challenges presented by the Internet and digital piracy.

Here’s where they cited yours truly:

To improve upon the basic encryption strategy, several methods have evolved that fall under the classification of “watermarks” and “digital fingerprints” (also known as steganography). Watermarks have been considered extensively by record labels in order to protect their content.44 However, some argue that “watermarking” is better suited to tracking content than it is to protecting against reproduction. This technology is based on a set of rules embedded in the content itself that define the conditions under which one can legally access the data. For example, a digital music file can be manipulated to have a secret pattern of noise, undetectable to the ear, but recorded such that different versions of the file distributed along different channels can be uniquely identified.45 Unlike encryption, which scrambles a file unless someone has a ‘key’ to unlock the process, watermarking does not intrinsically prevent use of a file. Instead it requires a player–a DVD machine or MP3 player, for example–to have instructions built in that can read watermarks and accept only correctly marked files.”46

Reference 45 goes to

Alan Zeichick, “Digital Watermarks Explained,” Red Herring, Dec. 1999

Another paper that referenced that Red Herring article is “Information Technology and the Increasing Efficacy of Non-Legal Sanctions in Financing Transactions.” It was written by Ronald J. Mann of the University of Michigan Law School.

Sadly, my digital watermarks article is no longer available online.
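For the curious, the “secret pattern of noise” described in that excerpt can be illustrated with a toy least-significant-bit scheme, where each 16-bit audio sample is nudged by at most one count. Real watermarking systems are far more robust (they have to survive compression, resampling and deliberate attack); this sketch only shows the principle.

```python
# Toy illustration of an audio watermark: hide bits in the least significant
# bit of 16-bit samples. Real schemes are far more sophisticated and robust.

def embed_watermark(samples, bits):
    """Hide a list of 0/1 bits in the LSBs of the first len(bits) samples."""
    marked = list(samples)
    for i, bit in enumerate(bits):
        marked[i] = (marked[i] & ~1) | bit    # overwrite the LSB
    return marked

def extract_watermark(samples, length):
    return [s & 1 for s in samples[:length]]

audio = [1200, -3400, 5021, 998, -17, 2048, 77, -900]   # pretend samples
mark = [1, 0, 1, 1, 0, 1]                               # e.g. a distributor ID
stamped = embed_watermark(audio, mark)
print(extract_watermark(stamped, len(mark)))            # -> [1, 0, 1, 1, 0, 1]
# A change of at most one count per sample is inaudible, yet each distribution
# channel can carry a different ID -- which is why watermarking is better at
# tracking copies than at preventing copying, as the paper notes.
```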


Things you must understand for technical and business due diligence

Technical diligence starts when a startup or company has been approved for outside capital, but needs to be inspected to ensure the value of the technology is “good enough” to accept investment. The average startup has something like 1/100 odds of receiving funding once they pitch a VC firm, which is why, if investment is offered, the ball shouldn’t be dropped during technical diligence. Most issues in technical diligence can be prevented. Since technical diligence is part of the investigation process of receiving venture capital, any business in theory could proactively prepare for technical diligence.

So advises my friend Ellie Cachette, General Partner at CCM Capital Management, a fund-of-funds specializing in venture capital investments. In her two-part series for Inc. Magazine, Ellie shares insights — real insights — in the following areas:

  • Intellectual property and awareness
  • Scaling
  • Security
  • Documentation
  • Risk management
  • Development budget
  • Development meeting and reporting
  • Development ROI
  • Having the right development talent in place

Here are the links:

Five “Business Things” to Understand for Technical Diligence: Part One

Five “Tech Things” to Understand for Technical Diligence: Part Two

While we’re at it, here’s another great article by Ellie in Inc.:

When Your Customers Want One Thing — And Your Investors Want Another

Got a business? Want to do better? Learn from Ellie Cachette. Follow her @ecachette.


The art and science of endpoint security

The endpoint is vulnerable. That’s where many enterprise cyber breaches begin: An employee clicks on a phishing link and installs malware, such as ransomware, or is tricked into providing login credentials. A browser can open a webpage that installs malware. An infected USB flash drive is another source of attacks. Servers can be subverted with SQL injection or other attacks; even cloud-based servers are not immune from being probed and subverted by hackers. As the number of endpoints proliferates — think Internet of Things — the odds of an endpoint being compromised and then used to gain access to the enterprise network and its assets only increase.

Which are the most vulnerable endpoints? Which need extra protection? All of them, especially devices running some flavor of Windows, according to Mike Spanbauer, Vice President of Security at testing firm NSS Labs. “All of them. So the reality is that Windows is where most targets attack, where the majority of malware and exploits ultimately target. So protecting your Windows environment, your Windows users, both inside your businesses as well as when they’re remote is the core feature, the core component.”

Roy Abutbul, Co-Founder and CEO of security firm Javelin Networks, agreed. “The main endpoints that need the extra protection are those endpoints that are connected to the [Windows] domain environment, as literally they are the gateway for attackers to get the most sensitive information about the entire organization.” He continued, “From one compromised machine, attackers can get 100 per cent visibility of the entire corporate, just from one single endpoint. Therefore, a machine that’s connected to the domain must get extra protection.”

Scott Scheferman, Director of Consulting at endpoint security company Cylance, is concerned about non-PC devices, as well as traditional computers. That might include the Internet of Things, or unprotected routers, switches, or even air-conditioning controllers. “In any organization, every endpoint is really important, now more than ever with the internet of Things. There are a lot of devices on the network that are open holes for an attacker to gain a foothold. The problem is, once a foothold is gained, it’s very easy to move laterally and also elevate your privileges to carry out further attacks into the network.”

At the other end of the spectrum is cloud computing. Think about enterprise-controlled virtual servers, containers, and other resources configured as Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS). Anything connected to the corporate network is an attack vector, explained Roark Pollock, Vice President at security firm Ziften.

Microsoft, too, takes a broad view of endpoint security. “I think every endpoint can be a target of an attack. So usually companies start first with high privilege boxes, like administrator consoles onboard to service, but everybody can be a victim,” said Heike Ritter, a Product Manager for Security and Networking at Microsoft.

I’ve written a long, detailed article on this subject for NetEvents, “From Raw Data to Actionable Intelligence: The Art and Science of Endpoint Security.”

You can also watch my 10-minute video interview with these people here.


What the WannaCry ransomworm means for you

Many IT professionals were caught by surprise by last week’s huge cyberattack. Why? They didn’t expect ransomware to spread across their networks on its own.

The reports came swiftly on Friday morning, May 12. The first I saw were that dozens of hospitals in England were affected by ransomware, denying physicians access to patient medical records and causing surgery and other treatments to be delayed.

The infections spread quickly, reportedly hitting as many as 100 countries, with Russian systems affected apparently more than others. What was going on? The details came out quickly: This was a relatively unknown ransomware variant, dubbed WannaCry or WCry. WannaCry had been “discovered” by hackers who stole information from the U.S. National Security Agency (NSA); affected machines were Windows desktops, notebooks and servers that were not up to date on security patches.

Most alarming, WannaCry did not spread across networks in the usual way, through people clicking on email attachments. Rather, once one Windows system was affected on a Windows network, WannaCry managed to propagate itself and infect other unpatched machines without any human interaction. The industry term for this type of super-vigorous ransomware: Ransomworm.

I turned to one of the experts on malware that can spread across Windows networks, Roi Abutbul. A former cybersecurity researcher with the Israeli Air Force’s famous OFEK Unit, he is founder and CEO of Javelin Networks, a security company that uses artificial intelligence to fight malware.

Abutbul told me, “The WannaCry/Wcry ransomware—the largest ransomware infection in history—is a next-gen ransomware. Opposed to the regular ransomware that encrypts just the local machine it lands on, this type spreads throughout the organization’s network from within, without having users open an email or malicious attachment. This is why they call it ransomworm.”

He continued, “This ransomworm moves laterally inside the network and encrypts every PC and server, including the organization’s backup.” Read more about this, and my suggestions for coping with the situation, in my story for Network World, “Self-propagating ransomware: What the WannaCry ransomworm means for you.”


Almost on my way to London for NetEvents to talk about endpoint security

If you’re in London in a couple of weeks, look for me. I’ll be at the NetEvents European Media Spotlight on Innovators in Cloud, IoT, AI and Security, on June 5.

At NetEvents, I’ll be doing lots of things:

  • Acting as the Master of Ceremonies for the day-long conference.
  • Introducing the keynote speaker, Brian Lord, OBE, a former GCHQ Deputy Director for Intelligence and Cyber Operations.
  • Conducting an on-stage interview with Mr. Lord, Arthur Snell, formerly of the British Foreign and Commonwealth Office, and Guy Franco, formerly with the Israeli Defense Forces.
  • Giving a brief talk on the state of endpoint cybersecurity risks and technologies.
  • Moderating a panel discussion about endpoint security.

The one-day conference will be at the Chelsea Harbour Hotel. Looking forward to it, and maybe will see you there?


Ransomworm hits more than 150 countries

The reports came in quickly on Friday morning, May 12. The first alert I read referenced dozens of hospitals in England affected by ransomware (not yet realizing it was a ransomworm), denying physicians access to their patients’ medical records and delaying surgeries and ongoing treatments. As the BBC reported:

The malware spread quickly on Friday, with medical staff in the U.K. reportedly watching computers go out of service “one by one.”

NHS staff shared screenshots of the WannaCry program, which demanded a payment of $300 (£230) in the virtual currency Bitcoin to unlock the files on each computer.

Throughout the day, other countries, mostly European, reported infections.

Some reports said Russia had seen the largest number of infections on the planet. National banks, the interior and health ministries, the state-owned Russian railway company and the second-largest mobile phone network were all reported as affected.

The infections spread quickly, reportedly hitting as many as 150 countries, with Russian systems apparently affected more than others.

Read the rest of my article, “Ransomworm golpea a más de 150 Países,” in IT Connect Latam.


The ongoing challenge for women in high-tech companies

In the United States, Sunday, May 14, is Mother’s Day. (Mothering Sunday was March 26 this year in the United Kingdom.) This is a good time to reflect on the status of women of all marital statuses and family situations in information technology. The results continue to disappoint.

According to the United States Department of Labor, 57.2% of all women participate in the labor force in the United States, and 46.9% of the people employed in all occupations are women. So far, so good. Yet when it comes to information technology, women lag far, far behind. Based on 2014 stats:

  • Web developers – 35.2% women
  • Computer systems analysts – 34.2% women
  • Database administrators – 28.0%
  • Computer and information systems managers – 26.7%
  • Computer support specialists – 26.6%
  • Computer programmers – 21.4%
  • Software developers, applications and systems software – 19.8%
  • Network and computer systems administrators – 19.1%
  • Information security analysts – 18.1%
  • Computer network architects – 12.4%

According to the Labor Department, the job category with the highest projected growth rate over the next few years is information security analyst. The question is: will women continue to be underrepresented in this high-paying, fast-growing field? Or will the demand for analysts provide new opportunities for women to enter the security profession? Impossible to say, really.

The U.S. Equal Employment Opportunity Commission shows that the biggest high tech companies lag behind in diversity. That’s something that anyone working in Silicon Valley can sense intuitively, in large part due to the bro culture (and brogrammer culture) there.

Read more about this in my essay for Zonic News, “Women in Tech – An Ongoing Diversity Challenge.”


Open up the network, that’s how you enable innovation

I have a new research paper in Elsevier’s technical journal, Network Security. Here’s the abstract:

Lock it down! Button it up tight! That’s the default reaction of many computer security professionals to anything and everything that’s perceived as introducing risk. Given the rapid growth of cybercrime such as ransomware and the non-stop media coverage of data theft of everything from customer payment card information through pre-release movies to sensitive political email databases, this is hardly surprising.


In attempting to lower risk, however, they also exclude technologies and approaches that could contribute significantly to the profitability and agility of the organisation. Alan Zeichick of Camden Associates explains how to make the most of technology by opening up networks and embracing innovation – but safely.

You can read the whole article, “Enabling innovation by opening up the network,” here.


H-1B visa abuse: Blame it on the lottery

In 2016, Carnival Cruises was alleged to have laid off its entire 200-person IT department – and forced its workers to train foreign replacements. The same year, about 80 IT workers at the University of California San Francisco were laid off and forced to train their replacements, lower-paid tech workers from an Indian outsourcing firm. And according to the Daily Mail:

Walt Disney Parks and Resorts is being sued by 30 former IT staff from its Florida offices who claim they were unfairly replaced by foreign workers— but only after being forced to train them up.

The suit, filed Monday in an Orlando court, alleges that Disney laid off 250 of its US IT staff because it wanted to replace them with staff from India, who were hired in on H-1B foreign employee visas.

On one hand, these organizations were presumably quite successful with hiring American tech workers… but such workers are expensive. Thanks to a type of U.S. visa, called the H-1B, outsource contractors can bring in foreign workers, place them with those same corporations, and pay them a lot less than American workers. The U.S. organization, like Carnival Cruises, saves money. The outsource contractor, which might be a high-profile organization like the Indian firm Infosys, makes money. The low-cost offshore talent gets decent jobs and a chance to live in the U.S. Everyone wins, right? Except the laid-off American tech workers.

This is not what the H-1B was designed for. It was intended to help companies bring in overseas experts when they can’t fill the job with local applicants. Clearly that’s not what’s happening here. And the U.S. government is trying to fight back by cracking down on fraud and abuse.

One of the problems is the way that H-1B visas are allocated: through a big lottery system. The more visas your company asks for, the more visas you receive. Read about the problems that causes, and what’s being done to try to address it, in my latest for Zonic News, “Retaining Local Tech Workers Vs Outsourcing to Foreign Replacements Using H-1B Visas.”
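To see why bulk filers dominate a lottery in which every petition is a ticket, a quick simulation helps. This is my deliberately simplified model, not the actual USCIS allocation process:

```python
# Simplified lottery simulation: every petition filed is one ticket, and the
# annual cap is drawn at random. My toy model, not the actual USCIS process.
import random

def run_lottery(petitions_by_company, cap):
    tickets = [company
               for company, count in petitions_by_company.items()
               for _ in range(count)]
    winners = random.sample(tickets, cap)
    return {company: winners.count(company) for company in petitions_by_company}

random.seed(1)
filings = {"big outsourcer": 10000, "small startup": 2}
print(run_lottery(filings, cap=2000))
# The firm that files thousands of petitions reliably wins close to its
# proportional share of the cap; the startup that files two often wins none.
```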


Last year’s top hacker tactics may surprise you

Did you know that last year, 75% of data breaches were perpetrated by outsiders, and fully 25% involved internal actors? Did you know that 18% were conducted by state-affiliated actors, and 51% involved organized criminal groups?

That’s according to the newly released 2017 Data Breach Investigations Report from Verizon. It’s the 10th edition of the DBIR, and as always, it’s fascinating – and frightening at the same time.

The most successful tactic, if you want to call it that, used by hackers: stolen or weak (i.e., easily guessed) passwords. They were used in 81% of breaches. The report says that 62% of breaches featured hacking of some sort, and 51% involved malware.

More disturbing is that fully 66% of malware was installed by malicious email attachments. This means we’re doing a poor job of training our employees not to click links and open documents. We teach, we train, we test, we yell, we scream, and workers open documents anyway. Sigh. According to the report,

People are still falling for phishing—yes still. This year’s DBIR found that around 1 in 14 users were tricked into following a link or opening an attachment — and a quarter of those went on to be duped more than once. Where phishing successfully opened the door, malware was then typically put to work to capture and export data—or take control of systems.
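To make those percentages concrete, here’s the simple arithmetic for a hypothetical 1,000-employee organization (my own illustration, not a figure from the report):

```python
# Applying the DBIR's phishing figures to a hypothetical 1,000-employee
# organization. My illustration; these totals are not from the report itself.
employees = 1000
clicked = employees / 14           # "around 1 in 14 users were tricked"
repeat_offenders = clicked / 4     # "a quarter of those" were duped again
print(round(clicked))              # ~71 people followed a link or attachment
print(round(repeat_offenders))     # ~18 of them fell for it more than once
```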

There is a wealth of information in the 2017 DBIR, covering everything from cyber-espionage to the dangers caused by failing to keep up with patches, fixes, and updates. There’s a major section on ransomware, which has grown tremendously in the past year. There are also industry-specific breakouts covering healthcare, finance, and so on. It’s a big report, but worth reading. And sharing.

Learn more by reading my latest for Zonic News, “Verizon Describes 2016’S Hackers — And Their Top Tactics.”


No security plan? It’s like riding a bicycle in traffic in the rain without a helmet

Every company should have formal processes for implementing cybersecurity. That includes evaluating systems, describing activities, testing those policies, and authorizing action. After all, in this area, businesses can’t afford to wing it, thinking, “if something happens, we’ll figure out what to do.” In many cases, without the proper technology, a breach may not be discovered for months or years – or ever. At least not until the lawsuits begin.

Indeed, running without cybersecurity accreditations is like riding a bicycle in a rainstorm. Without a helmet. In heavy traffic. At night. A disaster is bound to happen sooner or later: That’s especially true when businesses are facing off against professional hackers. And when they are stumbled across as juicy victims by script-kiddies who can launch a thousand variations of Ransomware-as-a-Service with a single keystroke.

Yet, according to the British Chambers of Commerce (BCC), small and very small businesses are extremely deficient in terms of having cybersecurity plans. According to the BCC, in the U.K. only 10% of one-person businesses and 15% of those with 1-4 employees have any formal cybersecurity accreditations. Contrast that with larger companies: 47% of businesses with more than 100 employees have formal accreditations.

While a CEO may want to focus on his/her primary business, in reality, it’s irresponsible to neglect cybersecurity planning. Indeed, it’s also not good for long-term business success. According to the BCC study, 21% of businesses believe the threat of cyber-crime is preventing their company from growing. And of the businesses that do have cybersecurity accreditations, half (49%) believe it gives their business a competitive advantage over rival companies, and a third (33%) consider it important in creating a more secure environment when trading with other businesses.

Read more about this in my latest for Zonic News, “One In Five Businesses Were Successfully Cyber-Attacked Last Year — Here’s Why.”


Self-inflicted public relations disasters: United Airlines, Pepsi, Tanium, Uber

There are public-relations disasters… and there are self-inflicted public-relations disasters. Those are arguably the worst, and it’s been a meaningful couple of weeks for them, both in the general world and in the technology industry. In some cases, the self-inflicted crises exploded because of stupid or ham-handed initial responses.

In PR crisis management, it’s important to get the initial response right. That means:

  1. Acknowledging that something unfortunate happened
  2. Owning responsibility (in a way that doesn’t expose you to lawsuits, of course)
  3. Apologizing humbly, profusely and sincerely
  4. Promising to make amends to everyone affected by what happened
  5. Vowing to fix processes to avoid similar problems in the future

Here are some recent public relations disasters that I’d label as self-inflicted. Ouch!

United Airlines beats passengers

Two recent episodes. First, a young girl flying on an employee-travel pass wasn’t allowed to board wearing leggings. Second, a doctor was dragged out of a plane, and seriously injured, for refusing to give up his seat to make room for a United employee. Those incidents showed that gate agents were unaware of the optics of situations like this, and didn’t have the training and/or flexibility to adapt rules to avoid a public snafu.

However, the real disaster came from the poor handling of both situations by executives and their PR advisors. With the leggings situation, United’s hiding behind obscure rules and the employee-ticket status of the young passenger didn’t help a situation where all the sympathy was with the girl. With the ejected and beaten passenger, where to begin? The CEO, Oscar Munoz, should have known that his first response was terrible, and that his “confidential” email to employees, which blamed the passenger for being unruly, would be immediately leaked to the public. What a freakin’ idiot. It’s going to take some time for United to recover from these disasters.

Pepsi Cola misses the point

A commercial for a soft drink tried to reinterpret a famous Black Lives Matter protest moment in Baton Rouge. That’s where a young African-American woman, Ieshia Evans, faced off against heavily armored police officers. In Pepsi’s version of the event, a white celebrity, Kendall Jenner, faced off against attractive fake police officers, and defused a tense situation by handing a handsome young cop a can of soda. Dancing ensues. World peace is achieved. The Internet explodes with outrage.

Pepsi’s initial response was to defend the video by saying “We think that’s an important message to convey.” Oops. Later on, the company pulled the ad and apologized to everyone (including Ms. Jenner), but the damage was done, so much so that a fun meme showed White House spokesman Sean Spicer dressed up as a United Airlines pilot offering a can of Pepsi.

Tanium’s bad-boy CEO sends the wrong message

Tanium, a maker of endpoint security and management software, has fallen into the trap of owner hubris. As this story in Bloomberg explains, the top executives, including CEO Orion Hindawi, run the company more for their own benefit than for the benefit of their customers or other shareholders. For example, says Bloomberg, “One of the most unnerving aspects of life at Tanium is what’s known internally as Orion’s List. The CEO allegedly kept a close eye on which employees would soon be eligible to take sizable chunks of stock. For those he could stand to do without, Hindawi ordered the workers to be fired before they were able to acquire the shares, according to current and former employees.” As Business Insider reported, nine executives have left recently, including the president and top marketing and finance officers.

And then there’s the power-trip aspect, says Bloomberg. “The company’s successes didn’t do much to lift morale. Orion berated workers in front of colleagues until they broke into tears and used all-hands meetings as a venue to taunt low-level staff, current and former employees said.” Bloomberg reports that a major VC firm, Andreessen Horowitz, made note of Orion’s managerial flaws and presented them to partners at the firm early last year, saying that Orion’s behavior risked interfering with the company’s operations if it hadn’t already. This sort of nonsense is not good for a company with a decent reputation for intellectual property. The company’s response? Crickets.

Uber drives off the clue train

I’m a happy Uber customer. When traveling, I’m quite disappointed when the service is not available, as was the case on a recent trip to Austin, where Uber and Lyft aren’t offered. However, I’m not a fan of the company’s treatment of women and of the misdeeds of its CEO. Those PR disasters have become the public face of the story, not its innovations in urban transportation and self-driving cars. When a female engineer went public with how she was mistreated and how the company’s HR department ignored the issue, the Internet went nuts — and the company responded by doing a mea culpa. Still, the message was clear: Uber is misogynistic.

And then there were several reports of public naughtiness by CEO Travis Kalanick. The best was a video of him berating an Uber driver. Yes, Kalanick apologized and said that he needs help with leadership… but more crickets in terms of real change. As Engadget wrote in mid-April, the time for Uber leadership to step down is long overdue for the good of its employees, drivers, customers and shareholders. It’s unlikely the company can withstand another self-inflicted PR disaster.

It doesn’t have to be this way

When a PR disaster happens — especially a self-inflicted one — it’s vital to get on top of the story. See the five tips at the top of this blog, and check out this story, “When It Hits the Fan,” on tips for crisis management. You can recover, but you have to do it right, and do it quickly.


Manage the network, Hal

Some large percentage of IT and security tasks and alerts require simple responses. On a small network, there aren’t many alerts, and so administrators can easily accommodate them: Fixing a connection here, approving external VPN access there, updating router firmware on this side, giving users the latest patches to Microsoft Office on that side, evaluating a security warning, dismissing a security warning, making sure that a newly spun-up virtual machine has the proper agents and firewall settings, reviewing log activity. That sort of thing.

On a large network, those tasks become tedious… and on a very large network, they can escalate unmanageably. As networks scale to hundreds, thousands, and hundreds of thousands of devices, thanks to mobility and the Internet of Things, the load expands exponentially – and so do routine IT tasks and alerts, especially when the network, its devices, users and applications are in constant flux.

Most tasks can be automated, yes, but it’s not easy to spell out in a standard policy-based system exactly what to do. Similarly, the proper way of handling alerts can be automated, but given the tremendous variety of situations, variables, combinations and permutations, that too can be challenging. Merely programming a large number of possible situations, and their responses, would be a tremendous task — and not even worth the effort, since the scripts would be brittle and would themselves require constant review and maintenance.

That’s why in many organizations, only the very simplest tasks and alert responses are programmed into rule-based systems. The rest are shunted over to IT and security professionals, whose highly trained brains can rapidly decide what to do and execute the proper response.
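To make the brittleness concrete, here’s a caricature of such a rule-based alert handler. It’s a hypothetical sketch, not any particular product: every new situation needs another hand-written rule, and everything the rules don’t anticipate lands on a human.

```python
# Caricature of a rule-based alert handler, showing why the approach stops
# scaling: each situation needs its own hand-written rule, and every rule
# needs review whenever the environment changes. Hypothetical sketch only.

RULES = [
    # (condition, action) pairs, evaluated in order
    (lambda a: a["type"] == "disk_full" and a["host"].startswith("log-"),
     "rotate and compress old logs"),
    (lambda a: a["type"] == "login_failure" and a["count"] > 100,
     "lock the account and open a ticket"),
    (lambda a: a["type"] == "new_vm" and not a.get("has_agent"),
     "install the monitoring agent and apply baseline firewall settings"),
    # ...and hundreds more in a real deployment
]

def handle_alert(alert):
    for condition, action in RULES:
        if condition(alert):
            return action
    return "escalate to a human"    # everything unanticipated ends up here

print(handle_alert({"type": "login_failure", "count": 250, "host": "vpn-1"}))
print(handle_alert({"type": "weird_tls_traffic", "host": "iot-camera-7"}))
```

The point isn’t that such rules are wrong; it’s that maintaining thousands of them by hand is exactly the mind-numbing, repetitive work described below.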

At the same time, those highly trained brains turn into mush because handling routine, easy-to-solve problems is mind-numbing and not intellectually challenging. Solving a problem once is exciting. Solving nearly the same problem a hundred times every day, five days a week, 52 weeks a year (not counting holidays) is inspiration for updating the C.V… and finding a more interesting job.

How do we solve this? Read my newest piece for Zonic News, “Artificial Intelligence Is The Right Answer To IT And Security Scalability Issues — And AI Won’t Get Bored.”


Look who’s talking – and controlling your home speech-enabled technology

“Alexa! Unlock the front door!” No, that won’t work, even if you have an intelligent lock designed to work with the Amazon Echo. That’s because Amazon is smart enough to know that someone could shout those five words into an open window, and gain entry to your house.

Presumably Amazon doesn’t allow voice control of “Alexa! Turn off the security system!” but that’s purely conjecture; it’s not something I’ve tried. And certainly it’s possible to use programming or a clever workaround to enable voice-activated door unlocking or force-field deactivation. That’s why, while our home contains a fair amount of cutting-edge AI-based automation, perimeter security is not hooked up to any of it. We’ll rely upon old-fashioned locks and keys and alarm keypads, thank you very much.

And sorry, no voice-enabled safes for me either. It didn’t work so well to protect the CIA against Jason Bourne, did it?

Unlike the fictional CIA safe and the equally fictional computer on the Starship Enterprise, Echo, Google Home, Siri, Android, and their friends can’t identify specific voices with any degree of accuracy. In most cases, they can’t do so at all. So, don’t look to be able to train Alexa to set up access control lists (ACLs) based on voiceprints. That’ll have to wait for the 23rd century, or at least for another couple of years.

The inability of today’s AI-based assistants to discriminate allows for some foolishness – and some shenanigans. We have an Echo in our family room, and every so often, while watching a movie, Alexa will suddenly proclaim, “Sorry, I didn’t understand that command,” or some such. What set the system off? No idea. But it’s amusing.

Less amusing was Burger King’s advertising prank which intentionally tried to get Google Home to help sell more hamburgers. As Fast Company explains:

A new Whopper ad from Burger King turns Google’s voice-activated speaker into an unwitting shill. In the 15-second spot, a store employee utters the words “OK Google, what is the Whopper burger?” This should wake up any Google Home speakers present, and trigger a partial readout of the Whopper’s Wikipedia page. (Android phones also support “OK Google” commands, but use voice training to block out unauthorized speakers.)

Fortunately, Google was as annoyed as everyone else, and took swift action, said the story:

Update: Google has stopped the commercial from working – presumably by blacklisting the specific audio clip from the ad – though Google Home users can still inquire about the Whopper in their own words.

Burger King wasn’t the first to try this stunt. Other similar tricks have succeeded against Home and Echo, and sometimes, the devices are activated accidentally by TV shows and news reports. Look forward to more of this.

It reminds me of the very first time I saw a prototype Echo. What did I say? “Alexa, Format See Colon.” Darn. It didn’t erase anything. But at least it’s better than a cat running around on your laptop keyboard, erasing your term paper. Or a TV show unlocking your doors. Right?


Email clients and 3D paint applications do not belong in operating system releases

No, no, no, no, no!

The email client update in the 10.12.4 update to macOS Sierra is everything that’s wrong with operating systems today. And so is the planned inclusion of an innovative, fun-sounding 3D painter as part of next week’s Windows 10 Creators Update.

Repeat after me: Applications do not belong in operating systems. Diagnostics, yes. Shared libraries, yes. Essential device drivers, yes. Hardware abstraction layers, yes. File systems, yes. Program loaders and tools, yes. A network stack, yes. A graphical user interface, yes. A scripting/job control language, yes. A basic web browser, yes.

Applications? No, no, no!

Why not?

Applications bloat up the operating system release. What if you don’t need a 3D paint program? What if you don’t want to use the built-in mail client? The binaries are there anyway, taking up storage. Whenever the operating system is updated, the binaries are updated, eating up bandwidth and CPU time.

If you do want those applications, bug fixes are tied to OS updates. The Sierra 10.12.4 update fixes a bug in Mail. Why must that be tied to an OS update? The update supports more digital camera RAW formats. Why are they tied to the operating system, and not released as they become available? The 10.12.4 update also fixes a Siri issue regarding cricket scores in the IPL. Why, for heaven’s sake, is that functionality tied to an operating system update?? That’s simply insane.

An operating system is easier for the developer to test and verify if it’s smaller. The more things in your OS update release train, the more things can go wrong, whether it’s in the installation process or in the code itself. A smaller OS means less regression testing and fewer bugs.

An operating system is easier for the client to test and verify if it’s smaller. Take your corporate clients: if they are evaluating macOS Sierra 10.12.4 or Windows 10 Creators Update prior to roll-out, less stuff there means an easier validation process.

Performance and memory utilization are better if it’s smaller. The microkernel concept says that the OS should be as small as possible – if something doesn’t have to be in the OS, leave it out. Well, that’s not the case any more, at least in terms of the software release trains.

This isn’t new

No, Alan isn’t off his rocker, at least not more than usual. Operating system releases, especially those for consumers, have been bloated up with applications and junk for decades. I know that. Nothing will change.

Yes, it would be better if productivity applications and games were distributed and installed separately. Maybe as free downloads, as optional components on the release CD/DVD, or even as a separate SKU. Remember Microsoft Plus and Windows Ultimate Extras? Yeah, those were mainly games and garbage. Never mind.

Still, seeing the macOS Sierra Update release notes today inspired this missive. I hope you enjoyed it. </rant>


Windows 10 Creators Update will take forever to download, install, and update

Prepare to wait. And wait. Many Windows 10 users are getting ready for the Creators Update, due April 11. We know lots of things about it: There will be new tools for 3D designing, playing 4K-resolution games, improvements to the Edge browser, and claimed improvements to security and privacy protections.

We also know that it will take forever to install. Not literally forever. Still, a long time.

This came to mind when my friend Steven J. Vaughan-Nichols shared this amusing image:

Who could be surprised, when the installation estimation times for software are always ludicrously inaccurate? That’s especially true with Windows, which routinely requires multiple waves of download – update – reboot – download – update – reboot – download – update – reboot – rinse and repeat. It’s even worse if you haven’t updated for a while. It goes on and on and on.

This came to the fore about three weeks ago, when I decided to wipe a Windows 10 laptop in preparation for donating it to a nonprofit. It’s a beautiful machine — a Dell Inspiron 17 — which we purchased for a specific client project. The machine was not needed afterwards, and well, it was time to move it along. (My personal Windows 10 machine is a Microsoft Surface Pro.)

The first task was to restore the laptop to its factory installation. This was accomplished using the disk image stored on a hidden partition, which was pretty easy; Dell has good tools. It didn’t take long for Windows 10 to boot up, nice and pristine.

That’s when the fun began: Installing Windows updates. Download – update – reboot – download – update – rinse – repeat. For two days. TWO DAYS. And that’s for a bare machine without any applications or other software.

Thus, my belief in two things: First, Windows saying 256% done is entirely plausible. Second, it’s going to take forever to install Windows 10 Creators Update on my Surface Pro.

Good luck, and let me know how it goes for you.


Listen to Sir Tim Berners-Lee: Don’t weaken encryption!

It’s always a bad idea to intentionally weaken the security that protects hardware, software, and data. Why? Many reasons, including the basic right (in many societies) of individuals to engage in legal activities anonymously. Another reason: knowledge about weakened encryption, back doors, and secret keys could be leaked or stolen, leading to unintended consequences and breaches by bad actors.

Sir Tim Berners-Lee, the inventor of the World Wide Web, is worried. Some officials in the United States and the United Kingdom want to force technology companies to weaken encryption and/or provide back doors to government investigators.

In comments to the BBC, Sir Tim said there could be serious consequences to handing over the keys that unlock coded messages and to forcing carriers to help with espionage. The BBC story said:

“Now I know that if you’re trying to catch terrorists it’s really tempting to demand to be able to break all that encryption but if you break that encryption then guess what – so could other people and guess what – they may end up getting better at it than you are,” he said.

Sir Tim also criticized moves by legislators on both sides of the Atlantic, which he sees as an assault on the privacy of web users. He attacked the UK’s recent Investigatory Powers Act, which he had criticized when it went through Parliament: “The idea that all ISPs should be required to spy on citizens and hold the data for six months is appalling.”

The Investigatory Powers Act 2016, which became U.K. law last November, gives broad powers to the government to intercept communications. It requires telecommunications providers to cooperate with government requests for assistance with such interception.

Read more about this topic — including real-world examples of stolen encryption keys, and why the government wants those back doors. It’s all in my piece for Zonic News, “Don’t Weaken Encryption with Back Doors and Intentional Flaws.”


Congress votes against Internet customer privacy; nothing changes

It’s official: Internet service providers in the United States can continue to sell information about their customers’ Internet usage to marketers — and to anyone else who wants to use it. In 2016, during the Obama administration, the Federal Communications Commission (FCC) tried to require ISPs to get customer permission before using or sharing information about their web browsing. According to the FCC, the rule change, entitled “Protecting the Privacy of Customers of Broadband and Other Telecommunications Services,” meant:

The rules implement the privacy requirements of Section 222 of the Communications Act for broadband ISPs, giving broadband customers the tools they need to make informed decisions about how their information is used and shared by their ISPs. To provide consumers more control over the use of their personal information, the rules establish a framework of customer consent required for ISPs to use and share their customers’ personal information that is calibrated to the sensitivity of the information. This approach is consistent with other privacy frameworks, including the Federal Trade Commission’s and the Administration’s Consumer Privacy Bill of Rights.

More specifically, the rules required that customers positively agree before their information could be used in that fashion. Previously, customers had to opt out. Again, according to the FCC:

Opt-in: ISPs are required to obtain affirmative “opt-in” consent from consumers to use and share sensitive information. The rules specify categories of information that are considered sensitive, which include precise geo-location, financial information, health information, children’s information, social security numbers, web browsing history, app usage history and the content of communications.

Opt-out: ISPs would be allowed to use and share non-sensitive information unless a customer “opts-out.” All other individually identifiable customer information – for example, email address or service tier information – would be considered non-sensitive and the use and sharing of that information would be subject to opt-out consent, consistent with consumer expectations.

Sounds good, but Congress voted in March 2017 to overturn that rule. Read about what happened — and what consumers can do — in my story for Zonic News, “U.S. Internet Service Providers Don’t Need To Protect Customer Privacy.”


Three years of the 2013 OWASP Top 10 — and it’s the same vulnerabilities over and over

Can’t we fix injection already? It’s been nearly four years since the most recent iteration of the OWASP Top 10 came out — that’s June 12, 2013. The OWASP Top 10 are the most critical web application security flaws, as determined by a large group of experts. The list doesn’t change much, or change often, because the fundamentals of web application security are consistent.

The 2013 OWASP Top 10 were:

  1. Injection
  2. Broken Authentication and Session Management
  3. Cross-Site Scripting (XSS)
  4. Insecure Direct Object References
  5. Security Misconfiguration
  6. Sensitive Data Exposure
  7. Missing Function Level Access Control
  8. Cross-Site Request Forgery (CSRF)
  9. Using Components with Known Vulnerabilities
  10. Unvalidated Redirects and Forwards

The preceding list came out on April 19, 2010:

  1. Injection
  2. Cross-Site Scripting (XSS)
  3. Broken Authentication and Session Management
  4. Insecure Direct Object References
  5. Cross-Site Request Forgery (CSRF)
  6. Security Misconfiguration
  7. Insecure Cryptographic Storage
  8. Failure to Restrict URL Access
  9. Insufficient Transport Layer Protection
  10. Unvalidated Redirects and Forwards

Looks pretty familiar. If you go back further, to the Open Web Application Security Project’s inaugural 2004 list and the 2007 list that followed, the pattern of flaws stays the same. That’s because programmers, testers, and code-design tools keep making the same mistakes, over and over again.

Take #1, Injection (often written as SQL Injection, though it’s broader than just SQL). It’s described as:

Injection flaws occur when an application sends untrusted data to an interpreter. Injection flaws are very prevalent, particularly in legacy code. They are often found in SQL, LDAP, Xpath, or NoSQL queries; OS commands; XML parsers, SMTP Headers, program arguments, etc. Injection flaws are easy to discover when examining code, but frequently hard to discover via testing. Scanners and fuzzers can help attackers find injection flaws.

The technical impact?

Injection can result in data loss or corruption, lack of accountability, or denial of access. Injection can sometimes lead to complete host takeover.

And the business impact?

Consider the business value of the affected data and the platform running the interpreter. All data could be stolen, modified, or deleted. Could your reputation be harmed?

Eliminating the vulnerability to injection attacks is not rocket science. OWASP summarizes three approaches:

Preventing injection requires keeping untrusted data separate from commands and queries.

The preferred option is to use a safe API which avoids the use of the interpreter entirely or provides a parameterized interface. Be careful with APIs, such as stored procedures, that are parameterized, but can still introduce injection under the hood.

If a parameterized API is not available, you should carefully escape special characters using the specific escape syntax for that interpreter. OWASP’s ESAPI provides many of these escaping routines.

Positive or “white list” input validation is also recommended, but is not a complete defense as many applications require special characters in their input. If special characters are required, only approaches 1. and 2. above will make their use safe. OWASP’s ESAPI has an extensible library of white list input validation routines.
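
To make the preferred approach concrete, here is a minimal sketch in Python using the standard library’s sqlite3 module; the table, columns, and attack payload are hypothetical. The point is that a parameterized query never lets untrusted input become part of the SQL statement itself.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # DON'T: string concatenation lets attacker-controlled input
    # become part of the SQL statement itself.
    query = "SELECT id, email FROM users WHERE username = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # DO: a parameterized query keeps untrusted data separate from
    # the command; the driver never interprets the value as SQL.
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, username TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice', 'alice@example.com')")

# A classic injection payload: the unsafe version returns every row,
# while the parameterized version matches nothing.
payload = "x' OR '1'='1"
print(find_user_unsafe(conn, payload))  # [(1, 'alice@example.com')]
print(find_user_safe(conn, payload))    # []
```

The same principle applies to LDAP, XPath, NoSQL queries, and OS commands: keep the data out of the command.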

Not rocket science, not brain surgery — and the same is true of the other vulnerabilities. There’s no excuse for still getting these wrong, folks. Cut down on these top 10, and our web applications will be much safer, and our organizational risk much reduced.

Do you know how often your web developers make the OWASP Top 10 mistakes? The answer should be “never.” They’ve had plenty of time to figure this out.


Top Do’s and Don’ts for creating friendly calendar invites

“Call with Alan.” That’s what the calendar event says, with a bridge line as the meeting location. That’s it. For the individual who sent me that invitation, that’s a meaningful description, I guess. For me… worthless! This meeting was apparently sent out (and I agreed to attend) at least three weeks ago. I have no recollection of what this meeting is about. Well, it’ll be an adventure! (Also: If I had to cancel or reschedule, I wouldn’t even know who to contact.)

When I send out calendar invites, I try hard to make the event name descriptive to everyone, not just me. Like “ClientCorp and Camden call re keynote topics” or “Suzie Q and Alan Z — XYZ donations.” Something! Give a hint, at least! After all, people who receive invitations can’t edit the names to make them more meaningful.

And then there’s time-zone ambiguity. Some calendar programs (like Google Calendar) do a good job of tracking the event’s time zone, and mapping it to mine. Others, and I’m thinking of Outlook 365, do a terrible job there, and make it difficult to specify the event in a different time zone.

For example, I’m in Phoenix, and often set up calls with clients on the East Coast or in the U.K. As a courtesy, I like to set up meetings using the client’s time zone. Easy when I use Google Calendar to set up the event. Not easy in Outlook 365, which I must use for some projects.

Similarly, some calendar programs do a good job mapping the event to each recipient’s time zone. Others don’t. The standards are crappy, and the implementations of the standards are worse.
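
The frustrating part is that the underlying conversion is not hard when the time zone travels with the event. Here is a minimal sketch using Python’s standard zoneinfo module (Python 3.9 or later); the meeting itself is made up. A 3 p.m. Eastern call lands at noon in Phoenix, which skips daylight saving time; the trouble starts when a calendar app stores the bare clock time without the zone.

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # standard library in Python 3.9+

# Event created in the client's time zone (hypothetical meeting).
start_eastern = datetime(2017, 4, 11, 15, 0, tzinfo=ZoneInfo("America/New_York"))

# Each recipient's calendar should render the same instant in local time.
for zone in ("America/Phoenix", "Europe/London", "America/New_York"):
    local = start_eastern.astimezone(ZoneInfo(zone))
    print(zone, local.strftime("%Y-%m-%d %H:%M %Z"))

# America/Phoenix prints 12:00 MST because Arizona skips daylight saving;
# the conversion only goes wrong when the zone information is dropped.
```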

And the problems go beyond time-zone mapping. Each Web-based, mobile, and desktop calendar app, even those that claim to conform to standards, has its own quirks, proprietary features, and incompatibilities. For example, repeating events aren’t handled consistently from calendar program to calendar program. It’s a real mess.

Here are a few simple do’s and don’ts for event creators. Or rather, don’ts and do’s.

  • DON’T just put the name of the person you are meeting with in the event name.
  • DO put your name and organization too, and include your contact information (phone, email, whatever) in the calendar invite itself. Having just a conference bridge or the location of a coffee shop won’t do someone any good if they need to reach you before the meeting.
  • DON’T assume that everyone will remember what the meeting is about.
  • DO put the purpose of the meeting into the event title.
  • DON’T think that everyone’s calendar software works like yours or has the same features, vis-à-vis time zones, attachments, comments, and so-on.
  • DO consider putting the meeting time and time zone into the event name. It’s something I don’t do, but I have friends who do, like “ClientCorp and Camden call re keynote topics — 3pm Pacific.” Hmm, maybe I should do that?
  • DON’T expect that if you change the event time on your end, that change will percolate to all recipients. Again, this can be software-specific.
  • DO cancel the event if it’s necessary to reschedule, and set up a new one. Also send an email to all participants explaining what happened. I dislike getting calendar emails saying the meeting date/time has been changed — with no explanation.
  • DON’T assume that people will be able to process your software’s calendar invitations. Different calendar programs don’t play well with each other.
  • DO send a separate email with all the details, including the event name, start time, time zone, and list of participants, in addition to the calendar invite. Include the meeting location, or conference-call dial-in codes, in that email.
  • DON’T trust that everyone will use the “accept” button to indicate that they are attending. Most will not.
  • DO follow up with people who don’t “accept” to ask if they are coming.
  • DON’T assume that just because it’s on their calendar, people will remember to show up. I had one guy miss an early-morning call he “accepted” because it was early and he hadn’t checked his calendar yet. D’oh!
  • DO send a meeting confirmation email, one day before, if the event was scheduled more than a week in advance.

Have more do’s and don’ts? Please add them using the comments.


What’s the deal with Apple iCloud accounts being hacked?

The word went out Wednesday, March 22, spreading from techie to techie. “Better change your iCloud password, and change it fast.” What’s going on? According to ZDNet, “Hackers are demanding Apple pay a ransom in bitcoin or they’ll blow the lid off millions of iCloud account credentials.”

A hacker group claims to have access to 250 million iCloud and other Apple accounts. They are threatening to reset all the passwords on those accounts – and then remotely wipe those phones using lost-phone capabilities — unless Apple pays up with untraceable bitcoins or Apple gift cards. The ransom is a laughably small $75,000.

According to various sources, at least some of the stolen account credentials appear to be legitimate. Whether that means all 250 million accounts are in peril, of course, is unknowable.

Apple seems to have acknowledged that there is a genuine problem. The company told CNET, “The alleged list of email addresses and passwords appears to have been obtained from previously compromised third-party services.”

We obviously don’t know what Apple is going to do, or what Apple can do. It hasn’t put out a general call for users to change their passwords, at least as of Thursday, even though that would seem prudent. It also hasn’t encouraged users to enable two-factor authentication, which should make it much more difficult for hackers to reset iCloud passwords without physical access to a user’s iPhone, iPad, or Mac.

Unless the hackers alter the demands, Apple has a two-week window to respond. From its end, it could temporarily disable password reset capabilities for iCloud accounts, or at least make the process difficult to automate, access programmatically, or even access more than once from a given IP address. So, it’s not “game over” for iCloud users and iPhone owners by any means.
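
To illustrate the sort of throttling just described, here is a minimal sketch of a per-IP rate limiter for password-reset attempts. It is purely hypothetical; the class name, the limit, and the window are mine, not anything Apple has said it uses.

```python
import time
from collections import defaultdict, deque

class ResetRateLimiter:
    """Allow at most `limit` password-reset attempts per IP within `window` seconds."""

    def __init__(self, limit=3, window=3600):
        self.limit = limit
        self.window = window
        self.attempts = defaultdict(deque)  # ip -> timestamps of recent attempts

    def allow(self, ip):
        now = time.time()
        recent = self.attempts[ip]
        # Discard attempts that have aged out of the window.
        while recent and now - recent[0] > self.window:
            recent.popleft()
        if len(recent) >= self.limit:
            return False  # throttle: too many resets from this address
        recent.append(now)
        return True

limiter = ResetRateLimiter(limit=3, window=3600)
for attempt in range(5):
    print(attempt + 1, "allowed" if limiter.allow("203.0.113.7") else "blocked")
```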

It could be that the hackers are asking for such a low ransom because they know their attack is unlikely to succeed. They’re possibly hoping that Apple will figure it’s easier to pay a small amount than to take any real action. My guess is they are wrong, and Apple will lock them out before the April 7 deadline.

So what’s really going on, and what can be done about it? Read more in my essay, “Apple iCloud Accounts Hacked — Or Maybe Not,” on Zonic News.


The cybersecurity benefits of artificial intelligence and machine learning

Let’s talk about the practical application of artificial intelligence to cybersecurity. Or rather, let’s read about it. My friend Sean Martin has written a three-part series on the topic for ITSP Magazine, exploring AI, machine learning, and related topics. I provided review and commentary for the series.

The first part, “It’s a Marketing Mess! Artificial Intelligence vs Machine Learning,” explores probably the biggest challenge with AI: hyperbole. That, and inconsistency. Every lab, every vendor, every conference, and every analyst defines even the most basic terminology differently — when they bother to define it at all. Vagueness begets vagueness, and so the terms “artificial intelligence” and “machine learning” are thrown around with wanton abandon. As Sean writes,

The latest marketing discovery of AI as a cybersecurity product term only exacerbates an already complex landscape of jingoisms and muddled understanding. Add to that a raft of associated terms, such as big data, smart data, heuristics (which can be a branch of AI), behavioral analytics, statistics, data science, machine learning and deep learning. Few experts agree on exactly what those terms mean, so how can consumers of the solutions that sport these fancy features properly understand what those things are?

“Machine Learning: The More Intelligent Artificial Intelligence,” the second installment, picks up by digging into pattern recognition. Specifically, the story is about when AI software can discern patterns based on its own examination of raw data. Sean also digs into deep learning:

Deep Learning (also known as deep structured learning, hierarchical learning or deep machine learning) is a branch of machine learning based on a set of algorithms that attempt to model high level abstractions in data by using a deep graph with multiple processing layers, composed of multiple linear and non-linear transformations.
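
That definition is dense, so here is a toy illustration of my own, not Sean’s, of what “multiple processing layers, composed of multiple linear and non-linear transformations” looks like in code: a tiny two-layer forward pass in Python with NumPy, using random weights. Real deep-learning systems differ mainly in scale and in how those weights are learned from data.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Non-linear transformation: keep positive values, zero out the rest.
    return np.maximum(0, x)

# Randomly initialized weights for two layers (in practice these are learned).
w1, b1 = rng.normal(size=(10, 32)), np.zeros(32)  # linear transformation 1
w2, b2 = rng.normal(size=(32, 2)), np.zeros(2)    # linear transformation 2

def forward(features):
    hidden = relu(features @ w1 + b1)  # layer 1: linear + non-linear
    scores = hidden @ w2 + b2          # layer 2: linear
    return scores                      # e.g. made-up "benign" vs "suspicious" scores

# One made-up 10-dimensional feature vector (say, summary traffic statistics).
print(forward(rng.normal(size=10)))
```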

In the conclusion, “The Actual Benefits of Artificial Intelligence and Machine Learning,” Sean brings it home to your business. How can you tell if an AI solution is real? How can you tell what it really does? That means going beyond the marketing material’s attempts to obfuscate:

The bottom line on AI-based technologies in the security world: Whether it’s called machine learning or some flavor of analytics, look beyond the terminology – and the oooh, ahhh hype of artificial intelligence – to see what the technology does. As the saying goes, pay for the steak – not the artificial intelligent marketing sizzle.

It was a pleasure working on this series with Sean, and we hope you enjoy reading it.