
Managing the impact of open source software on data centers

Open source software (OSS) offers many benefits for organizations large and small—not the least of which is the price tag, which is often zero. Zip. Nada. Free-as-in-beer. Beyond that compelling price tag, what you often get with OSS is a lack of a hidden agenda. You can see the project, you can see the source code, you can see the communications, you can see what’s going on in the support forums.

When OSS goes great, everyone is happy, from techies to accounting teams. Yes, the legal department may want to scrutinize the open source license to make sure your business is compliant, but in most well-performing scenarios, the lawyers are the only ones frowning. (But then again, the lawyers frown when scrutinizing commercial closed-source software license agreements too, so you can’t win.)

The challenge with OSS is that it can be hard to manage, especially when something goes wrong. Depending on the open source package, there can be a lot of mysteries, which can make ongoing support, including troubleshooting and performance tuning, a real challenge. That’s because OSS is complex.

It’s not like you can say, well, here’s my Linux distribution on my server. Oh, and here’s my open source application server, and my open source NoSQL database, and my open source log suite. In reality, those bits of OSS may be from separate OSS projects, which may (or may not) have been tested for how well they work together.

A separate challenge is that because OSS is often free-as-in-beer, the software may not be in the corporate inventory. That’s especially common if the OSS is in the form of a library or an API that might be built into other applications you’ve written yourself. The OSS might be invisible, but it has the potential to break or cause problems down the road.

You can’t manage what you don’t know about

When it comes to OSS, there may be a lot you don’t know about, such as those license terms or interoperability gotchas. Worse, there can be maintenance issues — and security issues. Ask yourself: Does your organization know all the OSS it has installed on servers on-prem or in the cloud? Coded into custom applications? Are you sure that all patches and fixes have been installed (and installed correctly), even on virtual machine templates, and that there are no security vulnerabilities?
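To see how easily OSS slips past the inventory, consider just one layer: the open source libraries inside a single Python application. Here is a minimal sketch (my own illustration, assuming a Python 3.8 or later environment) that enumerates every installed package and its exact version, a starting point for the kind of inventory those questions demand:

    from importlib.metadata import distributions

    # Each installed package is an open source component that belongs in the
    # corporate inventory, recorded with its exact version so patches and
    # vulnerability advisories can be checked against it.
    inventory = sorted((dist.metadata["Name"], dist.version) for dist in distributions())

    for name, version in inventory:
        print(f"{name}=={version}")

Of course, this covers only one language runtime on one machine; a real inventory has to repeat the exercise for every server, container image, and custom application.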

In my essay “The six big gotchas: The impact of open source on data centers,” we’ll dig into the key topics: License management, security, patch management, maximizing uptime, maximizing performance, and supporting the OSS.


Lift-and-shift vs building native cloud apps

Writing new cloud-native applications. “Lifting and shifting” existing data center applications. Those are two popular ways of migrating enterprise assets to the cloud.

Gartner’s definition: “Lift-and-shift means that workloads are migrated to cloud IaaS in as unchanged a manner as possible, and change is done only when absolutely necessary. IT operations management tools from the existing data center are deployed into the cloud environment largely unmodified.”

There’s no wrong answer, no wrong way of proceeding. Some data center applications (including servers and storage) may be easier to move than others. Some cloud-native apps may be easier to write than others. Much depends on how much interconnectivity there is between the applications and other software; that’s why, for example, public-facing websites are often easiest to move to the cloud, while tightly coupled internal software, such as inventory control or factory-floor automation, can be trickier.

That’s why in some cases, a hybrid strategy is best. Some parts of the applications are moved up to the cloud, while others remain in the data centers, with SD-WANs or other connectivity linking everything together in a secure manner.

In other words, no one size fits all. And no one timeframe fits all, especially when it comes to lifting-and-shifting.

Joe Paiva, CIO of the U.S. Commerce Department’s International Trade Administration (ITA), is a fan of lift-and-shift. He said at a cloud conference that, “Sometimes it makes sense because it gets you there. That was the key. We had to get there because we would be no worse off or no better off, and we were still spending a lot of money, but it got us to the cloud. Then we started doing rationalization of hardware and applications, and dropped our bill to Amazon by 40 percent compared to what we were spending in our government data center. We were able to rationalize the way we use the service.” Paiva estimates government agencies could save 5%-15% using lift-and-shift.

The benefits of moving existing workloads to the cloud are almost entirely financial. If you can shut down a data center and pay less to run the application in the cloud, it can be a good short-term solution with immediate ROI. Gartner cautions, however, that lift and shift “generally results in little created value. Plus, it can be a more expensive option and does not deliver immediate cost savings.” Much depends on how much it costs to run that application today.

Read more in my essay, “Lifting and shifting from the data center up to the cloud.”


Modern programming lessons learned from 1970s mainframes

About a decade ago, I purchased a piece of a mainframe on eBay — the name ID bar. Carved from a big block of aluminum, it says “IBM System/370 168,” and it hangs proudly over my desk.

My time on mainframes was exclusively with the IBM System/370 series. With a beautiful IBM 3278 color display terminal on my desk, and, later, a TeleVideo 925 terminal and an acoustic coupler at home, I was happier than anyone had a right to be.

We refreshed our hardware often. The latest variant I worked on was the System/370 4341, introduced in early 1979, which ran faster and cooler than the very costly 3031 mainframes we had before. I just found this in the IBM archives: “The 4341, under a 24-month contract, can be leased for $5,975 a month with two million characters of main memory and for $6,725 a month with four million characters. Monthly rental prices are $7,021 and $7,902; purchase prices are $245,000 and $275,000, respectively.” And we had three, along with tape drives, disk drives (in IBM-speak, DASD, for Direct Access Storage Devices), and high-speed line printers. Not cheap!

Our operating system on those systems was called Virtual Machine, or VM/370. It consisted of two parts: the Control Program and the Conversational Monitor System. CP was the timesharing operating system – in modern virtualization terms, the hypervisor running on the bare metal. CMS was the user interface that users logged into; it provided access not only to a text-based command console, but also to file storage and a library of tools, such as compilers. (We often referred to the platform as CP/CMS.)

Thanks to VM/370, each user believed she had access to a 100% dedicated and isolated System/370 mainframe, with every resource available and virtualized. (For example, she appeared to have dedicated access to tape drives, but they appeared non-functional if her tapes weren’t loaded, or if she hadn’t bought access to the drives.)

My story about mainframes isn’t just reminiscing about the time of dinosaurs. When programming those computers, which I did full-time in the late 1970s and early 1980s, I learned a lot of lessons that are very applicable today. Read all about that in my article for HP Enterprise Insights, “4 lessons for modern software developers from 1970s mainframe programming.”


DevOps is the future of enterprise software development, because cloud computing

To get the most benefit from the new world of cloud-native server applications, forget about the old way of writing software. In the old model, architects designed software. Programmers wrote the code, and testers tested it on test servers. Once the testing was complete, the code was “thrown over the wall” to administrators, who installed the software on production servers, and who essentially owned the applications moving forward, only going back to the developers if problems occurred.

The new model, which appeared about 10 years ago, is called “DevOps,” short for development and operations. In the DevOps model, architects, developers, testers, and administrators collaborate much more closely to create and manage applications. Specifically, developers play a much broader role in the day-to-day administration of deployed applications, and use information about how the applications are running to tune and enhance those applications.

The involvement of developers in administration made DevOps perfect for cloud computing. Because administrators had fewer responsibilities (i.e., no hardware to worry about), it was less threatening for those developers and administrators to collaborate as equals.

Change Matters

In that old model of software development and deployment, developers were always change agents. They created new stuff, or added new capabilities to existing stuff. They embraced change, including new technologies – and the faster they created change (i.e., wrote code), the more competitive their business.

By contrast, administrators are tasked with maintaining uptime, while ensuring security. Change is not a virtue to those departments. While admins must accept change as they install new applications, it’s secondary to maintaining stability. Indeed, admins could push back against deploying software if they believed those apps weren’t reliable, or might affect the overall stability of the data center.

With DevOps, everyone can embrace change. One way cloud computing makes that possible is by reducing the risk that an unstable application can damage system reliability. In the cloud, applications can be built and deployed using bare-metal servers (like in a data center), or in virtual machines or containers.

DevOps works best when software is deployed in VMs or containers, since those are isolated from other systems – thereby reducing risk. Turns out that administrators do like change, if there’s minimal risk that changes will negatively affect overall system reliability, performance, and uptime.

Benefits of DevOps

Goodbye, CapEx, hello, OpEx. Cloud computing moves enterprises from capital-expense data centers (which must be built, electrified, cooled, networked, secured, stocked with servers, and refreshed periodically) to operational-expense service (where the business pays monthly for the processors, memory, bandwidth, and storage reserved and/or consumed).

Read more, including about the five biggest benefits of cloud computing, in my essay, “DevOps: The Key To Building And Deploying Cloud-Native Software.”


AOL Instant Messenger is almost dead, but we won’t miss AIM at all

AOL Instant Messenger will be dead before the end of 2017. Yet, instant messages have succeeded far beyond what anyone could have envisioned for either SMS (Short Message Service, carried by the phone company) or AOL, which arguably brought instant messaging to regular computers starting in 1997.

It would be wonderful to claim that there’s some great significance in the passing of AIM. However, my guess is that there simply wasn’t any business benefit to maintaining a service that nearly nobody used. The AIM service was said to carry far less than 1% of all instant messages across the Internet… and that was in 2011.

I have an AIM account, and although it’s linked into my Apple Messages client, I had completely forgotten about it. Yes, there was a little flurry of news back in March 2017, when AOL began closing APIs and shutting down some third-party AIM applications. However, that didn’t resonate. Then, on Oct. 6, came the email from AOL’s new corporate overlord, Oath, a subsidiary of Verizon:

Dear AIM user,

We see that you’ve used AOL Instant Messenger (AIM) in the past, so we wanted to let you know that AIM will be discontinued and will no longer work as of December 15, 2017.

Before December 15, you can continue to use the service. After December 15, you will no longer have access to AIM and your data will be deleted. If you use an @aim.com email address, your email account will not be affected and you will still be able to send and receive email as usual.

We’ve loved working on AIM for you. From setting the perfect away message to that familiar ring of an incoming chat, AIM will always have a special place in our hearts. As we move forward, all of us at AOL (now Oath) are excited to continue building the next generation of iconic brands and life-changing products for users around the world.

You can visit our FAQ to learn more. Thank you for being an AIM user.

Sincerely,

The AOL Instant Messenger team

Read more about this, including why it’s truly no big deal for anyone, in my article, “Instant Messaging Will Continue Just Fine Without AIM.”


Elon Musk is wrong about the dangers of machine learning and artificial intelligence

Despite Elon Musk’s warnings this summer, there’s not a whole lot of reason to lose any sleep worrying about Skynet and the Terminator. Artificial Intelligence (AI) is far from becoming a maleficent, all-knowing force. The only “Apocalypse” on the horizon right now is an over-reliance by humans on machine learning and expert systems, as demonstrated by the deaths of Tesla owners who took their hands off the wheel.

Technologies that currently pass for “Artificial Intelligence,” such as expert systems and machine learning, are excellent for building software that helps with pattern recognition, automated decision-making, and human-to-machine conversations. Both types of AI have been around for decades. And both are only as good as the source information they are based on. For that reason, it’s unlikely that AI will replace human beings’ judgment on important tasks requiring decisions more complex than “yes or no” any time soon.

Expert systems, also known as rule-based or knowledge-based systems, are programs built from explicit rules written down by human experts. The computers can then apply those same rules, but much faster, 24×7, to come up with the same conclusions as the human experts. Imagine asking an oncologist how she diagnoses cancer and then programming medical software to follow those same steps. For a particular diagnosis, an oncologist can study which of those rules were activated to validate that the expert system is working correctly.

However, it takes a lot of time and specialized knowledge to create and maintain those rules, and extremely complex rule systems can be difficult to validate. Needless to say, expert systems can’t function beyond their rules.

By contrast, machine learning allows computers to come to a decision—but without being explicitly programmed. Instead, they are shown hundreds or thousands of sample data sets and told how they should be categorized, such as “cancer | no cancer,” or “stage 1 | stage 2 | stage 3 cancer.”
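To make the contrast concrete, here is a minimal sketch (toy data, hypothetical thresholds, and scikit-learn standing in for the learned model; none of this comes from the article): the expert system applies rules a human wrote down, while the machine-learning model infers its own decision boundary from labeled examples.

    # Expert system: explicit rules written down by a human expert.
    # Every conclusion can be traced back to the rule that fired.
    def expert_system_diagnosis(tumor_size_mm: float, growth_rate: float) -> str:
        if tumor_size_mm > 50:
            return "cancer"
        if tumor_size_mm > 20 and growth_rate > 0.5:
            return "cancer"
        return "no cancer"

    # Machine learning: no explicit rules; the model is trained on labeled samples.
    from sklearn.tree import DecisionTreeClassifier

    samples = [[5, 0.1], [8, 0.2], [30, 0.7], [60, 0.9]]   # [size_mm, growth_rate]
    labels = ["no cancer", "no cancer", "cancer", "cancer"]
    model = DecisionTreeClassifier().fit(samples, labels)

    print(expert_system_diagnosis(25, 0.6))    # a rule fired, so the answer is traceable
    print(model.predict([[25, 0.6]])[0])       # inferred from the labeled examples

The difference matters for validation: you can audit the rules line by line, but the learned model is only as trustworthy as the examples it was trained on.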

Read more about this, including my thoughts on machine learning, pattern recognition, expert systems, and comparisons to human intelligence, in my story for Ars Technica, “Never mind the Elon—the forecast isn’t that spooky for AI in business.”


Breached Deloitte Talks About the Costs of Cyber Breaches

Long after intruders are removed and public scrutiny has faded, the impacts from a cyberattack can reverberate over a multi-year timeline. Legal costs can cascade as stolen data is leveraged in various ways over time; it can take years to recover pre-incident growth and profitability levels; and brand impact can play out in multiple ways.

That’s from a Deloitte report, “Beneath the surface of a cyberattack: A deeper look at business impacts,” released in late 2016. The report’s contents, and other statements on cyber security from Deloitte, are ironic given the company’s huge breach reported this week.

The breach was reported on Monday, Sept. 25, and appears to have exposed confidential emails and financial documents of some of Deloitte’s clients. According to the Guardian,

The Guardian understands Deloitte clients across all of these sectors had material in the company email system that was breached. The companies include household names as well as US government departments. So far, six of Deloitte’s clients have been told their information was “impacted” by the hack. Deloitte’s internal review into the incident is ongoing. The Guardian understands Deloitte discovered the hack in March this year, but it is believed the attackers may have had access to its systems since October or November 2016.

The Guardian asserts that hackers gained access to Deloitte’s global email server via an administrator’s account that was protected by only a single password. Without two-factor authentication, hackers could gain entry via any computer, as long as they guessed the right password (or obtained it via hacking, malware, or social engineering). The story continues,

In addition to emails, the Guardian understands the hackers had potential access to usernames, passwords, IP addresses, architectural diagrams for businesses and health information. Some emails had attachments with sensitive security and design details.

Okay, the breach was bad. What did Deloitte have to say about these sorts of incidents? Lots. In the 2016 report, Deloitte’s researchers pointed to 14 cyberattack impact factors – half of which are the directly visible costs of breach incidents, while the other half can be more subtle or hidden, and potentially never fully understood.

The “Above the Surface” incident costs include the expenses of technical investigations, consumer breach notifications, regulatory compliance, attorneys’ fees and litigation, post-breach customer protection, public relations, and cybersecurity protections. Hard to tally are the “Below the Surface” costs of insurance premium increases, increased cost to raise debt, impact of operational disruption/destruction, value of lost contract revenue, devaluation of trade name, loss of intellectual property, and lost value of customer relationships.

As the report says,

Common perceptions about the impact of a cyberattack are typically shaped by what companies are required to report publicly—primarily theft of personally identifiable information (PII), payment data, and personal health information (PHI). Discussions often focus on costs related to customer notification, credit monitoring, and the possibility of legal judgments or regulatory penalties. But especially when PII theft isn’t an attacker’s only objective, the impacts can be even more far-reaching.

Read more in my essay, “Hacked and Breached: Let’s Hear Deloitte In Its Own Words.”


The cause of the Equifax breach: Sheer human incompetence

Stupidity. Incompetence. Negligence. The unprecedented data breach at Equifax has dominated the news cycle, infuriating IT managers, security experts, legislators, and attorneys — and scaring consumers. It appears that sensitive personally identifiable information (PII) on 143 million Americans was exfiltrated, as well as PII on some non-US nationals.

There are many troubling aspects. Reports say the tools that consumers can use to see if they are affected by the breach are inaccurate. Other articles say that by using those tools, consumers are waiving their rights to sue Equifax. Some worry that Equifax will actually make money off this by selling affected consumers its credit-monitoring services.

Let’s look at the technical aspects, though. While details about the breach are still largely lacking, two bits of information are making the rounds. One is that Equifax followed poor password practices, allowing hackers to easily gain access to at least one server. Another is that there was a flaw in a piece of open-source software; a patch had been available for months, yet Equifax didn’t apply it.

The veracity of those two possible causes of the breach is still unclear. Even so, they point to a troubling pattern of utter irresponsibility by Equifax’s IT and security operations teams.

Bad Equifax Password Practices

Username “admin.” Password “admin.” That’s often the default for hardware, like a home WiFi router. The first thing any owner should do is change both the username and password. Every IT professional knows that. Yet the fine techies at Equifax, or at least their Argentina office, didn’t know that. According to well-known security writer Brian Krebs, earlier this week,

Earlier today, this author was contacted by Alex Holden, founder of Milwaukee, Wisc.-based Hold Security LLC. Holden’s team of nearly 30 employees includes two native Argentinians who spent some time examining Equifax’s South American operations online after the company disclosed the breach involving its business units in North America.

It took almost no time for them to discover that an online portal designed to let Equifax employees in Argentina manage credit report disputes from consumers in that country was wide open, protected by perhaps the most easy-to-guess password combination ever: “admin/admin.”

What’s more, writes Krebs,

Once inside the portal, the researchers found they could view the names of more than 100 Equifax employees in Argentina, as well as their employee ID and email address. The “list of users” page also featured a clickable button that anyone authenticated with the “admin/admin” username and password could use to add, modify or delete user accounts on the system.

and

A review of those accounts shows all employee passwords were the same as each user’s username. Worse still, each employee’s username appears to be nothing more than their last name, or a combination of their first initial and last name. In other words, if you knew an Equifax Argentina employee’s last name, you also could work out their password for this credit dispute portal quite easily.

Incompetence. Stupidity. Appalling. Amazing. Read more about the Equifax breach in my essay, “Initial Analysis Of The Equifax Breach.”


The amazing HP calculators of the 1970s

HP-35 slide rule calculator

At the current rate of rainfall, when will your local reservoir overflow its banks? If you shoot a rocket at an angle of 60 degrees into a headwind, how far will it fly with 40 pounds of propellant and a 5-pound payload? Assuming a 100-month loan for $75,000 at 5.11 percent, what will the payoff balance be after four years? If a lab culture is doubling every 14 hours, how many viruses will there be in a week?

Those sorts of questions aren’t asked by mathematicians, who are the people who derive equations to solve problems in a general way. Rather, they are asked by working engineers, technicians, military ballistics officers, and financiers, all of whom need an actual number: Given this set of inputs, tell me the answer.
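Today, of course, a few lines of code make short work of questions like those. Here is a rough sketch of two of them (with my own assumptions filled in: the 5.11 percent is an annual rate compounded monthly, and “a week” means 168 hours):

    def loan_balance(principal, annual_rate, n_months, months_paid):
        i = annual_rate / 12.0                                    # monthly interest rate
        payment = principal * i / (1 - (1 + i) ** -n_months)      # level monthly payment
        # Remaining balance after a given number of payments.
        return principal * (1 + i) ** months_paid - payment * ((1 + i) ** months_paid - 1) / i

    print(round(loan_balance(75_000, 0.0511, 100, 48), 2))        # payoff balance after four years

    doublings = 168 / 14                                          # hours in a week / doubling time
    print(2 ** doublings)                                         # the culture grows 4,096-fold in a week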

Before the modern era (say, the 1970s), these problems could be hard to solve. They required a lot of pencils and paper, a book of tables, or a slide rule. Mathematicians never carried slide rules, but astronauts did, as their backup computers.

However, slide rules had limitations. They were good to about three digits of accuracy, no more, in the hands of a skilled operator. Three digits was fine for real-world engineering, but not enough for finance. With slide rules, you had to keep track of the decimal point yourself: The slide rule might tell you the answer is 641, but you had to know if that was 64.1 or 0.641 or 641.0. And if you were chaining calculations (needed in all but the simplest problems), accuracy dropped with each successive operation.
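Here is a small sketch of that last point (an invented chain of multiplications, with every operand and intermediate result rounded to three significant digits, roughly the precision a good slide-rule operator could read):

    from math import floor, log10

    def round_sig(x, digits=3):
        # Round x to the given number of significant digits.
        return round(x, -int(floor(log10(abs(x)))) + (digits - 1))

    exact = 1.0
    slide_rule = 1.0
    for factor in [3.14159, 2.71828, 1.41421, 9.80665, 6.674]:
        exact *= factor
        slide_rule = round_sig(slide_rule * round_sig(factor))   # three digits in, three digits out

    print(exact)                                   # about 790.4
    print(slide_rule)                              # drifts to about 787 after just five steps
    print(abs(exact - slide_rule) / exact * 100)   # roughly 0.4 percent error, and growing

Five multiplications in, the chained result is already off by nearly half a percent; a longer calculation only gets worse.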

Everything the slide rule could do, a so-called slide-rule calculator could do better—and more accurately. Slide rules are really good at only a few things. Multiplication and division? Easy. Exponents, like 6¹³? Easy. Doing trig, like sines, cosines, and tangents? Easy. Logarithms? Easy.

Hewlett-Packard unleashed a monster when it created the HP-9100A desktop calculator, released in 1968 at a price of about $5,000. The HP-9100A did everything a slide rule could do, and more—such as trig, polar/rectangular conversions, and exponents and roots. However, it was big and it was expensive—about $35,900 in 2017 dollars, or the price of a nice car! HP had a market for the HP-9100A, since it already sold test equipment into many labs. However, something better was needed, something affordable, something that could become a mass-market item. And that became the pocket slide-rule calculator revolution, starting off with the amazing HP-35.

If you look at the HP-35 today, it seems laughably simplistic. The calculator app in your smartphone is much more powerful. However, back in 1972, and at a price of only $395 ($2,350 in 2017 dollars), the HP-35 changed the world. Companies like General Electric ordered tens of thousands of units. It was crazy, especially for a device that had a few minor math bugs in its first shipping batch (HP gave everyone a free replacement).

Read more about early slide-rule calculators, and the more advanced card-programmable models like the HP-65 and HP-67, in my story, “The early history of HP calculators.”

HP-65 and HP-67 card-programmable calculators


Many on-prem ERP and CRM packages are not sufficiently secured

When was the last time most organizations discussed the security of their Oracle E-Business Suite? How about SAP S/4HANA? Microsoft Dynamics? IBM’s DB2? Discussions about on-prem server software security too often begin and end with ensuring that operating systems are at the latest level, and are current with patches.

That’s not good enough. Just as clicking on a phishing email or opening a malicious document in Microsoft Word can corrupt a desktop, so too can server applications be vulnerable. When those server applications are involved with customer records, billing systems, inventory, transactions, financials, or human resources, a hack into ERP or CRM systems can threaten an entire organization. Worse, if that hack leveraged stolen credentials, the business may never realize that competitors or criminals are stealing its data, and potentially even corrupting its records.

A new study from the Ponemon Institute points to the potential severity of the problem. Sixty percent of the respondents to the “Cybersecurity Risks to Oracle E-Business Suite” study say that information theft, modification of data, and disruption of business processes on their company’s Oracle E-Business Suite applications would be catastrophic. While 70% of respondents said a material security or data breach due to insecure Oracle E-Business Suite applications is likely, 67% of respondents believe their top executives are not aware of this risk. (The research was sponsored by Onapsis, which sells security solutions for ERP suites, so apply a little sodium chloride to your interpretation of the study’s results.)

The audience for this study was businesses that rely upon Oracle E-Business Suite. About 24% of respondents said that it was the most critical application they ran, and altogether, 93% said it was one of their top 10 critical applications. Bearing in mind that large businesses run thousands of server applications, that’s saying something.

Yet more than half of respondents – 53% – said that it was Oracle’s responsibility to ensure that its applications and platforms are safe and secure. Unless they’ve contracted with Oracle to manage their on-prem applications, and to proactively apply patches and fixes, well, they are delusional.

Another area of delusion: That software must be connected to the Internet to pose a risk. In this study, 52% of respondents agree or strongly agree that “Oracle E-Business applications that are not connected to the Internet are not a security threat.” They’ve never heard of insider threats? Credentials theft? Penetrations of enterprise networks?

What about securing other ERP/CRM packages, like those from IBM, Microsoft, and SAP? Read all about that, and more, in my story, “Organizations Must Secure Their Business-Critical ERP And CRM Server Applications.”


When natural disasters strike, the cloud can aid recovery

The water is rising up over your desktops, your servers, and your data center. You’d better hope that the disaster recovery plans included the word “offsite” – and that the backup IT site wasn’t another local business that’s also destroyed by the hurricane, the flood, the tornado, the fire, or the earthquake.

Disasters are real, as August’s Hurricane Harvey and immense floods in Southeast Asia have taught us all. With tens of thousands of people displaced, it’s hard to rebuild a business. Even with a smaller disaster, like a power outage that lasts a couple of days, the business impact can be tremendous.

I once worked for a company in New York that was hit by a blizzard that snapped the power and telephone lines to the office building. Down went the PBX, down went the phone system and the email servers. Remote workers (I was in California) were massively impaired. Worse, incoming phone calls simply rang and rang; incoming email messages bounced back to the sender.

With that storm, electricity was gone for more than a week, and broadband took additional time to be restored. You’d better believe our first order of business, once we began the recovery phase, was to move our internal Microsoft Exchange Server to a colocation facility with redundant T1 lines, and move our internal PBX to a hosted solution from the phone company. We didn’t like the cost, but we simply couldn’t afford to be shut down again the next time a storm struck.

These days, the answer lies within the cloud, either for primary data center operations, or for the source of a backup. (Forget trying to salvage anything from a submerged server rack or storage system.)

We aren’t prepared. In a February 2017 study conducted by the Disaster Recovery Journal and Forrester Research, “The State Of Disaster Recovery Preparedness 2017,” only 18% of disaster recovery decision makers said they were “very prepared” to recover their data center in the event of a site failure or disaster event. Another 37% were prepared, 34% were somewhat prepared, and 11% not prepared at all.

That’s not good enough if you’re in Houston or Bangladesh or even New York during a blizzard. And that’s clear even among the survey respondents, 43% of whom said there was a business requirement to stay online and competitive 24×7.

Read more in my article, “Before the Next Natural Disaster Strikes, Look to the Cloud.”


Cyberwar: Can ships like the USS John S. McCain be hacked?

The more advanced the military technology, the greater the opportunities for intentional or unintentional failure in a cyberwar. As Scotty says in Star Trek III: The Search for Spock, “The more they overthink the plumbing, the easier it is to stop up the drain.”

In the case of a couple of recent accidents involving the U.S. Navy, the plumbing might actually be the computer systems that control navigation. In mid-August, the destroyer U.S.S. John S. McCain rammed into an oil tanker near Singapore. A month or so earlier, a container ship hit the nearly identical U.S.S. Fitzgerald off Japan. Why didn’t those hugely sophisticated ships see the much-larger merchant vessels, and move out of the way?

There has been speculation, and only speculation, that both ships might have been victims of cyber foul play, perhaps as a test of offensive capabilities by a hostile state actor. The U.S. Navy has not given a high rating to that possibility, and let’s admit, the odds are against it.

Even so, the military hasn’t dismissed the idea, writes Bill Gertz in the Washington Free Beacon:

On the possibility that China may have triggered the collision, Chinese military writings indicate there are plans to use cyber attacks to “weaken, sabotage, or destroy enemy computer network systems or to degrade their operating effectiveness.” The Chinese military intends to use electronic, cyber, and military influence operations for attacks against military computer systems and networks, and for jamming American precision-guided munitions and the GPS satellites that guide them, according to one Chinese military report.

The data centers of those ships are hardened and well protected. Still, given the sophistication of today’s warfare, what if systems are hacked?

Imagine what would happen if, say, foreign powers were able to break into drones or cruise missiles. This might cause them to crash prematurely, self-destruct, or hit a friendly target, or perhaps even “land” and become captured. What about disruptions to fighter aircraft, such as jets or helicopters? Radar systems? Gear carried by troops?

To learn more about these unsettling ideas, read my article, “Can Warships Like the U.S.S. John S. McCain Be Hacked?”


The GDPR says you must reveal personal data breaches

No organization likes to reveal that its network has been breached, or that its data has been stolen by hackers or disclosed through human error. Yet under the European Union’s new General Data Protection Regulation (GDPR), breaches must be disclosed.

The GDPR is a broad set of regulations designed to protect citizens of the European Union. The rules apply to every organization and business that collects or stores information about people in Europe. It doesn’t matter if the company has offices in Europe: If data is collected about Europeans, the GDPR applies.

Traditionally, most organizations hide all information about security incidents, especially if data is compromised. That makes sense: If a business is seen to be careless with people’s data, its reputation can suffer, competitors can attack, and there can be lawsuits or government penalties.

We tend to hear about security incidents only if there’s a breach sufficiently massive that the company must disclose it to regulators, or if there’s a leak to the media. Even then, the delay between the breach and its disclosure can be weeks or months — meaning that folks aren’t given enough time to engage identity theft protection companies, monitor their credit/debit payments, or even change their passwords.

Thanks to GDPR, organizations must now disclose all incidents where personal data may have been compromised – and make that disclosure quickly. Not only that, but the GDPR says that the disclosure must be to the general public, or at least to those people affected; the disclosure can’t be buried in a regulatory filing.

Important note: The GDPR says absolutely nothing about disclosing successful cyberattacks where personal data is not stolen or placed at risk. That includes distributed denial-of-service (DDoS) attacks, ransomware, theft of financial data, or espionage of intellectual property. That doesn’t mean that such cyberattacks can be kept secret, but in reality, good luck finding out about them, unless the company has other reasons to disclose. For example, after some big ransomware attacks earlier this year, some publicly traded companies revealed to investors that those attacks could materially affect their quarterly profits. This type of disclosure is mandated by financial regulation – not by the GDPR, which is focused on protecting individuals’ personal data.

The clock is ticking. To see what you must do, read my article, “With the GDPR, You Must Reveal the Personal Data Breach.”


Get ready for huge fines if you don’t comply with the GDPR

The European Union is taking computer security, data breaches, and individual privacy seriously. The EU’s General Data Protection Regulation (GDPR) will take effect on May 25, 2018 – but it’s not only a regulation for companies based in Europe.

The GDPR is designed to protect European consumers. That means that every business that stores information about European residents will be affected, no matter where that business operates or is headquartered. That means the United States, and also a post-Brexit United Kingdom.

There’s a hefty fee for non-compliance: Businesses can be fined up to €20 million or 4% of their worldwide top-line revenue, whichever is higher. No matter how you slice it, for most businesses that’s going to hurt, and for the tech industry’s giants, 4% of global revenue is anything but a slap on the wrist.
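As a quick back-of-the-envelope sketch (revenue figures invented for illustration), the “whichever is higher” rule is what gives the regulation its teeth for large companies:

    def max_gdpr_fine_eur(worldwide_annual_revenue_eur: float) -> float:
        # Article 83(5): up to €20 million, or 4% of total worldwide annual
        # turnover of the preceding financial year, whichever is higher.
        return max(20_000_000, 0.04 * worldwide_annual_revenue_eur)

    print(max_gdpr_fine_eur(50_000_000))        # a small firm: the €20 million figure applies
    print(max_gdpr_fine_eur(10_000_000_000))    # €10 billion in revenue: exposure rises to €400 million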

A big topic within GDPR is “data portability.” That is the notion that individuals have the right to see the information they have shared with an organization (or have given permission to be collected), in a commonly used machine-readable format. Details need to be worked out to make that effective.

Another topic is that individuals have the right to make changes to some of their information, or to delete all or part of their information. No, customers can’t delete their transaction history, for example, or delete that they owe the organization money. However, they may choose to delete information that the organization may have collected, such as their age, where they went to college, or the names of their children. They also have the right to request corrections to the data, such as a misspelled name or an incorrect address.

That’s not as trivial as it may seem. It is not uncommon for organizations to have multiple versions of, say, a person’s name and spelling, or to have the information contain differences in formatting. This can have implications when records don’t match. In some countries, there have been problems with a traveler’s passport information not 100% exactly matching the information on a driver’s license, airline ticket, or frequent traveller program. While the variations might appear trivial to a human — a missing middle name, a missing accent mark, an extra space — it can be enough to throw off automated data processing systems, which therefore can’t 100% match the traveler to a ticket. Without rules like the GDPR, organizations haven’t been required to make it easy, or even possible, for customers to make corrections.
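As a minimal sketch of why that matters (invented names, using Python’s standard unicodedata module), a naive string comparison fails on exactly these trivial variations, while a normalization step lets the records match:

    import unicodedata

    def normalize(name: str) -> str:
        # Strip accents, collapse repeated whitespace, and lowercase for matching.
        decomposed = unicodedata.normalize("NFKD", name)
        ascii_only = decomposed.encode("ascii", "ignore").decode("ascii")
        return " ".join(ascii_only.lower().split())

    passport = "José  García"
    ticket = "Jose Garcia"

    print(passport == ticket)                        # False: the raw strings don't match
    print(normalize(passport) == normalize(ticket))  # True once both are normalized

It’s a small example, but it is exactly the kind of mismatch that correction requests under the GDPR will surface.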

For more about this, read my article, “The GDPR is coming.”


Attack of the Killer Social Media Robots!

The late, great science fiction writer Isaac Asimov frequently referred to the “Frankenstein Complex,” the deep-seated and irrational phobia that robots (i.e., artificial intelligence) would rise up and destroy their creators. Whether it’s HAL in “2001: A Space Odyssey,” or the mainframe in “Colossus: The Forbin Project,” or Arnold Schwarzenegger in “Terminator,” or even the classic Star Trek episode “The Ultimate Computer,” sci-fi carries the message that AI will soon render us obsolescent… or obsolete… or extinct. Many people are worried this fantasy will become reality.

No, Facebook didn’t have to kill creepy bots. To listen to the breathless news reports, Facebook created some chatbots that were out of control. The bots, designed to test AI’s ability to negotiate, had created their own language – and scientists were alarmed that they could no longer understand what those devious rogues were up to. So, the plug had to be pulled before Armageddon. Said Poulami Nag in the International Business Times:

Facebook may have just created something, which may cause the end of a whole Homo sapien species in the hand of artificial intelligence. You think I am being over dramatic? Not really. These little baby Terminators that we’re breeding could start talking about us behind our backs! They could use this language to plot against us, and the worst part is that we won’t even understand.

Well, no. Not even close. The development of an optimized negotiating language was no surprise, and had little to do with the conclusion of Facebook’s experiment, explain the engineers at FAIR – Facebook Artificial Intelligence Research.

The program’s goal was to create dialog agents (i.e., chatbots) that would negotiate with people. To quote a Facebook blog,

Similar to how people have differing goals, run into conflicts, and then negotiate to come to an agreed-upon compromise, the researchers have shown that it’s possible for dialog agents with differing goals (implemented as end-to-end-trained neural networks) to engage in start-to-finish negotiations with other bots or people while arriving at common decisions or outcomes.

And then,

To go beyond simply trying to imitate people, the FAIR researchers instead allowed the model to achieve the goals of the negotiation. To train the model to achieve its goals, the researchers had the model practice thousands of negotiations against itself, and used reinforcement learning to reward the model when it achieved a good outcome. To prevent the algorithm from developing its own language, it was simultaneously trained to produce humanlike language.

Read more in my article, “Attack of the Killer Facebook Robot Brains: Is Artificial Intelligence Becoming Dangerous?”


A very cute infographic: 10 Marketing lessons from Apple

It’s hard to know which was better: The pitch for my writing about an infographic, or the infographic itself.

About the pitch: The writer said, “I’ve been tasked with the job of raising some awareness around the graphic (in the hope that people actually like my work lol) and wondered if you thought it might be something entertaining for your audience? If not I completely understand – I’ll just lose my job and won’t be able to eat for a month (think of my poor cats).” Since I don’t want this lady and her cats to starve, I caved.

If you like the pitch, you’ll enjoy the infographic, “10 Marketing Lessons from Apple.” One piece from it is reproduced above. Very cute.


Cybersecurity pros are hard to get —here’s how to find and keep them

It’s difficult to recruit qualified security staff because there are more openings than humans to fill them. It’s also difficult to retain IT security professionals because someone else is always hiring. But don’t worry: Unless you work for an organization that refuses to pay the going wage, you’ve got this.

Two recent studies present dire, but somewhat conflicting, views of the availability of qualified cybersecurity professionals over the next four or five years. The first study is the Global Information Security Workforce Study from the Center for Cyber Safety and Education, which predicts a shortfall of 1.8 million cybersecurity workers by 2022. Among the highlights from that research, which drew on data from 19,000 cybersecurity professionals:

  • The cybersecurity workforce gap will hit 1.8 million by 2022. That’s a 20 percent increase since 2015.
  • Sixty-eight percent of workers in North America believe this workforce shortage is due to a lack of qualified personnel.
  • A third of hiring managers globally are planning to increase the size of their departments by 15 percent or more.
  • There aren’t enough workers to address current threats, according to 66 percent of respondents.
  • Around the globe, 70 percent of employers are looking to increase the size of their cybersecurity staff this year.
  • Nine in ten security specialists are male. The majority have technical backgrounds, suggesting that recruitment channels and tactics need to change.
  • While 87 percent of cybersecurity workers globally did not start in cybersecurity, 94 percent of hiring managers indicate that security experience in the field is an important consideration.

The second study is the Cybersecurity Jobs Report, created by the editors of Cybersecurity Ventures. Here are some highlights:

  • There will be 3.5 million cybersecurity job openings by 2021.
  • Cybercrime will more than triple the number of job openings over the next five years. India alone will need 1 million security professionals by 2020 to meet the demands of its rapidly growing economy.
  • Today, the U.S. employs nearly 780,000 people in cybersecurity positions. But a lot more are needed: There are approximately 350,000 current cybersecurity job openings, up from 209,000 in 2015.

So, whether you’re hiring a chief information security officer or a cybersecurity operations specialist, expect a lot of competition. What can you do about it? How can you beat the staffing shortage? Read my suggestions in “How to beat the cybersecurity staffing shortage.”


Ransomware dominates the Black Hat 2017 conference

“Ransomware! Ransomware! Ransomware!” Those words may lack the timeless resonance of Steve Ballmer’s epic “Developers! Developers! Developers!” scream in 2000, but ransomware was seemingly an obsession at Black Hat USA 2017, happening this week in Las Vegas.

There are good reasons for attendees and vendors to be focused on ransomware. For one thing, ransomware is real. Rates of ransomware attacks have exploded off the charts in 2017, helped in part by the disclosures of top-secret vulnerabilities and hacking tools allegedly stolen from the United States’ three-letter-initial agencies.

For another, the costs of ransomware are significant. Looking only at a few attacks in 2017, including WannaCry, Petya, and NotPetya, corporations have been forced to revise their earnings downward to account for IT downtime and lost productivity. Those include Reckitt, Nuance, and FedEx. Those types of impact grab the attention of every CFO and every CEO.

Another analyst I talked with at Black Hat observed that just about every vendor on the expo floor had managed to incorporate ransomware into its magic show. My quip: “I wouldn’t be surprised to see a company marketing network cables as specially designed to protect against ransomware.” His quick retort: “The queue would be half a mile long for samples. They’d make a fortune.”

Read my article, “A Singular Message about Malware,” to learn what organizations can and should do to handle ransomware. It’s not rocket science, and it’s not brain surgery.


The billion-dollar cost of extreme cyberattacks

A major global cyberattack could cause US$53 billion in economic losses. That’s on the scale of a catastrophic disaster like 2012’s Hurricane Sandy.

Lloyds of London, the famous insurance company, partnered with Cyence, a risk analysis firm specializing in cybersecurity. The result is a fascinating report, “Counting the Cost: Cyber Exposure Decoded.” This partnership makes sense: Lloyds needs to understand the risk before deciding whether to underwrite a venture — and when it comes to cybersecurity, this is an emerging science. Traditional actuarial methods used to calculate the risk of a cargo ship falling prey to pirates, or an office block to a devastating flood, simply don’t apply.

Lloyds says that in 2016, cyberattacks cost businesses as much as $450 billion. While insurers can help organizations manage that risk, the risk is increasing. The report points to those risks covering “everything from individual breaches caused by malicious insiders and hackers, to wider losses such as breaches of retail point-of-sale devices, ransomware attacks such as BitLocker, WannaCry and distributed denial-of-service attacks such as Mirai.”

The worry? Despite writing $1.35 billion in cyberinsurance in 2016, “insurers’ understanding of cyber liability and risk aggregation is an evolving process as experience and knowledge of cyber-attacks grows. Insureds’ use of the internet is also changing, causing cyber-risk accumulation to change rapidly over time in a way that other perils do not.”

And that is why the lack of time-tested actuarial tables can cause disaster, says Lloyds. “Traditional insurance risk modelling relies on authoritative information sources such as national or industry data, but there are no equivalent sources for cyber-risk and the data for modelling accumulations must be collected at scale from the internet. This makes data collection, and the regular update of it, key components of building a better understanding of the evolving risk.”

Huge Liability Costs

The “Counting the Cost” report makes for some depressing reading. Here are three of the key findings, quoted verbatim. Read the 56-page report to dig deeply into the scenarios, and the damages.

  • The direct economic impacts of cyber events lead to a wide range of potential economic losses. For the cloud service disruption scenario in the report, these losses range from US$4.6 billion for a large event to US$53.1 billion for an extreme event; in the mass software vulnerability scenario, the losses range from US$9.7 billion for a large event to US$28.7 billion for an extreme event.
  • Economic losses could be much lower or higher than the average in the scenarios because of the uncertainty around cyber aggregation. For example, while average losses in the cloud service disruption scenario are US$53 billion for an extreme event, they could be as high as US$121.4 billion or as low as US$15.6 billion, depending on factors such as the different organisations involved and how long the cloud-service disruption lasts for.
  • Cyber-attacks have the potential to trigger billions of dollars of insured losses. For example, in the cloud- services scenario insured losses range from US$620 million for a large loss to US$8.1 billion for an extreme loss. For the mass software vulnerability scenario, the insured losses range from US$762 million (large loss) to US$2.1 billion (extreme loss).

Read more in my article for Zonic News, “Lloyds Of London Estimates The Billion-Dollar Cost Of Extreme Cyberattacks.”


Learn datacenter principles from ISO 26262 standards for automotive safety engineering

Automotive ECU (engine control unit)


In my everyday life, I trust that if I make a panic stop, my car’s antilock brake system will work. The hardware, software, and servos will work together to ensure that my wheels don’t lock up—helping me avoid an accident. If that’s not sufficient, I trust that the impact sensors embedded behind the front bumper will fire the airbag actuators with the correct force to protect me from harm, even though they’ve never been tested. I trust that the bolts holding the seat in its proper place won’t shear. I trust the seat belts will hold me tight, and that cargo in the trunk won’t smash through the rear seats into the passenger cabin.

Engineers working on nearly every automobile sold worldwide ensure that their work practices conform to ISO 26262. That standard describes how to manage the functional safety of the electrical and electronic systems in passenger cars. A significant portion of ISO 26262 involves ensuring that software embedded into cars—whether in the emissions system, the antilock braking systems, the security systems, or the entertainment system—is architected, coded, and tested to be as reliable as possible.

I’ve worked with ISO 26262 and related standards on a variety of automotive software security projects. Don’t worry, we’re not going to get into the hairy bits of those standards because unless you are personally designing embedded real-time software for use in automobile components, they don’t really apply. Also, ISO 26262 is focused on the real-world safety of two-ton machines hurtling at 60-plus miles per hour—that is, things that will kill or hurt people if they don’t work as expected.

Instead, here are five IT systems management ideas that are inspired by ISO 26262. We’ll help you ensure your systems are designed to be Reliable, with a capital R, and Safe, with a capital S.

Read the list, and more, in my article for HP Enterprise Insights, “5 lessons for data center pros, inspired by automotive engineering standards.”


Cybersecurity has a problem with women — and many opportunities

MacKenzie Brown has nailed the problem — and has good ideas for the solution. As she points out in her three-part blog series, “The Unicorn Extinction” (links in a moment):

  • Overall, [only] 25% of women hold occupations in technology alone.
  • Women’s Society of Cyberjutsu (WSC), a nonprofit for empowering women in cybersecurity, states that females make up 11% of the cybersecurity workforce while (ISC)2, a non-profit specializing in education and certification, reports a whopping estimation of 10%.
  • Lastly, put those current numbers against the 1 million employment opportunities predicted for 2017, with a global demand of up to 6 million by 2019.

While many would decry the systemic sexism and misogyny in cybersecurity, Ms. Brown sees opportunity:

…the cybersecurity industry, a market predicted to have global expenditure exceeding $1 trillion between now and 2021(4), will have plenty of demand for not only information security professionals. How can we proceed to find solutions and a fixed approach towards resolving this gender gap and optimizing this employment fluctuation? Well, we promote unicorn extinction.

The problem of a lack of technically developed and specifically qualified women in Cybersecurity is not unique to this industry alone; however the proliferation of women in tangential roles associated with our industry shows that there is a barrier to entry, whatever that barrier may be. In the next part of this series we will examine the ideas and conclusions of senior leadership and technical women in the industry in order to gain a woman’s point of view.

She continues to write about analyzing the problem from a woman’s point of view:

Innovating solutions to improve this scarcity of female representation, requires breaking “the first rule about Fight Club; don’t talk about Fight Club!” The “Unicorn Law”, this anecdote, survives by the circling routine of the “few women in Cybersecurity” invoking a conversation about the “few women in Cybersecurity” on an informal basis. Yet, driving the topic continuously and identifying the values will ensure more involvement from the entirety of the Cybersecurity community. Most importantly, the executive members of Fortune 500 companies who apply a hiring strategy which includes diversity, can begin to fill those empty chairs with passionate professionals ready to impact the future of cyber.

Within any tale of triumph, obstacles are inevitable. Therefore, a comparative analysis of successful women may be the key to balancing employment supply and demand. I had the pleasure of interviewing a group of women; all successful, eclectic in roles, backgrounds of technical proficiency, and amongst the same wavelength of empowerment. These interviews identified commonalities and distinct perspectives on the current gender gap within the technical community.

What’s the Unicorn thing?

Ms. Brown writes,

During hours of research and writing, I kept coming across a peculiar yet comically exact tokenism deemed, The Unicorn Law. I had heard this in my industry before, attributed to me, “unicorn,” which is described (even in the cybersecurity industry) as: a woman-in-tech, eventually noticed for their rarity and the assemblage toward other females within the industry. In technology and cybersecurity, this is a leading observation many come across based upon the current metrics. When applied to the predicted demand of employment openings for years to come, we can see an enormous opportunity for women.

Where’s the opportunity?

She concludes,

There may be a notable gender gap within cybersecurity, but there also lies great opportunity as well. Organizations can help narrow the gap, but there is also tremendous opportunity in women helping each other as well.

Some things that companies can do to help include:

  • Providing continuous education, empowering and encouraging women to acquire new skills through additional training and certifications.
  • Using this development training to promote from within.
  • Reaching out to communities to encourage young women from junior to high school levels to consider cybersecurity as a career.
  • Seeking out women candidates for jobs, both independently and by using outside recruiters if need be.
  • At events, refusing to field all-male panels.
  • And most importantly, encouraging discussion about the benefits of a diverse team.

If you care about the subject of gender opportunity in cybersecurity, I urge you to read these three essays.

The Unicorn Extinction Series: An Introspective Analysis of Women in Cybersecurity, Part 1

The Unicorn Extinction Series: An Introspective Analysis of Women in Cybersecurity, Part 2

The Unicorn Extinction Series: An Introspective Analysis of Women in Cybersecurity, Part 3


Tell your customers about your data breaches!

Did they tell their customers that data was stolen? No, not right away. When AA — a large automobile club and insurer in the United Kingdom — was hacked in April, the company was completely mum for months, in part because it didn’t believe the stolen data was sensitive. AA’s customers only learned about it when information about the breach was publicly disclosed in late June.

There are no global laws that require companies to disclose information about data thefts to customers. There are similarly no global laws that require companies to disclose defects in their software or hardware products, including those that might introduce security vulnerabilities.

It’s obvious why companies wouldn’t want to disclose problems with their products (such as bugs or vulnerabilities) or with their back-end operations (such as system breaches or data exfiltration). If customers think you’re insecure, they’ll leave. If investors think you’re insecure, they’ll leave. If competitors think you’re insecure, they’ll pounce on it. And if lawyers or regulators think you’re insecure, they might file lawsuits.

No matter how you slice it, disclosures about problems are not good for business. Far better to share information about new products, exciting features, customer wins, market share increases, additional platforms, and pricing promotions.

That’s not to say that all companies hide bad news. Microsoft, for example, is considered to be very proactive about disclosing flaws in its products and platforms, including those that affect security. When Microsoft learned about the Server Message Block (SMB) flaw that later enabled malware like WannaCry and Petya, it quickly issued a Security Bulletin in March that explained the problem and supplied the necessary patches. If customers had read the bulletin and applied the patches, those ransomware outbreaks wouldn’t have occurred.

When you get outside the domain of large software companies, such disclosures are rare. Automobile manufacturers do share information about vehicle defects with regulators, as per national laws, but resist recalls because of the expense and bad publicity. Beyond that, companies share information about problems with products, services, and operations unwillingly – and with delays.

In the AA case, as SC Magazine wrote,

The leaky database was first discovered by the AA on April 22 and fixed by April 25. In the time that it had been exposed, it had reportedly been accessed by several unauthorised parties. An investigation by the AA deemed the leaky data to be not sensitive, meaning that the organisation did not feel it necessary to tell customers.

Read more about this in my piece for Zonic News, “Tell Customers about Vulnerabilities – And Data Breaches.”

,

Watch out for threatening emails from Anonymous or Lizard Squad

The Federal Bureau of Investigation is warning about potential attacks from a hacking group called Lizard Squad. This information, released today, was labeled “TLP:Green” by the FBI and CERT, which means that it shouldn’t be publicly shared – but I am sharing it because this information was published on a publicly accessible blog run by the New York State Bar Association. I do not know why distribution of this information was restricted.

The FBI said:

Summary

An individual or group claiming to be “Anonymous” or “Lizard Squad” sent extortion emails to private-sector companies threatening to conduct distributed denial of service (DDoS) attacks on their network unless they received an identified amount of Bitcoin. No victims to date have reported DDoS activity as a penalty for non-payment.

Threat

In April and May 2017, at least six companies received emails claiming to be from “Anonymous” and “Lizard Squad” threatening their companies with DDoS attacks within 24 hours unless the company sent an identified amount of Bitcoin to the email sender. The email stated the demanded amount of Bitcoin would increase each day the amount went unpaid. No victims to date have reported DDoS activity as a penalty for nonpayment.

Reporting on schemes of this nature goes back at least three years.

In 2016, a group identifying itself as “Lizard Squad” sent extortion demands to at least twenty businesses in the United Kingdom, threatening DDoS attacks if they were not paid five Bitcoins (as of 14 June, each Bitcoin was valued at 2,698 USD). No victims reported actual DDoS activity as a penalty for non-payment.

Between 2014 and 2015, a cyber extortion group known as “DDoS ‘4’ Bitcoin” (DD4BC) victimized hundreds of individuals and businesses globally. DD4BC would conduct an initial, demonstrative low-level DDoS attack on the victim company, followed by an email message introducing themselves, demanding a ransom paid in Bitcoins, and threatening a higher-level attack if the ransom was not paid within the stated time limit. While no significant disruption or DDoS activity was noted, it is probable companies paid the ransom to avoid the threat of DDoS activity.

Background

Lizard Squad is a hacking group known for their DDoS attacks primarily targeting gaming-related services. On 25 December 2014, Lizard Squad was responsible for taking down the Xbox Live and PlayStation networks. Lizard Squad also successfully conducted DDoS attacks on the UK’s National Crime Agency’s (NCA) website in 2015.

Anonymous is a hacking collective known for several significant DDoS attacks on government, religious, and corporate websites conducted for ideological reasons.

Recommendations

The FBI suggests precautionary measures to mitigate DDoS threats, including but not limited to:
  • Have a DDoS mitigation strategy ready ahead of time.
  • Implement an incident response plan that includes DDoS mitigation and practice this plan before an actual incident occurs. This plan may involve external organizations such as your Internet Service Provider, technology companies that offer DDoS mitigation services, and law enforcement.
  • Ensure your plan includes the appropriate contacts within these external organizations. Test activating your incident response team and third party contacts.
  • Implement a data back-up and recovery plan to maintain copies of sensitive or proprietary data in a separate and secure location. Backup copies of sensitive data should not be readily accessible from local networks.
  • Ensure upstream firewalls are in place to block incoming User Datagram Protocol (UDP) packets.
  • Ensure software or firmware updates are applied as soon as the device manufacturer releases them.

If you have received one of these demands:

  • Do not make the demand payment.
  • Retain the original emails with headers.
  • If applicable, maintain a timeline of the attack, recording all times and content of the attack.

The FBI encourages recipients of this document to report information concerning suspicious or criminal activity to their local FBI field office or the FBI’s 24/7 Cyber Watch (CyWatch). Field office contacts can be identified at www.fbi.gov/contact-us/field. CyWatch can be contacted by phone at (855) 292-3937 or by e-mail. When available, each report submitted should include the date, time, location, type of activity, number of people, and type of equipment used for the activity, as well as the name of the submitting company or organization and a designated point of contact. Press inquiries should be directed to the FBI’s National Press Office at (202) 324-3691.

, ,

Agylytyx is a silly name, let’s make fun of it

I am unapologetically mocking this company’s name. Agylytyx emailed me this press release today, and only the name captured my attention. Plus, their obvious love of the ™ symbol — even people they quote use the ™. Amazing!

Beyond that, I’ve never talked to the company or used its products, and have no opinion about them. (My guess is that it’s supposed to be pronounced as “Agil-lytics.”)

Agylytyx Announces Availability of New IOT Data Analysis Application

SUNNYVALE, Calif., June 30, 2017 /PRNewswire/ — Agylytyx, a leading cloud-based analytic software vendor, today announced a new platform for analyzing IoT data. The Agylytyx Generator™ IoT platform represents an application of the vendor’s novel Construct Library™ approach to the IoT marketplace. For the first time, companies can both explore their IoT data and make it actionable much more quickly than previously thought possible.

From PLC data streams archived as tags in traditional historians to time series data streaming from sensors attached to devices, the Agylytyx Generator™ aggregates and presents IoT data in a decision-ready format. The company’s unique Construct Library™ (“building block”) approach allows decision makers to create and explore aggregated data such as pressure, temperature, output productivity, worker status, waste removal, fuel consumption, heat transfer, conductivity, condensation or just about any “care abouts.” This data can be instantly explored visually at any level such as region, plant, line, work cell or even device. Best of all, the company’s approach eliminates the need to build charts or write queries.

One of the company’s long-time advisors, John West of Clean Tech Open, noticed the Agylytyx Generator™ potential from the outset. West’s wide angle on data analysis led him to stress the product’s broad applicability. West said “Even as the company was building the initial product, I advised the team that I thought there was strong applicability of the platform to operational data. The idea of applying Constructs to a received data set has broad usage. Their evolution of the Agylytyx Generator™ platform to IoT data is a very natural one.”

The company’s focus on industrial process data was the brainchild of one the company’s investors, Jim Smith. Jim is a chemical engineer with extensive experience working with plant floor data. Smith stated “I recognized the potential in the company’s approach for analyzing process data. Throughout the brainstorming process, we all gradually realized we were on to something groundbreaking.”

This unique approach to analytics attracted the attention of PrecyseTech, a pioneer of Industrial IoT (IIoT) Systems providing end-to-end management of high-value physical assets and personnel. Paul B. Silverman, the CEO of PrecyseTech, has had a longstanding relationship with the company. Silverman noted: “The ability of the Agylytyx Generator™ to address cloud-based IoT data analytic solutions is a good fit with PrecyseTech’s strategy. Agylytyx is working with the PrecyseTech team to develop our inPALMSM Solutions IoT applications, and we are working collaboratively to identify and develop IoT data opportunities targeting PrecyseTech’s clients. Our plans are to integrate the Agylytyx Generator™ within our inPALMSM Solutions product portfolio and also to offer users access to the Agylytyx Generator™ via subscription.”

Creating this IoT focus made the ideal use of the Agylytyx Generator™. Mark Chang, a data scientist for Agylytyx, noted: “All of our previous implementations – financial, entertainment, legal, customer service – had data models with common ‘units of measure’ – projects, media, timekeepers, support cases, etc. IoT data is dissimilar in that there is no common ‘unit of measure’ across devices. This dissimilarity is exactly what makes our Construct Library™ approach so useful to IoT data. The logical next step for us will be to apply machine learning and cluster inference to enable optimization of resource deployment and predictive analytics like predictive maintenance.”

About Agylytyx

Agylytyx provides cloud-based enterprise business analytic software. The company’s flagship product, the Agylytyx Generator™, frees up analyst time and results in better decision making across corporations. Agylytyx is based in Sunnyvale, California, and has locations in Philadelphia and Chicago, IL. For more information about Agylytyx visit www.agylytyx.com.

, ,

Varjo offers a new type of high-def VR/AR display tuned to the user’s eye motions

The folks at Varjo think they’ve made a breakthrough in how goggles for virtual reality and augmented reality work. They are onto something.

Most VR/AR goggles have two displays, one for each eye, and they strive to drive those displays at the highest resolution possible. Their hardware and software take into account that as the goggles move, the viewpoint has to move in a seamless way, without delay. If there’s delay, the “willing suspension of disbelief” required to make VR work fails, and in some cases, the user experiences nausea and disorientation. Not good.

The challenge comes from making the display sufficiently high resolution that objects look photorealistic. That lets users manipulate virtual machine controls, operate flight simulators, read virtual text, and so on. Most AR/VR systems try to make the display uniformly high resolution, so that no matter where the user looks, the resolution is there.

Varjo, based in Finland, has a different approach. It takes advantage of the fact that the human eye sees in high resolution only in the spot the fovea is pointed at, and at much lower resolution elsewhere. So while the whole display is capable of high resolution, Varjo uses fovea detectors to do “gaze tracking,” determining what the user is looking at and making that area super high resolution. When the fovea moves to another spot, that area is almost instantly bumped up to super high resolution, while the original area is downgraded to a reduced resolution.
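To make the idea concrete, here’s a minimal sketch, in Python, of how a foveated renderer might allocate resolution around a tracked gaze point. The names, the foveal radius, and the resolution factors are my own illustrative assumptions, not Varjo’s implementation.

```python
# A minimal sketch of foveated rendering (illustrative only, not Varjo's code):
# render at full resolution inside a small window around the tracked gaze
# point, and at reduced resolution everywhere else.

from dataclasses import dataclass

@dataclass
class GazeSample:
    x: float  # normalized horizontal gaze position on the display, 0..1
    y: float  # normalized vertical gaze position on the display, 0..1

def resolution_scale(px: float, py: float, gaze: GazeSample,
                     fovea_radius: float = 0.08) -> float:
    """Return a render-resolution multiplier for a point on the display.

    1.0  -> "super high" resolution inside the foveal window
    0.25 -> reduced resolution in the periphery (numbers are assumptions)
    """
    distance = ((px - gaze.x) ** 2 + (py - gaze.y) ** 2) ** 0.5
    return 1.0 if distance <= fovea_radius else 0.25

# Example: the eye tracker reports a glance toward the upper right.
gaze = GazeSample(x=0.7, y=0.3)
print(resolution_scale(0.72, 0.31, gaze))  # near the fovea -> 1.0
print(resolution_scale(0.10, 0.90, gaze))  # periphery      -> 0.25
```

A real headset would also have to blend the boundary between the two regions and re-aim the high-resolution window as fast as the eye moves; this sketch ignores both.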

Sound simple? It’s not, and that’s why the initial Varjo technology will be targeted at professional users, such as doctors, computer-aided design workers, and remote instrument operators. Prototypes of the goggles will be available this year to software developers, and the first products should ship to customers at the end of 2018. The price of the goggles is said to be “thousands, not tens of thousands” of dollars, according to Urho Konttori, the company’s founder. We talked by phone; he was in the U.S. doing demos in San Francisco and New York, but unfortunately, I wasn’t able to attend one of them.

Now, Varjo isn’t the first to use gaze tracking technology to try to optimize the image. According to Konttori, other vendors use medium resolution where the eye is pointing, and low resolution elsewhere, just enough to establish context. By contrast, he says that Varjo uses super high resolution where the user looks, and high resolution elsewhere. Because each eye’s motion is tracked separately, the system can also tell when the user is looking at objects close to the user (because the eyes are at a more converged angle) or farther away (the eyes are at a more parallel angle).

“In our prototype, wherever you are looking, that’s the center of the high resolution display,” he said. “The whole image looks to be in focus, no matter where you look. Even in our prototype, we can move the display projection ten times faster than the human eye.”

Konttori says that the effective resolution of the product, called 20/20, is 70 megapixels, updated in real time based on head motion and gaze tracking. That compares to fewer than 2 megapixels for Oculus, Vive, HoloLens and Magic Leap. (This graphic from Varjo compares their display to an unnamed competitor.) What’s more, he said the CPU/GPU power needed to drive this display isn’t huge. “The total pixel count is less than in a single 4K monitor. You need roughly 2x the GPU compared to a conventional VR set for the same scene.”

The current prototypes use two video connectors and two USB connectors. Konttori says that this will drop to one video connector and one USB connector shortly, so that the device can be driven by smaller professional-grade computers, such as a gaming laptop, though he expects most will be connected to workstations.

Konttori will be back in the U.S. later this year. I’m looking forward to getting my hands (and eyes) on a Varjo prototype. Will report back when I’ve actually seen it.

,

The good and bad of press relations – a view from four editors

What do PR people do right? What do they do wrong? Khali Henderson, a senior partner in BuzzTheory Strategies, recently interviewed me (and a few other technology editors) about “Things Editors Hate (and Like) About Your Press Relations.”

She started the story with,

I asked these veteran editors what they think about interfacing with business executives and/or their PR representatives in various ways – from press releases to pitches to interviews.

The results are a set of guidelines on what to do and, more importantly, what NOT to do when interfacing with media.

If you’re new to media relations, this advice will start you off on the right track.

Even if you’ve been around the press pool a lap or two, you may learn something new.

After that, Khali asked a number of practical questions, including:

  • When you receive a press release, what makes you most likely to follow up?
  • What makes you skip a press release and go to the next one?
  • When a company executive pitches you a story, what makes you take notice?
  • What makes you pass on a story pitch?
  • When you are reporting on a story, what are you looking for in a source?
  • What do you wish business executives and/or their PR representatives knew about your job?

Read and enjoy the story, and my answers to Khali’s questions!

,

Lordy, I hope there are tapes

I received this awesome tech spam message today from LaserVault. (It’s spam because it went to my company’s info@ address).

There’s only one thought: “Lordy, I hope there are backup tapes.”

Free White Paper: Is A Tape-Related Data Disaster In Your Future?

Is a tape-related data disaster in your future? It may be if you currently use tape for your backup and recovery.

This paper discusses the many risks you take by using tape and relying on it to keep your data safe in case of a disaster.

Read how you can better protect your data from the all too common dangers that threaten your business, and learn about using D2D technology, specifically tape emulation, instead of tape for iSeries, AIX, UNIX, and Windows.

This white paper should be required reading for anyone involved in overseeing their company’s tape backup operations.

Don’t be caught short when the need to recover your data is most critical. Download the free white paper now.

Ha ha ha ha ha. I slay me.

, ,

Running old software? It’s dangerous. Update or replace!

The WannaCry (WannaCrypt) malware attack spread through unpatched old software. Old software is the bane of the tech industry. Software vendors hate old software for many reasons. One, of course, is that old software has vulnerabilities that must be patched. Another is that the support costs for older software keep going and growing. Plus, newer software has new features that can generate business, while customers running old software aren’t generating much revenue.

Enterprises, too, hate old software. They don’t like the support costs, either, or the security vulnerabilities. However, there are huge costs in licensing and installing new software – which might require training users and IT staff, buying new hardware, updating templates, adjusting integrations, and so on. Plus, old software has been tested and certified, and better the risk you know than the risk you don’t know. So, they keep using old software.

Think about a family that’s torn between keeping a paid-for 13-year-old car, like my 2004 BMW, and leasing a newer, safer, more reliable model. The decision about whether or not to upgrade is complicated. There’s no single right answer, and when in doubt, the easiest decision is simply to wait until next year’s budget.

However: What about a family that decides to go car-shopping after paying for a scary breakdown or an unexpectedly large repair bill? Similarly, companies are inspired to upgrade critical software after suffering a data breach or learning about irreparable vulnerabilities in the old code.

WannaCry might be that call to action for some organizations. Take Windows, for example – but let me be quick to stress that this issue isn’t entirely about Microsoft products. Smartphones running old versions of Android or Apple’s iOS, or old Mac laptops that can’t be moved to the latest edition of OS X, are just as vulnerable.

Okay, back to Windows and WannaCry. In its critical March 14, 2017, security update, Microsoft accurately identified a flaw in its Server Message Block (SMB) code that could be exploited; the flaw was disclosed in documents stolen by hackers from U.S. security agencies. Given the massive severity of that flaw, Microsoft offered patches for old software, including Windows Server 2008 and Windows Vista.

It’s important to note that customers who applied those patches were not affected by WannaCry. Microsoft fixed it. Many customers didn’t install the fix because they didn’t know about it, they couldn’t find the IT staff resources, or simply thought this vulnerability was no big deal. Well, some made the wrong bet, and paid for it.

What can you do?

Read more about this in my latest for Zonic News, “Old Software is Bad, Unsafe, Insecure Software.”

,

Streamlining the cybersecurity insurance application process

Have you ever suffered through the application process for cybersecurity insurance? You know that “suffered” is the right word because of a triple whammy.

  • First, the general risk factors involved in cybersecurity are constantly changing. Consider the rapid rise in ransomware, for example.
  • Second, it is extremely labor-intensive for businesses to document how “safe” they are, in terms of their security maturity, policies, practices and technology.
  • Third, it’s hard for insurers, meaning the underwriters and their actuaries, to feel confident that they truly understand how risky a potential customer is. That information and knowledge is required for quoting a policy that offers sufficient coverage at reasonable rates.

That is, of course, assuming that everyone is on the same page and agrees that cybersecurity insurance is important to consider for the organization. Is cybersecurity insurance a necessary evil for every company to consider? Or, is it only a viable option for a small few? That’s a topic for a separate conversation. For now, let’s assume that you’re applying for insurance.

For their part, insurance carriers aren’t equipped to go into your business and examine your IT infrastructure. They won’t examine firewall settings or audit your employee anti-phishing training materials. Instead, they rely upon your answers to questionnaires developed and interpreted by their own engineers. Unfortunately, those questionnaires may not get into the nuances, especially if you’re in a vertical where the risks are especially high, and so are the rewards for successful hackers.

According to InformationAge, 77% of ransomware appears in just four industries: business & professional services (28%), government (19%), healthcare (15%) and retail (15%). In 2016 and 2017, healthcare organizations like hospitals and medical practices were repeatedly hit by ransomware. Give that data to the actuaries, and they might ask those types of organizations to fill out even more questionnaires.

About those questionnaires? “Applications tend to have a lot of yes/no answers… so that doesn’t give the entire picture of what the IT framework actually looks like,” says Michelle Chia, Vice President, Zurich North America. She explained that an insurance company’s internal assessment engineers have to dig deeper to understand what is really going on: “They interview the more complex clients to get a robust picture of what the combination of processes and controls actually looks like and how secure the network and the IT infrastructure are.”

Read more in my latest for ITSP Magazine, “How to Streamline the Cybersecurity Insurance Process.”

, , ,

A phone that takes pictures? Smartphone cameras turn 20 years old

Twenty years ago, my friend Philippe Kahn introduced the first camera-phone. You may know Philippe as the founder of Borland, and as an entrepreneur who has started many companies, and who has accomplished many things. He’s also a sailor, jazz musician, and, well, a fun guy to hang out with.

About camera phones: At first, I was a skeptic. Twenty years ago I was still shooting film, and then made the transition to digital SLR platforms. Today, I shoot with big Canon DSLRs for birding and general stuff, Leica digital rangefinders when I want to be artistic, and pocket-sized digital cameras when I travel. Yet most of my pictures, especially those posted to social media, come from the built-in camera in my smartphone.

Philippe has blogged about this special anniversary – which also marks the birth of his daughter Sophie. To excerpt from his post, The Creation of the Camera-Phone and Instant-Picture-Mail:

Twenty years ago on June 11th 1997, I shared instantly the first camera-phone photo of the birth of my daughter Sophie. Today she is a university student and over 2 trillion photos will be instantly shared this year alone. Every smartphone is a camera-phone. Here is how it all happened in 1997, when the web was only 4 years old and cellular phones were analog with ultra limited wireless bandwidth.

First step 1996/1997: Building the server service infrastructure: For a whole year before June 1997 I had been working on a web/notification system that was capable of uploading a picture and text annotations securely and reliably and sending link-backs through email notifications to a stored list on a server and allowing list members to comment.

Remember it was 1996/97, the web was very young and nothing like this existed. The server architecture that I had designed and deployed is in general the blueprint for all social media today: Store once, broadcast notifications and let people link back on demand and comment. That’s how Instagram, Twitter, Facebook, LinkedIn and many others function. In 1997 this architecture was key to scalability because bandwidth was limited and it was prohibitive, for example, to send the same picture to 500 friends. Today the same architecture is essential because while there is bandwidth, we are working with millions of views and potential viral phenomena. Therefore the same smart “frugal architecture” makes sense. I called this “Instant-Picture-Mail” at the time.

He adds:

What about other claims of inventions: Many companies put photo-sensors in phones or wireless modules in cameras, including Kodak, Polaroid, Motorola. None of them understood that the success of the camera-phone is all about instantly sharing pictures with the cloud-based Instant-Picture-Mail software/server/service-infrastructure. In fact, it’s even amusing to think that none of these projects was interesting enough that anyone has kept shared pictures. You’d think that if you’d created something new and exciting like the camera-phone you’d share a picture or two or at least keep some!
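The “store once, broadcast notifications, link back on demand” architecture he describes can be sketched in a few lines. Here’s a hypothetical Python illustration of the pattern; the class, method names, and URL are my own assumptions, not the actual 1997 Instant-Picture-Mail code.

```python
# A minimal sketch of the "Instant-Picture-Mail" pattern Kahn describes:
# store the photo once, notify subscribers with a link, and let each
# subscriber fetch (and comment on) the photo on demand.

class PictureServer:
    def __init__(self):
        self.photos = {}        # photo_id -> (image_bytes, annotation)
        self.comments = {}      # photo_id -> list of (who, text)
        self.subscribers = []   # the stored notification list

    def upload(self, photo_id, image_bytes, annotation):
        # Store the picture exactly once on the server.
        self.photos[photo_id] = (image_bytes, annotation)
        self.comments[photo_id] = []
        # Broadcast only a small notification with a link-back, not the
        # picture itself -- that is what kept the bandwidth use frugal.
        for address in self.subscribers:
            self.send_email(address, f"New photo: http://example.invalid/p/{photo_id}")

    def view(self, photo_id):
        # Subscribers pull the full image on demand via the link.
        return self.photos[photo_id]

    def comment(self, photo_id, who, text):
        self.comments[photo_id].append((who, text))

    def send_email(self, address, body):
        # Placeholder for the notification channel (email in 1997).
        print(f"to {address}: {body}")

# Example: one upload, one notification per subscriber, views on demand.
server = PictureServer()
server.subscribers = ["friend1@example.com", "friend2@example.com"]
server.upload("sophie-001", b"...jpeg bytes...", "Sophie, minutes old")
server.comment("sophie-001", "friend1@example.com", "Congratulations!")
```

The design choice is the one Kahn highlights: the expensive asset is stored once, and only lightweight notifications fan out to the list.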

Read more about the fascinating story here — he goes into a lot of technical detail. Thank you, Philippe, for your amazing invention!