You want to read Backlinko’s “The Definitive Guide To SEO In 2018.” Backlinko is an SEO consultancy founded by Brian Dean. The “Definitive Guide” is a cheerfully illustrated infographic – a lengthy infographic – broken up into several useful chapters:

  • RankBrain & User Experience Signals
  • Become a CTR Jedi
  • Comprehensive, In-Depth Content Wins
  • Get Ready for Google’s Mobile-first Index
  • Go All-In With Video (Or Get Left Behind)
  • Pay Attention to Voice Search
  • Don’t Forget: Content and Links Are Key
  • Quick Tips for SEO in 2018

Some of these sections had advice that I knew; others were pretty much new to me, such as the voice search section. I’ll also admit to being very out-of-date on how Google’s ranking systems work; they change often, and my last deep dive was circa 2014. Oops.

The advice in this document is excellent and well-explained. For example, on RankBrain:

Last year Google announced that RankBrain was their third most important ranking factor: “In the few months it has been deployed, RankBrain has become the third-most important signal contributing to the result of a search query.”

And as Google refines its algorithm, RankBrain is going to become even MORE important in 2018. The question is: What is RankBrain, exactly? And how can you optimize for it?

RankBrain is a machine learning system that helps Google sort their search results. That might sound complicated, but it isn’t. RankBrain simply measures how users interact with the search results… and ranks them accordingly.

The document then goes into a very helpful example, digging into the concept of Dwell Time (that is, how long someone spends on the page). The “Definitive Guide” also provides some very useful metrics about targets for click-through rate (CTR), dwell time, length and depth of content, and more. For example, the document says,

One industry study found that organic CTR is down 37% since 2015. It’s no secret why: Google is crowding out the organic search results with Answer Boxes, Ads, Carousels, “People also ask” sections, and more. And to stand out, your result needs to scream “click on me!”…or else it’ll be ignored.
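
CTR itself is simple arithmetic: clicks divided by impressions. Here’s a tiny sketch, mine rather than Backlinko’s, with invented numbers, showing how you might compute it per query and flag underperformers:

```python
# Computing organic click-through rate (CTR) per query.
# The queries, numbers, and the 3% threshold are invented for illustration.

rows = [
    {"query": "seo guide 2018",    "impressions": 12000, "clicks": 540},
    {"query": "what is rankbrain", "impressions": 8000,  "clicks": 610},
    {"query": "dwell time seo",    "impressions": 3000,  "clicks": 45},
]

for row in rows:
    ctr = row["clicks"] / row["impressions"]
    flag = "  <- low CTR: rework title and description?" if ctr < 0.03 else ""
    print("%-20s %.1f%%%s" % (row["query"], ctr * 100, flag))
```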

All of the advice is good, but of course, I’m not always going to follow it. For example, the “Definitive Guide” says:

How can you write the type of in-depth content that Google wants to see? First, publish content that’s at least 2,000 words. That way, you can cover everything a Google searcher needs to know about that topic. In fact, our ranking factors study found that longer content (like ultimate guides and long-form blog posts) outranked short articles in Google.

Well, this post isn’t even close to 2,000 words. Oops. Read the “Definitive Guide”; you’ll be glad you did.

Our family’s Halloween tradition: Watch “The Nightmare Before Christmas,” singing along with all the songs. Great songs!

I must make my usual complaints about this Disney movie. The biggest is that there’s only one major female character (Sally), who is Jack Skellington’s love interest. Would it have killed Tim Burton to have the Mayor, Doctor Finkelstein, or even Oogie Boogie be women?

My favorite song from the movie is “Poor Jack.” I tend to sing these two stanzas when something doesn’t go quite right in my personal or professional life:

But I never intended all this madness, never,
And nobody really understood, how could they?
That all I ever wanted was to bring them something great.
Why does nothing ever turn out like it should?

Well, what the heck, I went and did my best.
And, by God, I really tasted something swell, that’s right.
And for a moment, why, I even touched the sky,
And at least I left some stories they can tell, I did

It’s quite cathartic!

What happens when the sun disappears during the daytime? Rabbi Margaret Frisch Klein, of Congregation Kneseth Israel in Elgin, Illinois, wrote in her Energizer Rabbi blog on Aug. 22, 2017, just before the solar eclipse:

The sun is going to disappear on Monday. It is going to be hidden. The Chinese thought that a dragon was eating the moon. The Romans thought that the sun was poisoned and dying. Universally, they were seen as a time of fear.

Jews understood that the moon was passing between the sun and earth creating that shadow. And while I am fond of saying that there is a blessing for everything in Judaism, apparently, there is no blessing for an eclipse, while there is for hail, rain, rainbows, flowers. All sorts of natural wonders. But not an eclipse.

The rabbis knew about eclipses. They could even accurately predict them, well into the future. Rambam, the famous commentator was a rabbi, a physician and an astronomer.

The rabbis even believe that they are mentioned all the way back in Genesis One in the description of the Creation. “And G-d said, “Let there be luminaries in the expanse of the heavens…and they shall be for signs and for appointed seasons and for days and years.” Rashi, the medieval commentator told us that “for signs” referred to when the luminaries are eclipsed and that “this is an unfavorable omen for the world.”

But while some argued we should be afraid, Rashi actually concludes his commentary with words of comfort, from Jeremiah, who I find the least comforting of the prophets, As it is said, ‘And from the signs of heaven be not dismayed, etc. (Jeremiah 10:2). When you perform the will of the Holy One, you need not fear retribution.”

Reading down, I was delighted to discover that she cited and reproduced my own poem.

Perhaps Alan Zeichick, a lay leader and former North American board member of the Reform movement, captured it best in his poem he wrote for Selichot after seeing a partial eclipse.

My poem, from 2015, was entitled, “Before I Die, I Want to Know the Face of God.” Read the whole thing in Rabbi Frisch Klein’s blog post, “Finding Joy in Sight: Re’ah.”

Software developers and testers must be sick of hearing security nuts rant, “Beware SQL injection! Monitor for cross-site scripting! Watch for hijacked session credentials!” I suspect the developers tune us out. Why? Because we’ve been raving about the same defects for most of their careers. Truth is, though, the same set of major security vulnerabilities persists year after year, decade after decade.

The industry has generated newer tools, better testing suites, Agile methodologies, and other advances in writing and testing software. Despite all that, coders keep making the same dumb mistakes, peer reviews keep missing those mistakes, test tools fail to catch those mistakes, and hackers keep finding ways to exploit those mistakes.
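
To make one of those dumb mistakes concrete: SQL injection is still, decades on, mostly just string concatenation. Here’s a minimal sketch using Python’s standard-library sqlite3 module; the table and data are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # attacker-controlled string

# WRONG: building SQL by concatenation lets the input rewrite the query:
#   "SELECT role FROM users WHERE name = '" + user_input + "'"
# That WHERE clause becomes: name = 'alice' OR '1'='1'  -- always true.

# RIGHT: a parameterized query treats the input as data, never as SQL.
row = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchone()
print(row)  # None: the injection attempt matches no actual user name
```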

One way to see the repeat offenders is to look at the OWASP Top 10. That’s a sometimes controversial ranking of the 10 primary vulnerabilities, published every three or four years by the Open Web Application Security Project.

The OWASP Top 10 list is not controversial because it’s flawed. Rather, some believe that the list is too limited: by focusing only on the top 10 web code vulnerabilities, they assert, it encourages neglect of the long tail. What’s more, there’s often jockeying in the OWASP community about the Top 10 ranking and whether the 11th or 12th items belong in the list instead of something else. There’s merit to those arguments, but for now, the OWASP Top 10 is an excellent common ground for discussing security-aware coding and testing practices.

Note that the top 10 list doesn’t directly represent the 10 most common attacks. Rather, it’s a ranking of risk. There are four factors used for this calculation. One is the likelihood that applications would have specific vulnerabilities; that’s based on data provided by companies. That’s the only “hard” metric in the OWASP Top 10. The other three risk factors are based on professional judgement.

It boggles the mind that a majority of top 10 issues appear across the 2007, 2010, 2013, and draft 2017 OWASP lists. That doesn’t mean that these application security vulnerabilities have to remain on your organization’s list of top problems, though—you can swat those flaws.

Read more in my essay, “The OWASP Top 10 is killing me, and killing you!”

Apply patches. Apply updates. Those are considered to be among the lowest-hanging of the low-hanging fruit for IT cybersecurity. When commercial vendors release patches, download and install the code right away. When open-source projects disclose a vulnerability, apply the appropriate update as soon as you can, everyone says.

A problem is that there are so many patches and updates, found in everything from device firmware to operating systems to back-end server software to mobile apps. Even discovering all the patches is a huge effort. You have to know:

  • All the hardware and software in your organization — so you can scan the vendors’ websites or emails for update notices. This may include the data center, the main office, remote offices, and employees’ homes. Oh, and rogue software installed without IT’s knowledge.
  • The versions of all the hardware and software instances — so you can tell which updates apply to you, and which don’t. Sometimes there may be an old version somewhere that’s never been patched.
  • The dependencies. Installing a new operating system may break some software. Installing a new version of a database may require changes on a web application server.
  • The location of each of those instances — so you can know which ones need patching. Sometimes this can be done remotely, but other times may require a truck roll.
  • The administrator access links, usernames, and passwords — hopefully, those are not set to “admin/admin.” The downside of changing default admin passwords is that you have to remember the new ones. Sure, sometimes you can make changes with, say, any Active Directory user account with the proper privileges. That won’t help you, though, with most firmware or mobile devices.

The above steps are merely for discovery of the issue and the existence of a patch. You haven’t protected anything until you’ve installed the patch, which often (but not always) requires taking the hardware, software, or service offline for minutes or hours. This requires scheduling. And inconvenience. Even if you have patch-management tools (and there are many available), too much low-hanging fruit can be overlooked.
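
To illustrate just the discovery step, here’s a toy sketch that matches an asset inventory against vendor advisories. It is not a real patch-management tool, and every product, version, and advisory ID in it is invented:

```python
# Toy sketch of patch discovery: match an asset inventory against vendor
# advisories. All names, versions, and advisory IDs are invented.

inventory = {
    "web-01":   {"ExampleOS": "10.2", "ExampleHTTPd": "2.4.1"},
    "db-01":    {"ExampleOS": "10.1", "ExampleDB": "12.0"},
    "laptop-7": {"ExampleOS": "9.8"},  # the forgotten old version
}

advisories = [
    {"product": "ExampleOS", "fixed_in": "10.2", "id": "ADV-2017-001"},
    {"product": "ExampleDB", "fixed_in": "12.1", "id": "ADV-2017-002"},
]

def version_lt(a, b):
    """True if dotted numeric version a is older than b, e.g. 10.1 < 10.2."""
    return [int(x) for x in a.split(".")] < [int(x) for x in b.split(".")]

for host, products in sorted(inventory.items()):
    for adv in advisories:
        installed = products.get(adv["product"])
        if installed and version_lt(installed, adv["fixed_in"]):
            print("%s: %s %s needs %s"
                  % (host, adv["product"], installed, adv["id"]))
```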

You Can’t Wait for That Downtime Window

Oracle Chairman and CTO Larry Ellison made important points about patching during his keynote at OpenWorld 2017:

Our data centers are enormously complicated. There are lots of servers and storage and operating systems, virtual machines, containers and databases, data stores, file systems. And there are thousands of them, tens of thousands, hundreds of thousands of them. It’s hard for people to locate all these things and patch them. They have to be aware there’s a vulnerability. It’s got to be an automated process.

You can’t wait for a downtime window, where you say, “Oh, I can’t take the system down. I know I’ve got to patch this, but we have scheduled downtime middle of next month.” Well, that’s wrong thinking and that’s kind of lack of priority for security.

All that said, patching and updating must be a priority. Dr. Ron Layton, Deputy Assistant Director of the U.S. Secret Service, said at the NetEvents Global Press Summit, September 2017:

Most successful hacks and breaches – most of them – were because low-level controls were not in place. That’s it. That’s it. Patch management. It’s the low-level stuff that will get you to the extent that the bad guys will say, I’m not going to go here. I’m going to go somewhere else. That’s it.

The Scale of Security Issues Is Huge

I receive many regular emails from various defect-tracking and patch-awareness lists. Here’s one weekly sample from the CERT teams at the U.S. Dept. of Homeland Security. IT pros won’t be surprised at how large it is: https://www.us-cert.gov/ncas/bulletins/SB17-296

There are 25 high-severity vulnerabilities on this list, most from Microsoft, some from Oracle. There are lots of medium-severity vulnerabilities from Microsoft, OpenText, Oracle, and WPA – the latter being the widely reported bug in Wi-Fi Protected Access. In addition, there are a few low-severity vulnerabilities, and then page after page of those labeled “severity not yet assigned.” The list goes on and on, even hitting infrastructure products from Cisco and F5. And lots more Wi-Fi issues.

This is a typical week – and not all the vulnerabilities in the CERT report have patches yet. CERT is only one source, by the way. Want more? Here’s a list of security-related updates from Apple. Here’s a list of security updates from Juniper Networks. A list from Microsoft. And Red Hat too.

So: When security analysts say that enterprises merely need to keep up with patches and fixes, well, yes, that’s the low-hanging fruit. However, nobody talks about how much of that low-hanging fruit there is. The amount is overwhelming in an enterprise. No wonder some rotten fruit slips through the cracks.

You can call me Ray, or you can call me Jay, or you can call me Johnny or you can call me Sonny, or you can call me RayJay, or you can call me RJ… but ya doesn’t hafta call me Johnson.

That’s a great line from the comedian Bill Saluga in the 1970s… but in this case, it would be “Don’t call me Jay.” My company, Camden Associates, has never hired or given an email address to anyone named Jay Weinberg, or any Jay at all.

Not sure if you’d call this a scam, but buying a “Best of Business” award is pretty slimy. Here’s the full email:

From: Mia Broadbent [email hidden]

Subject: Camden Associates – 2017 Best in Business

To: Jay Weinberg

Hello Jay,

I hope that my email finds you well.

I am getting in contact with you today on behalf of Wealth & Finance INTL Magazine with regards to Camden Associates’s selection within our upcoming 2017 Best in Business Awards.

2017 has been a tumultuous year for the global market, with both economic and political factors taking their toll on the worldwide stage, causing uncertainty for businesses of all sizes and of all sectors. In light of such uncertainty, we have launched our 2017 Best in Business Awards in order to celebrate those firms, such as Camden Associates whom despite an unprecedented amount of uncertainty, have consistently demonstrated excellence and innovation throughout the year. This selection serves to recognise that you are the very best within your sector. One of an elite few.

Now that Camden Associates have been officially recognised within our 2017 Best in Business Awards, you are automatically entitled to a complimentary digital certificate along with a simple listing within the magazine.

If, however you would like to get a head start on your promotional activity prior to the upcoming new year, I have included the different levels of coverage we offer. Alternatively, if there is something specific you wish to have included with the coverage, let me know as I am able to put together a more bespoke package. Please see the options below:

Option #1: Platinum Coverage this is just 1,595 GBP and would include the following:

  • Main front cover image and headline
  • A front-end double page inclusion in the magazine (up to approx. 1800 words)
  • Inclusion of your article on our website
  • 1 personalised Best in Business crystal trophy
  • A 2017 Best in Business logo for your own marketing
  • Camden Associates’s logo in the monthly newsletter
  • Awards certificate
  • Full rights to the PDFs for you to use as you wish

Option #2: Silver Coverage this is just 795 GBP and would include the following

  • A sub front cover headline and image.
  • A double-page inclusion in the magazine (up to approx. 1800 words)
  • Inclusion of your article on our website
  • A personalised Best in Business crystal trophy
  • A 2017 Best in Business logo for your own marketing
  • Awards certificate
  • Full rights to the PDFs for you to use as you wish

Option #3: Bronze Coverage this is just 495 GBP and would include the following

  • A Single Page inclusion within the magazine (up to approx. 900 words)
  • A personalised Best in Business crystal trophy
  • A 2017 Best in Business logo for your own marketing
  • Awards certificate

(P&P is exclusive on packages with a trophy. VAT will be charged for UK and EC sales unless in receipt of a valid EC VAT number)

We will be publishing details of our Best in Business Awards within November both on our website and in our digital publication which will is distributed to our 130,000+ circulation.

We have an entirely in-house editorial and design team who will assist you in putting together all items associated with your package. Trophies and logos are available to order individually on request also.

If you would like to go ahead with one of the stated packages, simply reply to confirm the package and cost, i.e. Agreed Silver Coverage 795 GBP.

To go ahead with the free certificate only please respond confirming this.

If you have any questions regarding the award, the magazine or packages please don’t hesitate to get in touch and I will be more than happy to help.

I look forward to receiving your response,

Have a great day.

Kind regards,

 

Mia Broadbent – Media Executive
Wealth & Finance International
T: +44 (0) 203 725 6844

This is a common scam: The scammer pretends to be a famous person, and links to the bio or a story about that person. That means nothing. A person wants to share some gold with you, and links to a BBC story about a battle in Iraq or Afghanistan. That means nothing. A person claims to be one of the members of the wealthy family that owns Wal-Mart, with links to a Wikipedia page. That means nothing.

Also look at the “from” email address and the email address indicated in the message. First of all, they’re nothing alike. Second, neither address seems like it would belong to the real Alice Walton. And third… why would you be bcc’d on a message like this?

Don’t be fooled by such messages — they’re scams. Every last one of them.

From: “Alice Walton” [email hidden]

Subject: YOUR GIFT

To: [email hidden]

Reply-To: [email hidden]

I, Alice Walton authenticate this email, you can read about me on: http://en.wikipedia.org/wiki/Alice_Walton  ,

I write to you because I intend to give to you a portion of my Net-worth which I have been banking. I want to cede it out as gift, hoping it would be of help to you and others too. Respond to this email: [email hidden]

With joy,

Alice Walton

Open source software (OSS) offers many benefits for organizations large and small—not the least of which is the price tag, which is often zero. Zip. Nada. Free-as-in-beer. Beyond that compelling price tag, what you often get with OSS is a lack of a hidden agenda. You can see the project, you can see the source code, you can see the communications, you can see what’s going on in the support forums.

When OSS goes great, everyone is happy, from techies to accounting teams. Yes, the legal department may want to scrutinize the open source license to make sure your business is compliant, but in most well-performing scenarios, the lawyers are the only ones frowning. (But then again, the lawyers frown when scrutinizing commercial closed-source software license agreements too, so you can’t win.)

The challenge with OSS is that it can be hard to manage, especially when something goes wrong. Depending on the open source package, there can be a lot of mysteries, which can make ongoing support, including troubleshooting and performance tuning, a real challenge. That’s because OSS is complex.

It’s not like you can say, well, here’s my Linux distribution on my server. Oh, and here’s my open source application server, and my open source NoSQL database, and my open source log suite. In reality, those bits of OSS may be from separate OSS projects, which may (or may not) have been tested for how well they work together.

A separate challenge is that because OSS is often free-as-in-beer, the software may not be in the corporate inventory. That’s especially common if the OSS is in the form of a library or an API that might be built into other applications you’ve written yourself. The OSS might be invisible but with the potential to break or cause problems down the road.

You can’t manage what you don’t know about

When it comes to OSS, there may be a lot you don’t know about, such as those license terms or interoperability gotchas. Worse, there can be maintenance issues — and security issues. Ask yourself: Does your organization know all the OSS it has installed on servers on-prem or in the cloud? Coded into custom applications? Are you sure that all patches and fixes have been installed (and installed correctly), even on virtual machine templates, and that there are no security vulnerabilities?
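
As one small example of closing that knowledge gap, here’s a sketch that audits the Python packages installed in one environment against a hypothetical approved-OSS list. Real software-asset-management tools do this across servers and applications at much larger scale:

```python
# Audit installed Python packages against an approved-OSS inventory.
# The "approved" set is hypothetical; in practice it would come from your
# asset-management or license-compliance system.
import pkg_resources  # ships with setuptools

approved = {"requests", "flask", "sqlalchemy"}  # invented approved list

for dist in pkg_resources.working_set:
    name = dist.project_name.lower()
    if name not in approved:
        print("unapproved or untracked OSS: %s %s" % (name, dist.version))
```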

In my essay “The six big gotchas: The impact of open source on data centers,” we’ll dig into the key topics: License management, security, patch management, maximizing uptime, maximizing performance, and supporting the OSS.

For no particular reason, and in alphabetical order, here are my favorite episodes from the original Star Trek, aka The Original Series.

Arena

Kirk and the captain of the Gorn ship are told to fight to the death as proxies for a space battle, but neither is happy about it.

Balance of Terror

“Run Silent, Run Deep” goes into space, with two canny submarine, ahem, starship captains battling the odds.

The Corbomite Maneuver

Appearances aren’t what they seem, and a vicious enemy may only be a lonely alien.

The Devil in the Dark

Not only is there a neat non-humanoid alien, but we get to see Kirk dealing with Federation civilians who aren’t impressed with his authority.

The Doomsday Machine

Captain Ahab takes on the white whale, as we get to see another starship and an argument about rank and Starfleet protocol.

Journey to Babel

We learn about Spock’s family, some of the other important species in the Federation, and what diplomacy is all about.

Let That Be Your Last Battlefield

A parable about race and law-and-order, as black-and-white aliens fight against white-and-black aliens.

Mirror, Mirror

We visit the Mirror Universe for the first time, a place that’s frankly a lot more interesting than the regular universe.

The Trouble with Tribbles

The funniest episode of Classic Trek, which is peculiarly meaningful because writer David Gerrold gave my wife one of the tribbles used on the show.

The Ultimate Computer

Can an AI-based computer operate a self-driving Enterprise? The anti-Elon Musk, Dr. Daystrom, thinks so.

There are two popular ways of migrating enterprise assets to the cloud:

  1. Write new cloud-native applications.
  2. Lift-and-shift existing data center applications to the cloud.

Gartner’s definition: “Lift-and-shift means that workloads are migrated to cloud IaaS in as unchanged a manner as possible, and change is done only when absolutely necessary. IT operations management tools from the existing data center are deployed into the cloud environment largely unmodified.”

There’s no wrong answer, no wrong way of proceeding. Some data center applications (including servers and storage) may be easier to move than others. Some cloud-native apps may be easier to write than others. Much depends on how much interconnectivity there is between the applications and other software; that’s why, for example, public-facing websites are often easiest to move to the cloud, while tightly coupled internal software, such as inventory control or factory-floor automation, can be trickier.

That’s why in some cases, a hybrid strategy is best. Some parts of the applications are moved up to the cloud, while others remain in the data centers, with SD-WANs or other connectivity linking everything together in a secure manner.

In other words, no one size fits all. And no one timeframe fits all, especially when it comes to lifting-and-shifting.

SaaS? PaaS? It Depends.

A recent survey from the Oracle Applications User Group (OAUG) showed that 70% of respondents who have plans to adopt Oracle Cloud solutions will do so in the next three years. About 35% plan to implement Software-as-a-Service (SaaS) solutions to run with their existing Oracle on-premises installations, and 29% plan to use Platform-as-a-Service (PaaS) services to accelerate software development efforts in the next 12 months.

Joe Paiva, CIO of the U.S. Commerce Department’s International Trade Administration (ITA), is a fan of lift-and-shift. He said at a cloud conference that “Sometimes it makes sense because it gets you there. That was the key. We had to get there because we would be no worse off or no better off, and we were still spending a lot of money, but it got us to the cloud. Then we started doing rationalization of hardware and applications, and dropped our bill to Amazon by 40 percent compared to what we were spending in our government data center. We were able to rationalize the way we use the service.” Paiva estimates government agencies could save 5%-15% using lift-and-shift.

The benefits of moving existing workloads to the cloud are almost entirely financial. If you can shut down a data center and pay less to run the application in the cloud, it can be a good short-term solution with immediate ROI. Gartner cautions, however, that lift and shift “generally results in little created value. Plus, it can be a more expensive option and does not deliver immediate cost savings.” Much depends on how much it costs to run that application today.
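
A back-of-the-envelope calculation shows why “much depends”: the payoff hinges on today’s run cost, the projected cloud cost, and the one-time migration cost. A minimal sketch, with entirely invented figures:

```python
# Back-of-the-envelope lift-and-shift economics. All figures are invented.
dc_monthly_cost    = 40_000   # current data-center cost for the workload
cloud_monthly_cost = 28_000   # projected IaaS cost after lift-and-shift
migration_cost     = 150_000  # one-time cost of the migration project

monthly_savings = dc_monthly_cost - cloud_monthly_cost
if monthly_savings > 0:
    breakeven_months = migration_cost / monthly_savings
    print("Saves %d/month; breaks even in %.1f months"
          % (monthly_savings, breakeven_months))
else:
    print("No monthly savings: lift-and-shift creates little value here")
```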

A Multi-Track Process for Cloud Migration

The real benefits of new cloud development and deployment architectures take time to realize. For many organizations, there may be a multi-track process:

First track: Lift-and-shift existing workloads that are relatively easy to migrate, while simultaneously writing cloud-native applications for new projects. Those provide the biggest and fastest return on investment, while leaving data center workloads in place and untouched.

Second track: Write cloud-native applications for the remaining data-center workloads, the ones impractical to migrate in their existing form. This will be slower, but the payoff is the ability to turn off some or all existing data centers – eliminating their associated expenses, such as power and cooling, bandwidth, and physical space.

Third track: At some point, revisit the lifted-and-shifted workloads to see which would significantly benefit from being rewritten as cloud-native apps. Unless there is an order of magnitude increase in efficiency, or significant added functionality, the financial returns won’t be high – or may be nonexistent. For some applications, it may never make sense to redesign and rewrite them in a cloud-native way. So, those old enterprise applications may live on for years to come.

I can’t believe my luck – Microsoft co-founder Bill Gates wants to give me $5 million. Hurray! And not only that, he’s contacting me from an email address at Nelson Mandela University in South Africa. It’s also a shame to learn that he’s sick and is going to Germany for treatment.

Obviously, this is spam. Don’t be tricked by links to Wikipedia or news stories; those don’t prove the veracity of anything. If you get messages like this, go ahead and laugh – but then delete the message. Don’t reply, don’t be fooled.

From: “Mr. Bill Gates” <[email hidden]>

Subject: DONATION FROM BILL GATES

Date: October 15, 2017 at 3:37:37 AM MST

Greetings You have been gifted $5 MILLION USD From Mr Bill Gates. Contact me at this email for your claim: [email hidden]

I hope this information meet you well as I know you will be curious to know why/how I selected you to receive a sum of $5,000,000,00 USD, our information below is 100% legitimate, please see the link below: https://en.wikipedia.org/wiki/Bill_%26_Melinda_Gates_Foundation

I BILL GATES and my wife decided to donate the sum of $5,000,000,00 USD to you as part of our charity project to improve the 10 lucky individuals all over the world from our $65 Billion Usd I and My Wife Mapped out to help people. We prayed and searched over the internet for assistance and i saw your profile on Microsoft email owners list and picked you. Melinda my wife and i have decided to make sure this is put on the internet for the world to see. as you could see from the webpage above,am not getting any younger and you can imagine having no much time to live. although am a Billionaire investor and we have helped some charity organizations from our Fund.

You see after taken care of the needs of our immediate family members, Before we die we decided to donate the remaining of our Billions to other individuals around the world in need, the local fire department, the red cross, Haiti, hospitals in truro where Melinda underwent her cancer treatment, and some other organizations in Asia and Europe that fight cancer, alzheimer’s and diabetes and the bulk of the funds deposited with our payout bank of this charity donation. we have kept just 30% of the entire sum to our self for the remaining days because i am no longer strong am sick and am writing you from hospital computer.and me and my wife will be traveling to Germany for Treatment.

To facilitate the payment process of the funds ($5,000,000.00 USD) which have been donated solely to you, you are to send me

your full names……………..

your contact address…………….

your personal telephone number……………

SEND YOUR ABOVE DETAILS TO [email hidden]

so that i can forward your payment information to you immediately. I am hoping that you will be able to use the money wisely and judiciously over there in your City. please you have to do your part to also alleviate the level of poverty in your region, help as many you can help once you have this money in your personal account because that is the only objective of donating this money to you in the first place.

Thank you for accepting our offer, we are indeed grateful You Can Google my name for more information: Mr Bill Gates or Bill & Melinda Gates Foundation

Remain Blessed

Regards

Mr Bill Gates

About a decade ago, I purchased a piece of a mainframe on eBay — the name ID bar. Carved from a big block of aluminum, it says “IBM System/370 168,” and it hangs proudly over my desk.

My time on mainframes was exclusively with the IBM System/370 series. With a beautiful IBM 3278 color display terminal on my desk, and, later, a TeleVideo 925 terminal and an acoustic coupler at home, I was happier than anyone had a right to be.

We refreshed our hardware often. The latest variant I worked on was the System/370 4341, introduced in early 1979, which ran faster and cooler than the very costly 3031 mainframes we had before. I just found this in the IBM archives: “The 4341, under a 24-month contract, can be leased for $5,975 a month with two million characters of main memory and for $6,725 a month with four million characters. Monthly rental prices are $7,021 and $7,902; purchase prices are $245,000 and $275,000, respectively.” And we had three, along with tape drives, disk drives (in IBM-speak, DASD, for Direct Access Storage Devices), and high-speed line printers. Not cheap!

Our operating system on those systems was called Virtual Machine, or VM/370. It consisted of two parts, Control Program and Conversational Monitor System. CP was the timesharing operating system – in modern virtualization terms, the hypervisor running on the bare metal. CMS was the environment that users logged into, providing access to not only a text-based command console, but also file storage and a library of tools, such as compilers. (We often referred to the platform as CP/CMS.)

Thanks to VM/370, each user believed she had access to a 100% dedicated and isolated System/370 mainframe, with every resource available and virtualized. (For example, she appeared to have dedicated access to tape drives, but they appeared non-functional if her tapes weren’t loaded, or if she didn’t buy access to the drives.)

My story about mainframes isn’t just reminiscing about the time of dinosaurs. When programming those computers, which I did full-time in the late 1970s and early 1980s, I learned a lot of lessons that are very applicable today. Read all about that in my article for HP Enterprise Insights, “4 lessons for modern software developers from 1970s mainframe programming.”

To get the most benefit from the new world of cloud-native server applications, forget about the old way of writing software. In the old model, architects designed software. Programmers wrote the code, and testers tested it on a test server. Once the testing was complete, the code was “thrown over the wall” to administrators, who installed the software on production servers, and who essentially owned the applications moving forward, only going back to the developers if problems occurred.

The new model, which appeared about 10 years ago, is called “DevOps,” or developer operations. In the DevOps model, architects, developers, testers, and administrators collaborate much more closely to create and manage applications. Specifically, developers play a much broader role in the day-to-day administration of deployed applications, and use information about how the applications are running to tune and enhance those applications.

The involvement of developers in administration made DevOps perfect for cloud computing. Because administrators had fewer responsibilities (i.e., no hardware to worry about), it was less threatening for those developers and administrators to collaborate as equals.

Change matters

In that old model of software development and deployment, developers were always change agents. They created new stuff, or added new capabilities to existing stuff. They embraced change, including new technologies – and the faster they created change (i.e., wrote code), the more competitive their business.

By contrast, administrators are tasked with maintaining uptime, while ensuring security. Change is not a virtue to those departments. While admins must accept change as they install new applications, it’s secondary to maintaining stability. Indeed, admins could push back against deploying software if they believed those apps weren’t reliable, or if they might affect the overall stability of the data center as a whole.

With DevOps, everyone can embrace change. One of the ways that works, with cloud computing, is to reduce the risk that an unstable application can damage system reliability. In the cloud, applications can be built and deployed using bare-metal servers (as in a data center), or in virtual machines or containers.

DevOps works best when software is deployed in VMs or containers, since those are isolated from other systems – thereby reducing risk. Turns out that administrators do like change, if there’s minimal risk that changes will negatively affect overall system reliability, performance, and uptime.

Benefits of DevOps

Goodbye, CapEx; hello, OpEx. Cloud computing moves enterprises from capital-expense data centers (which must be built, electrified, cooled, networked, secured, stocked with servers, and refreshed periodically) to operational-expense services (where the business pays monthly for the processors, memory, bandwidth, and storage reserved and/or consumed). When you couple those benefits with virtual machines, containers, and DevOps, you get:

  • Easier Maintenance: It can be faster to apply patches and fixes to software in virtual machines – and use snapshots to roll back if needed (see the sketch after this list).
  • Better Security: Cloud platform vendors offer some security monitoring tools, and it’s relatively easy to install top-shelf protections like next-generation firewalls – themselves offered as cloud services.
  • Improved Agility: Studies show that the process of designing, coding, testing, and deploying new applications can be 10x faster than traditional data center methods, because the cloud reduces provisioning friction and provides robust resources.
  • Lower Cost: Vendors such as Amazon, Google, Microsoft, and Oracle, are aggressively lowering prices to gain market share — and in many cases, those prices are an order of magnitude below what it could cost to provision an enterprise data center.
  • Massive Scale: Need more power? Need more bandwidth? Need more storage? Push a button, and the resources are live. If those needs are short-term, you can turn the dials back down, to lower the monthly bill. You can’t do that in a data center.
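
Here’s the promised sketch of the “roll back if needed” pattern from the maintenance bullet above. It’s a minimal illustration; deploy(), health_check(), and rollback() are hypothetical stand-ins for your platform’s snapshot, image, or deployment APIs:

```python
import time

def deploy(version: str) -> None:
    # Hypothetical stand-in: push a container image or restore a VM template.
    print("deploying %s ..." % version)

def health_check() -> bool:
    # In practice: probe endpoints, watch error rates, compare latencies.
    return False  # pretend the new version turned out to be unhealthy

def rollback(version: str) -> None:
    # Snapshots and immutable images make this step cheap in the cloud.
    print("rolling back to %s" % version)

previous, candidate = "v1.4.2", "v1.5.0"
deploy(candidate)
time.sleep(1)  # let metrics settle (much longer in real life)
if not health_check():
    rollback(previous)
```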

Rapidly evolving

The technologies used in creating cloud-native applications are evolving rapidly. Containers, for example, are relatively new, yet are becoming incredibly popular because they require 4x-10x fewer resources than VMs – thereby slashing OpEx costs even further. Software development and management tools, like Kubernetes (for orchestration of multiple containers), Chef (which makes it easy to manage cloud infrastructure), Puppet (which automates pushing out cloud service configurations), and OpenWhisk (which strips down cloud services to “serverless” basics) push the revolution farther.

DevOps is more important than the meaningless “developer operations” moniker implies. It’s a whole new, faster way of doing computing with cloud-native applications. Because rapid change means everything in achieving business agility, everyone wins.

Loose cyber-lips can sink real ships. According to separate reports published by the British government and the cruise ship industry, large cargo and passenger vessels could be damaged by cyberattacks – and potentially even sent to the bottom of the ocean.

The foreword pulls no punches. “Code of Practice: Cyber Security for Ships” was commissioned by the U.K. Department for Transport, and published by the Institution of Engineering and Technology (IET) in London.

Poor security could lead to significant loss of customer and/or industry confidence, reputational damage, potentially severe financial losses or penalties, and litigation affecting the companies involved. The compromise of ship systems may also lead to unwanted outcomes, for example:

(a) physical harm to the system or the shipboard personnel or cargo – in the worst case scenario this could lead to a risk to life and/or the loss of the ship;

(b) disruptions caused by the ship no longer functioning or sailing as intended;

(c) loss of sensitive information, including commercially sensitive or personal data;

and

(d) permitting criminal activity, including kidnap, piracy, fraud, theft of cargo, imposition of ransomware.

The above scenarios may occur at an individual ship level or at fleet level; the latter is likely to be much worse and could severely disrupt fleet operations.

Cargo and Passenger Systems

The report goes into considerable detail about the need to protect confidential information, including intellectual property, cargo manifests, passenger lists, and financial documents. Beyond that, the document warns about dangers from activist groups (“hacktivism”), where actors might work to prevent the handling of specific cargoes, or even disrupt the operation of the ship. The target may be the ship itself, the ship’s owner or operator, or the supplier or recipient of the cargo.

The types of damage could be as simple as the disruption of ship-to-shore communications through a DDoS attack. It might be as dangerous as corrupted or false sensor data that could cause the vessel to founder or head off course. What can be done? The report lists several important steps to maintain the security of critical systems, including:

(a) Confidentiality – the control of access and prevention of unauthorised access to ship data, which might be sensitive in isolation or in aggregate. The ship systems and associated processes should be designed, implemented, operated and maintained so as to prevent unauthorised access to, for example, sensitive financial, security, commercial or personal data. All personal data should be handled in accordance with the Data Protection Act and additional measures may be required to protect privacy due to the aggregation of data, information or metadata.

(b) Possession and/or control – the design, implementation, operation and maintenance of ship systems and associated processes so as to prevent unauthorised control, manipulation or interference. The ship systems and associated processes should be designed, implemented, operated and maintained so as to prevent unauthorised control, manipulation or interference. An example would be the loss of an encrypted storage device – there is no loss of confidentiality as the information is inaccessible without the encryption key, but the owner or user is deprived of its contents.

(c) Integrity – maintaining the consistency, coherence and configuration of information and systems, and preventing unauthorised changes to them. The ship systems and associated processes should be designed, implemented, operated and maintained so as to prevent unauthorised changes being made to assets, processes, system state or the configuration of the system itself. A loss of system integrity could occur through physical changes to a system, such as the unauthorised connection of a Wi-Fi access point to a secure network, or through a fault such as the corruption of a database or file due to media storage errors.

(d) Authenticity – ensuring that inputs to, and outputs from, ship systems, the state of the systems and any associated processes and ship data, are genuine and have not been tampered with or modified. It should also be possible to verify the authenticity of components, software and data within the systems and any associated processes. Authenticity issues could relate to data such as a forged security certificate or to hardware such as a cloned device.

With passenger vessels, the report points to the need for modular controls and hardened IT infrastructure. That stops unauthorized people from gaining access to online booking, point-of-sale, passenger management, and other critical ship systems by tapping into wiring cabinets, cable junctions, and maintenance areas. Like we said, scary stuff.

The Industry Weighs In

A similar report was produced for the shipping industry by seven organizations, including the International Maritime Organization and the International Chamber of Shipping. The “Guidelines on Cyber Security Onboard Ships” warns that incidents can arise as the result of:

  • A cyber security incident, which affects the availability and integrity of OT, for example corruption of chart data held in an Electronic Chart Display and Information System (ECDIS)
  • A failure occurring during software maintenance and patching
  • Loss of or manipulation of external sensor data, critical for the operation of a ship. This includes but is not limited to Global Navigation Satellite Systems (GNSS).
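
That last item, manipulated GNSS data, lends itself to a simple plausibility check: consecutive position fixes that imply an impossible speed are suspect. A minimal sketch, with invented positions and an invented 30-knot ceiling:

```python
# Plausibility check on GNSS position fixes: flag jumps that imply an
# impossible speed. Positions and the 30-knot ceiling are invented.
from math import radians, sin, cos, asin, sqrt

def haversine_nm(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in nautical miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * asin(sqrt(a)) * 3440.1  # mean Earth radius in nm

MAX_KNOTS = 30.0  # hypothetical ceiling for this vessel
fixes = [  # (minutes since start, latitude, longitude), invented data
    (0, 50.10, -1.30),
    (6, 50.12, -1.28),
    (12, 51.50, -0.10),  # spoofed: a huge jump in six minutes
]

for (t0, la0, lo0), (t1, la1, lo1) in zip(fixes, fixes[1:]):
    knots = haversine_nm(la0, lo0, la1, lo1) / ((t1 - t0) / 60.0)
    if knots > MAX_KNOTS:
        print("fix at t=%d min implies %.0f kn: suspect data" % (t1, knots))
```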

This report discusses the role of activists (including disgruntled employees), as well as criminals, opportunists, terrorists, and state-sponsored organizations. There are many potentially vulnerable areas, including cargo management systems, bridge systems, propulsion and other machinery, access control, passenger management systems — and communications. As the report says,

Modern technologies can add vulnerabilities to the ships especially if there are insecure designs of networks and uncontrolled access to the internet. Additionally, shoreside and onboard personnel may be unaware how some equipment producers maintain remote access to shipboard equipment and its network system. The risks of misunderstood, unknown, and uncoordinated remote access to an operating ship should be taken into consideration as an important part of the risk assessment.

The stakes are high. The loss of operational technology (OT) systems “may have a significant and immediate impact on the safe operation of the ship. Should a cyber incident result in the loss or malfunctioning of OT systems, it will be essential that effective actions are taken to ensure the immediate safety of the crew, ship and protection of the marine environment.”

Sobering words for any maritime operator.

“One of these things is not like the others,” the television show Sesame Street taught generations of children. Easy. Let’s move to the next level: “One or more of these things may or may not be like the others, and those variances may or may not represent systems vulnerabilities, failed patches, configuration errors, compliance nightmares, or imminent hardware crashes.” That’s a lot harder than distinguishing cookies from brownies.

Looking through gigabytes of log files and transaction records to spot patterns or anomalies is hard for humans: it’s slow, tedious, error-prone, and doesn’t scale. Fortunately, it’s easy for artificial intelligence (AI) software, such as the machine learning algorithms built into Oracle Management Cloud. What’s more, the machine learning algorithms can be used to direct manual or automated remediation efforts to improve security, compliance, and performance.

Consider how large-scale systems gradually drift away from their required (or desired) configuration, a key area of concern in the large enterprise. In his Monday, October 2 Oracle OpenWorld session on managing and securing systems at scale using AI, Prakash Ramamurthy, senior vice president of systems management at Oracle, talked about how drift happens. Imagine that you’ve applied a patch, but then later you spool up a virtual server that is running an old version of a critical service or contains an obsolete library with a known vulnerability. That server is out of compliance, Ramamurthy said. Drift.

Drift is bad, said Ramamurthy, and detecting and stopping drift is a core competency of Oracle Management Cloud. It starts with monitoring cloud and on-premises servers, services, applications, and logs, using machine learning to automatically understand normal behavior and identify anomalies. No training necessary here: A variety of machine learning algorithms teach themselves how to play the “one of these things is not like the others” game with your data, your systems, and your configuration, and also to classify the systems in ways that are operationally relevant. Even if those logs contain gigabytes of information on hundreds of thousands of transactions each second.
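
The underlying technique, learning what “normal” looks like and flagging outliers, can be sketched with any off-the-shelf unsupervised algorithm. Here is a minimal illustration using scikit-learn’s IsolationForest (my choice for the sketch; Oracle hasn’t published Oracle Management Cloud’s exact algorithms), with invented per-server metrics:

```python
# Unsupervised anomaly detection on server metrics, in the spirit of
# "one of these things is not like the others." IsolationForest is my
# illustrative choice, not Oracle's disclosed method. Data is invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Per-server metrics: [requests/sec, error rate, days since last patch]
normal = np.column_stack([
    rng.normal(200, 20, 500),
    rng.normal(0.01, 0.002, 500),
    rng.normal(10, 3, 500),
])
drifted = np.array([[195.0, 0.011, 180.0]])  # way behind on patches

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(drifted))  # [-1] means anomaly: likely drift
```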

Learn more in my article for Forbes, “Catch The Drift With Machine Learning — Before The Drift Catches You.”

AOL Instant Messenger will be dead before the end of 2017. Yet instant messages have succeeded far beyond what anyone could have envisioned for either SMS (Short Message Service, carried by the phone company) or AOL, which arguably brought instant messaging to regular computers starting in 1997.

It would be wonderful to claim that there’s some great significance in the passing of AIM. However, my guess is that there simply wasn’t any business benefit to maintaining a service that nearly nobody used. The AIM service was said to carry far less than 1% of all instant messages across the Internet… and that was in 2011.

I have an AIM account, and although it’s linked into my Apple Messages client, I had completely forgotten about it. Yes, there was a little flurry of news back in March 2017, when AOL began closing APIs and shutting down some third-party AIM applications. However, that didn’t resonate. Then, on Oct. 6, came the email from AOL’s new corporate overlord, Oath, a subsidiary of Verizon:

Dear AIM user,

We see that you’ve used AOL Instant Messenger (AIM) in the past, so we wanted to let you know that AIM will be discontinued and will no longer work as of December 15, 2017.

Before December 15, you can continue to use the service. After December 15, you will no longer have access to AIM and your data will be deleted. If you use an @aim.com email address, your email account will not be affected and you will still be able to send and receive email as usual.

We’ve loved working on AIM for you. From setting the perfect away message to that familiar ring of an incoming chat, AIM will always have a special place in our hearts. As we move forward, all of us at AOL (now Oath) are excited to continue building the next generation of iconic brands and life-changing products for users around the world.

You can visit our FAQ to learn more. Thank you for being an AIM user.

Sincerely,

The AOL Instant Messenger team

Interestingly, my wife, who also has an AIM account but never uses it, thought that the message above was a phishing scam of some sort. So, AIM is dead. But not instant messaging, which is popular with both consumers and business users, on desktops/notebooks and smartphones. There are many clients consumers can use; Statista’s January 2017 ranking of the leaders, by millions of active monthly users, shows just how crowded the field is. AIM didn’t make the list.

Then there are the corporate instant message platforms, such as Slack, Lync, and Symphony. And we’re not even talking social media, like Twitter, Google+, Kik, and Instagram. So: Instant messaging is alive and well. AIM was the pioneer, but it ceased being relevant a long, long time ago.

IT managers shouldn’t have to choose between cloud-driven innovation and data-center-style computing. Developers shouldn’t have to choose between the latest DevOps programming using containers and microservices, and traditional architectures and methodologies. CIOs shouldn’t have to choose between a fully automated and fully managed cloud and a self-managed model using their own on-staff administrators.

At an Oracle OpenWorld general session on infrastructure-as-a-service (IaaS) October 3, Don Johnson, senior vice president of product development at Oracle, lamented that CIOs are often forced to make such difficult choices. Sure, the cloud is excellent for purpose-built applications, he said, “and so what’s working for them is cloud-native, but what’s not working in the cloud are enterprise workloads. It’s an unnecessary set of bad choices.”

When it comes to moving existing business-critical applications to the cloud, Johnson explained the three difficult choices faced by many organizations:

  • First, CIOs can rewrite those applications from the ground up to run in the cloud in a platform-as-a-service (PaaS) model. That’s best in terms of achieving the greatest computational efficiency, as well as integration with other cloud services, but it can be time-consuming and costly.
  • Second, organizations can retrofit their existing applications to run in the cloud, but this can be challenging at best, or nearly impossible in some cases.
  • Or third, CIOs can “lift and shift” existing on-premises applications, including their full software stack, directly into the cloud, using the IaaS model.

Historically, those three models have required three different clouds. No longer. Only the Oracle Cloud Infrastructure, Johnson stated, “lets you run your full existing stack alongside cloud-native applications.” And this is important, he added, because migration to the cloud must be slow and deliberate. “Running in the cloud is very disruptive. It can’t happen overnight. You need to move when and how you want to move,” he said. And a deliberative movement to the cloud means a combination of new cloud-native PaaS applications and legacy applications migrated to IaaS.

Read more in my story for Forbes, “Lift And Shift Workloads — And Write Cloud-Native Apps — For The Same Cloud.”

Despite Elon Musk’s warnings this summer, there’s not a whole lot of reason to lose any sleep worrying about Skynet and the Terminator. Artificial Intelligence (AI) is far from becoming a maleficent, all-knowing force. The only “Apocalypse” on the horizon right now is an overreliance by humans on machine learning and expert systems, as demonstrated by the deaths of Tesla owners who took their hands off the wheel.

Examples of what currently passes for “Artificial Intelligence” — technologies such as expert systems and machine learning — are excellent at creating software that offers truly valuable help in contexts that involve pattern recognition, automated decision-making, and human-to-machine conversations. Both types of AI have been around for decades. And both are only as good as the source information they are based on. For that reason, it’s unlikely that AI will replace human beings’ judgment on important tasks requiring decisions more complex than “yes or no” any time soon.

Expert systems, also known as rule-based or knowledge-based systems, are computers programmed with explicit rules written down by human experts. The computers can then run the same rules, but much faster, 24×7, to come up with the same conclusions as the human experts. Imagine asking an oncologist how she diagnoses cancer and then programming medical software to follow those same steps. For a particular diagnosis, an oncologist can study which of those rules was activated to validate that the expert system is working correctly.

However, it takes a lot of time and specialized knowledge to create and maintain those rules, and extremely complex rule systems can be difficult to validate. Needless to say, expert systems can’t function beyond their rules.

By contrast, machine learning allows computers to come to a decision—but without being explicitly programmed. Instead, they are shown hundreds or thousands of sample data sets and told how they should be categorized, such as “cancer | no cancer,” or “stage 1 | stage 2 | stage 3 cancer.”
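
The contrast is easy to see in code. In this little sketch, a hand-written rule stands in for an expert system, while a small decision tree learns the same distinction from labeled samples. The threshold and data are entirely invented, and this is obviously not medical software:

```python
# A hand-written rule (expert-system style) versus a classifier that
# learns the rule from labeled examples. Threshold and data are invented.
from sklearn.tree import DecisionTreeClassifier

def expert_rule(tumor_size_mm: float) -> str:
    # Explicit rule written down by a human expert; auditable but rigid.
    return "cancer" if tumor_size_mm >= 20 else "no cancer"

# Machine learning: show labeled samples instead of writing the rule.
X = [[5], [12], [18], [22], [30], [41]]  # tumor size in mm (invented)
y = ["no cancer", "no cancer", "no cancer", "cancer", "cancer", "cancer"]
model = DecisionTreeClassifier().fit(X, y)

print(expert_rule(25))            # "cancer", from the explicit rule
print(model.predict([[25]])[0])   # "cancer", learned from the samples
```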

Read more about this, including my thoughts on machine learning, pattern recognition, expert systems, and comparisons to human intelligence, in my story for Ars Technica, “Never mind the Elon—the forecast isn’t that spooky for AI in business.”