Special Report: Artificial intelligence apps come of age

AI technology: Is the genie (or genius) out of the bottle?

Artificial intelligence has come a long way since SearchCIO columnist Harvey Koeppel studied with some of the discipline's pioneers. He offers 10 steps CIOs must take now to prepare for the AI technology era ahead.

It is with great enthusiasm and a healthy dose of angst that I am writing this post. My enthusiasm comes from the undeniable reality that artificial intelligence (AI) technology, after approximately 60 years of research-and-development breakthroughs and breakdowns, is now mainstream. IBM supercomputer Watson's decisive victory on the game show Jeopardy! on February 16, 2011, over Ken Jennings (holder of the longest consecutive winning streak, at 74 games) and Brad Rutter (the game's largest monetary winner, at $3.25 million) dramatically and indelibly made that point.

My angst comes from a deeply personal place; the study and development of AI was my entry point into the field of technology in 1967 as a young and (very) naïve college freshman at the University of Pittsburgh. I spent four intensive years of study with some of the leading AI technology pioneers at Pitt, Carnegie Mellon, MIT and Stanford. As excited as I was about AI's incredibly positive potential contribution to society and mankind, I was equally tormented by the intellectual, psychological and emotional consequences inevitably associated with contributing to the development of what I thought could easily turn out to be the next atomic bomb.

For those readers whose high school history may be a bit rusty, J. Robert Oppenheimer was a theoretical physicist and professor of physics at the University of California, Berkeley, and is often referred to as one of the fathers of the atomic bomb. As a principal scientist in the Manhattan Project, the World War II project that developed and detonated the first atomic bomb, Oppenheimer was quoted as saying the experience reminded him of words from the Bhagavad Gita: "Now I am become Death, the destroyer of worlds."

When I graduated from college, I opted out of artificial intelligence technology; that is, I went screaming out of the space at a million miles an hour with my hair on fire and got a job as a COBOL programmer at a bank.

AI technology before there was Jeopardy! (Pun intended.)

Most industry experts attribute the origins of the field of artificial intelligence to a conference that was held on the campus of Dartmouth in 1956. Those present -- including John McCarthy, Marvin Minsky, Allen Newell, Arthur Samuel and Herbert Simon -- ultimately became the preeminent scientists and leaders of AI research and development. Throughout the 1960s, their early work was heavily funded by the Department of Defense and included programs that played checkers, spoke English, solved algebra problems and created proofs for logical theorems.

Significant enthusiasm for these nascent AI technologies reigned, as evidenced by the pioneers' own predictions. Herb Simon declared, "Machines will be capable, within twenty years, of doing any work a man can do," and Minsky supported that perspective, writing, "Within a generation … the problem of creating 'artificial intelligence' will substantially be solved."

Needless to say, the early pioneers (as is typical) were a bit overly optimistic, although, in the bigger picture, perhaps not that much. The 1970s brought government funding cuts and the field went from the "Peak of Inflated Expectations" to the "Trough of Disillusionment," to use the modern vernacular of the Gartner Hype Cycle. During the 1980s, commercial success was achieved through the development of expert systems that enhanced knowledge and analytics capabilities, and the market grew to over $1 billion -- the "Slope of Enlightenment," literally and figuratively. Enthusiasm for AI was rekindled.

You awake one morning to find your brain has another lobe functioning. Invisible, this auxiliary lobe answers your questions with information beyond the realm of your own memory, suggests plausible courses of action and asks questions that help bring out relevant facts. You quickly come to rely on the new lobe so much that you stop wondering how it works. You just use it. This is the dream of artificial intelligence. -- Artificial Intelligence, Phil Lemmons, Byte Magazine, April 1985, p. 125

During the 1990s and into the early 21st century, the "Plateau of Productivity" included many significant and highly visible advances in AI technology, including this very small subset of examples:

A brief timeline of AI advances, from VaMP to Watson

The Hype Cycle extended: Singularity

Like many, I believe that we are at an inflection point. The Google Search dictionary defines the term singularity as "a point at which a function takes an infinite value, especially in space-time when matter is infinitely dense, as at the center of a black hole." Whether you believe in Moore's Law or not, there is no question that advances in AI technology are arriving more quickly and are more significant in terms of features, functions and -- of critical importance -- human connection. Bionic human components are no longer the stuff of science fiction or restricted to the realm of academic study. The human-computer interface is no longer just about moving a mouse, snapping a selfie or moving your arms or legs -- it is about connecting and embedding electromechanical and silicon-based technologies to and within human physiology.
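
For readers who want the mathematical sense of the word made concrete, here is a minimal illustration (my own example, not part of the dictionary entry): the reciprocal function has a singularity at zero, where its value grows without bound.

```latex
% A minimal example of a mathematical singularity:
% f(x) = 1/x takes arbitrarily large values as x approaches 0.
f(x) = \frac{1}{x}, \qquad \lim_{x \to 0^{+}} f(x) = +\infty
```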

It is reasonable, I suppose, to be unconvinced … and to be skeptical about whether machines will ever be intelligent. It is unreasonable, however, to think machines could become nearly as intelligent as we are and then stop, or to suppose we will always be able to compete with them in wit and wisdom. Whether or not we could retain some sort of control of the machines, assuming that we would want to, the nature of our activities and aspirations would be changed utterly by the presence on earth of intellectually superior beings. -- Marvin Minsky, "Matter, Mind and Models," Proceedings of the IFIP Congress 1965, Spartan Books, Vol. 1

Perhaps there should be a sixth stage in Gartner's Hype Cycle called Singularity.

Ten CIO imperatives for the AI age

For CIOs, the existing and newly emerging opportunities arising daily from AI technology are as immense as they are daunting. Here are some ways in which we, as IT executives, can help steer the course and maximize our influence on outcomes that today we can only imagine:

  • If you are not already familiar with the basic concepts and principles of AI, read a book, take a course, talk to colleagues and acquaint yourself with the discipline.
  • Familiar or not, continue to monitor new developments in AI technology -- this stuff changes faster than we do.
  • If you already have an innovation function within your enterprise (business or IT), ensure that AI is one of the disciplines being tested and developed. If you don't have an innovation function within your enterprise, create one.
  • Hire and train the best and the brightest AI talent that you can find and afford.
  • As with all new technologies, ensure products and services that you build with AI have commercial viability (i.e., reasonable economic return on investments).
  • With AI applications in particular, be transparent with your internal and external stakeholders. Proper messaging in this area is critical and, if managed well, can be a competitive differentiator.
  • Where the development and deployment of smart technologies may displace human workers, ensure that you have a good HR strategy and plan. Full communication and retraining of affected staff go a long way toward minimizing resistance (sometimes even sabotage) and toward ultimate acceptance.
  • Where AI is being used in an expert advisory context (e.g., sales, service, manufacturing, forecasting), ensure impacted staff understand that the tools are being deployed to help them do a better job, increase their productivity and value, and increase customer satisfaction, which will, in turn, increase employee satisfaction and retention.
  • Of paramount importance, where AI applications are being supported by big data, especially if the data is personal, absolutely ensure that information security and data privacy policies, procedures, methods and tools are employed to protect the data from breach or unintended use. The combination of big data and AI can be extraordinarily powerful, and ways and means to protect it need to be commensurate (a minimal pseudonymization sketch follows this list).
  • Spend quality time with your executive team, C-suite peers and staff to reflect upon potential ethical or moral implications of new AI-based products or applications, including how data is collected, stored, retrieved and utilized. This is especially important for those involved with medical or public infrastructure applications where lives may literally be at stake and/or significant economic or social disruptions could occur when unintended consequences happen -- you know that this one is a "when," not an "if."
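
One concrete way to act on the data-protection point above is to pseudonymize personal identifiers before records ever reach an AI or analytics pipeline. The sketch below is an illustration only, not a prescribed control set; the field names, the PII_SALT environment variable and the pseudonymize helper are all hypothetical, and a real deployment would pair this with encryption, access controls and retention policies.

```python
import hashlib
import os

# Fields treated as personal identifiers in this hypothetical record layout.
PII_FIELDS = {"name", "email", "ssn"}

# Per-deployment secret salt; in production this would come from a key vault
# rather than a plain environment variable.
SALT = os.environ.get("PII_SALT", "change-me").encode()


def pseudonymize(record: dict) -> dict:
    """Return a copy of the record with personal identifiers replaced by
    salted one-way hashes, so downstream AI jobs never see the raw values."""
    cleaned = {}
    for key, value in record.items():
        if key in PII_FIELDS:
            digest = hashlib.sha256(SALT + str(value).encode()).hexdigest()
            cleaned[key] = digest[:16]  # stable short token per original value
        else:
            cleaned[key] = value
    return cleaned


if __name__ == "__main__":
    customer = {"name": "Jane Doe", "email": "jane@example.com", "segment": "SMB"}
    print(pseudonymize(customer))
```

Salted one-way hashing keeps records joinable across datasets without exposing the underlying values; reversible tokenization or format-preserving encryption are alternatives when the originals must be recoverable by authorized systems.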

Take a closer look at these historic moments in AI technology

Two self-driving cars cruised down a Paris highway for more than 620 miles (1,000 kilometers) at speeds up to about 80 miles per hour in 1994.

Deep Blue vs. Garry Kasparov: In 1997, IBM's supercomputer Deep Blue conquered a reigning world chess champion.

The Furby may have been born in 1998, but it didn't have this conversation with Apple's Siri until 2011.

Will Microsoft's Kinect, released in 2010, find a place in the enterprise?

Watson vs. Jeopardy! champions: In 2011, IBM's supercomputer emerged victorious over two of the winningest defenders in the quiz show's history.

End note: Mr. Potato Head

There is no security against the ultimate development of mechanical consciousness, in the fact of machines possessing little consciousness now. … Even a potato in a dark cellar has a certain low cunning about him which serves him in excellent stead. -- Samuel Butler, Erewhon, 1872, New Zealand Electronic Text Centre

Let me know what you think. Post a comment or drop me a note at hrkoeppel@aol.com. Discuss, debate or even argue -- let's continue the conversation.

Next Steps

Recent columns from Harvey Koeppel:

CIOs can make mobile payment systems better

The enterprise data center is on life support

The digital CIO as data master

This was last published in July 2015
