
AI future is nigh, but enterprises are unready

The AI future is terrifying, terrific -- and companies are unprepared to exploit it. Also: Big tech testifies on 2016 election meddling; iPhone X privacy concerns.

Fears about job-stealing robots forcing people into abject poverty or doing much worse things -- enslavement or, um, extermination -- should remain the province of Hollywood, said Andrew McAfee, an MIT scientist and co-director of the school's Initiative on the Digital Economy.

McAfee, who studies how technology affects business, the economy and society, spoke during a panel discussion at MIT's AI and the Future of Work conference in Cambridge, Mass., on Wednesday. When asked about what's driving the anxiety, even dread, of an AI future in which artificial intelligence runs amok, McAfee recited a line from AI researcher Andrew Ng: Worrying about robots taking over is like worrying about overpopulation on Mars.

"We are spending way too much time on this sophomoric, dorm room, BS topic," McAfee said. "Did I say that strongly enough?"

McAfee's admonishing remarks came near the end of the event's heady first day, which featured computer engineering professors, business leaders and social scientists at the forefront of a developing field that draws as much optimism as pessimism. So while 72% of Americans are worried that robots may take their jobs, said MIT president L. Rafael Reif, citing a recent study, the roboticist Daniela Rus spoke of organizations today using AI to prevent money laundering, conserve energy and look for cures for breast cancer and other diseases.

Whether energized or terrified about an AI future, CIOs trying to propel their companies into the digital age are looking to tap artificial intelligence for business value. But even if they have the IT infrastructure and sought-after expertise needed to manage AI projects, IT chiefs may be steering toward a technology that's not quite ready for them.

MIT roboticist Daniela Rus speaks at the AI and the Future of Work conference in Cambridge, Mass., on Wednesday.

Past and future AI

Simon Thompson, who directs the big data and customer experience practice at global communications company BT, spent two decades trying to improve business processes with AI, which first emerged in the 1950s and had its first boom in the 1980s. Thompson said sometimes AI can be rolled out "quite easily" at, say, a startup.

"This has happened precisely twice in 20 years," he said during a panel discussion on using AI to automate business processes. One was a CRM system built from scratch; the other was during an infrastructure overhaul.

But many companies, especially big corporations, aren't "one thing."

"They're a great collection of old companies that have been bought up and stuck together and re-engineered and rebuilt," resulting in complex IT infrastructure that's not prepared for an ultramodern AI makeover.

Building AI technology into such computing environments often means redesigning business processes from the ground up and taking on "massive" risk and capital costs. "This is a big challenge for AI in the future," Thompson said.

From left, moderator John Markoff; Simon Thompson, of BT; Josh Tenenbaum, of MIT; and Patrick Winston, of MIT, participate in a discussion at AI and the Future of Work.

Attendee Neena Buck suggested taking a look at an earlier time. Buck has been following AI for years, exploring the application of AI at IBM, advising clients as an analyst and, at MIT's Industrial Liaison Program, connecting companies to MIT research in AI, robotics, energy and other disciplines.

In the 1980s and '90s, Buck said, a number of companies developed software called "expert systems," which served as a kind of template for businesses that wanted to build systems that could reason and make decisions. The vendors "basically built in an inference engine, built in the reasoning capability so that you, an end-user company, could then put your own rules in based on the templates they provided," she said. For today's AI focus on pattern matching and recognition, "that doesn't exist right now."
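The template idea Buck describes -- a vendor-supplied reasoning engine that customers fill with their own rules -- can be sketched in a few lines. This is a minimal forward-chaining illustration, not any vendor's actual product; the rule and fact names are invented.

```python
# Minimal sketch of the expert-system pattern: the vendor supplies a
# generic inference engine; the end-user company supplies domain rules.
# All rule and fact names below are hypothetical illustrations.

def infer(facts, rules):
    """Forward-chaining: keep applying rules until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and all(c in facts for c in conditions):
                facts.add(conclusion)
                changed = True
    return facts

# The "template" part: a company plugs in its own business rules.
rules = [
    (("invoice_overdue", "customer_contacted"), "escalate_to_collections"),
    (("invoice_overdue",), "customer_contacted"),
]

result = infer({"invoice_overdue"}, rules)
print(sorted(result))
# → ['customer_contacted', 'escalate_to_collections', 'invoice_overdue']
```

The engine is generic; only the `rules` list changes from one customer to the next, which is the reusability Buck says has no counterpart in today's pattern-recognition systems.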

But companies want the AI capabilities coming out of cutting-edge research. Buck recently started her own advisory practice in Cambridge, Mass., which is focused on machine intelligence, natural language processing and robotics. Her clients are wowed by TV ads showcasing humanlike AI wonders performed by Amazon's Alexa and IBM's Watson.

They'll ask her, "'Why can't we build a system that does that for our industry?' Well, what they don't realize is that IBM spent decades developing speech understanding systems and reasoning systems before they were able to create Watson."

Before a company can build a commercial-grade AI system with a flashy interface, its data needs to be cleaned, integrated with existing business systems, and then tagged and labeled so that the system understands key concepts in the industry it operates in, whether education, manufacturing or retail.

"It's all pre-processing stuff that needs to be done before you can build any kind of relevant systems," Buck said.

MIT grad students ready to help

Manasi Vartak, a grad student at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), is working to "democratize" machine learning, the algorithms that form the basis of AI, "so that a person who has a reasonable grasp of, say, software engineering can quickly build these models, understand where they work well, where they don't work well."

Vartak echoed concerns about shepherding machine learning into a company's business processes. She gave the example of a telecommunications company that wants to predict which of its customers might want to take their business elsewhere. She can feed a machine learning algorithm a month's worth of customer behavior data, but the company has to take over from there.

"There needs to be a place where this prediction is going to go into their existing data infrastructure so that you make the right offers to the people who are going to churn, A. And B, you want to use the results of that intervention and feed them back into your data," Vartak said. "Not all companies are ready or quite at the stage where they can do that daily or even monthly."
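Vartak's two-part workflow -- A, route churn predictions into existing systems, and B, feed intervention outcomes back into the data -- can be sketched as a toy pipeline. The scoring function here is a hand-tuned stand-in for a trained model, and every feature name is hypothetical.

```python
# Toy sketch of the churn workflow Vartak outlines: score a month of
# customer behavior, push likely churners into the offer pipeline, then
# record intervention results for the next training run. Feature names
# and thresholds are invented for illustration.

def churn_score(customer):
    """Hand-tuned toy score standing in for a trained model's output."""
    score = 0.0
    score += 0.5 if customer["support_calls"] > 3 else 0.0
    score += 0.4 if customer["logins_last_30d"] < 2 else 0.0
    return score

def route_offers(customers, threshold=0.5):
    """Step A: surface likely churners to the existing offer system."""
    return [c["id"] for c in customers if churn_score(c) >= threshold]

def record_outcome(history, customer_id, accepted_offer):
    """Step B: feed the intervention result back into the data."""
    history.append({"id": customer_id, "accepted_offer": accepted_offer})
    return history

customers = [
    {"id": "c1", "support_calls": 5, "logins_last_30d": 1},
    {"id": "c2", "support_calls": 0, "logins_last_30d": 20},
]
at_risk = route_offers(customers)
history = record_outcome([], at_risk[0], accepted_offer=True)
print(at_risk, history)
# → ['c1'] [{'id': 'c1', 'accepted_offer': True}]
```

Vartak's point is that the loop between `route_offers` and `record_outcome` has to live inside the company's own data infrastructure, run daily or monthly -- and that is the part many companies cannot yet do.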

Vartak said she and other students at CSAIL would be happy to sit down with companies that want to apply AI to business and figure out what their challenges are and how to meet them. But first she and her peers need to find those companies. The lab has widened its partner network through projects like ModelDB, an online open source tool for data scientists that manages a company's portfolio of machine learning models, which has drawn in Adobe and several banks.

"That's how we got a lot of inbound interest," she said. "Otherwise it's of your own accord -- you need to go out and reach out to these people. You're like, 'Do you have a problem I could help you with?' And that's not a very efficient way."

Shouldn't AI be able to solve for this?

CIO news roundup for the week of Oct. 30

Our unsteady AI future was not the only tech topic of the week. Here's what else grabbed headlines:

Attorneys for Facebook, Google and Twitter testify before Congress. Representatives from Facebook, Twitter and Google appeared on Capitol Hill on Tuesday and Wednesday to testify about foreign sources using their social media platforms to meddle in the 2016 presidential election. "We are constantly doing better; we could have done more," Google general counsel Kent Walker said about the company's efforts to prevent the Russian meddling. Facebook's general counsel, Colin Stretch, informed lawmakers that roughly 126 million Americans may have been exposed to content "that originated from the Russian operation." Twitter disclosed it found 36,746 accounts that appeared to be associated with Russia, though not necessarily with the Internet Research Agency, a Russian government-linked troll farm that generated election-related content. The social media company representatives agreed to consider Sen. Jack Reed's suggestion that their organizations devise a way to correct false information spread on their platforms.

iPhone X's facial recognition feature raises privacy concerns. Apple's long-awaited iPhone X went on sale today. The smartphone features an edge-to-edge screen, no home button and "Face ID" face recognition technology, and was "quickly backlogged" because of high demand, Apple CEO Tim Cook told CNBC. The iPhone X's facial recognition abilities continue to raise privacy concerns: Reuters reported that Apple plans to share facial mapping data captured by the phone's front-facing cameras and sensors with developers creating entertainment features for iPhone X users. Developers will be allowed to take certain facial data off the phone as long as they agree to seek customer permission first and not sell the data to third parties, according to Apple.

IBM unveils IBM Cloud Private. On Wednesday, IBM introduced new software designed to help companies create on-premises cloud capabilities to speed up app development. The product, dubbed IBM Cloud Private, will help companies extend cloud-native tools across public and private clouds, the company said. "IBM Cloud Private brings rapid application development and modernization to existing IT infrastructure and positions it to be combined with the services and experience of a public cloud platform," said Arvind Krishna, senior vice president for IBM Hybrid Cloud and director of IBM Research, in a statement.

Assistant editor Mekhala Roy contributed to this week's news roundup.

Next Steps

Check out our previous Searchlight roundups on Forrester's top 10 technology trends, the enterprise use case for Google Glass and zero-trust security.
