
Musk, Hawking and other luminaries sign AI principles into being

Introducing the Asilomar AI Principles, a set of guidelines to protect us against an AI apocalypse and ensure that the many, not just the few, reap AI's benefits. Also: Tech vs. Trump on Muslim ban.

Did you watch Westworld, the HBO thriller about artificial intelligence gone bad? By the end of season one, a faction of the androids built to satisfy the dark fantasies of rich people has rebelled, slitting throats they're not programmed to slit, having thoughts they're not programmed to have. The twist is that the robot rebellion doesn't stem from a glitch in the programming, but the contrary: The artificially intelligent are acting more like humans, for better and worse.

Just another science fiction tale served up for our amusement? Not according to Elon Musk, Stephen Hawking and Ray Kurzweil, among the 2,000 signatories to the 23 Asilomar AI Principles, a set of guidelines published Monday aimed ultimately at protecting humankind from rogue AI, including the threat spelled out in principle 18 -- "an arms race in lethal autonomous weapons."

The Asilomar AI Principles, available to read here, were hammered out by a group of luminaries, including Musk and Kurzweil, who met in January on the windswept Monterey Peninsula at the Asilomar Conference Center in Pacific Grove, Calif. -- a Westworld of a kind. Their mission: to hash out how to deal with a field that has moved light-years beyond early examples of AI such as automatic spell-check to Siri, Alexa and Cortana, personal digital assistants that can talk.

Agreeing in principle?

In addition to top AI researchers and Silicon Valley hotshots like Musk and Facebook's Yann LeCun, the Asilomar event, hosted by the nonprofit Future of Life Institute, drew experts in law, philosophy, ethics and psychology.

The AI principles reflect that broad spectrum, taking on thorny issues such as the risks inherent in the scientific race to be first, the need for mechanisms to ensure the safety and security of AI systems, and the potential of those systems to subvert societal norms. 

In the category "ethics and values," for example, principle 12, on personal privacy, states, "People should have the right to access, manage and control the data they generate, given AI systems' power to analyze and utilize that data."

Principle 10 states, "AI systems should be designed and operated so that their goals and behaviors can be assured to align with human values throughout their operation."

Principle 15 states, "The economic prosperity created by AI should be shared broadly, to benefit all of humanity."

A final category, described as "longer-term issues," addresses the need to mitigate AI risks and the danger of underestimating what AI can do, including principle 23's chilling warning, Common Good: "Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization."

AI disillusionment

"I am overjoyed that people are thinking about this," said Brandon Purcell, senior analyst covering artificial intelligence for Forrester Research. "And when you look at the list of folks who were part of creating the principles, these are some of the biggest names in development of AI and the biggest reason for the recent acceleration of AI -- so, we have the right people thinking about the right things." He is also excited about what the quest for AI principles will teach us about human values.

But Purcell, who is writing a paper on the ethics of AI for his enterprise clients, noted, as did others interviewed, that there are not "very many specifics" on how the principles might be applied. For CIOs at companies under pressure to get going on AI, figuring out how to implement systems that reflect the AI principles is going to be "extremely difficult." Purcell cited as an example principle 7, Failure Transparency: "If an AI system causes harm, it should be possible to ascertain why."

"Many AI systems being developed today rely on deep neural networks, and there is an inherent opacity to deep neural networks that makes understanding how the algorithm is making the decisions virtually impossible. I know there are some researchers working on making deep neural networks transparent, but for right now, it is not," he said.

Instead of looking at math and algorithms, he said he believes CIOs who are deploying deep learning tools need to think about devising robust strategies for monitoring outcomes, with ready means to "pull the switch" when AI systems go awry, and to be prepared to retrain the systems with new data.
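As a rough illustration of that monitoring strategy, the hypothetical wrapper below tracks a model's real-world outcomes and trips a switch -- refusing further predictions pending review and retraining -- once the recent error rate passes a threshold. The class name, window size and threshold are assumptions made for this sketch, not a prescribed implementation.

```python
from collections import deque

class MonitoredModel:
    """Hypothetical wrapper: serve predictions, track real-world outcomes,
    and "pull the switch" when recent performance degrades."""

    def __init__(self, model, max_error_rate=0.2, window=500):
        self.model = model
        self.max_error_rate = max_error_rate
        self.recent_errors = deque(maxlen=window)  # 1 = wrong, 0 = right
        self.tripped = False

    def predict(self, features):
        if self.tripped:
            # Switch pulled: route the decision to a human or a safe default.
            raise RuntimeError("model disabled pending review and retraining")
        return self.model.predict([features])[0]

    def record_outcome(self, prediction, actual):
        # Ground truth often arrives later (a chargeback, a complaint, an audit).
        self.recent_errors.append(0 if prediction == actual else 1)
        window_full = len(self.recent_errors) == self.recent_errors.maxlen
        error_rate = sum(self.recent_errors) / max(len(self.recent_errors), 1)
        if window_full and error_rate > self.max_error_rate:
            self.tripped = True  # flag the system for retraining on fresh data
```

In production, the "switch" would more likely page an operations team or fall back to a rules-based decision than raise an exception, but the shape of the safeguard is the same.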

"AI systems are only as smart and benevolent as the training data you use to teach them," he said, citing the infamous example of Tay, Microsoft's Twitter-trained, trash-talking chatbot. The systems need to be trained carefully with the right data to get desired outcomes.

Indeed, he said he expects to see plenty of enterprise disillusionment with AI in the near term, in part because there is so much hype now. "Most of the folks I see are struggling just to pull all their customer data together to do advanced analytics on it," he said.

Who makes the decision?

Alan Lepofsky, an analyst with Constellation Research specializing in collaboration software, said he, like Purcell, was "happy to hear" that the leading AI minds are proposing standards that cover the AI spectrum.

"Of course, the vendors are ahead of the standards already," he said. In his field of workplace productivity tools, vendors are busy building "silos of voice activation" that don't communicate with each other. The fast-evolving systems promise "incredible improvements" to the way we work, he said, but they also present ethical issues galore -- and not just in the technology's capacity to replace workers.

Issues like how much decision-making to grant AI productivity systems will come to the fore as those systems are given access to personal data. What happens when a digital assistant "sees" you're scheduled to meet with the boss at 10 to review a report that's not yet done and, by accessing your health data from your Apple Watch, knows that your blood pressure is through the roof? Does it do what some might consider the humane thing and hold off showing you any more emails for the next hour?

"This gets to the ethics of AI," Lepofksy said. "I would love the computer to help me more, but I am not looking for it to do things automatically for me."

Good luck with that.

Stay tuned for upcoming articles on the Asilomar AI Principles, including insight from IBM's Francesca Rossi, a research scientist at the IBM Thomas J. Watson Research Center and an Asilomar attendee, on the importance and complexity of developing AI systems that are aligned with human values.

CIO news roundup for week of Jan. 30

AI principles were not the only ethical issues making the tech rounds this week:

Tech companies fight the Trump administration's immigration ban. Companies like Google, Apple, Facebook, Microsoft and Uber are condemning a travel ban barring citizens of seven Muslim-majority countries from entering the U.S. The companies are working together on a letter opposing President Donald Trump's executive order, which they said could hurt the country's economy. "As entrepreneurs and business leaders, our ability to grow our companies and create jobs depends on the contributions of immigrants from all backgrounds," according to a draft of the letter. Google is also creating a $4 million "crisis fund" to support immigration causes, while Twitter staff donated more than $1.5 million to the ACLU to help fight the ban. Uber CEO Travis Kalanick stepped down from his position on Trump's economic advisory council Thursday after facing criticism from employees. The Trump administration has also drafted an executive order aimed at overhauling the H-1B visa program that U.S. tech companies depend on to bring foreign engineering talent to their U.S. locations. "Visa programs for foreign workers ... should be administered in a manner that protects the civil rights of American workers and current lawful residents, and that prioritizes the protection of American workers," according to a draft proposal obtained by Bloomberg.

Oculus trial verdict. A Texas jury awarded $500 million to game developer ZeniMax Wednesday following an intellectual property lawsuit against Facebook-acquired virtual-reality company Oculus that went to trial in January. The jury ruled that Oculus co-founder Palmer Luckey violated a nondisclosure agreement, and it found Oculus liable for trademark infringement and false designation. According to the verdict, Oculus wasn't guilty of misappropriating trade secrets, and former ZeniMax employee and Oculus CTO John Carmack wasn't personally liable for any damages. "Our commitment to the long-term success of VR remains the same, and the entire team will continue the work they've done since day one -- developing VR technology that will transform the way people interact and communicate," an Oculus representative said. Oculus plans to appeal the decision.

Data breaches trigger financial losses. Businesses are suffering significant losses because of a rise in security breaches, according to Cisco's 2017 Annual Cybersecurity Report. The report surveyed more than 2,900 security professionals worldwide. Respondents reported that security incidents resulted in loss of business opportunities (23%), revenue (29%) and customers (22%), but only half of security alerts are investigated and less than half of legitimate issues are fixed, the report found. "Lack of budget is a perennial challenge for security teams, as is disparate systems that don't integrate -- where security is bolted on rather than embedded," John Stewart, senior vice president and chief security and trust officer at Cisco, wrote in a blog post. The study also found email spam volume rising, with spam accounting for 65% of all email because of large and thriving spam-sending botnets.

Assistant editor Mekhala Roy contributed to this week's news roundup.

Next Steps

Check out our previous Searchlight roundups on the future of net neutrality, the Oculus trial and the FTC's PrivacyCon.

 


Join the conversation


How do you anticipate AI will be misused?
One way AI could be harmfully used is to have it creep around all networks and nodes on the internet (routers, switches, everything), stay on "watch" and collect all sorts of security data that would allow it to operate silently as just another unsuspecting human internet user. I could see how this could be used to actually frame someone, take over bank accounts, deny access, etc. The sky's the limit.
One good way to use AI would be to unleash "monitors" on the internet that police the whole environment, looking precisely for AIs lurking around and performing activities like those described above.
Question: During the conference covered by this article, did any of the attendees float the idea that, in order to implement such principles, there should be a standard under which any AI system that relies on deep neural networks must have a core set of moral principles implemented as hard-coded, unmodifiable software, governing its decisions from the background and exercising a kind of "veto power" against decisions that lie outside the parameters imposed by the core? If this could be implemented, then each AI would have to implement the agreed-upon software, and that might make it a lot easier to fend off AIs gone rogue.
Did anyone watching the Super Bowl get both sick and angry, as I did, when the H&R Block ad featuring IBM's Watson was shown?

Just the premise (which I do not dispute) that AI can give you an edge when preparing your taxes is disturbing on its face. It seems to me that there is something fundamentally wrong if the tax code is so complicated that only tax preparation professionals, now using AI, can get you what is rightfully yours in the first place.

I'm curious if citizens in other countries pay billions and billions of dollars every year (H&R Block's 2015 revenue alone was $3 billion-plus) to satisfy the takings of their governments. This just seems so, so wrong.
