
Tim O'Reilly: The flawed genie behind algorithmic systems

The founder of O'Reilly Media explains how the algorithmic systems that power the world's most customer-pleasing companies reflect our humanity back to us -- for better or worse.

In his new book WTF? What's the Future and Why It's Up to Us, technology guru and media publisher Tim O'Reilly highlights lessons learned from data-driven platform companies and what those lessons say about the future of business and the economy.

During a keynote talk at the recent Strata Data Conference, O'Reilly highlighted five lessons from his book, including that platform companies rely on algorithmic systems built around objective functions. That's both good and bad. The algorithms do exactly what humans tell them to do, which can lead to biases and create inequality. But, as humans learn to build better algorithms, the software's objectivity could translate into a more harmonious future for the human race. (O'Reilly's glass, in this case, is definitely half full.)
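
To make the idea of an objective function concrete, here is a minimal sketch (not code from O'Reilly's talk; the items and numbers are hypothetical) of a feed-ranking objective that rewards nothing but predicted clicks. The point is simply that the code does exactly what it is told -- and nothing it is not told.

    # A hypothetical objective function that optimizes only for engagement.
    from dataclasses import dataclass

    @dataclass
    class Item:
        title: str
        predicted_clicks: float   # what the platform measures and optimizes
        factual_accuracy: float   # a real-world concern the objective ignores

    def objective(item: Item) -> float:
        # The "wish" given to the genie: maximize clicks, and nothing else.
        return item.predicted_clicks

    feed = [
        Item("Measured policy analysis", predicted_clicks=0.02, factual_accuracy=0.9),
        Item("Outrage-bait headline", predicted_clicks=0.35, factual_accuracy=0.2),
    ]

    # The system faithfully surfaces the outrage bait, because accuracy and
    # fairness were never part of the objective it was asked to optimize.
    print(max(feed, key=objective).title)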

Platform companies like Uber, Google, Amazon and Facebook rely on algorithmic systems to connect people in real time using predictive analytics and sensor data. Part of their success is that they've relentlessly pursued the new and in the process redefined business models.

Case in point: Taxicab companies often point to the mobile apps from Uber and Lyft as the competitive differentiator, ignoring how the platform companies have upended a crucial part of the old business model. When demand for cabs was high, supply couldn't scale. Thanks to technology and "cognitive augmentation" tools like Google Maps, supply of Uber and Lyft drivers scales right along with demand, and drivers can find passengers in minutes, according to O'Reilly.

And it's not that taxicab companies haven't tried to innovate. "We actually had connected taxicabs in 2005," he said. The livery industry installed screens in the backseats of cabs, but the screens were -- and still are -- there to show advertisements to captive passengers. This is an example of new technology pasted onto an old business model -- not an example of how technology infuses a business model, O'Reilly said.

Tim O'Reilly talks about his new book WTF? What's the Future and Why It's Up to Us at the Strata Data Conference.

Algorithmic systems gone awry

Yet despite the growth platform companies are experiencing, technology still carries a stigma for many of us. O'Reilly has a theory about why that is: Algorithmic systems can -- and do -- go sideways. Other keynote speakers at Strata also highlighted that fact. Sam Lavigne, artist and programmer, talked about the dangers of predictive policing applications that rely on historical (and perhaps biased) data to make predictions about future crime. "Typical policing methodologies tend to criminalize poverty," he said. "And, therefore, typical predictive policing apps will also criminalize poverty."
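
Lavigne didn't show code, but the feedback loop he describes is easy to simulate. The toy sketch below (not Lavigne's code; the neighborhoods and rates are invented) "predicts" crime from historical arrest counts: because those counts reflect where police patrolled rather than where crime actually occurs, patrols keep returning to the same neighborhood, generating more arrests and reinforcing the prediction.

    # Hypothetical simulation of a predictive-policing feedback loop.
    import random

    random.seed(0)
    true_crime_rate = {"A": 0.10, "B": 0.10}   # identical underlying crime rates
    historical_arrests = {"A": 80, "B": 20}    # but neighborhood A was patrolled far more

    for year in range(5):
        # "Predict" next year's crime from arrest counts and allocate 100 patrols.
        total = sum(historical_arrests.values())
        patrols = {n: round(100 * c / total) for n, c in historical_arrests.items()}
        # More patrols in a neighborhood means more of its (identical) crime is observed.
        for n, p in patrols.items():
            observed = sum(random.random() < true_crime_rate[n] for _ in range(p))
            historical_arrests[n] += observed
        print(year, patrols)

    # Patrols stay concentrated in neighborhood A even though both neighborhoods
    # have the same crime rate: the data encodes past policing, not crime.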


Or consider O'Reilly's example of the role Facebook played in the 2016 presidential election. The social media company has algorithmic systems optimized for getting users engaged with content, but it didn't understand all of the nuances of engagement or how people might try to subvert the system, according to O'Reilly. "So they ended up with hyper-partisanship in the last election," he said.

The algorithms took on the biases of users, delivering content that reflected their likes -- and dislikes. Algorithmic systems, he argued, are a little like the genies of Arabian mythology. "These algorithms do exactly what we tell them to do. But we don't always understand what we told them to do," he said.
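
One way to picture that mirror: a feed that learns only from engagement drifts toward whatever a user already responds to. The sketch below is a hypothetical illustration of that dynamic, not Facebook's actual ranking code; the topics and click probabilities are invented.

    # Hypothetical engagement-optimized feed reinforcing a user's existing lean.
    import random
    from collections import Counter

    random.seed(1)
    weights = Counter({"partisan_left": 1.0, "partisan_right": 1.0, "neutral": 1.0})
    user_click_prob = {"partisan_left": 0.6, "partisan_right": 0.2, "neutral": 0.4}

    for _ in range(200):
        if random.random() < 0.2:
            topic = random.choice(list(user_click_prob))   # occasional exploration
        else:
            topic = weights.most_common(1)[0][0]           # otherwise show the top earner
        if random.random() < user_click_prob[topic]:
            weights[topic] += 1                            # engagement reinforces what was shown

    # After a few hundred impressions the feed overwhelmingly favors the content
    # this user was already inclined to click -- the bias came from the user.
    print(weights)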

Part of the problem is that developers don't know how to talk to algorithms and ask for the right wish, he said. Consider the financial markets, which today are vast algorithmic systems with a master objective function to increase profits.

"The idea was that this would allow businesses to share those profits with shareholders who would use [them] in a socially conscious way," he said. "But it didn't work out that way." Instead, financiers are gaming the system, creating income inequality.

But algorithmic systems don't have to operate that way. O'Reilly, for one, remains optimistic, believing that algorithmic systems hold up a mirror to humanity and will become a driving force in creating a better world. "Bias in code taken to scale becomes visible," he said. "When we see that we have encoded decades of biased policing into the data that we feed our predictive policing algorithms, we correct not just our AI, not just our algorithms -- we can see ourselves."

During his presentation at the Strata Data Conference, Sam Lavigne shined a light on the potential for bias in predictive policing applications.

Businesses can't stop with the first iteration -- screens in the back of taxicabs weren't the future. They have to seek out the new, learn from their mistakes and figure out how to achieve their real objectives, O'Reilly said.

"This will eventually become a political process. But it's fundamentally right now, a process for all of us in business to understand our values, and what are we encoding into the systems we build," he said.  The operative question for us: "What is the wish we're giving the genie we are about to unleash onto the world?"

Next Steps

What is algorithmic accountability?

Machine learning training data is vulnerable to attack

Platform business model key to digital transformation

 

This was last published in November 2017


Join the conversation


How does your data science team ensure it is asking the right question of your algorithmic systems?
I have a modest proposal. It reflects the outcome of an old debate over a philosophical group known as the Vienna Circle (early-to-mid 20th century). They promoted the idea that the meaning of a statement is its method of verification. In particular, if a statement cannot be empirically demonstrated, then it is either a truth of logic (e.g., P v ~P) or it is the equivalent of a mere emotional plea. This latter view came to be known as "emotivism"; in particular, emotivism was the view that the meaning of any non-empirical claim was merely its emotional content, that it could not have a truth value, including all statements about ethics, policy, or anything not in the hard sciences. In the academy, this view was eventually discredited. BUT... in the meantime, a century later, Facebook has (accidentally, as far as I can see) enshrined emotivism in its "Like" button. It seems to think that, because opinions differ, we ought not to consider their truth values.

To correct this failing, here is what I propose: Facebook should add an "Agree" button. This would have the felicitous consequence of encouraging people to consider the *truth* of claims made on social media, not merely how the statements made them feel. Rather than promoting merely an emotivistic view of statements, which encourages influence only via memes and emotional pleas, Facebook could also promote a *rational* view of statements, in which people argue (in a good sense of that term) over the accuracy of the statements. This would create feedback cycles that might reduce the misinformation and conspiracy mongering so common in social media, forcing people to grapple with their attitudes to the truth status of claims.

Now, it's an open question whether people still know how to do this (argue so as to get closer to the truth). We may need some additional education. But imagine what an AI engine could do if it were being fed information from *billions* of people who were trying to get at the truth... how much more insightful and accurate might those results be? Just my two cents.
One additional thing. It's worth noting that "Like" and "Agree" are independent assessments. For example, it is possible to "Like" statements with which I do NOT agree. Likewise, it is possible NOT to "Like" statements with which I DO agree (they make me uncomfortable, hit too close to home, etc.). The attitudes that we take to statements can vary as to how we feel about them, but the attitude of "agreement" (roughly, [i] you believe that P, [ii] I believe that P, and [iii] I believe that [i]) is particularly important, because it is the attitude involved in our achieving *consensus* as to the accuracy of statements. And now I'll shut up.
