Is there an Uncanny Valley of Machine Intelligence?

Roboticists know a lot about the uncanny valley, that uncomfortable place in utility and appearance where robots look and act almost, but not exactly, lifelike. On one side of the valley, diligent self-propelled vacuum cleaners make our domestic lives easier; on the other side (the stuff of sci-fi, for now) is the promise of human replicants doing manual work that no real human could do, or would want to for the pay involved. Think of perfect home-health assistants for the world's aging population. In the valley itself, however, robots totally freak us out. The visceral reaction we naturally feel when confronted with the unnatural, from nausea to fear, is well documented.

Fig 1. Fembot model 33XR. How would you feel about this robot living in your house and watching over you while you sleep?

Machine Intelligence in Your Life Right Now

I've been wondering a lot recently about a different kind of place for machines in human lives: not physical robots, but emotionally intelligent software (chalk this up to the benefits of having some more mental space whilst on sabbatical…). These frameworks already “live” amongst us, doing the grunt work we don't care to do for ourselves or in service of others. Intelligent spam filters have raised the signal-to-noise ratio in email inboxes dramatically. Recommendation engines for e-tailers have transformed the online commerce experience. Sentiment derived from public social interactions informs marketing, business, and even financial trading decisions.

Figure 2: Sketch of the proposed Uncanny Valley of Machine Intelligence. [Made using Jake Vanderplas's excellent XKCD-ification of matplotlib]

What all these tools have in common is that they are trained to make statements about data, generally heterogeneous, noisy, and dirty, in a way that looks remarkably like abstract human thought. Underlying much of machine intelligence (the space my company wise.io works in) is a branch of statistics and computer science called machine learning (ML). ML provides the deep theoretical underpinnings of the systems that decide that that Viagra email from Nigeria was not something you wanted to read.
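To make the spam-filter example concrete, here is a toy sketch of one classic ML approach to the problem, a naive Bayes classifier. This is purely illustrative (not how any particular commercial filter works); the training messages and word-splitting are invented for the example.

```python
# Toy naive Bayes spam filter: an illustrative sketch only, with a
# hand-picked four-message training set and whitespace tokenization.
import math
from collections import Counter

def train(messages):
    """messages: list of (text, label) pairs, label 'spam' or 'ham'."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = Counter()
    for text, label in messages:
        for word in text.lower().split():
            counts[label][word] += 1
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    vocab = set(counts["spam"]) | set(counts["ham"])
    scores = {}
    for label in ("spam", "ham"):
        # log prior + log likelihoods with add-one (Laplace) smoothing
        score = math.log(totals[label] / sum(totals.values()))
        denom = sum(counts[label].values()) + len(vocab)
        for word in text.lower().split():
            score += math.log((counts[label][word] + 1) / denom)
        scores[label] = score
    return max(scores, key=scores.get)

training = [
    ("cheap viagra offer now", "spam"),
    ("claim your free prize money", "spam"),
    ("meeting agenda for tomorrow", "ham"),
    ("lunch with the team on friday", "ham"),
]
counts, totals = train(training)
print(classify("free viagra prize", counts, totals))  # prints: spam
```

The filter simply asks which label makes the observed words most probable; real systems layer far more features and data on top, but the statistical skeleton is the same.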

Falling into the Valley?

For the most part, machine intelligence is a helpful force for good. But we're already starting to see some creepiness pop up. IBM's Watson kicking everyone's ass in Jeopardy! owes a lot to good indexing of data that allows for efficient data mining, but, importantly, it also owes a lot to improvements in artificial intelligence. Target's data-driven marketing software figured out that a teenage girl was pregnant based on her shopping history; her parents, whom she hadn't told, found out when a snail-mail advert for baby clothes and cribs arrived. [To be sure, she (and all of us) share a lot about ourselves, and it is clear that such an inference could not have been made without tacit consent. Indeed, the open-life approach is often discussed in the context of data collection and online privacy. For the purposes of what follows, I'll assume that the data needed by machines to make statements exists and is collected by whatever means.]

So, how creeped out would you be if:

  • you got a product recommendation on your mobile phone based on a dream you had last night?
  • the TV programs you watched dynamically changed as your mood and inner thoughts morphed?
  • Amazon.com bought and sent a present in your name to a loved one whom it perceived as needing more attention from you?
  • your mannerisms, caught and analyzed by other people's Google Glass, led to an email from match.com suggesting that you harbor a different sexuality than you appear to?

I think these hypotheticals suggest that there is indeed an equivalent of an uncanny valley for machine intelligence. It's where data-driven, algorithm-assimilated statements about you (and on your behalf) cut deep and freak us out to the max. They might be spot on, or only sort of true. But knowing that who we are on the inside is somehow entirely knowable from data collected about us in the physical world is the ultimate creep. I wonder what Freud, Turing (i.e., people as Turing machines), and Hawking (as relates to holographic theory) would say about this?

The Other Side of the Valley?

My analogy with the uncanny valley of robotics only works if there is indeed another side: a place where, after we cross out of the valley, we reach some sort of promised land. What would that look like? Perhaps your father (who is living in Nigeria) offers to help with an impotence problem you just divulged while visiting him. Wouldn't you want your spam filter to figure this out and not nix that email about Viagra? Why wouldn't you be happy that Amazon.com took care of someone you loved?

On the other side of the Uncanny Valley of Machine Intelligence, having a machine truly know us and care about us is a potential paradise. We would all get a realized version of our childhood imaginary friend: a trusted confidant who might actually serve as our protector against the very economic forces that currently benefit most from an improving return on knowing you.

Joshua Bloom
Professor of Astronomy

Astrophysics Prof at UC Berkeley; former Wise.io cofounder (acquired by GE); previous Department Chair; inventor; dad; tennis everything. Anti #TransparentMoon. Check out his group activities at ml4science.org and art exhibition CuratingAI.art (Spring 2024).