Channel: Nathan Palmer – Sociology In Focus

Ex Machina & Why Robots Don’t Have Common Sense


In this essay, Nathan Palmer uses the movie Ex Machina to discuss why common sense is so hard to replicate in a computer program.

Ex Machina is a thrilling science-fiction movie that will leave you asking yourself, “what does it mean to be human?” In the film, we first meet Caleb, a coder at Bluebook, the world’s most popular internet search company, which seems like a fictionalized version of Google and Facebook combined. Caleb has been selected to fly to a secret underground Bluebook research facility to work directly with the company’s billionaire CEO, Nathan. There Caleb learns that Nathan has created a robot with artificial intelligence (A.I.) named Ava.

Caleb soon learns that he will be administering the Turing Test on Ava.

The Turing Test is a Test of Appropriate Social Interaction

The Turing Test is a test of interaction. For the A.I. to pass, it will have to interact appropriately with its human evaluator, which means the A.I. would need to be pre-programmed with all of the rules that govern human interaction. In addition, the A.I. would need to be able to adapt its existing rules and learn new ones as things change.

This means that to create an A.I. that could pass the Turing Test, programmers would need to replicate the thing that tells each of us how to behave in social interactions. So where do you and I turn when we want to know how to appropriately interact with others? Common sense.

To Be Human is to Have Common Sense That Doesn’t Make Sense

We use common sense to guess both what others expect of us and what we should expect of others. For instance, common sense tells us that when a passerby says, “how are you?” they don’t really want you to answer the question. It is common sense that tells us how to behave in conversations, what emotions are appropriate to display, how to treat strangers vs. how to treat loved ones, what to keep private vs. what to share publicly, and so on.

Therefore, to design a perfectly human A.I./robot that could pass the Turing Test, all you need to do is write a computer algorithm that replicates common sense. While that might sound straightforward, it is an astonishingly complex design problem to solve.

Common sense isn’t really a list of rules that we all follow. Instead, it is a set of rules that have lots of exceptions. For instance, if you are sitting in a hospital waiting room while a loved one is having surgery and someone asks you, “how are you?”, that person expects you to give them a more detailed answer. Furthermore, in any particular interaction, there may be multiple rules that contradict one another. For instance, one rule says you should almost always answer anyone who asks you a question, but another rule says when standing in front of a urinal peeing, never talk. So what’s a fella to do when chatty-Charlie in the urinal next to him starts asking questions?

The point is, when it comes to social interaction, it all depends. It depends on where the interaction is taking place. It depends on the statuses of the people who are interacting (e.g. are you talking to your boss, your friend, your mother, etc.). It depends on the emotional state of everyone involved. It also depends on which part of the world (and thus which culture) each of the participants is from. I could go on and on, but the point is, there may be rules for interaction, but all of them depend on a wide variety of factors. To be human is to know what factors are the most important to determining the appropriate course of interaction.
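The contingency problem described above can be sketched in code. Here is a toy illustration (my own, not anything from the film or from Watts) of what a rule-based approach to answering “how are you?” runs into: every rule needs context-dependent exceptions, the exceptions need exceptions, and the list of situations the programmer must anticipate never ends.

```python
# A toy sketch of why rule-based "common sense" breaks down.
# Each branch maps a situation to an appropriate response to "How are you?",
# and each one is an exception to the branch before it.

def respond_to_how_are_you(setting: str, relationship: str) -> str:
    # Rule: passers-by expect a ritual non-answer.
    if relationship == "stranger" and setting == "sidewalk":
        return "Fine, thanks!"
    # Exception: in a hospital waiting room, even a near-stranger
    # expects a real, detailed answer.
    if setting == "hospital waiting room":
        return "Honestly, I'm worried. My mother is in surgery."
    # Exception to the exception: at the urinal, the rule is silence.
    if setting == "restroom":
        return ""  # say nothing at all
    # Everything else falls through to a default the programmer must invent;
    # the situations that "depend" (status, emotion, culture...) never end.
    return "Fine, thanks!"

print(respond_to_how_are_you("sidewalk", "stranger"))
print(respond_to_how_are_you("hospital waiting room", "friend"))
```

Notice that the code never gets it right in general; it only enumerates the handful of situations its author happened to think of, which is exactly the trap Watts describes.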

In his excellent book on common sense, Everything is Obvious: Once You Know the Answer, Duncan Watts (2011:10) summarizes the challenges of creating an A.I. with common sense:

  • Attempts to formalize commonsense knowledge have all encountered versions of this problem — That in order to teach a robot to imitate even a limited range of human behavior, you would have to, in a sense, teach it *everything* about the world. Short of that, the endless subtle distinctions between the things that matter, the things that are supposed to matter but don’t, and the things that may or may not matter depending on the other things, would always eventually trip up even the most sophisticated robot. As soon as it encountered a situation that was slightly different from those you had programmed it to handle, it would have no idea how to behave. It would stick out like a sore thumb. It would always be screwing up.

In Ex Machina, Nathan does exactly what Watts suggests; he tries to teach Ava everything about interaction by using the data his Google-like internet company has collected on its billions of users. Internet searches were used to decipher how people phrase questions and what interests them. Internet communications between people were used to model the rules for interaction[1]. To learn all of the rules for conversation and emotional displays, Nathan secretly records billions of calls and video chats by hacking into everyone’s cell phone.

As the film ends we are left still wondering what it means to be human. However, from a sociological point of view, to be human is to use common sense. Despite its simple sounding name, the only way to have common sense is to be able to use the complex and often contradictory rules for interaction. So far, there isn’t technology capable of replicating the vast network of situational contingencies we call common sense.

Dig Deeper:

  1. Imagine that you are trying to program a robot to pass the Turing Test. What aspects of a situation would the robot need to pay attention to?
  2. Create a list of 3 rules (either formal or unspoken) that govern face-to-face conversations.
  3. Create a list of 3 rules (again, either formal or unspoken) that we use to decide what emotions are appropriate to display at a given moment.
  4. In addition to common sense not making sense, it is also not that common. We can only learn what is common sense through interacting with other people, but no individual interacts with the exact same set of people. Duncan Watts argues that common sense is only common to people who share sufficiently similar social locations. Explain in your own words what Watts means and why common sense isn’t common.

References:

  • Watts, Duncan J. 2011. Everything Is Obvious: Once You Know the Answer. New York, NY: Crown Business.

  1. This is very similar to what the sociologist George Herbert Mead argues we all do when we construct what he calls the generalized other. In part 2 of this series, I’ll discuss the generalized other and Ex Machina in far more detail.  ↩

