AI Experts On The Reality Behind The Sci-Fi Thriller “Transcendence”

As reported on TechCrunch.

by Peter Barnum, Brad Neuman

An ambitious blend of fact and fiction, “Transcendence” is Hollywood’s latest dystopian-edged take on the implications of artificial intelligence. The movie tells the tale of the wildly brilliant researcher Dr. Will Caster (Johnny Depp) and his team as they perform a radical experiment in uploading a human mind to a computer. It’s set in the near future and, as otherworldly as the plot may seem, director Wally Pfister and writer Jack Paglen (who consulted with electrical engineering and computer science experts) have delivered a pretty cool experience for AI nerds like us, with a script that emanates from some real, cutting-edge research.

Here are some examples of where the film displays some chops in computer vision, intelligent systems and brain imaging.

Computer vision

The AIs in “Transcendence” primarily use cameras to visually sense the environment — and Paglen clearly did his homework, because many of the specifics parallel the latest computer-vision research, which involves making a camera and computer function like our eyes and brains.

Currently, common tasks include detecting faces in a photo album, determining when a suspicious car enters your driveway, and reconstructing the 3D structure of a scene from a bunch of 2D images. And indeed, the AIs in the film are able to perform the computer vision tasks of recognizing faces, reading emotions, and guiding surgical instruments.
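To make the first of those tasks concrete, here is roughly what detecting faces in a photo looks like today in Python, using the pretrained detector bundled with OpenCV; a minimal sketch, with placeholder file names:

```python
# A minimal face-detection sketch using the pretrained Haar-cascade
# model that ships with the opencv-python package. File names are
# placeholders, not from any real system.
import cv2

cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

image = cv2.imread("family_photo.jpg")           # placeholder input
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)   # detector expects grayscale

# Returns one (x, y, width, height) rectangle per detected face.
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("faces_found.jpg", image)
```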

Regarding facial recognition, one of the movie’s AI systems — PINN, or “physically independent neural network” — is able to recognize individuals walking into its server room and greet them by name. In the past few years, such systems have burst out of the lab and into mainstream applications. Companies like Facebook and Google use the technology to automatically tag pictures of you and your friends. Governments and police are also using facial recognition to identify terrorists in places such as airports and train stations.

To recognize a face (or any object), many vision algorithms have two steps. First, they convert small patches of raw pixel data into “local descriptors” that encode information, such as shape and texture. Then they match these local descriptors to a database to find the person (or object) that has the most similar descriptors. This gets much trickier as your dataset gets bigger, as you’re looking for a particular tree in an increasingly large forest.
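As a sketch of that two-step pipeline, here is how it might look with ORB local descriptors and brute-force matching in OpenCV; the image files are placeholders, and a real system would match against many database entries rather than one:

```python
# Sketch of the two-step pipeline: extract local descriptors, then
# match them against a known image. Image files are placeholders.
import cv2

orb = cv2.ORB_create()

# Step 1: turn small patches of raw pixels into local descriptors.
query = cv2.imread("query_face.jpg", cv2.IMREAD_GRAYSCALE)
_, query_desc = orb.detectAndCompute(query, None)

known = cv2.imread("known_face.jpg", cv2.IMREAD_GRAYSCALE)
_, known_desc = orb.detectAndCompute(known, None)

# Step 2: match descriptors; more low-distance matches = more similar.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(query_desc, known_desc)
strong = [m for m in matches if m.distance < 40]  # illustrative cutoff
print(f"{len(strong)} strong matches")
```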

On top of that, the hardest task is often simply determining whether the object you’re looking at is in fact in your database. An algorithm can always find the closest match, but it can be really hard to tell the difference between a known person with a bad hair day and an unknown person. In “Transcendence,” the characters are impressed that PINN recognizes someone it has only seen in a photo. But this is doable even with current technology. They would have been far more impressed if PINN had realized that it had never seen the person, even in a photo.
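In practice, that open-set decision often comes down to a threshold on match quality: if even the best match is weak, report “unknown.” A toy sketch, where match_score is a hypothetical stand-in for the descriptor matching above and the threshold is made up:

```python
# Open-set recognition: trust the best match only if it is good enough.
# match_score() is a hypothetical stand-in for the descriptor matching
# above; the min_score threshold is made up for illustration.
def identify(query_descriptors, database, min_score=25):
    best_name, best_score = None, 0
    for name, person_descriptors in database.items():
        score = match_score(query_descriptors, person_descriptors)
        if score > best_score:
            best_name, best_score = name, score
    # The hard part: rejecting a weak "closest match" as unknown.
    return best_name if best_score >= min_score else "unknown"
```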

As for emotion recognition, a quick look through a camera at Dr. Caster’s wife Evelyn (Rebecca Hall) reveals to his AI that she is feeling upset and, in one scene, perhaps lying. Modern computer vision does not yet come close to human capability in emotion detection, but it’s rapidly improving.

Today’s algorithms can analyze face and body movements and look for patterns in datasets of known emotions. Sometimes these algorithms use details that make sense to humans, such as a raised eyebrow. Other times, when given a large dataset of face images, they’ll learn models that only a computer can understand. In the film, there’s a moment where we see a screen with a set of bar graphs displaying the values for parameters of the AI’s emotion-estimation model. It’s amusingly similar to MATLAB debugging plots made by real researchers, where each little sub-plot displays the value of some model parameter.
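Those debugging plots are easy to recreate. Here is a sketch in Python with matplotlib rather than MATLAB, with the parameter names and values made up for illustration:

```python
# Recreating the film's debugging-plot aesthetic: one small bar chart
# per model parameter. Names and values are made up for illustration.
import matplotlib.pyplot as plt

params = {
    "brow_raise":   [0.1, 0.7, 0.2],
    "mouth_corner": [0.4, 0.3, 0.9],
    "gaze_shift":   [0.8, 0.1, 0.5],
    "blink_rate":   [0.2, 0.6, 0.4],
}

fig, axes = plt.subplots(2, 2, figsize=(6, 4))
for ax, (name, values) in zip(axes.flat, params.items()):
    ax.bar(range(len(values)), values)
    ax.set_title(name, fontsize=8)
fig.tight_layout()
plt.show()
```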

Lastly, although we don’t have fully autonomous robot surgeons, there have been a number of impressive developments in computer-assisted surgery. In general, computers are bad at making complex judgments. But if you point them in roughly the right direction, they can be much more accurate than people.

For example, hip-replacement surgery requires very precise cutting. In an advanced variant, a surgeon holds a laser scalpel while a computer tracks the exact positions of the bone and the scalpel. If either the surgeon or the computer thinks the scalpel is out of place, the cutting beam is turned off. This way, the surgeon determines the rough area to cut and the computer ensures that the cut has micrometer precision. In the film, the AI did all the work. That may be on the horizon, but we’re certainly not there yet.
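The interlock logic behind such a system is conceptually simple, even if the tracking is anything but. A toy sketch, with a tolerance and function names that are ours rather than any real device’s:

```python
import math

# Toy safety interlock: fire the laser only while both the surgeon and
# the computer's tracker agree the scalpel is where the plan says it
# should be. The tolerance and names are illustrative, not a real spec.
TOLERANCE_MM = 0.05

def beam_enabled(scalpel_xyz, planned_cut_xyz, surgeon_says_stop):
    if surgeon_says_stop:        # the surgeon can always veto the beam
        return False
    error_mm = math.dist(scalpel_xyz, planned_cut_xyz)  # tracker estimate
    return error_mm <= TOLERANCE_MM
```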

Intelligent systems

One of the core concepts of the film is what Dr. Caster refers to as strong AI — or “general intelligent action.”  Without question, there have been amazing leaps in “intelligence” technology in recent decades. Early evidence of this came in the triumph of a computer — Deep Blue, developed by IBM — over world chess champion Garry Kasparov, in 1997.

Today, we interact with dozens of intelligent systems every day. Navigation systems can route you to your destination to avoid morning traffic. You can speak to your smartphone in English and have it set a reminder or send an email. You almost never have to look past the first page of search results to find that single website you’re looking for out of a trillion.

A recent example of this kind of smart is IBM’s Watson, a computer that in 2011 outperformed the top human players on the quiz show Jeopardy. Watson is much more powerful and general than an AI for chess — but it also has its downsides.

Many were shocked when Watson answered a clue in the “U.S. cities” category with “Toronto.” The incident shows that Watson doesn’t run on a large set of “if-then-else” rules. Instead, it draws on 200 million pages of data, much of it English-language text. This allows Watson to play Jeopardy like a human and even think outside the box, sometimes so far outside that it reveals its lack of human common sense. Moreover, Watson was designed only to deal with Jeopardy-style prompts. It can’t hold a conversation or play pub trivia. Even though it seems like Watson can think, it doesn’t think the way we do.
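One detail worth adding: Watson attached a confidence estimate to every candidate answer and would buzz in only when its best one cleared a threshold (in Final Jeopardy, where “Toronto” slipped out, it had to answer regardless). A cartoon of that gating, with made-up candidates and scores:

```python
# Cartoon of Watson-style confidence gating: rank candidate answers and
# buzz in only when the best one clears a threshold. Candidates, scores,
# and the threshold are made up for illustration.
def decide_to_buzz(candidates, threshold=0.5):
    best, confidence = max(candidates.items(), key=lambda kv: kv[1])
    if confidence >= threshold:
        return best
    return None  # too unsure: stay silent rather than guess

print(decide_to_buzz({"Chicago": 0.31, "Toronto": 0.14}))  # prints None
```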

These applied AI systems are the kind of research the film’s Dr. Waters (Paul Bettany) is talking about when he says he wants to use existing technology to cure cancer and save lives. Dr. Caster, on the other hand, wants to create a general-purpose strong AI, as opposed to applied or “weak” AI. The weak-vs.-strong-AI debate became popular at universities in the 1980s, when philosophy professor John Searle critiqued the Turing test (named after British mathematician Alan Turing), which measures a machine’s ability to exhibit human-like intelligent behavior.

Turing, whom many consider the father of computer science, claimed that if you could interact with a machine without ever being able to tell it was a machine, then it was intelligent. Who cares how it’s implemented if it’s good enough to trick people? Searle, on the other hand, called this weak AI, and claimed that a strong AI would be a system that could actually think and reason, not just perform a complicated set of tricks. This was a recurring theme in the movie. When Dr. Tagger (Morgan Freeman) asks the AI if it can prove that it is self-aware, he’s clearly taken aback by its response: “That’s a difficult question. Can you?”

Currently, many researchers are working on beating the Turing test, and have developed text-based systems that have come close to making the grade. For example, in 2011, the web application Cleverbot was judged by humans to give responses that seemed 59.3 percent human, while the actual humans in the test did only slightly better — 63.3 percent. But still, while Cleverbot might be impressive in mundane conversation, it’s fairly easy to trip it up.

While there is clear academic interest in creating a strong AI, funding for applied AI is much more prevalent. Researchers are actively working on general-purpose AI, but the AI community as a whole is skeptical about whether, and if so when, such a technology will be developed. One of the core problems, which is in fact mentioned in the movie, is that we don’t understand consciousness well enough to implement it. The film’s solution is to use an existing consciousness, namely Dr. Caster’s, which is scanned and uploaded to the computer. From there, of course, it’s off to the races.

Brain imaging

Speaking of consciousness, while we are very far from being able to capture its essence, researchers have been able to tease out various components of thought in subjects’ brains using functional magnetic resonance imaging, or fMRI. It is similar to the MRI you may have seen at a hospital, which produces a series of static images, but fMRI also shows which regions of the brain are most active at particular times.

Combining these readings with statistical analysis, it’s possible to determine, to some extent, what a subject is thinking about while “in” the machine. For example, researchers have been able to determine with some reliability when a person is lying by looking at fMRI brain scans. If you tell the truth, only the portion of the brain associated with memory activates. If you lie, however, you typically remember the truth, suppress it, and then create a falsehood, and the difference in brain activity is noticeable. fMRI studies have even demonstrated that the brain patterns of Apple enthusiasts looking at Apple products are similar to those of religious people seeing religious symbols.
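Under the hood, that “statistical analysis” is frequently plain supervised classification over voxel activations. A minimal sketch with scikit-learn, using synthetic data in place of real preprocessed scans (on random data, cross-validated accuracy should hover at chance, which is the point of checking):

```python
# fMRI "mind reading" as supervised classification: each row holds one
# scan's voxel activations, each label what the subject was doing
# (truth vs. lie). The data here is synthetic noise, so cross-validated
# accuracy should hover around chance (0.5).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_scans, n_voxels = 200, 500
X = rng.normal(size=(n_scans, n_voxels))  # stand-in voxel activations
y = rng.integers(0, 2, size=n_scans)      # 0 = truth, 1 = lie (synthetic)

clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, X, y, cv=5).mean())
```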

There are many promising research directions using fMRI, but we are nowhere near full brain mapping. When Dr. Caster’s brain is scanned in the film, he is asked to read words not only to record his voice, but also to help map brain activity, which makes sense since fMRI can only see parts of the brain that are active. By reading and thinking about each word, you could, in theory, build a map of a person’s thoughts.

Similar techniques are actually used during open brain surgery to avoid damaging critical sections of the brain, such as the speech center. Although research on emulating brains in a computer has been done, we are decades away from being able to transfer these sorts of fMRI scans into a machine to become part of an AI.

In the end, “Transcendence” has some serious roots in today’s fast-expanding AI world. We’ll leave an assessment of the high drama to you, but we can say this: this fiction based in fact certainly got our professional attention. What would Watson say? We’ll see after this opening weekend, but perhaps something like, “I’ll take AI for quite a few million, Alex.”

Editor’s note: Peter Barnum is a roboticist specializing in perception and sensor processing, with a PhD in Robotics from Carnegie Mellon University. He is currently an Embedded Vision Engineer at Anki, Inc. Brad Neuman is a roboticist and AI Engineer at Anki, Inc., specializing in path planning. Both Peter and Brad are subject-matter-expert advisors to Signal Media Project, a nonprofit organization that promotes and facilitates the accurate portrayal of science, technology and history in popular media.