Computer scientists and engineers are doing fascinating work in AI and robotics. We plowed through a couple of years of articles in the MIT Technology Review to get a picture of what’s happening with these technologies, and we found that all is not well.
Here’s a tongue-in-cheek, humorous look at the state of these new technologies.
Deep Blue beat Kasparov. So what? Reflecting on the 1997 victory of IBM’s computer Deep Blue over chess grandmaster Garry Kasparov, writer Clive Thompson points out that “IBM had spent years and millions of dollars developing a computer to play chess. But it couldn’t do anything else. . . . They didn’t really discover any principles of intelligence, because the real world doesn’t resemble chess” (March/April 2022, pp. 75–76).
What have we created? In response to the lackluster outcome of the Deep Blue venture, computer scientists have been developing “deep learning,” a system of AI that aims to mimic the human brain’s neural networks. However, Daniela Rus, head of MIT’s Computer Science and Artificial Intelligence Laboratory, calls neural networks “massive black boxes.” Thompson writes that “the mechanics [of an artificial neural network] are not easily understood even by its creator. It’s not clear how it comes to its conclusions—or how it will fail” (March/April 2022, p. 77).
Did God have this problem? “What AI really needs in order to move forward,” writes Thompson, “is the ability to know facts about the world—and to reason about them. . . . The problem is, no one knows how to build neural nets that can reason or use common sense” (March/April 2022, p. 78).
A flawed assumption? Reporting the views of Geoff Hinton, a computer science professor at the University of Toronto, Thompson writes that “neural networks should, in the long run, be perfectly capable of reasoning. After all, humans do it . . .” (March/April 2022, p. 78).
Are we destined to be philosophers? “We’ll live off the production of robots, free to be the next Aristotle or Plato or Newton,” says venture capitalist Steve Jurvetson, who believes that less than 10 percent of people on earth will be doing paid work in 500 years (118:6, p. 66).
Apologize for being human. “Fetch Robotics is going after one promising area: warehouses and e-commerce fulfillment centers, which are plagued with high turnover, injuries, employee theft, and a chronic shortage of workers, who, of course, have a biological need to sleep” (Robert Hof, 118:5, p. 44).
Blame the dog for your failures. “[Saying] ‘my robot did it’ is not an excuse. We have to take responsibility for our AI” (Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence; 119:4, p. 26).
Those damn curled wires. “Most industrial robots have to be extensively programmed, and they will perform a job properly only if everything is positioned just so. . . . A task that is proving especially hard to automate: attaching a flexible wire to a circuit board. ‘It’s always curled differently,’ [CIG CEO Gerald Wong] says with annoyance” (119:3, p. 46).
Robots are stupid. “If you want to get a robot to learn to walk, or an autonomous vehicle to learn to drive, you can’t present it with a data set of a million examples of it falling over and breaking or having accidents—that just doesn’t work” (Will Knight, 119:1, p. 58).
Robots like bananas. “The goal [of Tellex’s ‘Million Object Challenge’] is for research robots around the world to learn how to spot and handle simple items from bowls to bananas, upload their data to the cloud, and allow other robots to analyze and use the information” (Amanda Schaffer, 119:2, p. 50).
That comforting robotic nurse. “Many of the jobs humans would like robots to perform, such as packing items in warehouses, assisting bedridden patients, or aiding soldiers on the front lines, aren’t yet possible because robots still don’t recognize and easily handle common objects” (Amanda Schaffer, 119:2, p. 48).