Artificial Intelligence Needs a (Moral) System
GCN Press Book Review
Power and Prediction: The Disruptive Economics of Artificial Intelligence
By Ajay Agrawal, Joshua Gans, and Avi Goldfarb
Harvard Business Review Press, 2022
Artificial intelligence, in all its forms, can rapidly process large amounts of data and then make predictions. This means AI can sometimes improve human decisions, or at least reduce the time it takes to make them.
If AI predictions are generally accurate, then companies can make decisions in new ways. In fact, the authors of Power and Prediction argue that AI can “decouple” prediction and judgment; that is, AI can shift the prediction aspects of decision-making away from humans. People, if they trust the AI, can make a judgment based on the prediction it provides.
We already do this every day. The weather apps on our phones predict that there will be an 80 percent chance of rain at 3 p.m., just when we plan to walk from work to the subway station. We trust that prediction and therefore decide to carry an umbrella to work. Another example: An AI program uses millions of radiology exams to predict that a new medical image likely shows a benign tumor. The radiologist must then decide whether to trust the AI prediction (don’t order more exams) or distrust the AI (order more exams); that is, to make a judgment call.
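The umbrella decision can be written out in a few lines. The sketch below is ours, not the authors’, and the cost figures are invented for illustration; the point is simply that the machine contributes a probability while the human contributes the payoffs and the final call.

```python
# A minimal sketch (ours, not the book's) of "decoupled" decision-making:
# the machine supplies a prediction; a human supplies the judgment.

def machine_prediction() -> float:
    """Stand-in for an AI model, e.g., a weather app's rain forecast."""
    return 0.80  # predicted probability of rain at 3 p.m.

def human_judgment(p_rain: float) -> str:
    """Judgment: weigh the payoffs. These costs reflect human values
    and priorities, which no prediction machine can supply."""
    cost_of_getting_wet = 10.0       # invented figure for illustration
    cost_of_carrying_umbrella = 2.0  # invented figure for illustration
    expected_cost_without_umbrella = p_rain * cost_of_getting_wet
    if expected_cost_without_umbrella > cost_of_carrying_umbrella:
        return "carry the umbrella"
    return "leave the umbrella at home"

print(human_judgment(machine_prediction()))  # -> "carry the umbrella"
```

Swap in different costs and the same 80 percent forecast can yield a different decision; that is the sense in which prediction and judgment come apart.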
As these examples show, AI has the potential to disrupt the roles of workers and existing processes within organizations. It can shift power away from those who traditionally make predictions (e.g., meteorologists).
“In many cases, prediction will so change how decisions are made that the entire system of decision-making and its processes in organizations will need to adjust,” the authors write (p. 16).
Some proponents of AI believe these technologies will greatly improve productivity and profitability. So far, however, companies that use AI for specific applications have seen few benefits. A 2020 study by MIT Sloan Management Review found that AI produced financial gains for only 11 percent of organizations. And the US Census Bureau recently found that more than 300,000 companies using AI reported only modest increases in productivity.
These lackluster outcomes, according to the book’s authors, all economists at the University of Toronto, stem from the fact that AI is still used primarily for “point solutions”—improving an existing procedure—or for “application solutions”—adopting a new procedure within an existing system. The real productivity gains and profit increases from AI, they argue, will come only when (and if) companies change their entire systems.
“[A system mindset] stands in contrast to a task mindset in that it sees the bigger potential of AI and recognizes that to generate real value, systems of decisions, including both machine prediction and humans, will need to be reconstituted and built,” they write (p. 88).
To illustrate what they mean by new systems, the authors turn to history. When the car was invented, it was faster than a horse, but it had no major impact until people rebuilt the entire transportation system around it—roads, gas stations, traffic laws, and so on. Likewise, the productivity gains offered by electricity were not realized until nations built massive electric grids and extended wires into homes and factories, which required utility companies, billing departments, and maintenance teams—a huge, complex system.
Systems are complex, like the human body. One change affects everything else, for better or worse. For this reason, systems change is extremely expensive and slow. Reconfiguration takes extensive time, coordination, and long-term planning. Our world, say the authors, still lacks the systems needed for AI to improve productivity at scale.
This is one reason why reading Power and Prediction is so important. It offers business leaders and young professionals an initial framework for thinking about what types of systems might be needed to fully benefit from AI technology.
Humans Are Responsible
Power and Prediction also unveils some underlying limitations of AI tools. The authors cut through much of the hype about AI and point out some of its dangers. First, they argue that AI depends entirely on human notions of morals and ethics. Second, they say that AI will produce faulty predictions if the data it uses is flawed. Humans are responsible for both.
“Robots and machines, in general, do not decide anything and, hence, do not have power,” they write. “A human or group of humans are making the calls underlying the decisions. To be sure, it is possible to automate things and make it look like a machine is doing the dirty work. But that is an illusion…. Accepting that machines don’t decide is critical if we are going to properly assess the disruptive potential of AI…. Machines follow instructions that must come from somewhere” (pp. 120-121).
Consider first that AI programs require troves of existing data. What happens if an AI program receives inaccurate or incomplete information? Bad data increases the probability that the AI will generate faulty predictions. This is often called the “garbage in, garbage out” problem. Humans must therefore verify whether the AI’s predictions are accurate, which cuts deeply into the programs’ promised efficiencies.
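A toy experiment makes the point concrete. The sketch below is ours, not the book’s, and the data is synthetic: it trains the same simple model twice, once on clean labels and once on deliberately corrupted ones, and the corrupted training data yields worse predictions on the same test set.

```python
# "Garbage in, garbage out" in miniature (our sketch, synthetic data):
# the same model, trained on corrupted labels, predicts worse.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Clean data: train and score a simple predictor.
clean_acc = LogisticRegression().fit(X_train, y_train).score(X_test, y_test)

# "Garbage in": flip 30 percent of the training labels to simulate bad data.
rng = np.random.default_rng(0)
flipped = rng.random(len(y_train)) < 0.30
y_bad = np.where(flipped, 1 - y_train, y_train)
bad_acc = LogisticRegression().fit(X_train, y_bad).score(X_test, y_test)

print(f"accuracy trained on clean labels:     {clean_acc:.2f}")
print(f"accuracy trained on corrupted labels: {bad_acc:.2f}")
```

Catching this kind of degradation in a real deployment requires exactly the human verification described above, which is where the promised efficiencies erode.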
Second, humans design the instructions (the algorithms) that AI programs use to process the data. This means that every AI system is liable to replicate human biases and, possibly, unethical principles. For example, what happens if an AI system is instructed to automatically exclude certain ethnic groups or ages from jobs, insurance policies, or bank loans?
The authors provide a helpful, balanced discussion of these hazards, showing that humans can work harder to ensure that AI uses “clean” and accurate data, and that people can strive to remove biases from algorithms. However, as the authors point out, any biased, unethical algorithm will only be corrected if the humans who control it want it to change. What happens if the demand for high profits conflicts with ethics? Will those who design AI systems do what is right even if it leads to financial loss?
“Fixing such discrimination is not easy,” the authors write. “First, it requires humans who want to fix the bias. If the humans who manage the AI want to deploy an AI that discriminates, they will have little difficulty doing so. And because the AI is software, its discrimination can happen at scale” (p. 231).
The authors (without saying so) are grappling with theological questions about human nature. A scriptural framework tells us that people are made in the image of God, but also fallen. We can therefore expect to see both good uses of AI (e.g., finding cures for diseases) and highly damaging uses (e.g., social media companies designing algorithms that cause division and spread conspiracy theories). Should we expect a different outcome from AI companies?
We give high marks to this book, in part because the authors, as economists, cut through the AI hype.
“One of our skills as economists is to take something exciting and impenetrable and deconstruct it into something boring and understandable,” they write. “While that doesn’t make us great party guests, it does allow us to sometimes see things that others miss” (p. 198).
Power and Prediction reminds us that AI does not have a mind of its own. Humans are in control, and therefore we are morally responsible for its impact on the world.


