Stanford University


The Turing Trap: Measuring the Impacts of Emerging Technologies



The benefits of human-like artificial intelligence (HLAI) include soaring productivity, increased leisure, and perhaps most profoundly, a better understanding of our own minds. But not all types of AI are human-like (in fact, many of the most powerful systems are very different from humans), and an excessive focus on developing and deploying HLAI can lead us into a trap. As machines become better substitutes for human labor, workers lose economic and political bargaining power and become increasingly dependent on those who control the technology. In contrast, when AI is focused on augmenting humans rather than mimicking them, humans retain the power to insist on a share of the value created. What's more, augmentation creates new capabilities and new products and services, ultimately generating far more value than merely human-like AI. While both types of AI can be enormously beneficial, there are currently excess incentives for automation rather than augmentation among technologists, business executives, and policymakers. In this research project, we are studying the interplay between augmenting and automating technologies, the incentive structures that favor one over the other, and the current state of each, with a focus on their impacts on productivity and the future of work.

Furthermore, by revisiting the incomplete contracts theory pioneered by Oliver Hart, John Moore, and Sanford Grossman, as well as related work on incentives and firm boundaries by Bengt Holmström and Paul Milgrom, we are updating incomplete contracts theory for today's technologies, in particular AI, and connecting it to the Turing Trap. This will be a timely and worthwhile contribution to the field of economics and a meaningful step toward better understanding the economics of augmentation and automation.
