On Monday, February 12, 2024, Neil Thompson from the MIT Initiative on the Digital Economy visited the Lab for his seminar: “Beyond AI Exposure: Which Tasks are Cost-Effective to Automate with Computer Vision?”
The faster AI automation spreads through the economy, the more profound its potential impacts, both positive (improved productivity) and negative (worker displacement). The previous literature on “AI Exposure” cannot predict this pace of automation, since it attempts to measure an overall potential for AI to affect an area, not the technical feasibility and economic attractiveness of building such systems. In this article, we present a new type of AI task automation model that is end-to-end, estimating: the level of technical performance needed to do a task, the characteristics of an AI system capable of that performance, and the economic choice of whether to build and deploy such a system. The result is a first estimate of which tasks are technically feasible and economically attractive to automate – and which are not. We focus on computer vision, where cost modeling is more developed. We find that at today’s costs, U.S. businesses would choose not to automate most vision tasks that have “AI Exposure,” and that only 23% of worker wages being paid for vision tasks would be attractive to automate. This slower roll-out of AI can be accelerated if costs fall rapidly or if it is deployed via AI-as-a-service platforms that have greater scale than individual firms, both of which we quantify. Overall, our findings suggest that AI job displacement will be substantial, but also gradual – and therefore there is room for policy and retraining to mitigate unemployment impacts.
Neil Thompson is the director of the FutureTech research project, where his group studies the economic and technical foundations of progress in computing, and is cross-appointed at MIT’s Computer Science and AI Lab and MIT’s Initiative on the Digital Economy.
Previously, Neil was an assistant professor of Innovation and Strategy at the MIT Sloan School of Management, where he co-directed the Experimental Innovation Lab (X-Lab), and a visiting professor at the Laboratory for Innovation Science at Harvard. He has advised businesses and government on the future of Moore’s Law, has been on National Academies panels on transformational technologies and scientific reliability, and is part of the Council on Competitiveness’ National Commission on Innovation & Competitiveness Frontiers.
He has a PhD in Business and Public Policy from Berkeley, where he also earned master’s degrees in Computer Science and Statistics. He also has a master’s in Economics from the London School of Economics and undergraduate degrees in Physics and International Development. Prior to academia, he worked at organizations such as Lawrence Livermore National Laboratory, Bain & Company, the United Nations, the World Bank, and the Canadian Parliament.