Generative artificial intelligence tools like ChatGPT and Stable Diffusion have soared in popularity over the past several months—and so have the challenges surrounding them, including issues of authorship and credit, concerns about misinformation and the role AI could have in the media ecosystem, and ultimately the impact the technology will have on skills and jobs.
Because no matter how magical, mysterious, or automatic generative AI may seem, the content didn’t come from nowhere, and it didn’t come from the AI itself. Text-based tools are trained on the writing of human authors while image-generation software sources photos, paintings, and other illustrations made by people. At the same time, humans are ultimately responsible for how these AI tools are used and to what ends they’re applied.
In other words, the science of generative AI is similar to the science of human creativity. Now there’s a team of interdisciplinary researchers working to study how the two sciences interact. The collaboration recently spawned a new paper published in Science titled “Art and the Science of Generative AI.” The researchers, who dub themselves “The Investigators of Human Creativity,” have also produced an extended whitepaper and website (that shares the same title as the Science paper) devoted to explorations of the relationship between art and AI.
“With such a big and complex topic with so many promises and perils, like AI has, everyone has their own perspective that’s founded in their own discipline—but how can we build bridges and try to understand this together?” said Ziv Epstein, the lead author of the papers and a postdoctoral scholar at the Stanford Digital Economy Lab. “In presenting a snapshot of where we are right now, we want to toe the line between optimism and pessimism, and we want to build coalitions of people who are thinking about this so that we can pool our resources and create the space for really valuable conversations.”
The project spans several disciplines and features fourteen experts, including legal scholars, computer scientists, designers, and other leaders studying how AI is impacting their various fields. Epstein is one of several Stanford Digital Economy Lab researchers involved in the project.
Morgan Frank, a Stanford Digital Fellow, led the section on the labor economics of the creative industry and the impact new AI tools will have on the field. “Our main goal was to touch on these different economic, cultural, and legal aspects of generative AI, and highlight where we think we’re going,” Frank said. “We were thinking about policymakers, the media, and the public, and we were trying to highlight what is known, and what is still yet to be known.”
The idea for the initiative arose last year at the International Conference on Computational Creativity in Bolzano, Italy. Aaron Hertzmann, who ultimately became one of the project’s contributors, spoke about the challenge of anthropomorphizing AI, making the case that the AI tools themselves produce no art—they can only recombine and reinterpret the training data that humans supplied. In conversations after that presentation, Epstein and others discussed how Hertzmann’s concern about misinterpretation and agency spoke to a larger issue: since the field of generative AI was developing so fast, how could experts and the public alike wrap their heads around what’s happening? These tools have enormous potential, but what are the immediate questions about how they are being created and deployed?
In the months that followed, those conversations coalesced along a few main themes: aesthetics and culture, legality and copyright, labor economics, and the media ecosystem and misinformation. The Art and the Science of Generative AI project presents a survey of those key themes, allowing for an introduction to these complex issues in hopes of creating a shared understanding going forward. Since the AI landscape touches so many industries and areas of expertise, it requires a common language to enable computer scientists to talk to artists, and for labor economists to talk to copyright lawyers. That’s what these papers seek to provide.
“This whole project was so satisfying to me because there was a consensus and a synergy to the way all of these experts were approaching the subject,” said Sandy Pentland, a visiting scholar at Stanford and one of the project’s contributors. “If you can get labor economists, AI experts, and artists to all look at a complex issue like AI in the same way, you’re going to have a pretty good picture of what is possible to know at this point.”
This collaborative approach provides more than a shared understanding of where the world of generative AI sits currently—it also creates clarity in thinking about what to study in the future. For each of the four key themes—culture, misinformation, law, and the job market—the Art and the Science of Generative AI project offers a snapshot of current concerns and trends, as well as a list of potential questions for researchers to pursue.
Theme #1: Aesthetics and Culture
Consider the visual quality of AI art: one concern is that future AI models will be trained on the outputs of earlier ones, creating a feedback loop in which the same style of content is produced again and again.
Recommendation algorithms, such as those that select what you see on social media feeds, add more nuance to that concern. If online creators all generate the same type of information or imagery to please an internet algorithm, that algorithm is in turn likely to serve us the same output repeatedly.
More worryingly, if an increasing amount of content is drawn from biased AI training data, then the technology may capture, reflect, and even amplify cultural norms and biases, potentially reducing diversity in art and design. On the other hand, using an AI tool presents a lower barrier to entry than creating similar images manually, and this may increase diversity by allowing more people to participate in creative processes. A diverse set of outputs from AI tools could also keep the art from becoming boring and potentially do a better job of reflecting human diversity.
These and other concerns suggest that future research is needed to measure diversity in AI output in order to curb homogeneity.
Theme #2: Media and Misinformation
The issue of recommendation also shapes the relationship between generative AI and the broader media landscape, including misinformation. Generative tools require far fewer resources than conventional media production and lack many of the guardrails that consumers are used to. As production time and costs decrease, concerns about AI-generated misinformation, synthetic media, fraud, and nonconsensual imagery have risen. Deepfakes—videos manipulated or synthesized using falsified imagery—are a prime example of this phenomenon.
Many relevant research questions naturally follow: How should platforms intervene to stop the spread of misinformation? How can the provenance of a piece of content be traced to ensure its reliability? And as the quality of AI-generated imagery improves, what can be done to detect that content in the first place so that audiences can understand the kind of media they’re consuming?
Theme #3: Legality and Copyright
The use of training data poses perhaps the most prominent legal issue of generative AI and art, since existing copyright laws are insufficient for dealing with AI-generated content. For example, no legislation currently on the books outlines compensation for the training data that AI tools consume. More research and legislation will be needed to establish whether AI copies what it’s taught or creates something new. These guidelines would also determine how to protect an artist’s style and work, and how to compensate them appropriately.
The researchers also present a possible alternative: artists could opt out of having their work used as training data for AI.
“The streaming era has also changed the entire music industry, and that’s what we’re worried about with AI,” Pentland said. “You get a few megastars and then lots of people making pennies. So that’s a good thing for some people, especially the streaming companies, but it’s unequal in a way that is not very positive. That’s what this article was about—there’s a lot that we’re excited about, but there are also threats. We want to be clear about what’s good and what’s bad.”
Theme #4: The Future of Work
Finally, and perhaps most practically, generative AI is set to transform work and employment, especially in creative roles. For a while now, economists have assumed that automation would not affect creative work the same way it has disrupted fields like manufacturing.
But generative AI has upended this assumption. When it comes to creative production, AI is remarkably efficient, which may lead to the same output with fewer workers. This could threaten certain creative occupations, such as writers, composers, and graphic designers—or at the very least, restructure how that work is done.
“We know that technology rarely automates whole occupations, and it’s much more common that technology automates specific tasks within a job,” Frank said. “People might be tempted to look at tools like Midjourney and assume that automation will make graphic designers a thing of the past, and I don’t think that’s likely true. Workers usually have to perform many tasks within their job, and it’s very rare to see technology replace all of them at once.”
‘The real work begins’
The four themes addressed by the project are astoundingly complex, especially given how fast generative AI tools are being developed and released—and the authors know their work can’t answer all the questions that AI raises. However, by providing a common language and a shared understanding of the current moment, Art and the Science of Generative AI can lead the way for those questions to be answered.
“This is very much the first step,” Epstein said. “There’s nothing empiric here, there’s no engineering, we’re just trying to build a common foundation for future science and research to sit on top of. And with this paper out there, now the real work begins.”
Invented a century ago, GDP was never meant to fill as many roles as it does today. A new measure of well-being, GDP-B, promises a much clearer picture of where value emerges in our changing economy.
There are several ways we can describe the value of Google Search: by the number of daily queries it fields, the vast quantity of content it contains, the myriad ways it makes life easier. Alternatively, its value is reflected in the fact that, according to a new study, most people would forgo meeting their friends in person for a month before they would give up the service.
And it costs nothing to use.
“There are lots of amazing goods with zero price these days: apps on our phones, Wikipedia, ChatGPT,” says Erik Brynjolfsson, a professor of economics at Stanford and director of the Stanford Digital Economy Lab, a center within the Stanford Institute for Human-Centered AI (HAI). And yet all of this free stuff doesn’t register in the pulse of our economy; it’s not captured by GDP. “So we’re consuming a lot of free digital things and not doing a good job measuring it.”
A new project spearheaded by Brynjolfsson and Carnegie Mellon assistant professor Avinash Collis is crafting a tool for measuring the size and value of our increasingly digital economy. Called GDP-B — the “B” stands for “benefit” — the basic hope is to understand how much value something like Google creates. Early estimates from the work suggest that trillions of dollars of value from digital goods are currently missing in discussions of the U.S. economy, to say nothing of economies around the world.
Measuring well-being, not just production
Nobel-winning economist Paul Samuelson called GDP “among the great inventions of the twentieth century.” It offers a pithy measure of an economy’s production. But it has limits. As noted above, it overlooks the economic value of most free digital goods. It also conflates the production of beneficial goods, like new wind turbines and schools, with the production that’s connected to houses destroyed by wildfire or people sentenced to prison.
“GDP was never meant to measure well-being,” says Collis. “And yet, because we lack other numbers, economists, policymakers — almost everyone — have defaulted to GDP as a proxy for our general welfare.”
GDP-B is designed to address these shortcomings. A new paper by Brynjolfsson, Collis, Stanford postdoctoral scholar Jae Joon Lee, and several collaborators from Meta offers a proof-of-concept by outlining how GDP-B can be measured and interpreted.
The researchers ran an online survey of nearly 40,000 people across 13 countries. Participants were first asked to rank a set of online goods from those they value most to those they value least. The full set of goods included Facebook, Twitter, Instagram, WhatsApp, Snapchat, TikTok, Google Search, Google Maps, YouTube, and Amazon Shopping. This provided information on the comparative value of each.
They next asked participants if they would be willing to stop using Facebook for a month in exchange for three randomly chosen dollar amounts between $5 and $100. “By seeing how many people accept the offer at each price point, we get a rough sense of its value for a representative group of people,” Brynjolfsson says. “If someone won’t give it up for five dollars but will for ten, then we know that the value of that good — its benefit to the consumer — is somewhere between those two.”
Using Facebook as a benchmark, the researchers were able to extrapolate the relative value of the other nine goods, captured in the first phase of the survey.
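The bracketing logic Brynjolfsson describes can be sketched in a few lines. The code below is an illustrative toy, not the study's actual estimation procedure, and all names and numbers are ours: it tallies accept/reject responses at each randomly assigned offer price, then brackets the median monthly valuation between adjacent price points (assuming acceptance rises with the offered amount).

```python
from collections import defaultdict

def acceptance_curve(responses):
    """responses: iterable of (offer_dollars, accepted) pairs.
    Returns {price: share of respondents who accepted that offer}."""
    tally = defaultdict(lambda: [0, 0])  # price -> [accepts, total]
    for price, accepted in responses:
        tally[price][0] += int(accepted)
        tally[price][1] += 1
    return {p: a / n for p, (a, n) in sorted(tally.items())}

def median_value_interval(curve):
    """Bracket the median valuation: it lies above the highest price
    that most respondents reject and below the lowest price that most
    accept (valid when acceptance increases with the offer)."""
    below = [p for p, rate in curve.items() if rate < 0.5]
    at_or_above = [p for p, rate in curve.items() if rate >= 0.5]
    return (max(below) if below else None,
            min(at_or_above) if at_or_above else None)

# Toy data: few people give up the service for $5, most will for $10,
# so the median value lies between those two price points.
responses = [(5, False)] * 8 + [(5, True)] * 2 + \
            [(10, True)] * 6 + [(10, False)] * 4
curve = acceptance_curve(responses)
lo, hi = median_value_interval(curve)
```

With the toy responses above, 20 percent accept at $5 and 60 percent accept at $10, so the median valuation is bracketed between $5 and $10—exactly the "somewhere between those two" reasoning in the quote.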
Uncovering trillions in missing value
Most fundamentally, GDP-B reveals that these 10 goods alone generate more than $2.5 trillion in annual consumer welfare across the 13 countries. This is roughly equivalent to 6 percent of their combined GDP.
Of particular interest, says Collis, was how much of this value flows to consumers. In the case of, say, breakfast cereal, a person may pay $4 for a box but be willing to pay $8; the company gets half the value and the consumer the other half. When it comes to most digital goods, this ratio tips much more favorably toward the consumer.
“Google Search may make 10 or 20 dollars a year from me, but I would say I get thousands of dollars of value from the product,” Collis says. “I can’t imagine living without it.”
“We need a metrics revolution.”
A clearer articulation of an economy’s topography is one of GDP-B’s great prospects. “In the U.S., Congress and policymakers have to make decisions about how to allocate budget dollars, how to spread R&D money or offer grants, and these decisions are often rooted in ideas about how the economy creates value,” Brynjolfsson says. “If we have the wrong measures, then we come to the wrong conclusions.”
Over the next few years, the researchers hope to expand the number of goods they’re looking at — digital and non-digital — from 10 to several thousand. This effort, Brynjolfsson suggests, could create a representative sample of the U.S. economy, giving a far more granular view into how much value different sectors create, and in what way.
GDP-B also has the potential to measure two types of goods that elude the reach of traditional GDP. First, non-market items, like clean water, education, or infrastructure. “Having a nice park in a big city creates a lot of value,” Collis says. “But how much?” With the GDP-B survey methodology, governments could start attaching better numbers to historically vague domains.
Second, GDP-B can be used to define the negative value of something, like air pollution: How much would people pay to not have polluted air?
“There is a massive need for better data about how the economy is changing, how people are faring in it, and what kind of policy decisions will lead to better outcomes for more people,” says Christie Ko, executive director of the Stanford Digital Economy Lab. “The tools we have now give us poorer and poorer insight as time goes on. We need a metrics revolution to actually understand where and how progress is taking shape and how to wield this for the benefit of all.”
It’s impossible to fully grasp how robotics is changing manufacturing in the United States without a complete understanding of where those robots are. Now, new research is providing a snapshot.
And here’s why that’s important: Robotics is what’s known as a general-purpose technology—just like computers, the internet, and electricity—that has the potential to make a fundamental difference in how entire economies operate. In short, robots present an opportunity to transform the manufacturing industry to reach new levels of productivity and growth.
But the first step in maximizing that potential is understanding how the technology is used. “We’re still in the early stages of understanding what’s driving robot adoption,” said Erik Brynjolfsson, director of the Stanford Digital Economy Lab and senior fellow at the Stanford Institute for Human-Centered AI (HAI). “It’s important to have data about where they are and also where they aren’t, and what their other characteristics are.”
Finding the robot hubs
Brynjolfsson is the co-author of a new paper on robotics adoption, along with researchers J. Frank Li, Cathy Buffington, Nathan Goldschlag, Javier Miranda, and Robert Seamans. In “The Characteristics and Geographic Distribution of Robot Hubs in US Manufacturing Establishments,” the team sourced responses from the US Census Bureau’s Annual Survey of Manufacturers to examine which manufacturers use robotics, where the robots are, and how establishments are using them. (A shorter version of the paper, “Robot Hubs: The Skewed Distribution of Robots in US Manufacturing,” is published in AEA Papers and Proceedings 2023.)
“Before we answer all those other questions about productivity and performance and wages, you need to know where the robots are first,” Brynjolfsson said. “For now, the striking finding was how concentrated the robots are.”
More than any other factor, the researchers found that a business is more likely to employ robotics if other establishments in its region report they also use them. In the paper, these concentrated regions of robotics use are called “robot hubs.”
The team ranked regions by the number of robots used in manufacturing, which revealed that robots are highly concentrated in the top 10% of robot-dense areas—and the bottom 50% of regions had almost none. Survey data showed that the top five states with the highest robot adoption share by establishments were Iowa, Michigan, Kansas, Wisconsin, and Minnesota.
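The skew described here reduces to a simple concentration calculation: rank regions by robot count, then ask what share of all robots sits in the top 10 percent of regions versus the bottom 50 percent. The sketch below uses made-up regional counts purely to show the arithmetic; the figures are not Census data.

```python
def concentration_shares(robot_counts):
    """Given robots-per-region counts, return the share of all robots
    held by the top 10% of regions and by the bottom 50%."""
    ranked = sorted(robot_counts, reverse=True)
    total = sum(ranked)
    n = len(ranked)
    top_decile = ranked[: max(1, n // 10)]   # most robot-dense regions
    bottom_half = ranked[n - n // 2 :]       # least robot-dense regions
    return sum(top_decile) / total, sum(bottom_half) / total

# Toy data for 10 regions: one hub dominates, and half the regions
# have almost no robots at all.
counts = [500, 120, 40, 10, 5, 2, 1, 0, 0, 0]
top10_share, bottom50_share = concentration_shares(counts)
```

In this toy example the single top-decile region holds roughly 74 percent of all robots while the bottom half of regions holds under 1 percent, mirroring the qualitative pattern the survey found.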
The paper also identifies several trends associated with robot hubs, including the presence of “robot integrators,” which are businesses that assist in acquiring and installing robots. Another correlation was a higher share of union membership. Those patterns on their own, however, don’t fully illuminate why a robot hub develops.
“The concentration of robot hubs is a function of several different things, such as the type of manufacturing these firms do, the education of the workforce, the size of the establishments, but it’s also sort of an unexplained dark matter, and that seems important as well,” Brynjolfsson explained. “Something about these environments makes it so that companies are more likely to use robots. A big part of future research will be to find out why that is.”
The advantage of census data
Past attempts to generate insight into robot adoption faced challenges, such as selection bias, inherent to the research process. For example, if researchers mailed out surveys or called manufacturers directly, potential respondents might not have picked up the phone. Or a manufacturer might assume that the survey had nothing to do with them and not mail their response back.
Of course, that assumption would be wrong. To fully and accurately understand the spread of robotics throughout the manufacturing industry, it’s vital to get accurate input from a representative set of establishments.
“There’s never been any careful data gathering on where robots are in America,” said J. Frank Li, a postdoctoral fellow at the Stanford Digital Economy Lab and one of the study’s authors. “There have been plenty of different kinds of bits and pieces that people gathered, but now for the first time, we worked with the US Census to gather some really detailed data on where robots actually are.”
In addition to counting people, the US Census Bureau conducts regular studies into trends and demographics throughout the economy. The Annual Survey of Manufacturers (ASM) is one of these—a set of questions designed to understand the current state of the manufacturing industry across several dimensions.
Since responding to the ASM is a legal requirement, responses don’t suffer from the same selection bias as non-mandatory surveys. This enabled the team to gain unprecedented insight into the use of robots, confident that responses accurately reflected the entire manufacturing industry. The researchers developed their questions in 2017, and the questions were included in the surveys for 2018 through 2020. The findings in the paper are based on 2018 data.
Brynjolfsson noted that he was impressed by the bureau’s process in developing the survey. “When a survey is required by law, you don’t want it to be cluttered up with anything time-wasting, so they did over a year of testing,” he said. “That process was an eye-opener for me, and I think it led to a survey that’s far and away the most representative of the industry, thanks to the Census. Otherwise, the data would have been way too scattered.”
Laying the foundation
The researchers’ findings weren’t limited to identifying robot hubs. Data from the ASM also included information on the size and age of surveyed establishments and their workforces.
From this data, the researchers found that higher robotics use correlated with higher capital expenditures, particularly in information technology. The research suggests that companies willing to pay the price for robots are also more likely to spend more on other innovations and improvements, leading to enhanced automation and digitalization.
That kind of investment creates a spillover effect throughout surrounding local economies, just like the manufacturing output of the robots does. If the use and integration of robots creates better economic outcomes, then a stark divide between robot hubs and everywhere else could challenge overall growth.
Brynjolfsson raises this possible divide as a legitimate concern. “Having robots so concentrated could lead to a separation, where some manufacturing becomes much more high-tech and robust, and other parts get left behind—so it’s valuable to understand what drives the adoption and, ultimately, the diffusion of robots,” he said. “If we want not just productivity, but widely shared productivity, we need to have robots in places where right now they’re not as common.”
Understanding why companies are adopting robots in certain areas and not others will help guide future development throughout the manufacturing industry. Researchers, data agencies, policymakers, and industry stakeholders can all leverage the paper’s insights to work toward a more balanced and inclusive deployment of robotics.
The research team realizes that their work is just the beginning of a long line of research into understanding the impact of robotics in manufacturing. The authors propose several avenues that future researchers could pursue—one looks into the relationship between robot hubs and international trade, while another explores the link between robot adoption and other investments. “Our hope is that the patterns in the data that we document in our paper spark further research in this area that is of use to scholars, practitioners, and policymakers,” the researchers wrote.
Future researchers can also better understand the advantages and obstacles of robotics in manufacturing by examining the influence of robots on productivity and wages in manufacturing establishments. A more collaborative environment will make it easier to enable economic growth and tech advancement in the manufacturing sector—and beyond.
“Our examination of the cross-sectional data indicates that robot adoption is positively associated with the share of production workers but negatively associated with earnings per worker,” said Li. “However, what we can say about causality and the mechanism behind our findings is still limited without longitudinal data and exogenous shocks. I expect that as new waves of survey data become accessible, there will be an increase in research exploring the impact of robots on the US manufacturing sector.”
Any opinions and conclusions expressed herein are those of the authors and do not represent the views of the U.S. Census Bureau. Disclosure review numbers CBDRB-FY22-ESMD011-003, CBDRB-FY23-ESMD011-003, CBDRB-FY22-192, and CBDRB-FY23-ESMD011-004 (DMS# 7508509). We are grateful to the Hewlett Foundation, Kauffman Foundation, National Science Foundation, Stanford Digital Economy Lab and Tides Foundation for generous funding. We thank Jim Bessen, participants at the 2023 AEA Annual Meeting, and Emin Dinlersoz for valuable comments and feedback. The views expressed herein are those of the authors and do not necessarily reflect the views of the National Bureau of Economic Research.
The authors are grateful to the Hewlett Foundation, Kauffman Foundation, Markle Foundation, National Science Foundation, Stanford Digital Economy Lab, and Tides Foundation for generous funding.
June 8, 2023
We’re proud to announce that Stanford Digital Economy Lab is joining Project Liberty’s Institute to expand our pathbreaking research into how AI and other digital technologies affect society and the economy. The collaboration will inspire new, impactful projects designed to spark global conversation about the current and future state of the digital economy.
Of course, we did so much more than 10 things during the past year, but to capture all the highlights here—including the Lab’s groundbreaking research—would send you into a forever scroll. So we’ve whittled the list down to the top 10 things we did during the past year that helped advance the collective understanding of the digital economy.
1 / Essay
In January, Lab Director Erik Brynjolfsson released “The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence,” in which he warns that “an excessive focus on developing and deploying human-like artificial intelligence can lead us into a trap.” The essay was later published in the spring issue of Daedalus, “AI & Society” (see #3 on our list) and served as the basis for our spring workshop.
The Turing Trap: A conversation with Erik Brynjolfsson on the promise and peril of human-like AI
Economists Pin More Blame on Tech for Rising Inequality
The New York Times
How to Solve AI’s Inequality Problem
MIT Technology Review
2 / Collaboration
In May of 2022, the ADP Research Institute paused its monthly ADP® National Employment Report in order to refine its methodology and design. Part of that evolution was teaming up with our data scientists to add new perspective and rigor to the report. The newly designed report, which launched in August, uses fine-grained, high-frequency data on jobs and wages to deliver a richer and more useful analysis of the labor market.
3 / Publications
In the spring issue of Daedalus—from the Academy of Arts and Sciences—experts explored various angles of artificial intelligence, including its effects on labor and the economy, its role in law and governance, and what it says about us as humans. The issue, which was edited by James Manyika, featured several contributors from the Lab and Stanford HAI community.
4 / Report
As part of the California 100 initiative, researchers at the Lab and SIEPR examined where the Golden State has been, where it’s at, and where it’s headed when it comes to possible scenarios and policy alternatives for the future. The large-scale report, The Future of Work in California, examines several facets of the California labor market, including its polarized workforce and the erosion of its middle class.
5 / Fall conference
As humans continue to develop brilliant new applications of emerging technologies, such as web3, we need to reimagine how our society is organized so that data serves all communities. The speakers and panelists who participated in “Building the New Economy: Data as Capital,” a special Stanford Digital Economy Lab event as part of Stanford Digital Assets Week, explored the feasibility and implications of human-centered web3.
6 / Spring workshop
What will the workplace look like in 20 years with the rise of artificial intelligence and other digital technologies? Our daylong workshop, Avoiding the Turing Trap, featured interactive panel discussions and presentations by Lab-affiliated researchers showcasing their recent work. Erik Brynjolfsson opened the event by framing the opportunities and challenges of human-like AI. Nela Richardson, chief economist at ADP, closed the day with her keynote address, “AI’s People Problem.”
7 / Fall workshop
In October, the Lab brought together leaders from industry, civil society, and academia to discuss the promise and peril of decentralized digital architecture for our political and economic systems. In the workshop, Decentralized Society: Digitization, Democracy, and Civil Discourse, panelists explored key questions such as new governance strategies, privacy paradigms, business models, and content moderation systems.
8 / Research
While networking on digital platforms can lead to new job opportunities, a study published earlier this year, A Causal Test of the Strength of Weak Ties, suggests that the specific types of connections job-seekers make online matter in terms of their ability to secure new positions. The project was conducted by Erik Brynjolfsson (Stanford), Sinan Aral (MIT), Iavor Bojinov (Harvard), and Karthik Rajkumar and Guillaume Saint-Jacques, two LinkedIn employees and recent Stanford and MIT Ph.D. graduates. It involved more than 20 million LinkedIn members, who made 2 billion new ties and created 600,000 new jobs over a five-year period.
A Causal Test of the Strength of Weak Ties
Looking For a Job? Some LinkedIn Connections Matter More Than Others
Harvard Business Review
9 / Seminar Series
Throughout the year, the Lab welcomed researchers and experts from all over the world to share their work and insights with a larger, broader audience. You can watch (or re-watch) every one of our Seminar Series talks from the past year on our website and on our YouTube channel.
10 / Competition
The Lab, in collaboration with Stanford HAI and SIEPR, put out a call for student submissions during the summer for innovative policy analysis and solutions that leverage emerging technologies to create jobs. The Emerging Technology Policy Writing Competition awarded a total of $10,000 in prizes to three winning entries. The first place prize went to Aniket Baksy and Avi Gupta for their policy suggestion, “Expanding AI Adoption is an Opportunity for Job Creation.”
And this also happened…
We welcomed several new faces to the Lab this year, including our first-ever visiting scholar, Sandy Pentland. Others who joined us in 2022 include Ruyu Chen, Gabriel Unger, Megan Deason, Andrew Wang, Anthony Weng, David Autor, Angela Chen, Christina Langer, and Ruhani Walia. Visit the team section of our site to view everyone who contributes to the Lab.
In June, Lab affiliated faculty member Susan Athey joined the Department of Justice as chief economist of the antitrust division.