What Are Human Rights in an A.I. World?
A.I. has big implications not just for who works and how much, but also for how they work and how they’re managed.
The entertainment industry has long churned out movies and television shows about machines equipped with artificial intelligence taking over—and destroying—humanity. But this past July, when actress Fran Drescher told the crowd at a press conference that “we are all going to be in jeopardy of being replaced by machines,” it was not part of a science fiction script.
Drescher, who is also president of the Screen Actors Guild-American Federation of Television and Radio Artists, was announcing the beginning of the union’s long-anticipated strike. SAG-AFTRA was joining the Writers Guild of America—an organization of unions representing writers in film, television, radio, and online media—in their first simultaneous strike in more than 60 years. Among their concerns: establishing the rights and roles of human actors, writers, and other creators in an age when artificial intelligence is becoming increasingly adept at imitating their work.
The Hollywood strike highlights a broad concern that A.I. is coming for our jobs, a fear that accelerated in November 2022 when research laboratory OpenAI widely released its large language model–fueled generative A.I. chatbot, ChatGPT.
Such fears are rooted in historical experience. The path of technological progress has produced many head-on collisions between innovation and the labor market, from the scribes put out of work by the printing press to the weavers forced to compete with mechanized looms and the factory workers displaced by robots. A.I., too, will surely shake up the labor force in meaningful ways in the coming decades. But the disruptive potential of A.I. is so apparent to so many people that this time we have an advantage absent from most of those historical experiences: a chance to do something about it.
What policies and programs will help us prepare for the A.I. revolution in the labor market—particularly in a way that benefits workers? What decisions can workers, companies, educators, and governments be making now to minimize negative outcomes such as unemployment and job-market dislocation while still capturing A.I.’s potential to boost productivity? What lessons can economists draw from the data to help guide those decisions?
Rather than wiping out jobs, A.I. could create many new ones, and make the people performing them far more productive than analogous workers are today. But if we’re going to realize the A.I.-powered future we might hope for, we have to start planning for it now.
Artificial intelligence is what economists call a general-purpose technology, or GPT—one that has applications across industries and the potential to transform the broad economy. (The GPT in ChatGPT, on the other hand, stands for generative pretrained transformer.) And while generative A.I. such as the chatbots and image-creation tools produced by OpenAI, Google, and others has refocused public attention, the technology is, of course, hardly new. One-quarter of US companies have already adopted some form of A.I., while Chinese and Indian companies are at nearly 60 percent, according to a 2022 IBM report. A.I. has made inroads in tech, manufacturing, healthcare, banking and financial services, media, retail, hospitality, and automaking.
Some A.I. applications—say, the Netflix algorithm that recommends programs on the basis of pattern recognition in viewing habits, or the tech for self-driving cars—focus on performing repetitive or specific tasks.
Generative A.I. tools, on the other hand, can identify patterns in massive data sets to create new content, including text, audio, and images. A conventional customer-service chatbot cannot, say, craft a movie script from a human prompt; generative A.I. programs can, along with producing what appear to be novel solutions to complex challenges.
Hence the consternation of the screenwriters, as well as white-collar workers in a slew of professions previously considered untouchable by automation.
“It was once a common belief that computers could not take over creative jobs, such as journalism or graphic design,” says Chicago Booth’s Anders Humlum. “However, with the advent of large language models and generative A.I., we’ve seen technologies . . . now automating tasks traditionally done by journalists, graphic designers, or programmers.”
Experts do not all agree on what does and does not constitute generative A.I., but some regard it as being a step toward artificial general intelligence—that is, a technology that is able to operate in a fully human way, perform cognitive tasks just like a person, and ultimately exceed all human capabilities. A group of researchers from Microsoft (a key investor in OpenAI), examining OpenAI’s GPT-4 model in an April 2023 study, finds that it “attains a form of general intelligence, indeed showing sparks of artificial general intelligence,” although they acknowledge “a lot remains to be done to create a system that could qualify as a complete AGI.”
In the past, says Humlum, industrial transformations have been driven by general-purpose technologies such as the steam engine, which enabled the mechanization of textile production and railroad transportation and brought about the first industrial revolution. Electricity served a similar role during the second industrial revolution, which led to mass production at the assembly line, and computers and the internet during the third industrial revolution gave rise to the information technology age.
One key difference with more recent innovations such as generative A.I., Humlum says, is that they diffuse much faster than previous disruptive tech. “It took almost 100 years from when the first steam engine was installed in the United States at the beginning of the 19th century until peak adoption by the turn of the next century,” he says.
By contrast, in February 2023 a research note from Swiss investment bank UBS found that within two months of its initial release, ChatGPT had 100 million monthly active users around the globe. Around the same time, OpenAI released a premium membership tier, and in September, it began to roll out new voice and image capabilities for ChatGPT, declaring the app could now “see, hear, and speak.” Alternatives to ChatGPT currently include Google Bard, Claude, Bing’s A.I. chatbot, and Perplexity AI.
“In 20 years following the Internet space, we cannot recall a faster ramp in a consumer internet app [than ChatGPT experienced],” the authors of the UBS report write. They note that, by comparison, TikTok took nine months to reach 100 million monthly users, and Instagram about 2.5 years.
A.I.’s productivity curve may also be faster. Research by Stanford’s Erik Brynjolfsson, University of Pennsylvania’s Daniel Rock, and Booth’s Chad Syverson identifies a “productivity J-curve” in past eras of technological change: an initial lull in productivity followed by a period of acceleration.
For example, they note, the steam technologies of the US industrial revolution took nearly half a century to show visible productivity effects. There was also a productivity slump in the first 25 years following the invention of the electric motor and combustion engine before the pace of productivity exploded in 1915. Even the early IT era had slow productivity growth for 25 years before powering up from 1995 to 2005.
This is largely because it takes companies a long time to figure out how to best utilize novel tech while also steering new processes, creating fresh business models, training workers, and making other intangible investments, the researchers say.
As of 2017, A.I. was in more of a “lull” period, the researchers argue. However, they write, “the fact that existing A.I. capital has a high market valuation and as such suggests a considerable shadow value for intangible correlates, indicates that we may be entering the period in which A.I.-as-GPT will have noticeable impacts on estimates of productivity growth.”
The researchers also note that startup funding for A.I. had increased from $500 million in 2010 to $4.2 billion by 2016. (For more on this, read “Why artificial intelligence isn’t boosting the economy—yet.”)
Now, in the first half of 2023 alone, A.I.-related startups raised $25 billion—18 percent of global funding. As Booth’s Stefan Hepp said during a recent episode of Chicago Booth Review’s Big Question video series, “A.I. is attracting all the funding at the moment—and increasing amounts of funding.” (For the conversation, watch “Is A.I. startup funding a rerun of the dot-com bubble?” You can also listen to it on the Chicago Booth Review Podcast as “Are A.I. startups worth the investment?”)
Today, Syverson says, “We’re at the point where investments in A.I. are now enough to have J-curve effects that might be large enough to affect macro productivity levels. Time and more data will tell if it’s really happening, but so many of the things we’ve seen with A.I. technology over the past couple years are consistent with the J-curve dynamic being at work.”
The speedy growth of A.I. concerns some economists, including MIT’s Daron Acemoglu and MIT PhD student Todd Lensman. July research from the pair argues that gradual, rather than fast, adoption of A.I. technologies across all sectors is optimal, as it “allows society to update its knowledge and beliefs about whether this transformative technology will have socially damaging uses.”
And indeed, in March, over 1,000 tech leaders sent an open letter to A.I. labs “to immediately pause for at least 6 months the training of A.I. systems more powerful than GPT-4” in order to “develop and implement a set of shared safety protocols for advanced A.I. design.”
The hype surrounding ChatGPT has fueled increased speculation on what generative A.I. could mean for jobs. For the first time, global outplacement and business and executive coaching firm Challenger, Gray & Christmas in its May layoff report added “artificial intelligence” to its list of reasons for US job cuts—and close to 4,000 jobs fit the category.
But reports published following ChatGPT’s release also focused on the potential productivity benefits, as well as the idea that many jobs would be complemented by rather than lost to A.I.
An April Goldman Sachs report estimates that tech advances fueled by generative A.I. such as ChatGPT could affect approximately 300 million jobs worldwide over the next decade, and that around two-thirds of US occupations are vulnerable. Further, up to 50 percent of the workload of employees in vulnerable professions could be replaced by A.I. automation.
That said, the report also predicts that advances in A.I. could lead to a 7 percent—nearly $7 trillion—increase in global GDP and boost productivity growth by 1.5 percentage points over the next 10 years. And, the report notes, “most jobs and industries are only partially exposed to automation and are thus more likely to be complemented rather than substituted by A.I.”
Not only that, but, as with past tech revolutions that have disrupted the workforce, the creation of new jobs as a result of A.I. advances could help offset the losses, the report notes.
In 2020, the World Economic Forum predicted that by 2025, 85 million jobs across the globe could be displaced due to a division of labor between humans and machines—but 97 million new roles may emerge. Its 2023 report notes, “Agriculture technologies, digital platforms and apps, e-commerce and digital trade, and A.I. are all expected to result in significant labour-market disruption, with substantial proportions of companies forecasting job displacement in their organizations, offset by job growth elsewhere to result in a net positive.”
Such predictions are backed by research from MIT’s David Autor and MIT PhD student Caroline Chin, Utrecht University’s Anna M. Salomons, and Northwestern’s Bryan Seegmiller, which finds that approximately 60 percent of jobs in the US today didn’t exist in 1940, when more than 25 percent of work was in manufacturing and nearly 20 percent in farming and mining.
Georgetown’s Harry Holzer, also a former chief economist for the US Department of Labor, looks back to the dawn of the automobile age at the turn of the 20th century to note a prime example of job loss leading to job gain.
“The horse and buggy drivers’ jobs were all gone,” he says. “But the number of jobs that opened up in the auto industry—especially once Henry Ford created the assembly line and the Model T dropped in price, creating enormous demand in ways that couldn’t have been conceptualized 10 or 20 years [earlier]—produced not just new categories of jobs, but enormous new numbers of jobs.”
Trends regarding the future of work are difficult to predict, but many experts argue that, while A.I. will have a disruptive effect on the labor market in the coming decades, humans are unlikely to be victims of a sweeping job apocalypse in the near future.
As with all past disruptions, however, there will be a significant period of adjustment, and many people, including a variety of workers and those who train them, will have to adapt—perhaps faster than ever before.
Another distinct feature of generative A.I., particularly as compared with the type of automation technology that focuses on performing repetitive tasks—such as robots working on assembly lines in manufacturing plants—is that it targets tasks executed by more educated, white-collar workers.
A study by Tyna Eloundou, Sam Manning, and Pamela Mishkin of OpenAI and University of Pennsylvania’s Rock finds that the jobs that could be most affected by A.I. include accountants and tax preparers, legal assistants, financial analysts, journalists, web designers, mathematicians, court reporters, translators, and public-relations specialists.
Holzer notes the similarities between the effects of globalization and automation. “Traditionally, both have had a skills bias, in which good jobs for less educated workers can be displaced by imports or offshoring,” he says. “A.I. might well be a little different, in that its effects go higher up the skill ladder and in the fact that it will constantly be challenging more human task performance.”
That said, a July McKinsey report predicts the technology will have more of an enhancing than replacing effect on creative, STEM, business, and legal work and is more likely to replace lower-paying office support, customer service, and food-service jobs. Notably, the report says, women and people of color are disproportionately likely to hold jobs at the highest risk of being lost to automation.
“The biggest impact for knowledge workers that we can state with certainty is that generative A.I. is likely to significantly change their mix of work activities,” the report notes.
Holzer says that, in the past, it was indeed easier to replace less educated workers because of the focus on automation performing repetitive tasks. That may be less the case in the future, as A.I. can perform more sophisticated work.
However, Holzer says, the best way to think about this is not at the job level but at the task level. “Every year, A.I. will get a little better and will replace human work on a certain set of tasks,” he says. “And if a worker wants to keep his or her job, he or she will have to pivot to a different set of tasks that the machine cannot yet do.” Higher-skilled and more educated workers will likely be better prepared for this imperative to adapt, he says.
Research by Stanford’s Brynjolfsson, MIT’s Danielle Li, and MIT PhD student Lindsey R. Raymond finds some benefit of A.I. implementation for lower-skilled workers, however. In a study of more than 5,000 customer support agents at a software company, the researchers looked at the introduction of a generative A.I.–based conversational tool. Along with finding that productivity increased by 14 percent on average, the researchers note the greatest positive impact was on novice or lower-skilled workers, because it led them to communicate more like high-skilled workers. The findings contrast with those from earlier waves of computer technology, in which highly skilled workers typically benefited most.
Similarly, MIT PhD students Shakked Noy and Whitney Zhang studied 450 professionals who were tasked with a variety of mid-level professional writing jobs, including crafting press releases, short reports, and emails. They find that workers who used ChatGPT decreased their time spent by 40 percent and increased quality by 18 percent.
Additionally, they find that workers whose writing ability had been rated by third-party evaluators as lower benefited more, as they were able to improve the quality of their work while reducing time spent. At the same time, higher-skilled workers increased productivity without meaningfully improving work quality. “At the aggregate level, ChatGPT substantially compresses the productivity distribution, reducing inequality,” the researchers write.
However, they also note that if this type of generative A.I. ends up replacing rather than complementing workers, the technology could potentially decrease demand for human labor, “with adverse distributional effects as capital owners gain at the expense of workers.”
Many questions remain about what A.I. will mean for the labor force in the coming decades. Will lower-skilled workers gain on knowledge workers? What kind of new jobs will emerge? Will opportunity and wage inequality increase or decrease?
While the answers are not yet clear, the decisions of companies, governments, and educators will help to determine how A.I. shapes the labor market. Academic research suggests four approaches to guide these decisions:
Research from MIT’s Acemoglu, the International Monetary Fund’s Andrea Manera, and Boston University’s Pascual Restrepo finds that the US tax system, as a result of reforms enacted between 2000 and 2018, incentivizes companies to automate jobs by taxing labor heavily while taxing capital investments in technology lightly.
Acemoglu has long been a voice against so-called blind techno-optimism. His 2023 book with MIT’s Simon Johnson, Power and Progress, argues that throughout human history, the average person has not automatically shared with the elite the prosperity generated by tech innovations.
Acemoglu, Manera, and Restrepo find that the effective tax rate on labor in the US is in the range of 25.5 percent to 33.5 percent, compared with 5–10 percent for capital. At those levels, the researchers say, companies are incentivized to replace workers rather than invest in them.
“The US economy had a total of 2.5 industrial robots per thousand workers in manufacturing in 1993, and this number rose to 20 by 2019,” they write, citing research by Acemoglu and Restrepo from 2020.
Excessive automation, they say, has caused a decline in labor’s share of income from nonfarm businesses, which decreased from about 64 percent in 1980 to 57 percent in 2017; sluggish inflation-adjusted wages for the median worker (up 16 percent in this time period); and a drop in wages for lower-skilled workers—down 6 percent in this period for men with a high-school diploma.
The current tax structure in the US and other advanced economies means A.I. needn’t come close to human capabilities for employers to see more profitable opportunities from investing in it over expanding their workforce, says Katya Klinova, former head of A.I., labor, and the economy at the Partnership on AI, a nonprofit community of academic, civil society, and industry organizations focused on responsible development and use of A.I. Klinova, who now works for the United Nations, notes that policy incentives are “already setting the playing field in such a way that it is tilted toward more automation of labor.”
To stem this trend and incentivize companies to invest more in human labor, Acemoglu, Manera, and Restrepo say, policy makers should set effective tax rates for capital higher than those for labor, with their optimal levels at about 27 percent and 18 percent, respectively. The researchers estimate this could reduce the range of automated tasks and raise the number of employed people in the US by 4 percent, as well as raise the labor share of income by nearly 1 percent.
However, the researchers also note, if moving to “optimal taxes” isn’t feasible, an automation tax on companies could work if applied to technologies that automate tasks in which humans still have a comparative advantage.
“Specifically, with no changes in capital and labor taxes, an automation tax of 10.15 percent—which implies that only tasks where the substitution of labor for capital reduces unit costs by more than 10.15 percent are automated—maximizes welfare and raises employment by 1.14 percent and the labor share by 1.93 percentage points,” the researchers write. This is not a broad “robot tax,” they say, because it doesn’t hit all types of automation.
A quick scan of online listings provides job descriptions that wouldn’t have existed in the recent past, such as an A.I. prompt engineer.
Holzer, agreeing that the current US tax code favors displacement, also notes that he doesn’t advocate any type of broad robot or A.I. tax, as it would be a tax on productivity growth.
He suggests instead a sort of displacement tax. “Every time you displace a worker because of A.I., they [companies] have to pay a tax on that,” he says. “And if you’re going to retrain them [employees], then maybe we subsidize that. So you use taxes to discourage displacement that isn’t necessary, and you use the subsidy system to subsidize retraining. You want there to be the right incentives.”
This matters at two levels—for the companies developing A.I. and for those adopting it—he says. “You want their incentive to be less toward pure replacement and more toward human-centered A.I.”
To that end, companies can strive to invest in A.I. that as much as possible augments rather than replaces the complex tasks that human workers can do. As Autor, Chin, Salomons, and Seegmiller conclude in their research, when technology augments rather than displaces human workers, we see more valuable new work created, labor demand boosted, and wages raised.
Governments can take the lead in funding A.I. research in “areas where A.I. can create new tasks that increase human productivity and new products and algorithms that can empower workers and citizens,” notes Acemoglu in additional research.
The Biden administration in 2023 indeed announced several measures to promote responsible A.I. innovation, including allotting $140 million in funding for research and development and issuing a wide-ranging executive order addressing A.I. topics from safety to ethics to potential job loss. And in June, the European Parliament agreed on a draft of the A.I. Act, which could become one of the first all-encompassing laws regulating the technology. The act focuses largely on stemming risks such as privacy violations and hazards to health or safety, and it includes protection proposals for workers subject to company A.I. platforms that are used to hire and manage them.
The Partnership on AI, as part of its shared prosperity guidelines, suggests that companies adopting A.I.-enabled tech should do thorough evaluations to ensure systems align with worker needs. Among the steps they prescribe: secure systems that improve job quality while eliminating undesirable tasks, recognize any extra work created by A.I. and ensure it’s acknowledged and compensated, create teams that test systems for misuse, establish transparency on worker data collection and use, and allow workers to opt out of those data practices.
Klinova says the Partnership on AI is working with unions, companies, and policy makers to refine, test, and encourage adoption of their shared prosperity guidelines.
In recent years, the idea of creating universal basic income programs to assist workers displaced by automation and A.I. gained some traction, including in the short-lived US presidential campaign of Andrew Yang. However, many economists prefer the idea of upskilling over passive income replacement. “Even if technology is diffusing fast,” Humlum says, “I’m much more supportive of investing in people than just income supporting them.”
University of Cambridge’s Diane Coyle is even more dismissive of UBI, calling it “a chimera advocated by Silicon Valley individualists who don’t want to take responsibility for the social consequences of their innovations.” She adds, “Nobody can buy a transportation system or infrastructure or good public schools with an individual income.”
Research suggests that reskilling programs can help those with lower-demand skills elevate their professional status. A 2022 White House report notes that government should be “investing in training and job transition services so that those employees most disrupted by A.I. can transition effectively to new positions where their skills and experience are most applicable.”
In the US, Holzer points to successful implementation of sector-based training, in which people prepare for jobs in high-demand areas that offer good pay to workers without college degrees.
Typically, an intermediary organization with knowledge on a specific industry brings training providers—most often community or technical colleges—together with employers and industry associations, providing support and services to disadvantaged students.
Harvard’s Lawrence F. Katz, Brown’s Jonathan Roth, and Richard Hendra and Kelsey Schaberg from the research organization MDRC studied several sector-based programs that focused on workers in industries including manufacturing, healthcare, transportation, and IT. They find that the programs (including WorkAdvance, Project QUEST, and Year Up) led to earnings gains of 14–38 percent in the year following training completion. Additionally, they find, earnings gains persisted for at least several years “with little evidence of the fade out of treatment impacts found in many evaluations of past employment programs.”
So far, Holzer says, these programs have been more for disadvantaged than displaced workers, but the model could work for employees who lose work due to A.I.
“You can imagine a worker who’s displaced, say in healthcare or in IT, that you might be able to retool them more quickly if it remains a growing field,” he says. And the place this should be happening most is at the community-college level, he suggests. But, he notes, while community colleges have the scale, lack of support—for one, they are given far fewer public subsidies than four-year colleges—is a major challenge to program quality.
Booth’s Humlum, through research with University of Copenhagen’s Jakob R. Munch and UCPH PhD student Pernille Plato Jørgensen, studied the effects of a reskilling program in Denmark for workers injured on the job. Workers were transitioned from physical to more cognitive occupations, and while this program focused on displacement due to injury, the researchers note it could potentially translate for workers displaced through automation.
The study followed workers who instead of receiving disability payments from the government took the same amount of money to either enroll in formal education courses or participate in an on-site training program. According to the study, the most common choice for those who enrolled in classroom training was a four-year bachelor’s degree that transitioned their physical work to more knowledge-based work in the same field—for example, a former construction worker would enter a degree program in construction architecture. Of those workers, 80 percent found new employment within seven years of their accidents, and on average earned 25 percent more than before they were injured.
Another study Humlum conducted in Denmark, with Munch and UCPH’s Mette Rasmussen, finds similar advantages to upskilling, which the researchers argue is more beneficial than on-the-job training programs. While their study focused largely on workers displaced due to offshoring, the key result could translate to job loss or downsizing due to A.I. and automation. The researchers find that unemployed workers whose jobs were at the most risk for offshoring and who were assigned to classroom training—which allowed them to learn new skills that could help them switch occupations—had gained, on average, 55 hours of work per month compared with their previous employment.
Similarly, the United Kingdom in 2017 announced its National Retraining Scheme, eventually investing £100 million to train workers, particularly those displaced by A.I. and automation. And US companies are also taking notice. Amazon’s Upskilling 2025 program, for one, is investing $1.2 billion to provide employees access to training programs that prepare them for higher-level jobs.
“I do think reskilling will become increasingly important,” Humlum says, “just because technology is diffusing faster . . . and we are living longer, and those two facts combined mean that we have to confront more technologies during one single career.” In their research, Humlum, Munch, and Rasmussen note that while Denmark spends as much as 2 percent of its GDP on active labor-market policies, the US emphasizes more passive programs such as unemployment and disability insurance.
The White House report notes that “the increased prevalence of shorter contract durations lowers incentives for firms to invest in worker training,” and that it may therefore be useful to subsidize intermediaries such as public employment services and temporary-employment agencies to share the costs and benefits of training.
The educational system as a whole, Holzer says, will need to be better funded, more adaptive to new realities, and better prepared to offer lifelong learning if students are going to be able to navigate fast-moving tech landscapes.
“So there’s a new job,” he says. “You might need a lot more labor-market counselors to help people at all different stages, counselors who know about the labor market, who have studied specific industries, who are talking to employers in those industries.”
The International Society for Technology in Education, a nonprofit that works with educators around the globe, suggests that education about A.I. should start as early as kindergarten, across all subject areas. It offers a hands-on project guide for K–12 teachers that outlines classroom activities such as having students research how A.I. is currently being used in the workplace and envisioning what their own “job of the future” may be, all while considering the ethical implications of the technology.
Research by Humlum and the National Bank of Denmark’s Bjørn Bjørnsson Meyer suggests, however, that when it comes to higher education, preparing students for jobs that use A.I. may not be as valuable as preparing them for careers that actually produce the technology—at least in terms of their earning potential.
The researchers, again using Danish data, ranked college majors by the share of graduates working at companies that either used or produced A.I. They found that graduates who worked for companies that produced rather than simply used A.I. technology earned a wage premium from the beginning of their careers, with average starting salaries about $4,100 higher.
“While computer science and mathematics majors specialize in A.I. producer firms, we find that biology, chemistry, and other laboratory sciences concentrate in firms that only use A.I.,” Humlum says. “While college majors relevant to A.I. production earn rising wage premiums in the labor market, we do not see similar positive effects for A.I.-user majors.”
To be sure, this doesn’t mean the job market won’t be strong for A.I. users—they just might not, on average, make as high a salary as the producers.
The joint effort by the WGA and SAG-AFTRA, though not exclusively concerned with A.I., was among the first major calls by labor unions to officially regulate the use of the technology. In September, the WGA strike that had started in May finally ended, and with it came several wins on the union's A.I. demands. Among them: A.I. cannot write or rewrite literary material, A.I. cannot be considered source material, a company cannot force writers to use A.I. (although writers may choose to), and all companies must tell writers if materials given to them have been A.I.-generated or incorporate any A.I.-created material. "The humans won," a column in the Los Angeles Times trumpeted.
On November 8, SAG-AFTRA announced that it, too, had reached a tentative agreement with Hollywood studios to end its strike. The deal—which has been approved by the union’s board but has not yet been voted on by its full membership—requires informed consent from performers, including background actors, each time filmmakers plan to use their A.I.-generated likenesses. It also regulates how actors are compensated for such use and requires studios to obtain consent from actors whose features are used to produce A.I. composite characters. The writers’ and actors’ contracts are both set to expire in 2026.
Holzer notes that while only about 6 percent of US private-sector workers are unionized, the Hollywood strikes demonstrate that when workers have some agency and voice, they can force better working conditions in the face of technological shifts.
“At least for the unionized sectors,” he says, “this is going to send a signal to employers: take these folks into account when you are automating.”
If employers even face the prospect of workers unionizing or protesting, he notes, they may approach automation with more consideration for their workforce, particularly given the current US labor shortage.
“If you’re already having trouble hiring and retaining high-quality workers, you want to [be able to] convince your employees that they’re not going to come in and take this new job, and then maybe be out the door in six months,” he says.
He points to codetermination laws—in which workers can elect board representatives for their companies—in countries including Germany, Finland, and Austria as an example of how “every time workers have more voice, they incentivize the employers to do things differently.” The model barely exists in the US, although some politicians, particularly Democrats, have pushed to expand it.
Some of the A.I. jobs of the future are already here. A quick scan of online listings provides job descriptions that wouldn’t have existed in the recent past, such as an A.I. prompt engineer, whose job expectations involve the “ability to design and develop A.I. prompts, including ChatGPT, using large language models to accelerate coding abilities.”
The World Economic Forum’s 2023 jobs report predicts that, by 2027, there will be a 40 percent rise in job openings for A.I. and machine-learning specialists, a 30–35 percent rise in demand for roles including data analyst and big data specialist, and a 31 percent increase in demand for information security analysts—together adding approximately 2.6 million jobs.
Additionally, the more A.I. is used in the workplace, the more need there may be for cybersecurity experts and ethicists to ensure the proper implementation of A.I. platforms. And in sectors such as healthcare, where A.I. is leading to astounding breakthroughs in areas like diagnosis and drug discovery, openings for consultants and analysts who understand and can manage the technology in medical settings may increase.
Whatever sector workers are employed in, it’s likely they will need to have at least basic knowledge of A.I. to compete in the coming decades. Per the WEF report, A.I. and big data are together the top training priority for companies with over 50,000 employees, and the No. 3 priority for companies of all sizes. That said, as A.I. accelerates, a human touch may be increasingly appreciated, particularly in fields that value that type of connection, from medicine to financial services.
And generative A.I. still depends on human input and expertise to direct and edit what it creates (using oceans of training data generated by humans as well), output that can range from impressive to entirely factually inaccurate. As noted in research by Carl-Benedikt Frey and Michael A. Osborne of the University of Oxford, “Generative A.I. is less likely to be deployed in high-stakes contexts, when mistakes are costly or irreversible.”
Today’s policy makers, employers, and educators are making decisions that will determine whether workers are available to meet the shifting labor demand shaped by A.I. If they act prudently, there’s good reason to hope they can avoid the cataclysmic outcome of a Hollywood techno-thriller—and perhaps write a happier story for the human labor market of the future.