Imagine that a ride-sharing driver receives a request for a pickup, and it’s up to the driver to accept or decline. The ride-sharing company’s algorithm has determined that this is a good customer-driver match and has suggested it at a price acceptable to the driver. But the driver may second-guess the result and decline it by claiming to be unavailable, perhaps hoping the next pickup request will come from a closer location or be at a higher price.
This kind of gaming is a headache—not just for ride-sharing companies, but for online freelancing markets, cloud computing platforms, and other applications. Deep in the weeds of these platforms and applications are “allocation and pricing algorithms” selecting outcomes for the users and charging them money. And then there are humans who, in trying to get the best deal possible, sometimes muck up these mechanisms, leading to less-optimal outcomes for themselves and others.
But University of Southern California’s Shaddin Dughmi, Northwestern’s Jason Hartline, Cornell’s Robert D. Kleinberg, and Chicago Booth’s Rad Niazadeh have created a tool that could help address the situation by improving the functioning of many of these algorithms.
To understand what the tool does, Niazadeh says, set aside human gaming for a moment. It can take an allocation algorithm running on a powerful computer many days, or even years, to find the socially optimal outcome for one of these applications. Because of this computational shortcoming, theoreticians and practitioners have devised several “fast” allocation algorithms that don’t necessarily find the optimal outcome but are good enough for the application.
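As a hypothetical illustration (this example is not from the researchers’ paper), one such fast-but-suboptimal approach is a greedy heuristic that pairs drivers and riders cheapest-first, rather than searching for the best overall matching:

```python
def greedy_match(drivers, riders, dist):
    """Pair drivers with riders greedily, cheapest pairs first.
    Runs quickly, but the matching is not guaranteed to be optimal."""
    pairs = []
    free_drivers, free_riders = set(drivers), set(riders)
    # Consider every driver-rider pair in order of increasing distance.
    for _, d, r in sorted((dist(d, r), d, r) for d in drivers for r in riders):
        if d in free_drivers and r in free_riders:
            pairs.append((d, r))
            free_drivers.remove(d)
            free_riders.remove(r)
    return pairs
```

The greedy rule commits to each cheap pair immediately, so it can lock in choices an optimal matcher would avoid; that trade of solution quality for speed is exactly the kind of shortcut these fast algorithms make.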
While these algorithms can find an acceptable outcome in a short time, they do so by either assuming that people act truthfully or ignoring the kind of incentives that cause humans to be less than honest. Can researchers redesign algorithms so that they acknowledge human gaming and still perform quickly?
When gaming is involved, the math problem underlying the allocation and pricing task becomes far harder to solve. In the 1960s and ’70s, a trio of economists—the late Nobel laureate William Vickrey, the late Edward Clarke, and University of California at San Diego’s Theodore Groves—created a partial solution for this problem. The Vickrey-Clarke-Groves (VCG) mechanism imposes monetary costs, in the form of prices, on people who try to game the system, making them less likely to do so. Since then, practitioners in various industries have used this mechanism, or variations of it, to make their algorithms essentially immune to rigging.
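For a single item, the VCG mechanism reduces to the familiar second-price (Vickrey) auction: the highest bidder wins but pays the second-highest bid—the cost their presence imposes on everyone else. A minimal sketch (the function and bidder names are illustrative):

```python
def vcg_single_item(bids):
    """Second-price (Vickrey) auction: the single-item case of VCG.
    The winner pays the highest losing bid, so bidding one's true
    value is always a best response."""
    winner = max(bids, key=bids.get)
    losing_bids = [v for name, v in bids.items() if name != winner]
    price = max(losing_bids) if losing_bids else 0.0
    return winner, price
```

Because the price depends only on the other participants’ bids, shading your own bid can change whether you win but can never lower the price you pay—which is why truthful bidding is the safe strategy.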
However, a VCG mechanism works only if the math problem that leads to the socially optimal outcome can be solved exactly and in a computationally efficient fashion. Many allocation optimization problems can’t be—at least not in a reasonable amount of time.
The existence of such computationally hard mathematical problems, known as “NP-hard” problems in computer science, is a real barrier for exact computation, Niazadeh says. The Clay Mathematics Institute, a foundation devoted to increasing and disseminating math knowledge, offers a $1 million prize for resolving whether these NP-hard problems can be solved efficiently—the famous P-versus-NP question. “On a related note, the idea of cryptographic internet security, and the entire industry around that, is based on the premise that there are NP-hard encryption codes that cannot be cracked in a short amount of time,” he says.
A ride-sharing company might run into one of these NP-hard problems when matching up drivers and passengers taking short trips, trying to optimize the situation for drivers over the course of several rounds of matches. Because the problem isn’t solvable exactly in a reasonable amount of time, the company will either use an algorithm that finds an approximately optimal outcome, or it will use heuristic algorithms, which generally work in practice but come with no theoretical guarantees. Using either of those options, however, means it can’t use the VCG mechanism to address drivers’ gaming.
Dughmi, Hartline, Kleinberg, and Niazadeh created a way to essentially run a replacement for the VCG mechanism in such cases. “The idea is thinking of the given algorithm as a ‘black box’ that generates some outcome,” says Niazadeh. “We then try to use this black box in a computationally efficient fashion using tools and tricks from applied mathematics and computer science to patch up places where this algorithm allows for gaming.”
Thus, a company that is unable to use a VCG mechanism could use this new one, which the researchers call a “black-box reduction.” Its outcome should address gaming and still be just as good as what the original allocation algorithm would have produced (assuming people were honest when interacting with that one). This provides a strong message for practitioners, says Niazadeh: in applications that involve human gaming, the designer can ignore incentives and focus only on the underlying mathematical optimization problem of finding a socially optimal outcome. This mechanism can then be wrapped around the algorithm to transform it into a new one that has the same performance and is immune to gaming.
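The paper’s general construction relies on tools such as Bernoulli factories and is beyond a short sketch, but a simpler, well-known special case conveys the black-box flavor. In single-parameter settings (each participant has one private value), Myerson’s lemma says that any monotone allocation rule can be made truthful by charging a payment computed purely from black-box queries to that rule. The function names below are illustrative, not from the paper:

```python
def truthful_payment(alloc, bid, grid=1000):
    """Myerson payment for a monotone allocation rule `alloc`,
    queried only as a black box: p(b) = b*x(b) - integral_0^b x(t) dt.
    Uses midpoint-rule numerical integration; `grid` controls accuracy."""
    step = bid / grid
    area = sum(alloc((i + 0.5) * step) for i in range(grid)) * step
    return bid * alloc(bid) - area
```

For example, if `alloc` is a posted-price rule that allocates whenever the bid is at least 5, a bid of 8 wins and pays (approximately) 5: reporting a higher value never raises the price, so participants have nothing to gain from misreporting. The payment rule never needs to know how `alloc` works internally—the essence of a black-box reduction.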
Doing this could help a new class of algorithms produce better responses to a host of problems. Our ride-sharing driver would be less likely to turn down a request for pickup, explains Niazadeh. Or, to take a different example, US wireless providers at Federal Communications Commission spectrum auctions would be more likely to bid truthfully rather than strategically, even if the FCC’s allocation algorithm isn’t the optimal one.
Moreover, the researchers’ findings suggest that as the theoretical research develops, eventually algorithms could better tackle challenges such as matching kidneys with transplant recipients, or new medical residents with hospitals.
Shaddin Dughmi, Jason Hartline, Robert D. Kleinberg, and Rad Niazadeh, “Bernoulli Factories and Black-Box Reductions in Mechanism Design,” Journal of the Association for Computing Machinery, April 2021.