Algorithms have become increasingly pervasive as organizations in both the public and private sectors have sought to automate tasks that once required human intelligence. From facial recognition to creditworthiness decisions to medical assessments, decision-makers rely on algorithms to sharpen their own perception and judgment.

But the use of algorithms in so many domains has been accompanied by equally pervasive concerns that those algorithms may not produce equitable outcomes. What if an algorithm’s output is biased, intentionally or unintentionally, against a subset of people, particularly underrepresented groups such as women and people of color? Because algorithms are applied in contexts with enormous human consequences, and at tremendous scale, a biased algorithm could do significant harm.

Researchers with Chicago Booth’s Center for Applied Artificial Intelligence (CAAI) have seen the kind of harm even well-intentioned algorithms can produce. Sendhil Mullainathan, the Roman Family University Professor of Computation and Behavioral Science and the center’s faculty director, found in research with University of California at Berkeley’s Ziad Obermeyer, Brian Powers of Boston’s Brigham and Women’s Hospital, and Christine Vogeli of Partners HealthCare that an algorithm used to evaluate millions of patients across the United States for enrollment in care-management programs was biased against Black patients, excluding from enrollment many who should have qualified.

The publication of that research in the journal Science in October 2019 generated significant engagement from policymakers and health-care organizations: New York State, for instance, launched an investigation of one major health system over its use of the algorithm. Building on this impact, the CAAI established the Algorithmic Bias Initiative, with the goal of helping the creators, users, and regulators of health-care algorithms apply the insights of research to their own work. Thanks in part to the visibility of the Science study, interest from the health-care industry has been strong. “We didn’t really have to do much outreach,” says Emily Bembeneck, the director of the CAAI. “People came to us.”

Since its founding in November 2019, the initiative has worked with individual health-care systems and other groups to help them tackle such tasks as taking stock of the algorithms they use, evaluating their organizational structures with algorithmic management in mind, and scrutinizing algorithms themselves for bias. When algorithms go wrong, the culprit is often label-choice bias, the same shortcoming identified in the Science study: the algorithm predicts its target variable accurately, but that variable is a poor proxy for the decision it is being used to make. In the care-management algorithm, for instance, the target was future health-care costs, and because less is typically spent on Black patients than on white patients with the same level of illness, predicted cost understated Black patients’ medical need.
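
To make that mechanism concrete, here is a minimal, entirely hypothetical simulation in Python. It is not the study’s algorithm or data; the group labels, effect sizes, and thresholds below are invented purely for illustration.

```python
# Hypothetical simulation of label-choice bias. It mirrors the mechanism
# described in the Science study (a model that predicts cost well can
# still misrank patients by need if one group incurs lower costs at the
# same level of illness), but every number here is invented.
import numpy as np

rng = np.random.default_rng(seed=0)
n = 100_000

# True medical need, identically distributed in both groups.
need = rng.gamma(shape=2.0, scale=1.0, size=n)
group_b = rng.random(n) < 0.30  # 30% of patients belong to group B

# Observed cost tracks need, but group B incurs roughly 30% lower cost
# at the same level of need. The flaw is in the proxy, not the model.
cost = need * np.where(group_b, 0.7, 1.0) + rng.normal(0.0, 0.1, size=n)

# Stand-in for a well-fit cost predictor: any accurate model's score
# would closely approximate observed cost, so we use cost directly.
score = cost

# Enroll the top 10% of patients by score, as a care-management program
# might, then compare with a ranking by true need.
enrolled = score >= np.quantile(score, 0.90)
high_need = need >= np.quantile(need, 0.90)

for label, mask in [("group A", ~group_b), ("group B", group_b)]:
    print(f"{label}: enrolled {enrolled[mask].mean():.1%}, "
          f"truly high-need {high_need[mask].mean():.1%}")
# Group B ends up enrolled at well below its true-need rate, even though
# the score predicts cost almost perfectly.
```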

Health care makes a good setting for studying and combatting algorithmic bias, Mullainathan says, because the stakes involved—personal health, comfort and pain, even life and death—are so high, because algorithms are widely used throughout the industry, and because “health care is a domain notorious for inequities.”

“Historically disadvantaged groups have struggled with systemic and individual racism [in health care] for decades,” Mullainathan says. “Against this backdrop, we are at a crossroads. Algorithms can do a lot of harm or good. They will either reify (or even worsen) existing biases, or they can—if carefully built—help to fix the inequities in health care.”

A Playbook for Preventing Bias

To scale up the help it can provide to organizations working in health care, in June the initiative released the Algorithmic Bias Playbook, an action-oriented guide synthesizing insights drawn both from research and from experience in the field. The free playbook offers a framework for identifying, correcting, and preventing algorithmic bias, organized into four steps:

  1. Creating an inventory of all the algorithms being used by a given organization
  2. Screening the algorithms for bias (a minimal sketch of this step follows the list)
  3. Retraining or suspending the use of biased algorithms
  4. Establishing organizational structures to prevent future bias
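
As a rough illustration of the screening step, the sketch below asks whether patients with the same algorithm score look equally sick on an “ideal target” measure of need across groups. It is an illustrative assumption of what such a screen might look like, not code from the playbook; the function, column names, and data file are all hypothetical.

```python
# Sketch of a label-choice-bias screen: pick an "ideal target" that
# captures what the decision is really about, then check whether
# patients with the same algorithm score have the same ideal-target
# outcomes across groups. Column names and the data file are
# hypothetical, not taken from the playbook.
import pandas as pd

def screen_for_bias(df: pd.DataFrame, score_col: str, ideal_col: str,
                    group_col: str, n_bins: int = 10) -> pd.DataFrame:
    """Mean ideal-target outcome per score decile, split by group.

    Large within-decile gaps between groups suggest the score's
    training label is a biased proxy for the ideal target.
    """
    binned = df.assign(
        score_bin=pd.qcut(df[score_col], q=n_bins, labels=False,
                          duplicates="drop")
    )
    return (binned.groupby(["score_bin", group_col])[ideal_col]
                  .mean()
                  .unstack(group_col))

# Hypothetical usage, assuming a patients.csv with an algorithm's
# 'risk_score', an ideal need measure 'active_conditions', and 'race':
# patients = pd.read_csv("patients.csv")
# print(screen_for_bias(patients, "risk_score",
#                       "active_conditions", "race"))
```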

The playbook guides users through every step, breaking each down into discrete actions and offering practical advice and examples drawn from the CAAI’s work with health-care organizations. Although its specific focus is health care, “the lessons we’ve learned are very general,” the authors write. “We have applied them in follow-on work in financial technology, criminal justice, and a range of other fields.”

Bembeneck, one of the authors of the playbook along with Mullainathan, Obermeyer, ideas42’s Rebecca Nissan, Michael Stern of the health-care startup Forward, and Stephanie Eaneff of Woebot Health, says that the playbook can serve as a blueprint not only for organizations hoping to improve their use of algorithms, but also for future regulation of algorithms. “We think one of the best ways to address algorithmic bias is better regulation,” she says.

To further encourage the implementation of best practices for algorithmic management in health care, the CAAI will cosponsor a conference with Booth’s Healthcare Initiative focused on helping those working in health care take concrete steps toward eliminating algorithmic bias. Taking place in Chicago and online in early spring, the conference will bring together policymakers, health-care providers, payers, A.I. software vendors, and technical experts from outside the health-care industry—groups that may not be in regular contact with one another but could nonetheless benefit from an opportunity for dialogue.

Matthew J. Notowidigdo, professor of economics and codirector of the Healthcare Initiative, says that he’s eager to connect health-care experts with those who have backgrounds in machine learning. Health care poses some unique challenges when it comes to algorithms, he says—for instance, privacy restrictions may limit how data can be shared, including with the designers of algorithms—but “I’m of the belief that there’s a lot that health care can learn from other settings” where algorithms are used.

The conference will feature user stories from organizations that have put the Algorithmic Bias Playbook to use. Panels will focus on topics such as data sharing and building teams for algorithmic management. The conference’s organizers also emphasize the importance of allowing attendees to network, particularly given their diverse professional backgrounds, so they have the opportunity to share knowledge and build connections that can help them take action within their organizations.

The playbook and conference, as well as the ongoing support the CAAI offers through the Algorithmic Bias Initiative, reflect rising concern in parts of the health-care industry about how algorithms are used. Bembeneck says that her experience with the initiative has shown her that many health-care organizations—and, consequently, many of the vendors that supply algorithmic products—are acutely aware of the importance of equitable A.I. “Not only because they don’t want to be on the wrong side of the law,” she says, “but there’s a keen desire from everyone we’ve talked to that they want to give better health care.”
