
Tales of A.I. gone wrong often become viral news stories, whether it’s a Twitter bot gone rogue or a problematic search engine. But was the problem really the A.I., or did it lie elsewhere? And what lessons can be learned so that A.I. can be better used for good in the future?

These questions and more were explored by Chicago Booth Professor Sendhil Mullainathan and Dr. Paul Lee, the cofounder and COO of health-care startup Tree3Health, during the latest Social Impact Leadership Series in Hong Kong. As the Roman Family University Professor of Computation and Behavioral Science, Mullainathan brought his extensive research experience to the table, while Lee’s health-care background provided insights he gleaned while building his app and practicing as a physician.

The Rustandy Center for Social Sector Innovation and The Hong Kong Jockey Club Programme on Social Innovation co-hosted the event, which was moderated by Tali Griffin, Senior Director, Marketing Programs and Partnerships at the Rustandy Center.

Below are four key takeaways from the event:

Remove the mystery from A.I.

The key to understanding how A.I. can be used for good—and how to fix its problems—is to first understand how it works, said Mullainathan. The words “artificial intelligence” may sound daunting, but at its core an A.I. system functions much like an ordinary spreadsheet, built around a single question: What variable is the algorithm trained to predict?

“The real innovation is we found a way to treat other things exactly like you would treat a column in a spreadsheet,” Mullainathan said, whether that’s images, text, waveforms, X-rays, or satellite images.
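Mullainathan’s spreadsheet analogy can be made concrete in a few lines of code: an image is just a grid of numbers that can be flattened into feature columns, after which an ordinary tabular model predicts the label column. The sketch below uses scikit-learn’s built-in digits dataset; the dataset and model choice are illustrative assumptions, not drawn from the talk.

```python
# Minimal sketch: treating images like spreadsheet columns.
# Each 8x8 digit image becomes 64 numeric feature columns, and the label
# is simply "the column the algorithm is trained to predict."
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()
X = digits.images.reshape(len(digits.images), -1)  # flatten 8x8 pixels into 64 columns
y = digits.target                                  # the column to predict

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=2000).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```

Once the pixels become columns, nothing distinguishes the problem from any other spreadsheet prediction task.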

Understand the errors

“Much like a spreadsheet, A.I. only works as well as the quality of its data and the way the algorithm is written and tested,” Mullainathan said. Problems typically occur when A.I. is trained on one type of data and then applied at scale to another, he said, or when human error or bias has been introduced into the algorithms and data sets. In one famous example, a Google image search for “CEOs” prioritized white men, highlighting the common problem of data sets that fail to be representative.

Another famous, or infamous, example is “Tay,” Microsoft’s Twitter chatbot. Twitter users quickly manipulated Tay into making outrageous statements because the chatbot’s A.I. had been trained in a friendlier, politer environment than the unpredictable and strident English-language universe of Twitter. “They trained on one data set and hoped it applied to another kind of data. That’s all that happened,” Mullainathan said.
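The failure Mullainathan describes is what the machine-learning literature calls distribution (or dataset) shift: a model fit to one data distribution is deployed on another. A toy sketch with synthetic data makes the failure visible; the numbers and model here are illustrative assumptions only.

```python
# Illustrative sketch of distribution shift: a classifier trained on one
# data distribution degrades sharply when scored on a shifted one.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, shift=0.0):
    """Two Gaussian classes in 2-D; `shift` moves the whole distribution."""
    X0 = rng.normal(loc=0.0 + shift, scale=1.0, size=(n, 2))
    X1 = rng.normal(loc=2.0 + shift, scale=1.0, size=(n, 2))
    return np.vstack([X0, X1]), np.array([0] * n + [1] * n)

X_train, y_train = make_data(500)             # the "friendly" training environment
X_same, y_same = make_data(500)               # same distribution as training
X_shifted, y_shifted = make_data(500, 3.0)    # the environment actually deployed to

model = LogisticRegression().fit(X_train, y_train)
print("accuracy, same distribution:   ", model.score(X_same, y_same))
print("accuracy, shifted distribution:", model.score(X_shifted, y_shifted))
```

The decision boundary learned on the training data still sits where the old classes were, so it mislabels much of the shifted data, just as Tay’s training environment failed to transfer to Twitter.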

Applying A.I. for good

While designing A.I. is not without its challenges, health care is one area where it can be used for good. Appropriately trained A.I. could be used in remote telemedicine, for example, to analyze the X-ray of a patient in rural Asia or interpret another patient’s bloodwork, said Mullainathan. Apps like Lee’s Tree3Health take the millions of data points gleaned from cell phones and smart devices and interpret them for users and health-care professionals. Push alerts can also notify both parties when an issue that is not immediately apparent may be brewing.

“What we are foreseeing is not just collecting data and annotating and giving some health insight, we are trying to create a preemptive community health alert for the emergencies, for chronic disease, and for infectious disease management,” said Lee.
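One way to picture the preemptive alert Lee describes is a simple rule over streamed device readings: flag a user when recent values drift well above their own baseline. The threshold rule and numbers below are illustrative assumptions, not Tree3Health’s actual algorithm.

```python
# Toy sketch of a preemptive health alert: flag when the rolling average
# of recent readings rises well above the user's own historical baseline.
def should_alert(readings, window=7, ratio=1.25):
    """Alert when the mean of the last `window` readings exceeds the
    baseline (mean of everything before that window) by `ratio`."""
    if len(readings) <= window:
        return False  # not enough history to establish a baseline
    baseline = sum(readings[:-window]) / (len(readings) - window)
    recent = sum(readings[-window:]) / window
    return recent > ratio * baseline

resting_hr = [62, 64, 61, 63, 65, 62, 63, 64,   # stable baseline
              78, 82, 85, 88, 84, 86, 90]       # a week of elevated readings
print("alert:", should_alert(resting_hr))  # True: recent mean ~85 vs. baseline ~63
```

The value of such a rule is that it can surface a trend to the user and a clinician before either would notice it themselves.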

Protecting data used in A.I.

Relying on A.I. to manage health-care data inevitably raises questions about privacy and consumer protection, but both Mullainathan and Lee said those issues are not insurmountable. One solution is to treat privacy as a key consideration during app design, said Lee, whose app Tree3Health separates user data from health metrics. “Data privacy and confidentiality are always important issues to tackle when we engage with health-care data. On one hand, we comply with international guidelines concerning electronic health record data handling; on the other hand, we separate the storage and handling of the sensitive user data and the health data, so that our data analytics are based only on anonymous data sets to ensure privacy,” he said.
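One common way to implement the separation Lee describes is to keep identifying information and health metrics in different stores, linked only by a pseudonymous key, so that analytics never touch identities. The schema and pseudonymization scheme below are assumptions for illustration, not Tree3Health’s actual architecture.

```python
# Hypothetical sketch: splitting identifying data from health metrics so
# analytics only ever see a pseudonymous ID, never names or contact details.
import hashlib
import secrets

identity_store = {}   # locked down: pseudonym -> identifying info
metrics_store = []    # analytics-facing: pseudonymous health records only

SALT = secrets.token_bytes(16)  # in practice, managed by a secrets service

def pseudonym(user_id: str, salt: bytes) -> str:
    """Derive a stable pseudonymous key; the salt never leaves the identity store."""
    return hashlib.sha256(salt + user_id.encode()).hexdigest()[:16]

def ingest(user_id: str, name: str, heart_rate: int) -> None:
    pid = pseudonym(user_id, SALT)
    identity_store[pid] = {"name": name}                          # sensitive, access-controlled
    metrics_store.append({"pid": pid, "heart_rate": heart_rate})  # anonymous record

ingest("u-001", "Alice", 72)
ingest("u-002", "Bob", 110)

# Analytics run on metrics_store alone; no identities are visible here.
elevated = [r["pid"] for r in metrics_store if r["heart_rate"] > 100]
print("pseudonymous IDs with elevated heart rate:", elevated)
```

Only a tightly controlled service holding both the salt and the identity store can re-link an alert to a person, which is the point of the split.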

Another solution, said Mullainathan, is to reconsider how privacy is debated in an A.I. context. Sometimes, he said, when users talk about data privacy, what they are really concerned about is how their data will be used.  

“I think we have good regulations for privacy but not yet a good answer to the question of who’s allowed to use what data for whom, with what permission,” Mullainathan said, adding that he was also concerned about how privacy restrictions could keep data from being used for good purposes like medical research. 

“I think we should be moving toward a world where there is a broad accepted social use, and the assumption is your data’s going to be used for that,” he said, rather than always assuming that a patient must opt in to share potentially life-saving data.

About The Hong Kong Jockey Club Programme on Social Innovation

The Hong Kong Jockey Club Programme on Social Innovation provides resources and programs to help the city’s NGOs, nonprofit leaders, and social entrepreneurs do their best work. Operated by the University of Chicago Booth School of Business, the Programme offers a range of opportunities, including scholarships, social entrepreneurship workshops, and trainings for NGO boards of directors and board members.

Disclaimer: All the content presented is independently produced by the organizer, creative team, or speaker, and does not reflect the views or opinions of The Hong Kong Jockey Club Programme on Social Innovation or The Hong Kong Jockey Club Charities Trust.

 