Hal Weitzman: We’ve all noticed how marketers use artificial intelligence to sell us more stuff, but can A.I. also be used for social good? Welcome to the Big Question, the video series from Chicago Booth Review. This episode is being filmed in conjunction with the Kilts Center for Marketing at Chicago Booth. I’m Hal Weitzman, and as always, we’ve assembled an expert panel. Sanjog Misra is the Charles H. Kellstadt Professor of Marketing at Chicago Booth, Lisa Sullivan-Cross is head of growth marketing at Pinterest, and Murli Buluswar is head of analytics at the US Consumer Bank of Citibank.
Let’s start, Sanjog Misra, with you, and let’s define our terms. We’ve all heard about A.I., but what do you mean by A.I., and specifically, how does A.I. relate to targeted marketing?
Sanjog Misra: Yeah, the term A.I. isn’t a new one. It was coined back in the mid-1950s, and essentially you want to think of artificial intelligence as being any time a machine can do something that a human being would have expended intelligence to do, right? That’s the broadest definition of A.I. But typically when we think about A.I. in the context of marketing, what we’re really referring to is this ability to get machines to learn something from large quantities of data. And in particular, what we’ve become reasonably good at is taking large amounts of data and essentially uncovering what people’s tastes, preferences, and needs are, and then using those to build scalable algorithms that can deliver customized marketing interventions. So these could be products: Murli could have customized a portfolio of offerings for his customers, or Pinterest could learn about what its customers are interested in and make really smart recommendations. So the benefit to consumers is this close match, if you will, between what their needs and preferences are and what they actually end up getting.
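To make the idea concrete, here is a minimal, hypothetical sketch of preference-based targeting of the kind Misra describes. The data, categories, and offers are invented, and a simple frequency profile stands in for the far richer models real systems use:

```python
# A minimal, hypothetical sketch of preference-based targeting:
# summarize a customer's history, then serve the best-matching offer.
# Real systems would use far richer models than category frequencies.
from collections import Counter

def preference_profile(events):
    """Summarize a customer's history as category weights."""
    counts = Counter(e["category"] for e in events)
    total = sum(counts.values())
    return {cat: n / total for cat, n in counts.items()}

def best_offer(events, offers):
    """Pick the offer whose category best matches the profile."""
    profile = preference_profile(events)
    return max(offers, key=lambda o: profile.get(o["category"], 0.0))

history = [{"category": "travel"}, {"category": "travel"}, {"category": "dining"}]
offers = [{"name": "airline miles card", "category": "travel"},
          {"name": "cashback card", "category": "dining"}]
print(best_offer(history, offers)["name"])  # -> airline miles card
```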
Hal Weitzman: All right, and give us a sense of the scale. How widespread is the use of A.I. in targeted marketing in the US, say?
Sanjog Misra: It’s growing, I would say, close to exponentially. If you go back about 15 years, the idea existed, but we didn’t have the infrastructure, the machinery, if you will, to put it into practice. Data has been growing faster than computing power, and our ability to learn from data has been growing as well. So now you have tools like deep learning and various new versions of machine learning that can scale to these large amounts of data. As far as actual practice goes, we’ve got people from the financial sector, from the general online space. Any place where there are large amounts of data, there’s been a push toward using A.I., right? I think what we’re going to see in the next five to 10 years is that even in industries we would normally have thought of as lagging in the technology space, the use of A.I., machine learning, and deep learning is going to become ubiquitous.
Hal Weitzman: Right, but we’re still relatively at the early stages of this revolution in marketing?
Sanjog Misra: It depends on what you call early stages, right? I think we’ve learned a lot. The constructs we think of as being useful in A.I. and marketing go back to work that was done, for example, at the University of Chicago 20, 30 years ago by Peter Rossi, and in some sense the Kilts Center was created to be a repository for large amounts of data that could then be used for algorithmic marketing and A.I.-type tools. What’s changing now is that we’ve been able to do a lot more and scale down to the individual user, right? We knew how to do it for broad classes; now we can essentially tailor marketing interventions down to the most granular level possible.
Hal Weitzman: OK. Well, thanks for that overview, Sanjog. Lisa and Murli, I want to turn to you to hear about how your companies are actually using A.I. specifically to improve your marketing. Lisa, let me start with you. At Pinterest, how do you think about A.I.? How does it help you do your marketing?
Lisa Sullivan-Cross: Yeah. Similar to what Sanjog talked about, we really use the term machine learning, which is a part of A.I., to identify patterns and make decisions, with minimal human intervention, to make a user’s experience with the product better. If done right, A.I. can be instrumental in growing your business, and it can make the user or customer’s life better.
We have three big tenets that we follow when we’re implementing machine learning at Pinterest. One is customer first: respect the customer. Use their behavioral data to benefit them, not to take advantage of them. The second is: technically, be the best. We have amazing engineering and data-science resources and team members who can build out the best products for our users. And third, make sure that the machine-learning or A.I. feature is core to your business or your mission. Our mission at Pinterest is to bring everyone the inspiration to create a life they love, and we make sure that any machine learning we use ladders up to that mission.
I think where a lot of companies make mistakes is that they may use A.I. to build a bell and whistle that’s an adjunct to their product, something that might just be gratuitous or fun to use, and it doesn’t really help grow the business, because it’s not in line with the mission and not keeping the customer in mind as much as it should. Specifically at Pinterest, one example is that we track a user’s behavior. Say they’re searching for... I’m doing this right now: a DIY kitchen project. We are redoing our kitchen. Because that’s how I’ve used Pinterest, I’m on there searching for kitchen projects, my experience now changes: when I visit Pinterest, it serves up content that’s all about kitchen remodels.
And specifically, if I were looking for lighting options, it would serve that up. We then use that in things like our email and mobile push notifications too. It’s all tailored to the individual person, and it doesn’t take a lot of human involvement on our end to create an email for a customer, because we look at their behavior and how they use the product to automatically put that content together in that email.
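A toy sketch of that behavior-driven personalization, with invented pin titles and searches standing in for Pinterest’s actual systems; here a simple term-overlap score decides what surfaces in the feed or an automated email:

```python
# Hypothetical sketch of behavior-driven content ranking: boost items
# that overlap with a user's recent searches ("DIY kitchen remodel"),
# so the feed and automated emails reflect their current project.
def rank_feed(items, recent_searches):
    terms = {w for q in recent_searches for w in q.lower().split()}
    def score(item):
        words = set(item["title"].lower().split())
        return len(words & terms)   # crude relevance: shared terms
    return sorted(items, key=score, reverse=True)

items = [{"title": "Modern kitchen remodel ideas"},
         {"title": "Backyard garden planning"},
         {"title": "Kitchen lighting options"}]
print([i["title"] for i in rank_feed(items, ["DIY kitchen remodel"])])
```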
Hal Weitzman: Murli, talk to us about how you use A.I. for marketing.
Murli Buluswar: Absolutely, Hal. Our mission, very similar to what Sanjog and Lisa were saying, is really to develop real-time contextual relevance with our customers. And in our world, where we traverse digital and physical, what’s really critical is that the conversation we have with our customers is consistent, regardless of whether it happens in a physical branch, a call center, our mobile app, or online. My team’s mission, as an analytics function, is to say: look, historically, analytics teams are pretty good at predicting things, but they’re not necessarily quite as good at saying what you do with those predictions, what their implication and application is.
So there are really three dimensions to what we’re focused on. Number one is prediction using advanced analytics. Number two is primary research to understand why the customer is making the choices he or she is making. And number three is developing a robust design-of-experiments capability that can create an industrialized test-and-learn environment, where we can try different types of engagement and different ways of acting on predictions, in order to build more relevance.
And historically, what’s happened in large organizations like ours is that when you have an interaction in a call center, or a branch, or with an ATM, or through a digital channel, that data and that interaction have remained in that silo. What we’re in the throes of doing is stitching that together across channels and drawing meaning from it in ways that help us understand you, with the goal of really three things. Number one is to be able to predict and preempt customer-service issues. Number two is to delight you in ways that you might not have imagined. And number three is to understand your financial needs, even if you haven’t necessarily come to us, and build more relevance on that dimension as well. We really see data, and this notion of machine learning and advanced analytics, as being at the core of developing unparalleled cross-channel, cross-product customer intelligence.
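A minimal sketch of the cross-channel “stitching” Buluswar describes, assuming invented events and a simple customer-ID join rather than Citi’s actual architecture; the point is just that siloed interactions become one ordered timeline per customer:

```python
# Hypothetical sketch of stitching siloed interactions into one
# cross-channel timeline per customer, the precondition for the kind
# of unified customer intelligence described above.
from itertools import chain

branch = [{"customer_id": 7, "ts": "2021-03-01", "event": "deposit"}]
call_center = [{"customer_id": 7, "ts": "2021-03-03", "event": "fee complaint"}]
mobile_app = [{"customer_id": 7, "ts": "2021-03-04", "event": "viewed fee FAQ"}]

def unified_timeline(*channels):
    timelines = {}
    for event in chain(*channels):
        timelines.setdefault(event["customer_id"], []).append(event)
    for events in timelines.values():
        events.sort(key=lambda e: e["ts"])   # ISO dates sort correctly
    return timelines

print(unified_timeline(branch, call_center, mobile_app)[7])
```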
Hal Weitzman: And that’s fascinating that you talk about the silos that existed, that you’re trying to break down, because of course, we think of data collection as being something that’s chiefly done in a digital environment, but you’re talking about data collection being at your local bank branch or whatever, which is fascinating. I want to turn to the question, though, of the social impacts of A.I., because that’s what we’re here to discuss. Lisa, let me start with you, because you drew this distinction between doing what’s best for the consumer, as opposed to doing something nefarious with their data. Talk a little bit more about that and how Pinterest navigates that line.
Lisa Sullivan-Cross: Yeah. We’re very, very strict about that at Pinterest. I think that guides all of our decisions in this area. I can talk about a really interesting way we’re using machine learning to make a positive difference in people’s lives. One of the products we launched about a year ago is Inclusive Beauty, a skin-tone search feature that uses machine vision to sort pins, that’s our content, in the site’s beauty category by skin tone. The user triggers it initially, because we’ve given them the option to refine their searches based on skin tone. Once they do that, we learn that this is what they’re interested in, and we’re able to serve up an experience and content that are more in line with what they’re looking for.
It’s something we developed to make the Pinterest experience better for the user, and it has ended up really helping our growth. That’s just one example: if you have some purity in why you’re developing these A.I. or machine-learning features, what’s good for the customer ends up being good for the business long-term. We’ve also developed an augmented-reality try-on feature that leverages skin tone, for example, so users can try on lipstick and eye shadow specific to their skin tone. Before we had this, it wasn’t as good an experience for people of color on Pinterest, and we strive to make Pinterest accessible to everyone.
Hal Weitzman: OK. Murli, what about at Citi? I know you think about social impact and particularly how you can use A.I. to help communities that are under-banked, for example. Talk to us a little bit more about that. How do you think about using A.I. marketing to have a positive social impact?
Murli Buluswar: Thank you, Hal. There are really two dimensions to that. Number one is being able to tap into a wider range of data, beyond traditional credit reports, to understand customer behavior and customer risk at a deeper level and in more dynamic ways than ever before. Historically, as you know, banks have relied on credit reports, and they’re valuable up to a point, but they’re limiting in the sense that they can be a little static, and they’re also very retrospective, and you’ve got a vast underbanked population. Here in the US we’ve invested in a startup named Perch, and our intent there is to find a gateway into using a wider range of data, such as your cell-phone behavior patterns or your cell-phone payment patterns and things of that nature, to understand your credit risk and your creditworthiness at a deeper level than one might be able to using traditional credit variables from the bureaus.
The other piece of the opportunity I see is being able to influence outcomes: engaging with prospective and current customers to preempt adverse situations. For example, going back to stitching together data across channels, our intent is to find patterns around customer payments, such as whether they pay their utility bill at a certain time of the month, and to know whether they’ve got enough of a balance in their account that month to avoid an insufficient-funds situation. So it’s really thinking about customer needs more proactively and at a deeper level, and engaging with customers in ways that reduce the likelihood of adverse outcomes.
And the common thread across both is using a much broader range of data to be able to predict customer creditworthiness.
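A stylized sketch of the insufficient-funds preemption idea just described, with invented payment amounts and a simple average standing in for a real predictive model:

```python
# Hypothetical sketch of insufficient-funds preemption: infer a
# customer's typical recurring bill from past payments, compare it
# with the current balance, and flag an alert before the bill is due.
from statistics import mean

def nsf_alert(payment_history, balance):
    """payment_history: past amounts for a recurring bill."""
    expected_bill = mean(payment_history)
    shortfall = expected_bill - balance
    return shortfall > 0, max(shortfall, 0.0)

utility_payments = [92.0, 98.5, 95.0]   # roughly the same day each month
at_risk, gap = nsf_alert(utility_payments, balance=60.0)
if at_risk:
    print(f"Nudge customer: projected shortfall of ${gap:.2f} before bill day")
```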
Hal Weitzman: So it sounds like when you talk about targeted marketing, you’re really targeting right down to the individual, his or her actions or behavior, and then trying to mesh that with the financial circumstances they’re in to help them avoid a bad outcome or achieve a better outcome.
Murli Buluswar: That’s right, that’s right. It really does get at this notion of real-time, contextual, and highly dynamic, because it’s not just about that individual over a broad period of time. Our context changes, maybe not day to day, but perhaps week to week and certainly month to month. So how do we separate signal from noise in all of the data we as a firm have on your habits, and glean meaning from it that builds deeper relevance to you as a customer, above and beyond wanting to engage with you on your additional financial needs?
Hal Weitzman: Sanjog Misra, let me come back to you because I know you’ve done work with nonprofits, helping them to deploy A.I. to improve their marketing and their efforts generally. Tell us about some of the work you’ve done and what you’ve learned.
Sanjog Misra: Sure. Some time ago, I started thinking about taking some of the tools, the machinery we had developed here at Booth, and using them to do good, as the topic for today suggests. One project I worked on was with the SNAP program, the food stamp program. What happens is that at the six-month mark, even during the pandemic, people who are eligible and have qualified for SNAP hit a trigger where they have to refile a form, a simple two-page form that makes sure they’re still eligible for benefits. And if you don’t fill out that form, you lose your food stamp benefits, right? I was shocked to learn that about 80 percent of people who come up for that recertification, as it’s called, simply fail to fill out the form, and then they lose their benefits.
You could think of this as essentially a churn problem from a marketing point of view, right? So we ran an experiment and used deep learning to learn what elements of a message resonate with a particular participant. The goal was to remind them, with the class of interventions my colleague here Dick Thaler would call a nudge. We aren’t changing their incentives, we’re not changing anything else; we’re just algorithmically creating nudges that are personalized, with language that resonates with the participant. And we managed to get about a 20 percent bump in recertification rates just by sending timed reminders telling people to come in and fill out the form.
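A deliberately simplified sketch of that personalization logic, with invented segments and message variants; the actual study used deep learning over message text, not the toy response-rate table here:

```python
# Hypothetical sketch: estimate how each message variant performs within
# each participant segment from experimental data, then send every
# participant the variant that worked best for people like them.
from collections import defaultdict

def fit_response_rates(trials):
    """trials: (segment, variant, responded) tuples from an experiment."""
    hits = defaultdict(int)
    n = defaultdict(int)
    for segment, variant, responded in trials:
        n[(segment, variant)] += 1
        hits[(segment, variant)] += responded
    return {k: hits[k] / n[k] for k in n}

def best_variant(rates, segment, variants):
    return max(variants, key=lambda v: rates.get((segment, v), 0.0))

trials = [("no_income", "plain", 0), ("no_income", "urgent", 1),
          ("has_income", "plain", 1), ("has_income", "urgent", 0)]
rates = fit_response_rates(trials)
print(best_variant(rates, "no_income", ["plain", "urgent"]))  # -> urgent
```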
So I think there’s this massive avenue for using A.I. and targeted marketing in the public-goods sector, and the success of that campaign has kind of led us on. My partner is an organization called Code for America that helps government agencies digitize their services, and now we’re thinking of using this for the earned income tax credit, or any other place where we can take large amounts of data that are already available, layer on this targeted-marketing A.I. layer, and just try to do more good than otherwise would have happened.
Hal Weitzman: Right. And that’s interesting, because I guess we don’t traditionally think of those campaigns as using A.I. Is there anything you’ve learned from doing that that surprised you, something you want to do more research about?
Sanjog Misra: Yeah. I think the most interesting thing I’ve learned is that not personalizing has a massive social cost. There’s this typical idea that personalization is kind of a bad thing, right? That we are exploiting customers. But in the context of these social programs, what I found was that if we had one message that went out to everybody, the standard thing most organizations do, which is to send out a typical reminder, the people who respond are the ones who are better off, right? They have the time; they are comfortable. For example, in my dataset, they have positive income; they’re not living on the streets. The people who don’t respond to this uniform treatment are the ones who have zero income, who are on the streets, whose living conditions are fairly dire.
But once you personalize, the entire distribution of people who respond shifts. The people who are now coming into the fold are those who have no income, who are living in their car or on the streets, and because the message has been tailored for them, you get them to participate. So in some sense, rather than A.I. being bad, without it we would have actually hurt the very people we were trying to help, right? That was an angle on personalization I hadn’t thought about.
And I have continuing work with JP, for example, showing that even with pricing this is true. If you think about personalizing prices, people think of that as a bad thing, but if you sit down and think for a little bit, the ability to personalize prices means that people who would not have bought your product or availed themselves of your service at the uniform price become your customers, now that they’ve gotten a discount because of the personalization, right? So if you start thinking about the distribution of your customer base, rather than just the average or the total profit, you can see huge benefits that machine learning, deep learning, and A.I. bring to the table, both on the commercial side and on the social side.
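A small worked example of that pricing argument, with invented willingness-to-pay numbers and the deliberately stylized assumption that personalization can price at each customer’s willingness to pay:

```python
# Worked toy example: under one uniform price, low-willingness-to-pay
# customers are priced out; personalized discounts bring them into the
# market. All numbers are invented for illustration.
wtp = [20, 20, 20, 8, 8]   # willingness to pay per customer
cost = 5                   # marginal cost per unit

def profit_uniform(price):
    buyers = [w for w in wtp if w >= price]
    return len(buyers) * (price - cost), len(buyers)

# Uniform pricing: the seller prefers the high price, serving only 3.
print(profit_uniform(20))   # (45, 3)
print(profit_uniform(8))    # (15, 5)

# Personalized pricing: everyone with wtp above cost is served.
served = [w for w in wtp if w > cost]
print(sum(w - cost for w in served), len(served))   # (51, 5)
```

Under the uniform price of 20, two customers are excluded entirely; with personalized discounts, they buy too, so both the number of customers served and total profit rise in this stylized case.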
Hal Weitzman: That’s fascinating. But as you said, there are a lot of concerns, particularly around the use of data and privacy issues related to individual consumers. In the European Union, for example, we’ve seen pretty strong legislation to try to protect data privacy for individuals. I wonder, Murli, if you’ve thought about what effect those regulations have had. Have they done what they were intended to do?
Murli Buluswar: So that’s a gnarly issue, isn’t it? This notion of privacy in a world of free. It reminds me of a line from the documentary The Social Dilemma, something like: “If you don’t know what’s being sold, then you’re the product that’s being sold.” I think the fundamental dilemma we have is that we as consumers want things for free, and when we want things for free, that’s probably the most powerful way for the providers of free to make a boatload of money. So my hope is that as the years progress, we’re able to come up with different degrees of consumer awareness and consent when it comes to the sharing of data, such that you could have, at one end of the spectrum, a completely closed-loop, no-data-sharing capability in a platform, all the way through to “I don’t mind being the product.”
So I don’t think this issue has been laid to rest, anywhere close, but I do think that creating more awareness, and giving consumers the ability to make more overt choices around the degree to which they feel comfortable sharing their data, is really, really critical.
On the other hand, coming back to your precise question, Hal: look, you could have all the legislation in the world, and there are always loopholes around it. The guiding principle for me is that any regulation should set the lowest bar, not the highest bar. The highest bar really is: What are we using this data for, to what purpose? Do our customers understand what data we’re using, how we’re using it, and why they would find that meaningful? If we can answer that higher-order question, especially in a world where we’re stitching together all of these data elements, as I alluded to earlier, then we feel like we’ve set a much, much higher bar, for not just what is permissible but what is right and what is readily defensible.
Hal Weitzman: Right. And it’s an interesting idea to educate consumers, to try to get them to understand what their preferences are and how they would like to set the dial. I guess that would require a lot of transparency and greater education about how those data are being used by organizations, and that’s pretty varied, right? Lisa, I wanted to ask you how it works at Pinterest. How do you think about the issue of protecting consumer privacy?
Lisa Sullivan-Cross: Yeah, it’s very important to us, and I think it does come back to, yes, of course we’re abiding by all of the laws and guidelines, but you have to go further than that and, as I mentioned earlier, really put the customer first, respect the customer, and only share, use, and collect data that’s going to make their life or their experience better. I think we have a really good track record with our customers on this. We use the bare minimum we need to make a user’s experience better.
One example: we’re one of the very few apps that don’t collect geolocation in the app, and we do not ask for a zip code when somebody signs up. So we’re a bit blind as to that specific information, because we haven’t received it from the user. Now, we could probably have grown our revenue a lot more quickly if we had that information, because advertisers like to advertise locally, and it would have helped us grow much faster there. But instead we’ve chosen to focus on the customer and to collect that only when we have a use for it that would make the customer’s life better, and we just don’t have a compelling enough use for it yet. I think the result of that is our unprecedented organic user growth, and our revenue has grown greatly over time. But we’ve really focused on user experience and growing the user base, and then the revenue will come, right? I think it’s just so important to keep that in mind.
Hal Weitzman: So, just so I understand: Do consumers know that their geolocation information isn’t being collected and stored, and do they actually like that? Is that what you’re saying?
Lisa Sullivan-Cross: I don’t know if they’re consciously making that connection in their heads, but they do know that a lot of apps they install have the pop-up that says, “We’d like to collect your geolocation,” and we don’t do that. So that’s one less barrier to the customer downloading the app and using it. But somewhere in their minds, they probably also remember or realize, oh, Pinterest did not ask for that information. And of course they know that when they register, we don’t ask for a lot of information, where with other products and apps you might have to enter an address, zip code, age, things like that. So again, we just collect the bare minimum we need to make that experience better for the customer.
I used to be the head of growth at Pandora, in a role similar to the one I’m in at Pinterest, and we had a similar approach and a mandate about customer first. For most of my time there, we weren’t collecting geolocation, but we finally came up with a compelling reason to do so. In the last year or two I was there, we started collecting geolocation from users, because we determined that, hey, we can make playlist, song, and music recommendations based on geolocation: this is what people in your area are listening to. Or if we know there’s a storm on the East Coast, we might serve you a different recommendation. Or if you’re commuting, right? People have different listening behaviors: when they’re commuting, they might listen to podcasts, and when they’re at home, they may listen to more laid-back music. So I think you just have to wait until there’s a reason to collect that data that makes the customer’s life better.
Hal Weitzman: OK. Sanjog Misra, I wanted to ask you, I mean, are these concerns about data privacy overblown?
Sanjog Misra: As Murli said, this is a gnarly question. I think there are two sides to this, right? One is making sure that people have control over their own data, that what they’re sharing, they’re sharing voluntarily, and that they know what the costs and benefits are, and being transparent about that. Those are all important.
I think the risk we have is a blanket regulation that says thou shalt not collect this particular type of data. Take the example I gave you with the SNAP program: say the CCPA, the California Consumer Privacy Act, mandated that you couldn’t use the data to target those messages. Well, you can immediately see what the response would be, right? If I shut down the ability to use individuals’ data, we go back to the world where everybody gets the same message, and that literally hurts the bottom rung of society the most.
So my view on this is that we have to be very, very careful about how we think about data, even the idea of sharing data, right? If consumers are allowed to share data voluntarily, that also comes at a cost, in the sense that firms now have to think about who the people sharing the data are and for what purpose, right? Are there strategic reasons for sharing and not sharing? I have a PhD student who worked on this for her thesis. So I think it’s a very difficult question, and over time we’ll kind of muddle our way through. The US obviously needs some kind of an overarching privacy legislative framework; we don’t have that yet. But what it should be is something we will have to learn about. I’m evading the question, but I don’t have a fantastic answer for this, right? Maybe Murli or Lisa does.
Murli Buluswar: Sanjog, if I may just add to what you said. Part of it, I think, is bifurcating what data is collected from what purpose it is being used for. It feels to me like those two questions go hand in hand, and regulations are a bit too focused on what data is collected, when really the problem they’re solving for is whether it’s being used for justifiable, moral purposes. It feels to me like that’s the direction in which the dialogue needs to shift across sectors.
Sanjog Misra: I agree. And just to add to that, if you look at the language in the GDPR, the early drafts basically would have completely eliminated any kind of personalized pricing. That includes discounts, right? So there’s this push toward, let’s just stop data from moving around, or impose massive costs. And that’s another example, right? The firms that have benefited from the GDPR, no offense to Murli, are the big firms, because they can hire lawyers. They have a legal team. They can navigate the morass of the GDPR fairly easily. There’s an impact on the smaller firm that doesn’t have those resources to bear the costs imposed by legislation such as the GDPR. Again, I’m not saying that privacy is a bad thing, but the way it’s done may impose costs on society that we did not see coming.
Hal Weitzman: OK.
Murli Buluswar: One more thought on that, if I may. Pardon me.
Hal Weitzman: Sure. Very quickly because then I want to turn to the audience questions.
Murli Buluswar: Absolutely. Twenty seconds. The irony of this discussion is that the whole point of A.I. is to de-average and create extreme customization, yet some of the regulations are moving in precisely the opposite direction, toward broad-brush legislation that averages the world again. So I just wanted to call that out. Back to you.
Hal Weitzman: In the words of Monty Python, “Yes. We are all individuals.” Let’s go to a question. So I apologize for any mispronunciation of anybody’s names. But these are good questions coming in. A question from [the audience]: Can you give some examples of how A.I. specifically played a role in an initiative? Murli or Lisa.
Murli Buluswar: I can go first. One simple example: last year around this time, in March and April, much like most of the world, Citi was reeling under the stress that COVID inflicted on us. One of our specific problems was our ability to staff call centers, whether here in the US or internationally. The world had essentially shut down, yet that was precisely when call volume shot through the roof, and many customers ended up having to wait longer than they wanted to. We were able to use the data to understand what a customer’s issue was and to engage with them in offline digital channels, to preempt and solve that issue for them in ways we couldn’t on the phone, because the hold times in the call center were way too long. So it was really stitching together that omnichannel data to build relevance and solve customer issues at a time when customers were under extreme duress because of all the financial and health pressures inflicted by COVID.
Hal Weitzman: Great. Thank you. Lisa, did you have a story you wanted to add?
Lisa Sullivan-Cross: Well, I think the Inclusive Beauty skin-tone filter I mentioned earlier is a really good example. But since Murli brought up shelter-in-place and COVID, I’d say that because we built such a strong machine-learning-based product, it shifts automatically. When people went into shelter at home and needed ideas for how to build an at-home virtual school space for their kids, or projects to do with their kids at home, things like that, they started searching for those things more. And the content we serve up changes immediately to reflect those needs and to really anticipate what the customer is going to look for next.
Hal Weitzman: OK. Thank you. So here’s another question: A.I. has been effective at interpreting mostly quantitative data, purchasing behavior, for example. But oftentimes purchases can be emotionally driven. What is your perspective on the possibilities of A.I. being used to drive recommendations based on real-time interpretation of the emotional reactions of potential customers to real-time ads? And what companies are already offering such solutions, if you know the answer to that? So, emotional reactions and real-time responses, any thoughts there? Sanjog?
Sanjog Misra: Sure, although I was looking to Lisa; maybe she wants to jump in. But Pandora has a product where they can figure out the mood you’re in based on the content of the songs you’re consuming, right?
There are also examples of A.I. tools being used to track your facial, emotive reaction to content you’re consuming online, whether it’s videos or pictures and so on. So a lot of firms have made investments in trying to understand human emotion, and also in figuring out ways it can be used. Personally, I’ve not played around with it, and I’m not aware of an actual product or service that uses it. But maybe Murli or Lisa can speak to that.
Lisa Sullivan-Cross: And, Sanjog, that’s a great example of an inferred emotion and how Pandora uses it. We have used this not in our product but in market-testing our ads. There are companies we’ve used where we can have somebody look at an ad we’ve created, say a video ad, and the people using it, the customers, have overtly signed up beforehand. It tracks their facial reactions to the ad, and then we get data back from that company saying, this is how people feel when they’re watching the ad, or when it got to this point in the ad and that point in the ad. So we’ve used tools like that to make our advertising better and make sure it’s hitting the right notes with customers or potential customers.
Hal Weitzman: OK. Thank you. Murli, did you want to add?
Murli Buluswar: Yes, if I may, Hal. Thank you. Look, this goes back to my point earlier in the session about how classical A.I. focuses on prediction; we’re still developing the tools to understand why. But one application my team has engaged in: we get hundreds of thousands, millions, of calls on a monthly basis, and we’re trying to collect all of that voice and text data from the transcripts and not just do simple keyword matching, but translate that narrative into an emotional pulse of the customer, to understand how well we did or didn’t do in addressing their issue, and if not, to take actions to follow up on that phone conversation. So that’s more than natural language processing; it’s really trying to get at the emotional tone of the conversation.
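A toy sketch of what scoring a transcript’s emotional tone might look like; production systems use trained language models, not the invented word lists here, but the shape of the pipeline is the same: transcript in, tone score and follow-up decision out:

```python
# Hypothetical sketch of moving beyond keyword matching toward an
# "emotional pulse" score for a call transcript. The lexicons below
# are invented stand-ins for a trained model.
NEGATIVE = {"frustrated", "angry", "unacceptable", "waiting", "again"}
POSITIVE = {"thanks", "great", "resolved", "helpful"}

def tone_score(transcript):
    words = transcript.lower().split()
    neg = sum(w.strip(".,!") in NEGATIVE for w in words)
    pos = sum(w.strip(".,!") in POSITIVE for w in words)
    return (pos - neg) / max(pos + neg, 1)   # -1 (upset) .. +1 (satisfied)

call = "I have been waiting for weeks and I am frustrated, this is unacceptable."
score = tone_score(call)
if score < 0:
    print(f"tone={score:+.2f}: queue a follow-up contact")
```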
Sanjog Misra: I just remembered one really good application. There’s a company called Cogito whose product you can plug into a call center. In real time, it takes the audio and scores it against emotions, and then makes recommendations to the call-center agent. If the customer is angry, or has any other emotional response, that gets flagged, and the agent sees not just the customer’s emotion and where it’s going, but also suggestions: slow down the way you speak, be more deliberate. It starts making positive recommendations to help the agent deal with that particular customer. So that’s one example, at least, that uses emotional A.I. in real time.
Hal Weitzman: Well, based on the data that I’m receiving, we have high customer engagement here on this event. So let’s squeeze in another couple of questions. Lots of analytical models are not necessarily understood by everyone in the company. So how should an organization institutionalize the use of A.I. in its business, and what risks can you foresee when an inexperienced company implements A.I., even with the best of intentions? Sanjog, do you have a thought on that?
Sanjog Misra: Yeah, my gut reaction to the question is that there’s a push toward something called explainable A.I. There’s work being done that takes the black box people usually think about in terms of deep learning and machine learning and creates essentially an English-language representation of what that model does. Partly this is coming about because of requirements in the GDPR, which require you to explain how your analytics actually work if a customer asks you. But I think that can also be used internally, to explain to employees what that black box does, and why and how you should use it.
Where we are today, at least in my interactions with a large number of firms, is that this human-plus-A.I. approach is where I think we should be. Rather than taking the A.I. and implementing its decisions directly, think of A.I. as an assistive technology that goes to the human decision-maker and gives them options, just like the call-center example. There’s a pricing-recommendation engine for hotels, for example, where the software says here’s what the rooms should be priced at today, but the pricing manager can accept or reject that depending on their own judgment, and slowly, with repeated interactions, the A.I. gets better and learns what’s working and what’s not. So I think it’s a really good question, and a partnership between the engineering team and the product and marketing teams is a good place to start in making this endeavor work.
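A minimal sketch of that human-plus-A.I. loop, with an invented pricing rule and feedback step rather than any actual product’s logic: the model proposes, the manager decides, and the system shrinks toward accepted decisions:

```python
# Hypothetical sketch of an assistive "human plus A.I." pricing loop:
# the model recommends, the human accepts or overrides, and the model
# learns from overrides by adjusting its baseline.
class AssistivePricer:
    def __init__(self, base_price):
        self.price = base_price

    def recommend(self, demand_index):
        # Toy rule: scale the learned base price with expected demand.
        return round(self.price * demand_index, 2)

    def feedback(self, recommended, final, rate=0.3):
        # Move the baseline toward what the human actually chose.
        self.price += rate * (final - recommended)

pricer = AssistivePricer(base_price=120.0)
rec = pricer.recommend(demand_index=1.1)   # model suggests 132.0
final = 125.0                              # manager overrides
pricer.feedback(rec, final)
print(rec, round(pricer.price, 2))         # baseline drifts toward override
```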
Hal Weitzman: OK. Do our other panelists have any thoughts on that?
Murli Buluswar: I’ll jump in if I may. I’m always amused by this question, because I think the problem of the explainable human is a much more interesting one than the problem of explainable A.I. Look, at the end of the day, I do think it’s that magic of human and machine intelligence together that really matters. The A.I. is obviously phenomenal at finding deeper levels of connections, correlations, not causation, and where the human intelligence comes in is in making sense of that. Sometimes it might confirm their hypothesis; sometimes it might actually challenge their hypothesis. But the magic almost invariably happens at that intersection. Another piece of the question, Hal, was around how people get started and how they get scale with A.I. Think of it, at the end of the day, as a tool. The question I would ask is: What are the problems you want to solve, and why do they matter?
So unless and until you, as a senior leader in your firm, can reimagine what decisions are going to be made and how they’re going to be made differently as a consequence of this tool, you shouldn’t be focused on the tool; you should be focused on your strategy first.
Hal Weitzman: I think that’s what we’ve seen in the past, companies collecting data just because they could, not because they knew what they wanted to do with it. Lisa, I have a specific question for you. So let me get that in. Can you please comment on how your company thinks about the user data you’re using and any implicit biases in your machine learning models?
Lisa Sullivan-Cross: Yeah. I think there are instances where we need to go in and adjust or correct for that, so there is human involvement, and we do do that. One of the implicit biases we developed Inclusive Beauty and skin-tone filtering specifically for is that when people would come to Pinterest and search for hairstyles, or short hairstyles, it would be a bunch of white women that showed up on the screen, and we knew that wasn’t reflective of who was using the product. So instead of just relying on the behavior-based machine learning that was supposed to give the right content, and just wasn’t right enough, we had to go in and build a machine-learning feature on top of what we had already thought was solid machine learning, to adjust for that bias.
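A simplified sketch of that kind of bias correction, re-ranking results so each group stays represented near the top rather than letting engagement-trained scores dominate; the group labels here are hypothetical stand-ins for a computer-vision classifier’s output:

```python
# Hypothetical sketch of correcting a skewed ranking: interleave
# results round-robin across predicted groups, preserving the
# original order within each group.
from collections import deque, defaultdict

def balanced_rerank(results):
    """results: list of dicts with 'title' and predicted 'group'."""
    queues = defaultdict(deque)
    for r in results:                 # keep within-group order
        queues[r["group"]].append(r)
    order = list(queues)
    reranked = []
    while any(queues.values()):       # round-robin across groups
        for g in order:
            if queues[g]:
                reranked.append(queues[g].popleft())
    return reranked

results = [{"title": "style A", "group": "light"},
           {"title": "style B", "group": "light"},
           {"title": "style C", "group": "deep"},
           {"title": "style D", "group": "tan"}]
print([r["title"] for r in balanced_rerank(results)])  # A, C, D, B
```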
Hal Weitzman: OK. Sanjog, did you have a thought on implicit bias?
Sanjog Misra: Yeah, and there’s ongoing research on this, right? Our colleagues Sendhil Mullainathan and Marianne Bertrand have worked on this quite a bit. Partly, just being cognizant of the fact that these biases exist, and that we may or may not be aware of them, is a good starting point. As Sendhil likes to say, our best hope for de-biasing algorithms is algorithms themselves. Human beings have biases too, and you have to be a little careful that you’re not injecting the human bias into the algorithm. For example, when it comes to characteristics that we are legally or morally obligated not to include in the model, there’s a tendency to just ignore that data, whereas maybe the algorithm should actually internalize that data and then explicitly not use it for recommendations.
So I think we need to start thinking a little more deeply about where these biases are and where they could manifest themselves, to recognize that we may not be able to see them, and then to allow the algorithm, as Murli put it, to challenge our hypotheses about what we think is the right way to go.
Hal Weitzman: OK. A couple more questions. I’m paraphrasing here: Has your company’s ability to use A.I. to automate your marketing responses or strategies during COVID been hampered by restrictions on information access? Murli, can you address that?
Murli Buluswar: No. The short answer is that COVID has been a precipitating force, because of how fundamental and sudden that change was. We’ve really learned to adapt to it in ways that, frankly, we might not have otherwise, certainly not at the speed that we did. One simple example: at the height of the COVID crisis, particularly in the New York area in March and April of last year, we were grappling with understanding which branches we should open, for what hours, and what cash availability there was in those branches, in order to make sure that our customers in need were supported. And we were able to create the data planning and predictive algorithms to understand where we might have issues and to manage the whole logistics of branch openings and closures and cash availability in much more preemptive ways. Would we have gotten there eventually? Sure. But COVID was, in that sense, a force for some positive change.
Hal Weitzman: Lisa, did that affect you guys at all? Or because you don’t have physical locations was that less of an issue?
Lisa Sullivan-Cross: Yeah, I mean, it impacted our user growth in a very positive way, just organically, because people had more of a need for Pinterest: they have more time, they’re at home with their families, and they need to find things to do, and Pinterest is a good resource for that. Where it impacted us in maybe a negative way is in our advertising. Before COVID and shelter-in-place, we had a brand-marketing campaign planned for 2020 that had a lot of out-of-home elements, subway takeovers and public transit and things like that. We paused that campaign because obviously people weren’t taking public transportation or out in the world, so we had to really rethink it and take a step back, and we also didn’t want the creative to be tone-deaf. So that pushed it out a bit. That’s one way it impacted the marketing organization, anyway. But as far as our business as a whole, Pinterest has been able to really be there to help people through shelter-in-place, to some extent.
Hal Weitzman: OK. Well, I want to finish off by asking you, because we are here to talk about marketing for good: very briefly, in 30 seconds if you can, tell us how you would use A.I. to tackle one social cause that’s important to you. Murli, let me start with you.
Murli Buluswar: Predicting the return on one’s college loans. It’s a big issue right now. We’ve got many, many students in the red, borrowing tons of money to further their college education. We could use A.I. to help those students understand the return on that investment and that loan, so that they have a richer understanding of how much they can actually afford to borrow for their college education without being burdened excessively post-graduation.
Hal Weitzman: Excellent. Of course, the University of Chicago, particularly Chicago Booth, is always a good investment. I have to say that, legally. Sanjog, what’s your one cause?
Sanjog Misra: Mine is a class of causes, but it’ll qualify as one: lowering the bandwidth required, or the cost, of making good decisions for the most disadvantaged populations. The SNAP example is one, but it turns out that people who are most disadvantaged have a very difficult time making decisions that you and I would think of as obvious and easy to make. Using A.I. and data to help them make those decisions, or just to show them the way, would be an awesome way to go.
Hal Weitzman: OK. And Lisa, your social cause that you would use A.I. for?
Lisa Sullivan-Cross: Yeah. So mine is really in line with my involvement in the nonprofit Women’s Audio Mission that you mentioned at the beginning. So I would love to find a way to use A.I. to increase representation of women and underrepresented minorities in fields and at career levels where they’re grossly underrepresented.
Hal Weitzman: Excellent. Well, this has been a fascinating conversation, but unfortunately our time is up. My thanks to our panel, Sanjog Misra, Lisa Sullivan-Cross, and Murli Buluswar. And my thanks to the Kilts Center for Marketing for organizing this session. For more research, analysis, and commentary, visit us online at review.chicagobooth.edu and join us again next time for another Big Question. Goodbye.