Chicago Booth Review Podcast: What Are We Learning When We’re on Social Media?
- October 25, 2023
- CBR Podcast
Many of us have observed how social media shapes or warps our behavior. Some people are more willing to get into disagreements online, or show off more than they do in person. The rest of us see these behaviors and may assume that that’s just how other people behave, even though in reality algorithms may be directing us to the most controversial or explosive content. In this episode, Jeff Cockrell, editor of the Chicago Booth Review website, meets Chicago Booth’s Joshua Conrad Jackson to discuss Jackson’s research on social media’s interaction with “social learning.”
Hal Weitzman: Many of us have observed how social media shapes, maybe even warps, our behavior. Some people are more willing to get into disagreements or display righteous indignation online. Some use their feeds to show off more than they do in person. The rest of us see these behaviors and can assume that that’s just how other people behave, even though in reality algorithms may be directing us to the most controversial or explosive content, in order to spark a strong reaction.
Welcome to the Chicago Booth Review Podcast, where we bring you groundbreaking academic research in a clear and straightforward way. I’m Hal Weitzman, and in this episode, we’re going to take a deep dive into one single research paper with one Chicago Booth researcher.
For digital natives, social media is not only a place to behave differently; it can also shape how they behave in real life and how they learn about themselves and others. Chicago Booth’s Joshua Jackson is a behavioral scientist who has conducted research in this area, and Jeff Cockrell, editor of the Chicago Booth Review website, sat down with him to learn more.
Jeff Cockrell: For years now, we’ve been going online to do all kinds of things: to shop, to find love, to get the news, to be entertained. But we’re also doing something else online: learning. Not just learning from apps with an expressly educational objective, but learning via the fundamental act of observing other people, what behavioral scientists call social learning. But human strategies for social learning developed over our long history of largely face-to-face interaction. How are those strategies serving us now that we’re socially learning via social media platforms mediated by content algorithms? Chicago Booth’s Joshua Jackson and his co-authors have studied that question and find that our preferences for social learning don’t align very well with the goals of those algorithms.
Joshua Conrad Jackson: And so what we were concerned about is, could this transition to online social learning be changing the way that humans socially learn: the kinds of things we learn, and the way that we learn them. And in particular, what is the impact of the algorithms that manage social media websites on social learning processes?
Jeff Cockrell: Josh explains that human social learning has evolved to prioritize what he and his co-authors call PRIME information.
Joshua Conrad Jackson: Something that we’ve known for a long time in a lot of the social sciences is that we don’t socially learn indiscriminately. We don’t just copy whoever we’ve seen most recently. What we do is we are attuned to certain kinds of information that are worth learning or worth attending to. And so PRIME is an acronym that we came up with to describe the kind of information that makes the largest impression on our brains. It stands for prestigious, that’s the PR, ingroup, moralized, and emotional information. So you can break this down into contexts and content. We learn from people who are prestigious; that’s a context bias, a biased form of social learning. We’re more likely to learn from people who are prestigious than those who aren’t. And we’re more likely to learn from people who are in our ingroup than those who aren’t. And then the content bias in social learning is toward moralized and emotional information.
We’re a lot more likely to attend to, and potentially learn from, information that has a moralized tone to it than information that’s more neutral, and the same goes for information that’s highly emotional versus more neutral. But one of the important things that people who’ve studied these biases in the past have pointed out is that these social learning biases are usually functional. It’s usually in our interest to learn from people who are prestigious because they’re probably better at whatever they’re doing than someone who hasn’t accumulated prestige. It’s usually in our interest to learn from people who are part of our group because their lives and experiences are more similar to ours, and so they’re more relevant to us. If we pay attention to them, we might also be able to cooperate with them better. And for a different reason, we’re more likely to attend to things that are moralized or emotional, because this is potentially threatening information.
They can give us insight into who might be disrupting a group or who might be more likely to behave selfishly or violently toward us. That’s what moralized information can tell us. And emotional information can give us insight into what a potential threat might be in our environment, because when people express things that are threatening or life-threatening, they tend to use emotional language to express it. And so these biases are typically functional. The way we talk about it is they help facilitate problem solving and cooperation. Attending to people who are prestigious and people who are part of our group helps facilitate both of those things: it helps us solve problems faster, because we can copy people who are doing really well, and it helps us cooperate, because we’re working with our ingroups. And moralized and emotional information is good for facilitating cooperation because it helps us stay away from people who might be threatening or selfish.
Jeff Cockrell: However, in an online context, and particularly when algorithms are deciding what we should see, our preference for PRIME information becomes less helpful.
Joshua Conrad Jackson: Algorithms, what they’re doing is they’re just trying to keep you on a platform. An algorithm doesn’t really care what kind of information you’re reading about as long as it keeps you engaged. That’s why, if we’re interested in underwater basket weaving, we’re going to get videos about underwater basket weaving when we go on Instagram. It so happens that people are especially interested in information that has those PRIME properties, and it’s because we’ve evolved to be interested in that kind of information, because it benefited our ancestors. And so without even trying to promote information that’s moralized, or trying to promote information that’s prestigious, even if an algorithm just wants to show you something that people have looked at or clicked on, it’s going to end up amplifying that PRIME information.
And so what happens is if you take a newsfeed that’s just sorted based on the most recent posts, and you compare that to a newsfeed that’s sorted based on what people are clicking on, that second one, the one sorted based on what people are clicking on, is going to have a lot more PRIME content on it. It’s going to have a lot more information that’s prestigious: a lot more content from someone who has, or had, a blue check mark, someone who has a lot of followers. It’s going to have more emotional information and moralized information.
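To make that comparison concrete, here is a minimal, hypothetical sketch in Python. The 20 percent PRIME base rate, the click counts, and the feed size are illustrative assumptions, not figures from the paper; the sketch only shows how sorting a feed by engagement rather than recency can fill its top slots with PRIME content.

```python
import random

random.seed(0)

# Simulate a pool of posts. PRIME posts are a minority of what gets produced,
# but they attract more clicks on average (assumed numbers for illustration).
posts = []
for i in range(1000):
    is_prime = random.random() < 0.2
    mean_clicks = 50 if is_prime else 10
    posts.append({
        "id": i,
        "timestamp": i,
        "is_prime": is_prime,
        "clicks": random.expovariate(1 / mean_clicks),
    })

def prime_share(feed, top_n=50):
    """Share of PRIME posts among the first top_n items of a feed."""
    return sum(p["is_prime"] for p in feed[:top_n]) / top_n

# Two ways of building the same feed from the same pool of posts.
chronological = sorted(posts, key=lambda p: p["timestamp"], reverse=True)
by_engagement = sorted(posts, key=lambda p: p["clicks"], reverse=True)

print("PRIME share, chronological feed:    ", prime_share(chronological))
print("PRIME share, engagement-sorted feed:", prime_share(by_engagement))
```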
Jeff Cockrell: That bias is compounded by the fact that we, the people posting content on social media, have learned from the algorithms what other people want to see. We’re communicating PRIME information online at a higher rate than we are offline.
Joshua Conrad Jackson: One point that we make in this paper, and I think maybe the fundamental point of this paper, is that this is all a feedback loop. Algorithms are seeing what we’re attending to, and then they’re giving us more of that information. And then we’re looking at that, we’re attending to PRIME information even within the surplus of PRIME information we get. And then the cycle continues. Now, what becomes really important here is that PRIME information becomes distorted in our social environments. It becomes overrepresented; that’s the word we use in the paper. And you might think, great, that’s the information that we’re interested in, so why shouldn’t we just give people as much of it as we can? But that’s like saying, because we really like fast food, we should give people as much fast food as possible. Having a lot of something isn’t necessarily good just because we’ve developed a taste for it.
In the case of the content biases, it’s actually potentially doing us harm. The reason that we’re so attentive to moralized and emotional information is that it’s quite rare in everyday life. It’s rare that you yell at somebody. It’s rare that we call someone a scoundrel or we call someone a liar. And so because it’s so rare, it’s important to pick up on it when it happens. And that’s potentially why we’re biased to attend to it: because this might be one of those rare people who we just don’t want anything to do with.
But what happens is when you start blowing that up, when you start showing people newsfeeds that are full of this kind of information, then people start to think it’s common. The analogy I like to use is imagine if an algorithm designed a society by looking at blockbuster movies. People like seeing movies about wars and murders, but if the algorithm didn’t realize what it was doing, it would just design this terrible society where people were killing each other and groups were always at war. And what we suggest in the paper is that something like that might be playing out, on a different scale of course, on social media right now.
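The feedback loop Jackson describes can be sketched in a few lines. In this hypothetical simulation, every rate and the learning-rate knob are made-up assumptions rather than numbers from the paper: users click PRIME posts more often, the algorithm shifts the next round’s feed toward whatever was clicked, and the share of PRIME content shown drifts far above its fixed real-world base rate.

```python
BASE_RATE = 0.10      # assumed true share of PRIME content produced each round
CLICK_PRIME = 0.60    # assumed chance a user clicks a PRIME post
CLICK_OTHER = 0.20    # assumed chance a user clicks a non-PRIME post
LEARNING_RATE = 0.5   # how strongly the algorithm follows last round's clicks

shown_share = BASE_RATE  # share of PRIME posts the algorithm shows at first
for round_num in range(1, 11):
    # Expected share of this round's clicks that land on PRIME posts.
    prime_clicks = shown_share * CLICK_PRIME
    other_clicks = (1 - shown_share) * CLICK_OTHER
    clicked_share = prime_clicks / (prime_clicks + other_clicks)
    # The algorithm nudges next round's feed toward whatever got clicked.
    shown_share += LEARNING_RATE * (clicked_share - shown_share)
    print(f"round {round_num:2d}: share of feed that is PRIME = {shown_share:.2f}")
```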
Jeff Cockrell: So our online social learning is skewing our perception of how common moralized, emotional communication, things like outrage, hostility, and disgust, really is.
Joshua Conrad Jackson: We overperceive conflict between groups in society. We overperceive emotionality in everyday language. We also overperceive the amount of morality that we use in everyday language. There’s actually a phenomenon in political science called false polarization, where people are increasingly overperceiving the extent to which partisans disagree with each other and dislike each other. Now, what’s really important is that we think social media is actually fueling this trend in these misperceptions, because when we perceive everyone as really hostile and emotional online, we don’t just think, wow, people online are really hostile and emotional. We think people are really hostile and emotional. Our failure to, first of all, recognize that an algorithm is feeding us this information, and, second of all, understand that this is relatively restricted to social media, those two failures of calibration, we think, are producing a lot of these misperceptions that people have today.
Jeff Cockrell: Jackson says that this inflated sense of conflict and polarization can encourage certain antisocial behaviors, such as sharing misinformation or even engaging in violent conflict. There’s also a chance, he says, that as people become more familiar with, and perhaps more cynical about, communication that’s mediated by algorithms, they’ll simply stop trying to figure out how well social platforms represent reality.
Joshua Conrad Jackson: So you probably already have heard people in your social circle say, “Well, I don’t know if I can trust that.” Isn’t that a terrible thing that the very idea of truth is becoming so elusive because of deep fakes and misinformation and lies that we’re fed on social media that we don’t even trust information when it seems to come from a credible source? And so what we think is that it’s equally likely that people catch up by just disengaging from trying to understand what’s true or false altogether. There’s no guarantee that we’re going to catch up by doing our homework more and figuring out what the actual base rate of political outrage is or the actual base rate of political violence is rather than just taking for granted what we’re fed on social media.
Jeff Cockrell: Jackson and his co-authors suggest some ways that we could tweak content algorithms to mitigate the misalignment with our social learning biases.
Joshua Conrad Jackson: So there have been some calls to try to change algorithms, either by eliminating them altogether or by trying to amplify the diversity of content. But we don’t really see a lot of realistic promise in those goals, because they aren’t in the interest of the companies that have designed these algorithms. What we suggest is that, first of all, users should have some sort of say over what their algorithm feeds them. In this sense, you might get information that’s customized to the user and that keeps them on the platform, while minimizing some of the exhausting negative content that people are trying to escape from. How exactly do you do that? We suggest that one thing users might select, if they have the option, is an algorithm that selectively penalizes PRIME information while keeping the other things that you like.
So for example, if you’re a big chess player, you’re still going to get chess videos promoted on your social media feed, but that other stuff, the stuff you don’t enjoy as much but will click on because it’s just attention grabbing, isn’t going to be promoted. So maybe fewer jabs at political rivals and more videos of chess. And we think that selectively penalizing PRIME information is a way of keeping people engaged on a platform without having them dislike the platform or develop these false perceptions of American society.
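Here is a minimal sketch of that kind of re-ranking, under made-up scores and weights. The PRIME_PENALTY knob, the example posts, and the scoring formula are all hypothetical, not the researchers’ implementation; the idea is simply to downweight PRIME signals while still rewarding matches to the user’s stated interests.

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    topic_relevance: float   # how well the post matches the user's interests (0-1)
    engagement: float        # predicted click/watch probability (0-1)
    prime_score: float       # how prestigious/ingroup/moralized/emotional it is (0-1)

# Hypothetical tuning knob: how strongly to suppress PRIME content.
PRIME_PENALTY = 0.8

def score(post: Post) -> float:
    # Reward relevance and engagement, subtract a penalty for PRIME signals.
    return post.topic_relevance + 0.5 * post.engagement - PRIME_PENALTY * post.prime_score

feed = [
    Post("Grandmaster endgame analysis", topic_relevance=0.9, engagement=0.4, prime_score=0.1),
    Post("Politician X humiliates rival", topic_relevance=0.1, engagement=0.8, prime_score=0.9),
    Post("Celebrity's outrageous take on chess cheating", topic_relevance=0.5, engagement=0.7, prime_score=0.8),
    Post("Beginner opening traps explained", topic_relevance=0.8, engagement=0.3, prime_score=0.1),
]

# Chess content rises to the top; high-engagement outrage content sinks.
for post in sorted(feed, key=score, reverse=True):
    print(f"{score(post):+.2f}  {post.title}")
```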
Jeff Cockrell: The researchers are experimenting with different social media algorithms to test that theory.
Joshua Conrad Jackson: I can’t give too much away, but in our current studies, what we’re doing is we’re actually designing social media interfaces that have different properties. And so we’re actually able to test out whether manipulating the algorithm and the way we’re recommending content actually makes for a better experience for users, whether it mitigates misperceptions that they might have, and specifically whether tweaking these PRIME dimensions and downplaying them on social media is more effective than downplaying other kinds of information. And so a lot of the recommendations we’re making, we’re now testing in a social media platform that looks very real and behaves just like any other social media platform, but is a fully controlled laboratory.
Hal Weitzman: That’s it for our interview with Joshua Jackson. Thanks to my colleague Jeff Cockrell for asking the questions. You can find out a lot more about the internet and behavioral science on Chicago Booth Review’s website at chicagobooth.edu/review. When you’re there, sign up for our weekly newsletter so you never miss the latest in business-focused academic research. This episode was produced by Josh Stunkel. If you enjoyed it, please subscribe, and please do leave us a five-star review. Until next time, I’m Hal Weitzman. Thanks for listening to the Chicago Booth Review Podcast.