Managers often establish thresholds to indicate acceptable levels of employee performance, both for behaviors they want to encourage (sales) and discourage (absenteeism). Those thresholds, says Chicago Booth’s Ed O’Brien, are essentially a prediction of attainable employee performance—but as with many predictions, they can prove inaccurate or unreasonable when real life mixes in complicating factors. O’Brien explains the results of experiments he conducted to probe how people respond to this reality, and what they mean for how managers should set thresholds in the first place.
(gentle music)
A social-judgment threshold is the kind of expectation we have about other people’s behaviors. So for example, a manager might set a certain threshold for the number of sales an employee has to hit before they get a bonus. A parent might set a certain number of strikes before they punish a child. We see these kinds of thresholds being set in all aspects of everyday life. And in many cases, they’re actually very formally set. For example, organizations set policies specifying the marks that employees have to hit before they get rewarded or punished.
A lot of our earlier research looked at people’s predictions versus their experiences across lots of different kinds of things. And one thing we found consistently is that experiences tend to be more complicated, more complex, busier than predictions. In our predictions, we kind of simplify things.
So for example, when we plan out when we’re gonna complete a project, in our minds it seems kind of easy. Then we start to do the project, and we run into all sorts of problems. So experience tends to be more complicated than prediction.
Here we were thinking about what’s a domain where that could be really socially important? And we thought, well, one domain that’s socially important is this idea of thresholds. When people set thresholds, like a manager trying to figure out those numbers that employees have to meet, they often set those beforehand. So that’s kind of a moment of prediction. And then reality unfolds. The employee works to hit those numbers. That’s a form of experience. And we thought that could be a really important domain where managers, for example, might be simplifying that number without realizing that once employees go to chase or avoid that number, reality gets complicated. Now they’re making judgment calls. Things are more complicated than they realized beforehand.
In a typical experiment, we describe the scenario at hand to participants in prediction mode. So for example, we recruit MBA students as managers, thinking about setting policies for their employees: think about setting a policy for showing up late or early to work. How many times is it acceptable to show up late before you’re going to jump in and punish your employee? And we describe to participants all the different ways in which an employee could show up late. For example, they show up late by just a couple of minutes, or they show up very late, or show up late with attitude, or show up late but apologetic. We describe all of these ways to participants. And in that prediction mode, we ask, based on this, how many times can your employee show up late before they get punished?
We then take that number and put those same participants in experience mode. They track an employee showing up late or early over time, and we measure when they actually jump in to punish. Then we compare their predicted number to their actual number. And basically what we find is that they jump in to punish quicker or slower than their preset number, depending on the specific behavior that unfolds. For example, take a really egregious violation: somebody shows up really late with attitude, or whatever the case may be. Even if they only do that once, managers jump in to punish, even though they said their policy was three. On the positive end, if an employee shows up late but has a great excuse and is really apologetic, a manager might hold off. So even though the employee hit the threshold, say the policy was to punish after two instances and they did it twice, the manager waits until three or four. Managers are basically violating their preset rules depending on what actually unfolds.
One way I’ve been thinking about this is that the rules we set to control our organizations, to control our social worlds, to basically navigate everyday life, are constantly violated. And we break them for lots of good reasons, sometimes for bad reasons, but we’re breaking these rules all the time. We’re adapting to the situations we actually find ourselves in. In some sense, that reflects being a rational decision maker: people are responding to new information as it unfolds. They’re not stubbornly sticking to the policies they set before they had that information. So you might think of that as an adaptive advantage.
The problem our research highlights is that in a social context, this can create interpersonal problems. So even if you’re being a rational decision maker, updating your standards based on what unfolds, other people around you, for example, other employees, might be wondering, “Why is this person changing what they said? And why are they treating this employee one way and that employee another?” So what we find in our research is that the decision maker might think this is beneficial because they’re just following the evidence. But the people they’re treating, and the others around them, might feel differently. And that’s the source of social conflict we identify in our research.
So one thing we think a manager, for example, can do a little more wisely is to not set such concrete marks—like it’s this number of sales for a bonus and that’s it—but to think about more of a net. Realize that certain situations will arise where you might have to adjust, and build that into your policy, so that when reality unfolds, everybody on board realizes that this threshold has a little more flexibility in it than was set beforehand.