Imagine a bustling retailer’s contact center on Black Friday, the kickoff to the holiday shopping season. With a limited number of customer-service agents and a deluge of incoming requests, the center—which manages phone calls, emails, live chats, social media, and text messages—has a management challenge. Some people might have transaction issues that could take some time to resolve, while others might have a quick question about an order and therefore be more inclined to abandon if made to wait.

This is a classic scheduling problem in operations management: Which task should be prioritized when resources are limited? Adding to the challenge is that contact centers don’t know all the details of incoming requests, and thus operate with incomplete information. London Business School’s Yueyang Zhong (a recent graduate of Chicago Booth’s PhD program) and Booth’s John R. Birge and Amy R. Ward propose a practical, two-phase approach to this problem designed to balance the twin needs of learning about incoming requests and scheduling effectively.

To illustrate the approach, the researchers say to consider call centers, which are contact centers that primarily answer phone calls. Many such centers seek to put queries into categories. For instance, they might have incoming customers select an option for why they’re calling, such as “transaction issue” or “order inquiry.” Even then, much remains unknown. Which callers have quick and simple questions? Which are more likely to hang up if kept waiting, potentially costing the company their business? The callers themselves often don’t know these details, much less the managers.

Even if a business knew the time each call would take, and the patience level and revenue potential of each customer, solving this scheduling problem wouldn’t be straightforward. Continually arriving calls create infinite possibilities.

The researchers’ proposed solution, which they dub learn-then-schedule (LTS), starts with a learning phase, during which managers gather data about three factors: the complexity of incoming queries, the patience level of callers, and the potential cost of losing each customer.

During this period, the center selects calls from various categories at random—and, the researchers acknowledge, makes inevitable mistakes.

“At the start, managers don’t know if a selected caller is patient or impatient, or if the issue will take one minute to resolve or half an hour,” Zhong explains. Over time, by observing actual call durations, customer drop-off rates, and other patterns, the system learns these unknown characteristics.
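To make that concrete, here is a minimal sketch in Python of how a center might turn learning-phase observations into per-category estimates of average handling time and abandonment rate. The record layout, category names, and estimators are hypothetical stand-ins for illustration, not the researchers' implementation.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class CallRecord:
    category: str        # e.g., "order inquiry" or "transaction issue"
    handle_time: float   # minutes of agent time, if the call was served
    wait_time: float     # minutes the caller spent waiting
    abandoned: bool      # True if the caller hung up before being served

def estimate_parameters(records):
    """Aggregate learning-phase observations into per-category estimates."""
    by_category = {}
    for r in records:
        by_category.setdefault(r.category, []).append(r)

    estimates = {}
    for category, calls in by_category.items():
        served = [c for c in calls if not c.abandoned]
        # Average handling time among calls that were actually answered.
        avg_handle_time = mean(c.handle_time for c in served) if served else float("inf")
        # Crude impatience proxy: abandonments per minute of total waiting.
        total_wait = sum(c.wait_time for c in calls)
        abandon_rate = sum(c.abandoned for c in calls) / total_wait if total_wait > 0 else 0.0
        estimates[category] = {
            "avg_handle_time": avg_handle_time,
            "abandon_rate": abandon_rate,
            "sample_size": len(calls),
        }
    return estimates
```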

These data are then used to assign an “importance score” to each call category, reflecting its priority level. For example, if order questions are generally easier and quicker to resolve than transaction issues, they might receive a higher importance score.
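The article doesn't give the scoring formula, so the sketch below borrows one plausible stand-in from the queueing literature: a cost times service-rate divided by abandonment-rate index (a cμ/θ-style score), under which categories that are costly to lose and quick to serve rank higher, all else equal. Treat the exact formula, and the `lost_customer_cost` inputs, as illustrative assumptions rather than the paper's definition. It builds on the hypothetical `estimate_parameters` output above.

```python
def importance_scores(estimates, lost_customer_cost):
    """Turn per-category estimates into a single priority score per category.

    Illustrative scoring rule (an assumption, not quoted from the paper):
    score = c * mu / theta, where c is the cost of losing a caller in that
    category, mu its service rate, and theta its abandonment rate.
    Higher scores are served first.
    """
    scores = {}
    for category, est in estimates.items():
        mu = 1.0 / est["avg_handle_time"]           # service rate: calls per minute
        theta = max(est["abandon_rate"], 1e-6)      # guard against dividing by zero
        c = lost_customer_cost.get(category, 1.0)   # revenue at risk per lost caller
        scores[category] = c * mu / theta
    return scores

# Hypothetical usage:
# scores = importance_scores(estimates, {"order inquiry": 20.0, "transaction issue": 80.0})
```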

In the scheduling phase, these importance scores guide decisions, with agents answering calls from higher-priority categories first. This score-based policy is intuitive, easy to implement, and effective, says Zhong.

The ‘learn-then-schedule’ algorithm

To improve contact-center service, the researchers developed an algorithm that learns from customer data, applies those insights to determine which callers to prioritize, and continually refines its strategy using feedback from the scheduling phase. Below is a simplified illustration of the algorithm.
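Here is a stylized Python sketch of that two-phase logic: a dispatcher that samples categories at random during the learning phase, then serves whichever waiting category has the highest importance score. It reuses the hypothetical `estimate_parameters` and `importance_scores` helpers sketched above, and the fixed learning-phase length, single switchover, and queue mechanics are simplifying assumptions rather than details from the paper.

```python
import random

class LearnThenScheduleDispatcher:
    """Stylized two-phase dispatcher: explore at random, then serve by score."""

    def __init__(self, categories, lost_customer_cost, learning_calls=500):
        self.categories = list(categories)
        self.lost_customer_cost = lost_customer_cost
        self.learning_calls = learning_calls   # how many calls to observe before switching
        self.records = []                      # CallRecord observations collected so far
        self.scores = None                     # importance scores, set after learning ends

    def record(self, call_record):
        """Feed back an observed outcome (handling time, wait, abandonment)."""
        self.records.append(call_record)
        if self.scores is None and len(self.records) >= self.learning_calls:
            # Learning phase over: freeze the estimates into importance scores.
            self.scores = importance_scores(
                estimate_parameters(self.records), self.lost_customer_cost
            )

    def next_category(self, queue_lengths):
        """Pick which category's queue the next free agent should serve."""
        waiting = [c for c in self.categories if queue_lengths.get(c, 0) > 0]
        if not waiting:
            return None
        if self.scores is None:
            # Learning phase: sample categories at random to gather data.
            return random.choice(waiting)
        # Scheduling phase: serve the highest-score category with callers waiting.
        return max(waiting, key=lambda c: self.scores.get(c, 0.0))
```

In a real deployment, `record` would fire each time a call ends or a caller hangs up, and `next_category` each time an agent becomes free; the learning-phase length would be tuned to how stable call patterns are.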

The researchers tested LTS in a simulated service system in which demand is high. They find that it performs nearly as well as theoretically optimal policies, which are unattainable in practice both because of incomplete information and because of the computational burden they would impose. Crucially, says Zhong, LTS improves over time as it gathers more data and refines its estimates, reducing errors in scheduling decisions.

She explains that the concept of LTS resonates with businesses balancing the “explore-exploit trade-off” in machine learning. This trade-off involves choosing between exploring new options to gather information and exploiting existing knowledge to maximize efficiency. Companies (including contact centers) already using ML for data collection and decision-making can readily incorporate the LTS approach, the researchers suggest. The amount of time that a company should spend learning depends on the stability of customer demand. For instance, if call patterns are consistent throughout the day, a short learning phase might suffice. However, in more dynamic environments where those patterns change, a center might add a second learning phase.

The researchers suggest that implementing LTS could lead to fewer customers abandoning in frustration before having their requests addressed, higher revenues, and better customer satisfaction. This has implications for retail, where contact centers are omnipresent, but also for a variety of other industries, given how many businesses rely on contact centers for customer communication. In healthcare, for example, it could shorten wait times for telehealth services or prescription refills.

The key, says Zhong, is that LTS thrives in settings where it’s feasible to experiment during a learning phase, and it can be a game-changing tool in environments where the stakes of making some early mistakes aren’t too high. “By blending simplicity and effectiveness,” she says, “LTS offers a practical way for businesses to navigate uncertainty, improve customer satisfaction, and unlock new levels of efficiency.”
