The data-analysis revolution that turned words into analyzable data continues to progress. Now models are turning images, audio, and video files into data as well. Large language models can capture meaning from text in a way that wasn’t possible before, and these other media remain mostly untapped territory.

Images

Price charts—the actual charts, not the data underlying them—have been used to predict stock returns.

By applying a deep-learning algorithm called a convolutional neural network to images of historical stock charts, University of Chicago PhD student Jingwen Jiang, Yale’s Bryan T. Kelly, and Chicago Booth’s Dacheng Xiu extracted predictive patterns and converted them into trading signals. These image-based signals predicted returns more accurately than common trend signals used in technical analysis, they find.
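To make the idea concrete, here is a minimal sketch (not the authors’ model) of how a convolutional filter can respond to a visual pattern in a chart image: a toy price series is rasterized into a binary grid, and a single hand-made 3x3 filter fires on upward-sloping diagonals, the kind of local feature a trained CNN would learn on its own.

```python
# Illustrative sketch, not the researchers' architecture: rasterize a price
# series into a binary image, then scan it with one hand-made convolutional
# filter that responds to upward (bottom-left to top-right) diagonals.

def series_to_image(prices, height):
    """Rasterize a price series into a height x len(prices) binary grid."""
    lo, hi = min(prices), max(prices)
    img = [[0] * len(prices) for _ in range(height)]
    for col, p in enumerate(prices):
        row = round((p - lo) / (hi - lo) * (height - 1))
        img[height - 1 - row][col] = 1   # row 0 is the top of the image
    return img

def convolve2d(img, kernel):
    """Valid-mode 2-D convolution (cross-correlation, as in CNN layers)."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(img) - kh + 1):
        out.append([sum(img[i + a][j + b] * kernel[a][b]
                        for a in range(kh) for b in range(kw))
                    for j in range(len(img[0]) - kw + 1)])
    return out

# A 3x3 filter that fires on an upward-sloping diagonal of pixels.
UP_TREND = [[-1, -1,  2],
            [-1,  2, -1],
            [ 2, -1, -1]]

uptrend = series_to_image([1, 2, 3, 4, 5, 6, 7, 8], height=8)
flat    = series_to_image([4, 4.01, 4, 4.01, 4, 4.01, 4, 4.01], height=8)

def score(img):
    """Strongest filter response anywhere in the image."""
    return max(max(row) for row in convolve2d(img, UP_TREND))

print(score(uptrend), score(flat))  # the uptrend image scores higher
```

A real CNN stacks many such filters and learns their weights from data rather than having them specified by hand; the researchers’ model discovers its own predictive chart patterns this way.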

Meanwhile, other research finds that financial analysts’ facial characteristics are associated with their forecast outcomes. Using artificial intelligence and machine-learning models, Baruch College’s Lin Peng, University of California at Los Angeles’ Siew Hong Teoh, Chinese University of Hong Kong’s Yakun Wang, and Cornell PhD student Jiawen Yan scored the LinkedIn photos of approximately 800 sell-side stock analysts for characteristics such as trustworthiness, attractiveness, and dominance, then examined how those scores relate to the accuracy of the analysts’ earnings forecasts over the past three decades.

Their study finds that analysts who scored high on trustworthiness produced more accurate forecasts. The researchers surmise this is because people are more comfortable sharing information with individuals they trust, so trustworthy-looking analysts likely gathered more information, which improved their forecasts.

However, their findings also reveal a striking gender disparity. Male analysts with high dominance scores made more accurate forecasts than those with lower scores. Conversely, female analysts perceived as dominant had less accurate forecasts than those with lower dominance scores. Meanwhile, a higher dominance score significantly increased the likelihood that a male analyst would be voted an All-Star, a marker of professional prestige, yet it substantially decreased the chances for a woman, despite female analysts having, on average, more accurate forecasts than their male counterparts.

The researchers interpret the findings as potential evidence of gender discrimination in this labor market, arguing that the perception of increased masculinity contradicts the female stereotype, making female analysts less likable.

Audio files

Managerial vocal delivery quality is associated with real-time market reactions during earnings calls, research suggests. When investors strain to understand what is said on a call because of mumbling, mispronunciation, or just plain lazy diction, the market reaction tends to be more subdued, find Seoul National University’s Bok Baik, Booth PhD student Alex G. Kim, MIT PhD student David Sunghyo Kim, and Artificial Society’s Sangwon Yoon. The researchers used a deep-learning (DL) algorithm to convert audio files from the earnings calls into letters, which were then combined into words and ultimately text. (For more, read “On earnings calls, do executives mumble on purpose?”)
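The letters-to-text step can be sketched with a common decoding scheme from speech recognition (an assumption here; the study’s exact model and decoder are not specified in this article): a network emits one character guess per audio frame, and a CTC-style greedy decoder collapses repeated characters and drops a special “blank” symbol to assemble words.

```python
# Hedged sketch of assembling frame-level letters into text, using CTC-style
# greedy decoding. This illustrates the general technique, not necessarily
# the exact pipeline the researchers used.

BLANK = "_"  # CTC blank: "no new character emitted in this frame"

def greedy_ctc_decode(frame_chars):
    """Collapse consecutive duplicate characters, then remove blanks."""
    out = []
    prev = None
    for ch in frame_chars:
        if ch != prev and ch != BLANK:
            out.append(ch)
        prev = ch
    return "".join(out)

# Clear enunciation: the per-frame characters decode cleanly.
frames_clear = list("rr_eevv__eenn__uu_ee")
print(greedy_ctc_decode(frames_clear))    # -> "revenue"

# Mumbled delivery: frames the model misheard produce garbled output.
frames_mumbled = list("rr_eev_n_ue")
print(greedy_ctc_decode(frames_mumbled))  # -> "revnue"
```

The mumbled example hints at why delivery quality matters: when the audio is unclear, the recovered text degrades, and so does anything downstream that depends on it.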

In another study, Ruhr University Bochum researchers Jonas Ewertz, Charlotte Knickrehm, Martin Nienhaus, and Doron Reichmann used vocal cues to predict a company’s future earnings. They first visualized the vocal cues of managers on earnings calls with a mel spectrogram, which converts frequencies to the mel scale, a scale that represents pitch the way humans typically perceive it. They fed those images into a DL algorithm, and its predictions of changes in future earnings significantly outperformed models that used numerical and text data.
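The mel scale’s perceptual compression is easy to see in code. A standard conversion formula (the HTK variant; the paper may use a different one) maps frequency in hertz to mels, and equal steps in hertz become progressively smaller steps in perceived pitch:

```python
import math

# Standard HTK-style hertz-to-mel conversion (one common variant; an
# assumption here, since the article does not specify the formula used).
def hz_to_mel(f_hz):
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

# The same 1 kHz gap covers far fewer mels at high frequencies,
# mirroring how human hearing compresses high pitches.
print(hz_to_mel(1000) - hz_to_mel(0))     # roughly 1000 mels
print(hz_to_mel(9000) - hz_to_mel(8000))  # far fewer mels
```

A mel spectrogram applies this warping to a spectrogram’s frequency axis, so the resulting image emphasizes the frequency bands where human listeners discriminate pitch best.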

Similarly, UC Berkeley’s Yuriy Gorodnichenko, University of York’s Tho Pham, and University of Birmingham’s Oleksandr Talavera analyzed the influence of vocal emotion on financial variables such as share price, volatility indices, interest-rate risk, inflation expectations, and exchange rates. Using a DL model to detect vocal emotions in the press conferences after Federal Open Market Committee meetings, the researchers find that a significantly positive tone led to higher share prices. In fact, they write, “switching the tone of the press conference from negative (-1) to positive (+1) could raise S&P 500 returns by approximately 200 basis points.”
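A back-of-envelope reading of that quoted estimate (an illustration only, not the authors’ regression): if moving tone from -1 to +1 is worth roughly 200 basis points, the implied sensitivity is about 100 bps per unit of tone. A simple tone score of the kind such studies construct averages per-sentence emotion labels mapped to {-1, 0, +1}:

```python
# Illustrative tone scoring, not the researchers' actual model: average
# per-sentence vocal-emotion labels into a tone score in [-1, +1], then
# apply the sensitivity implied by the quoted ~200 bps estimate.

EMOTION_SCORE = {"positive": 1, "neutral": 0, "negative": -1}

def press_conference_tone(sentence_emotions):
    """Average emotion over all sentences; ranges from -1 to +1."""
    return sum(EMOTION_SCORE[e] for e in sentence_emotions) / len(sentence_emotions)

# Implied by the quote: a tone swing of 2 units (-1 to +1) ~ 200 bps.
BPS_PER_TONE_UNIT = 200 / 2

tone = press_conference_tone(["positive", "positive", "neutral", "negative"])
print(tone, tone * BPS_PER_TONE_UNIT)  # 0.25 -> ~25 bps implied
```

The linear mapping is, of course, a simplification; the point is only to show the magnitude the quoted estimate implies.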

Videos

With the advantage of both images and audio, videos may reveal information that can’t be uncovered by either medium on its own, research suggests. University of Washington’s Elizabeth Blankespoor, Hong Kong Polytechnic University’s Mingming Ji, University of Hong Kong’s Jeffrey Ng, and PolyU’s Jingran Zhao created a sample of about 500 CEO earnings-announcement-related interviews broadcast on CNBC from 2013 to 2017. They find that when CEOs’ facial expressions were incongruent with their earnings news, the dispersion across analysts’ forecasts increased.
