August 22, 2024
Conference Paper
Title
The Future is Different: Predicting Reddit's Popularity with Variational Dynamic Language Models
Abstract
Large pre-trained language models (LPLM) have shown spectacular success when fine-tuned on downstream supervised tasks. It is known, however, that their performance can drop drastically when there is a distribution shift between the data used during training and that used at inference time. In this paper we focus on data distributions that naturally change over time and introduce four Reddit datasets, namely the Wallstreetbets, AskScience, The Donald, and Politics sub-reddits. First, we empirically demonstrate that LPLM can display average performance drops of about 79% in the best cases when predicting the popularity of future posts. We then introduce a methodology that leverages neural variational dynamic topic models and attention mechanisms to infer temporal language model representations for regression tasks. Our models display performance drops of only about 33% in the best cases when predicting the popularity of future posts, while using only about 7% of the total number of parameters of LPLM and providing interpretable representations that offer insight into real-world events, like the GameStop short squeeze of 2021. Source code to reproduce our experiments is available online.
Author(s)