Impact of model settings on the text-based Rao diversity index
Policymakers and funding agencies increasingly support scientific work that spans disciplines and therefore rely on indicators of interdisciplinarity. Recently, text-based quantitative methods have been proposed for computing interdisciplinarity that promise several advantages over the bibliometric approach. In this paper, we provide a systematic analysis of the computation of the text-based Rao index based on probabilistic topic models, comparing a classical LDA model with a neural topic model. We systematically analyze the model parameters that affect the diversity scores and make the interaction between the different components of the index explicit. We present an empirical study on a real data set, quantifying the diversity of research within several departments of the Fraunhofer and Max Planck Societies based on scientific abstracts published in Scopus between 2008 and 2018. Our experiments show that parameter variations, i.e. the choice of the number of topics, the hyper-parameters, and the size and balance of the data used to train the model, have a strong effect on topic model-based Rao metrics. In particular, we observe that the quality of the topic models affects the downstream task of computing the Rao index: topic models that yield semantically cohesive topics are less sensitive to variations in the number of topics and result in more stable measurements of the Rao index.
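As a point of reference for the metric discussed above, the following is a minimal sketch of Rao's quadratic entropy computed from a topic model's outputs. It assumes the standard form D = Σᵢⱼ pᵢ pⱼ dᵢⱼ, where p holds a document collection's topic proportions, and it uses cosine distance between topic-word distributions as dᵢⱼ; the specific distance metric, model, and data here are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def topic_distances(topic_word):
    """Pairwise cosine distances between topic-word distributions.
    This is one common choice of d_ij; other metrics are possible."""
    tw = np.asarray(topic_word, dtype=float)
    tw = tw / np.linalg.norm(tw, axis=1, keepdims=True)  # unit-normalize rows
    return 1.0 - tw @ tw.T  # 1 - cosine similarity

def rao_index(p, d):
    """Rao's quadratic entropy: D = sum_ij p_i * p_j * d_ij,
    where p are topic proportions and d the pairwise topic distances."""
    p = np.asarray(p, dtype=float)
    return float(p @ d @ p)

# Hypothetical toy example: 3 topics over a 4-word vocabulary.
topic_word = np.array([
    [0.70, 0.10, 0.10, 0.10],
    [0.10, 0.70, 0.10, 0.10],
    [0.25, 0.25, 0.25, 0.25],
])
p = np.array([0.5, 0.3, 0.2])  # topic mixture of a department's abstracts
d = topic_distances(topic_word)
print(rao_index(p, d))
```

The sketch makes the interaction of the components visible: the index depends both on the mixture p (which shifts with corpus size and balance) and on the distances d (which shift with the number of topics and the topics' semantic cohesion), which is why parameter choices in the topic model propagate into the diversity score.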