Can ChatGPT Predict the Impact of a Research Article?

Author: ChemistryViews

Text generators based on artificial intelligence (AI), such as ChatGPT, pose challenges for scientific publishing, but they could also be helpful. Such tools are based on large language models, which are trained on large amounts of text. Being able to predict the “impact” metrics of a research article, such as the number of citations or indicators of public engagement (e.g., news articles covering the research or social media posts linking to the paper), would be useful: it could, for example, allow researchers to improve these metrics by refining their writing.

Joost de Winter, Delft University of Technology, The Netherlands, has investigated whether ChatGPT can predict some of these metrics, i.e., citation counts, the number of readers in Mendeley (a reference management tool), and social media attention, from an article’s abstract alone. De Winter used PLOS ONE, a multidisciplinary open-access journal, as a data source and selected articles published over a two-month span (January and February 2022) that were not part of the training data for the ChatGPT version he used. He then asked ChatGPT to rate the resulting 2,222 abstracts on properties such as how original, engaging, easy to understand, methodical, controversial, or well-written the text is. These characteristics were themselves chosen with the help of ChatGPT, giving 60 different scores in total.
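As a rough illustration of what such a scoring step might look like, the sketch below asks a model to rate one abstract on a handful of characteristics via the OpenAI Python client. The prompt wording, model choice, and characteristics shown are illustrative assumptions, not de Winter’s actual setup.

```python
# Minimal sketch of scoring an abstract with an LLM via the OpenAI
# Python client. Prompt wording, model choice, and the characteristic
# list are illustrative assumptions, not de Winter's actual setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CHARACTERISTICS = ["original", "engaging", "easy to understand",
                   "methodical", "controversial", "well-written"]

def rate_abstract(abstract: str, characteristic: str) -> int:
    """Ask the model for a 1-10 rating of one characteristic."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": (
                f"On a scale of 1 to 10, how {characteristic} is the "
                f"following abstract? Reply with a single integer only.\n\n"
                f"{abstract}"
            ),
        }],
    )
    return int(response.choices[0].message.content.strip())

# Scores for one abstract; looping over all abstracts and all
# characteristics would yield the full matrix of ratings.
abstract = "..."  # one of the 2,222 PLOS ONE abstracts
scores = {c: rate_abstract(abstract, c) for c in CHARACTERISTICS}
```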

De Winter then compared ChatGPT’s 60 ratings of each abstract with the metrics of interest, i.e., citation counts obtained from different sources, the numbers of mentions in blogs, on Twitter (now X), on Reddit, and in news articles, as well as Mendeley reader numbers. He found that scores related to quality and reliability were only weakly correlated with the number of citations. Scores related to how understandable a paper is were linked to the numbers of tweets and Mendeley readers and weakly correlated with the number of citations, while high novelty scores were associated with higher citation counts. According to de Winter, the fact that the correlations found were only weak to moderate could be due to the restriction to abstracts, which might not always represent the full paper, or to the size of the data set. Nevertheless, he states that the work still offers valuable insights and hopes that it can stimulate a meaningful dialogue on how the quality of scientific work is determined.
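The comparison step itself is a standard correlation analysis. The sketch below, assuming a hypothetical table with one row per article and illustrative column names, computes rank correlations between a few scores and impact metrics; it is not de Winter’s actual analysis code.

```python
# Minimal sketch of the correlation analysis. The CSV file and its
# column names are hypothetical stand-ins for the study's data.
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("ratings_and_metrics.csv")  # one row per article

scores = ["novelty_score", "understandability_score", "quality_score"]
metrics = ["citations", "tweets", "mendeley_readers"]

# Rank correlation is robust to the heavy skew typical of citation
# and social media counts.
for s in scores:
    for m in metrics:
        rho, p = spearmanr(df[s], df[m])
        print(f"{s} vs {m}: rho = {rho:+.2f} (p = {p:.3g})")
```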
