Speaker: Federico Pianzola
Affiliation: University of Groningen, The Netherlands
Title: Book Reviews as a Proxy for Reader Response: A Cross-Cultural Comparison Focusing on Emotions
Abstract (long version below): We present the results of two studies using computational methods to detect emotions in book reviews in Korean, English, and Italian. In recent years there has been an increasing adoption of computational methods to simulate reading processes and the reception of literature. Here, we reflect on the relation between actual reading practices and their spontaneous verbalization in the form of reviews. We make recommendations for the collection and analysis of data on digital reading platforms, showing how both the digital infrastructure and the context can strongly influence research results.
Work in collaboration with Alessandro Fossati, Marco Viviani (University of Milan-Bicocca, Italy), and Srishti Sharma (University of Ghent, Belgium)
We present the results of two studies using computational methods to detect emotions in book reviews in Korean, English, and Italian. In recent years there has been an increasing adoption of computational methods to simulate reading processes and the reception of literature (Boot & Koolen, 2020; Holur et al., 2022; Jacobs & Kinder, 2022; Lendvai et al., 2020; Pianzola et al., 2020). However, a deeper reflection on the relation between actual reading practices and their spontaneous verbalization in the form of reviews is needed.
Looking at the broad affective experiences that people have with fiction, we have to consider that emotional response and appraisal are not uniform outputs of communication or aesthetic experiences; rather, language-driven processing of emotions and readers’ evaluative responses change during the unfolding of a narrative (‘t Hart et al., 2019). Moreover, literary texts elicit affective states, sometimes called “aesthetic emotions” (Menninghaus et al., 2019), that involve many variables beyond the emotions expressed in a text. Style (Stockwell, 2016), narrative strategies (Sternberg, 2003), the experiential background of readers (Caracciolo, 2014), etc. are all factors intervening in the formation of reader response (Jacobs, 2015). A further complication for the proposed research is that identifying emotions in text and their effect on the audience is a complex task (Eder et al., 2019; Kim & Klinger, 2019; Scherer, 2005; Schindler et al., 2017). However, computational techniques like sentiment analysis and emotion detection are being continuously improved, not only to determine single scores for a text but also to compute affective variations throughout a narrative (Kim et al., 2017; Mohammad, 2018; Reagan et al., 2016; Zehe et al., 2016).
In the first study, we explore the relation between emotions in books and emotions in reviews. More specifically, we focus on the emotional impact a book can have. The plausibility of a direct link between textual emotions and reader response is supported by research showing a close relationship between a story’s emotional valence (sentiment) at the paragraph level and the sentiment found in readers’ comments in the margins of those paragraphs (Pianzola et al., 2020). Similarly, Jacobs and Kinder (2019) found that computationally calculated affect scores of words can predict human liking of small text chunks (~105 words) of a short story. In brief, existing research has shown that there is a link between emotions in texts and the emotional response of readers, but we still do not know the details of this relationship in the case of book reviews.
The main hypothesis we test concerns the influence of the plot’s ending on readers’ appraisal of a book: we expect stories ending with rising positive sentiment to elicit reviews with more positive sentiment than stories ending with falling sentiment. Using manually labelled narrative excerpts and reviews, we train several machine learning and deep learning algorithms to detect the sentiment of books and their reviews. We then compute these scores for 450 books, divided across nine genres, and for more than 50,000 English reviews. We also compute the Affective-Aesthetic Potential (Jacobs & Kinder, 2019) of the books and compare it with the sentiment of the reviews. The analysis has not been completed yet, but the hypotheses and methodology have been preregistered.
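To make the word-level scoring idea concrete, the following is a minimal sketch of valence scoring for text chunks, in the spirit of affect scores computed from word norms (as in the Affective-Aesthetic Potential). The tiny valence lexicon and the example sentence are invented placeholders, not the project’s actual norm lists or data.

```python
# Hypothetical sketch: mean word valence of a text chunk.
# The lexicon below is a toy placeholder, not a published affective norm list.
import re

VALENCE = {  # -1.0 (very negative) .. +1.0 (very positive)
    "love": 0.9, "hope": 0.7, "joy": 0.8,
    "loss": -0.6, "fear": -0.7, "grief": -0.9,
}

def chunk_valence(text: str) -> float:
    """Mean valence of the lexicon words found in a chunk (0.0 if none match)."""
    tokens = re.findall(r"[a-z']+", text.lower())
    scores = [VALENCE[t] for t in tokens if t in VALENCE]
    return sum(scores) / len(scores) if scores else 0.0

ending = "Hope and love filled the final pages."
print(round(chunk_valence(ending), 3))  # prints 0.8
```

Scores like this, computed per chunk across a narrative, would let one compare the trajectory of a story’s ending with the sentiment of its reviews; the actual study uses trained classifiers rather than a fixed lexicon.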
In the second study, we analyse Korean, English, and Italian book reviews gathered from six different online platforms, and compare insights about culture- and language-specificity. We look at the 200 most popular books in five genres of fiction for each language, plus 135 books for which we have reviews in all three languages. We scraped the reviews from the biggest reviewing platforms for each language, one retail and one non-retail: Amazon.com (270k reviews collected) and Goodreads (247k) for English (999 books), Amazon.it (93k) and Anobii (64k) for Italian (975 books), and Naver Books (40k) and Yes24 (67k) for Korean (900 books). The 135 shared books were manually sampled to include both Western and Asian authors and to balance across genres. The first of its kind, this multilingual dataset allows us to explore the impact that books have on both Western and Asian readers. A valuable insight is suggested by the average ratings of these shared books: selecting either retail or non-retail platforms can bias the results towards a more positive response (Dimitrov et al., 2015; Newell et al., 2016). Moreover, different platforms attract and build different kinds of reading communities. For example, the Italian-dominated platform Anobii seems to be used by readers who are quite critical towards popular fiction, much more so than English- and Korean-speaking readers using non-retail platforms.
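The platform-bias check described above amounts to comparing mean ratings of the same books across retail and non-retail platforms. The sketch below illustrates that comparison; the ratings are invented examples, not the collected data.

```python
# Hypothetical sketch: mean star rating of shared books per platform type.
# Ratings are toy placeholders, not the scraped review data.
ratings = {
    "retail":     {"Book A": [5, 5, 4], "Book B": [4, 5, 5]},
    "non_retail": {"Book A": [3, 4, 3], "Book B": [4, 3, 4]},
}

def platform_mean(platform: str) -> float:
    """Mean rating over all reviews of all books on a platform type."""
    all_scores = [r for book in ratings[platform].values() for r in book]
    return sum(all_scores) / len(all_scores)

# Positive difference would suggest a retail-platform bias towards
# higher ratings for the same set of books.
print(round(platform_mean("retail") - platform_mean("non_retail"), 2))  # prints 1.17
```

In the actual study this comparison is run over the 135 books shared across all six platforms, per language, rather than over a toy dictionary.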
The results of these two projects offer useful insights for the design of empirical research on books and readers when digital platforms are involved. The availability of large amounts of data should be critically assessed when planning the data collection, the sampling of readers and books, and the tools used for measuring reader response. The IGEL community has much to offer researchers interested in the computational study of reading practices, given its long-standing expertise in developing sound empirical methods for the study of books and readers.