From Narrativity to Relevance - A Computational Approach Based on Events

:speech_balloon: Speakers: Evelyn Gius @evelyn.gius & Haimo Stiemer @hpstiemer

:classical_building: Affiliation: Technical University of Darmstadt

Title: From Narrativity to Relevance - A Computational Approach Based on Events

Abstract (long version below): Which passages in narrative texts are crucial for their plot? This contribution proposes a text surface-based computational approach for exploring the relation between narrativity and tellability (and thus relevant text passages). The approach builds on the operationalization of the event concept in narrative theory and its subsequent automated identification. The resulting event annotations are compared to plot summaries.




:newspaper: Long abstract

Which passages in narratives are crucial for their plot? This question is discussed in literary studies in the context of various concepts. For instance, the examination of passages referenced in scholarly text interpretations shows that they may contain plot-relevant events (Arnold & Fiechter 2022). Additionally, empirical reader studies (Groeben 1977; Miall & Kuiken 2001) and reception aesthetics (Iser 1976) focus on the reception of narratives. An alternative approach, which starts directly from the text rather than from its reception, considers concepts such as eventfulness or tellability (e.g., Hühn 2014; Baroni 2012) to be potentially significant. The present contribution introduces an approach that relies on textual features and explores the relationship between narrativity and tellability, thereby highlighting the importance of specific passages within the text.

In our EvENT project, we have already identified and classified events in narratives based on textual features (Vauth et al. 2021; Vauth & Gius 2021). We distinguished between four event types (change of state, process, stative event and non-event), to which we assigned narrative values according to their eventfulness. Text passages with a high narrative value thus indicate a high level of eventfulness. Narrativity graphs were subsequently generated on the basis of these narrative values in order to map eventfulness over the entire course of a text. In the project we are currently examining the connection between i) these four event types, which can be identified discretely on the text surface (subsumed under the concept of "event I" in research, cf. Hühn 2014), and ii) more complex, particularly narrative-worthy (or "tellable") or particularly plot-relevant events (also "event II"). One way of testing the relation between the basic and the more complex event types is to compare our event I annotations of literary texts with various plot summaries of these texts. The assumption is that the text passages mentioned in the summaries are marked as particularly relevant to the plot precisely by their mention. The annotated literary texts (Kleist's "Das Erdbeben in Chili", Droste-Hülshoff's "Die Judenbuche", Ebner-Eschenbach's "Krambambuli" and Kafka's "Die Verwandlung") are explored by comparing each text with its summaries. These summaries are also in German and belong to four groups: 1. summaries written by students, 2. professional summaries from "Kindlers Literatur Lexikon", 3. summaries from the online encyclopaedia Wikipedia and 4. summaries generated by the OpenAI system ChatGPT. The students' summaries were written with the aim of addressing exclusively the plot of the narration. From the entries of the literary encyclopaedia and Wikipedia, only those passages were used that refer to the plot of the primary texts.
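As a minimal sketch of the annotation-to-graph step (the type names and score mapping below are illustrative assumptions, not the exact values used in the project), event annotations could be turned into a narrativity graph by assigning each event type an ordinal narrativity score and smoothing the scores over the course of the text:

```python
from dataclasses import dataclass

# Hypothetical mapping of the four event types to narrativity scores
# (0 = least eventful, 3 = most eventful); the project's actual weights may differ.
NARRATIVITY_SCORES = {
    "non_event": 0,
    "stative_event": 1,
    "process": 2,
    "change_of_state": 3,
}

@dataclass
class EventAnnotation:
    start: int        # character offset where the annotated span begins
    end: int          # character offset where the annotated span ends
    event_type: str   # one of the keys of NARRATIVITY_SCORES

def narrativity_graph(events: list[EventAnnotation], window: int = 10) -> list[float]:
    """Smoothed sequence of narrativity values over the course of the text.

    Events are sorted by position; a sliding-window mean is one simple way
    to obtain a graph of eventfulness over the whole text.
    """
    ordered = sorted(events, key=lambda e: e.start)
    scores = [NARRATIVITY_SCORES[e.event_type] for e in ordered]
    graph = []
    for i in range(len(scores)):
        segment = scores[max(0, i - window): i + window + 1]
        graph.append(sum(segment) / len(segment))
    return graph
```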

As a first step of this study, we evaluated the quality of the summaries, or rather the similarity between them (Hatzel et al. 2023). To accomplish this, we used three metrics: one based on lexical information (n-grams), one based on distributional semantics (word embeddings) and, for a comparison of our summaries that is more detached from the linguistic structure, an adaptation of the pyramid method originally developed for the automatic evaluation of machine-generated summaries (cf. Nenkova & Passonneau 2004). We then focused on the connection between narrativity, represented by our narrativity graphs, and tellability or relevance, represented by the plot summaries. For this, we examined whether the parts of an original text to which a summary refers also have a high narrative value.
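The first two similarity measures could look roughly as follows (a sketch under our own assumptions: the function names are ours, and the embedding model is an injected placeholder; the pyramid-based comparison is not shown):

```python
import numpy as np

def ngram_overlap(a: str, b: str, n: int = 3) -> float:
    """Lexical similarity: Jaccard overlap of word n-grams."""
    def ngrams(text: str) -> set:
        tokens = text.lower().split()
        return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}
    grams_a, grams_b = ngrams(a), ngrams(b)
    if not grams_a or not grams_b:
        return 0.0
    return len(grams_a & grams_b) / len(grams_a | grams_b)

def embedding_similarity(a: str, b: str, embed) -> float:
    """Distributional similarity: cosine similarity of two text embeddings.

    `embed` stands for any callable mapping a text to a vector, e.g.
    averaged word embeddings or a sentence-embedding model.
    """
    va = np.asarray(embed(a), dtype=float)
    vb = np.asarray(embed(b), dtype=float)
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))
```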

We calculated the narrativity value of the passages referenced in a summary and put it in relation to the expected value over the whole text. For a random selection of passages, this factor averages 1.0; a value above 1.0 means that the passages referenced in the summary have a higher narrativity than the non-referenced passages. A comparison of events that are mentioned in summaries with those that are not confirms this on the basis of the narrativity values: we calculated a mean value of 3.13 for mentioned and 2.86 for unmentioned events.
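In code, this factor could be computed along the following lines (a sketch with illustrative variable names; which events a summary references is assumed to be determined already):

```python
def narrativity_factor(event_values: list[float], referenced: list[bool]) -> float:
    """Mean narrativity of summary-referenced events divided by the mean
    narrativity of all events in the text.

    For a random selection of events this factor is 1.0 on average; values
    above 1.0 indicate that the referenced passages are more eventful.
    """
    selected = [v for v, is_ref in zip(event_values, referenced) if is_ref]
    if not selected or not event_values:
        return float("nan")
    return (sum(selected) / len(selected)) / (sum(event_values) / len(event_values))
```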

| | Erdbeben | Judenbuche | Krambambuli | Verwandlung |
| --- | --- | --- | --- | --- |
| student summaries | 1.04 ± 0.06 | 1.02 ± 0.09 | 1.04 ± 0.07 | 1.06 ± 0.10 |
| professional summaries | 1.00 | 1.03 | 1.02 | 1.05 |
| Wikipedia | 1.06 | 1.12 | 1.07 | 1.08 |
| ChatGPT | 1.24 | 1.24 | 0.96 | 1.02 |

Expected narrativity value factors per text and summary type. For the student summaries, the mean value (incl. the standard deviation) is given.

The results of our study suggest that the approach we have taken is productive. The use of summaries for further research on events in narrative texts opens up new perspectives, especially concerning the connection between events and plot when events are conceptualised and operationalised as phenomena on the text surface. While the first findings for the first three summary types have already been presented at the DHd conference (Hatzel et al. 2023), we have now also compared the annotations with plot summaries produced by ChatGPT. In our contribution we will present the results of our study in detail and reflect on them theoretically.

References

Arnold, Frederik, and Benjamin Fiechter. 2022. "Reading What Really Matters. Identifying key passages through a new citation analysis tool". In DHd2022, Potsdam, Germany. Zenodo.

Arnold, Heinz Ludwig, ed. 2020. Kindlers Literatur Lexikon (KLL). Stuttgart: J.B. Metzler.

Baroni, Raphaël. 2012. “Tellability.” In the living handbook of narratology, edited by Peter Hühn, John Pier, Wolf Schmid, and Jörg Schönert. Hamburg: Hamburg University Press. http://hup.sub.uni-hamburg.de/lhn/index.php?title=Tellability&oldid=1577.

Gius, Evelyn, and Michael Vauth. 2022. "Inter Annotator Agreement und Intersubjektivität". In DHd2022, 147–151. Potsdam, Germany.

Groeben, Norbert. 1977. Rezeptionsforschung als empirische Literaturwissenschaft: Paradigma- durch Methodendiskussion an Untersuchungsbeispielen. Empirische Literaturwissenschaft. Vol. 1. Königstein/Ts.: Athenäum.

Hatzel, Hans Ole. 2022. Event Narrativity Classifier. Zenodo.

Hatzel, Hans Ole, Evelyn Gius, Haimo Stiemer, and Chris Biemann. 2023. "Narrativität und Handlung: Zum Verhältnis von Handlungszusammenfassungen und relevanten Ereignissen". In DHd 2023 Open Humanities Open Culture. 9. Tagung des Verbands "Digital Humanities im deutschsprachigen Raum" (DHd 2023), Trier/Luxembourg. Zenodo.

Hühn, Peter. 2014. "Event and Eventfulness." In the living handbook of narratology, edited by Peter Hühn, John Pier, Wolf Schmid, and Jörg Schönert. Hamburg: Hamburg University Press.

Iser, Wolfgang. 1976. Der Akt des Lesens. Theorie ästhetischer Wirkung. Munich: Fink.

Miall, David, and Don Kuiken. 2001. “Shifting perspectives: Readers’ feelings and literary response”. In New Perspectives on Narrative Perspective, edited by Willi Van Peer and Seymour Chatman, 289-301. Albany: SUNY Press.

Nenkova, Ani, and Rebecca Passonneau. 2004. "Evaluating Content Selection in Summarization: The Pyramid Method". In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics: HLT-NAACL 2004, 145–52. Boston, Massachusetts, USA: Association for Computational Linguistics. https://aclanthology.org/N04-1019.

Vauth, Michael, and Evelyn Gius. 2022. forTEXT/EvENT_Dataset: v.1.1 (Version v.1.1). Zenodo.

Vauth, Michael, and Evelyn Gius. 2021. "Richtlinien für die Annotation narratologischer Ereigniskonzepte". Zenodo.

Vauth, Michael, Hans Ole Hatzel, Evelyn Gius, and Chris Biemann. 2021. "Automated Event Annotation in Literary Texts". In CHR 2021: Computational Humanities Research Conference, 333–45. Amsterdam, the Netherlands. http://ceur-ws.org/Vol-2989/short_paper18.pdf.