Research
We focus on understanding pragmatic language interpretation and production: meaning of natural language that goes beyond the literal, is constructed from context, world knowledge, and more, and is used by agents, human or artificial, to achieve various goals. We are also interested in psycholinguistics and cognitive modeling, and in investigating to what extent there are parallels between human language use and the performance of recent neural language models.
We highly value interdisciplinary perspectives, open science, and reproducibility in our work. Some tools and resources used in our projects, developed by the lab and affiliated collaborators, can be found here. We gratefully acknowledge support for our work through different programs: Michael is a member of the Machine Learning Cluster of Excellence, EXC number 2064/1 – Project number 39072764, at the University of Tübingen, and the coordinator of a new DFG Priority Area, LaSTing; we also receive support through the VW Momentum program and the DFG SFB 1718.
Below are descriptions of core research areas and some representative publications. A full list of publications can be found here.
Feel free to get in touch with us if you are interested in collaborating on any of these topics!
Causal language
—
Causal information is crucial for our understanding of the world. However, most of it isn't communicated explicitly (e.g., A causes B), but is inferred from non-causal language (e.g., If A, then B) or correlational language. We investigate which contextual factors influence these causal inferences and develop novel probabilistic models of causal language interpretation (e.g., Lassiter & Franke, 2024).
Neuro-symbolic cognitive modeling with language models
—
Probabilistic cognitive models have successfully explained many phenomena in human cognition at the computational level, including how humans interpret and produce pragmatic utterances. However, such models often struggle to scale to more open-ended situations and utterances, because they typically rely on manually specified spaces of reasoning alternatives. We combine the explanatory strength of computational cognitive models with the flexibility of recent language models to build cognitive models that can explain more open-ended pragmatic language use (e.g., Tsvilodub, Franke & Carcassi, 2024; Tsvilodub, Hawkins & Franke, 2025).
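As a toy illustration of what such a manually specified model can look like, here is a minimal Rational Speech Act-style sketch in Python. The utterance space, state space, and the rationality parameter `alpha` are illustrative choices for exposition, not taken from any particular paper of ours:

```python
import math

# Manually specified spaces of reasoning alternatives -- exactly the
# kind of hand-built component that limits scalability.
utterances = ["some", "all"]
states = ["some-not-all", "all"]

# Literal semantics: is utterance u true in state s?
meaning = {
    ("some", "some-not-all"): 1.0,
    ("some", "all"): 1.0,
    ("all", "some-not-all"): 0.0,
    ("all", "all"): 1.0,
}

def normalize(d):
    total = sum(d.values())
    return {k: v / total for k, v in d.items()} if total else d

def literal_listener(u):
    """P(state | utterance), proportional to literal truth (uniform prior)."""
    return normalize({s: meaning[(u, s)] for s in states})

def speaker(s, alpha=4.0):
    """P(utterance | state), a softmax over informativity: exp(alpha * log L0(s | u))."""
    scores = {}
    for u in utterances:
        p = literal_listener(u).get(s, 0.0)
        scores[u] = math.exp(alpha * math.log(p)) if p > 0 else 0.0
    return normalize(scores)

def pragmatic_listener(u):
    """P(state | utterance), proportional to speaker probability (uniform prior)."""
    return normalize({s: speaker(s).get(u, 0.0) for s in states})

# Hearing "some", the pragmatic listener infers "some but not all" is
# more likely than "all" -- a scalar implicature emerges from the model.
print(pragmatic_listener("some"))
```

Because every alternative utterance and world state must be enumerated by hand, extending such a model to open-ended language is where recent language models come in.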
Pragmatic capabilities of humans and language models
—
Humans effortlessly use language pragmatically, i.e., they go beyond its literal meaning by relying on interlocutors' inferences in context. In addition to knowledge of language, this capability in humans is supported by probabilistic mechanisms for reasoning about other people, rich world knowledge, and more. We investigate diverse phenomena in pragmatic language use and work on new probabilistic models explaining them (e.g., Achimova, Franke & Butz, 2025; Hawkins, Tsvilodub, Bergey, Goodman & Franke, 2025; Hawkins, Franke et al., 2023). We also investigate to what extent language models are already capable of interpreting and generating pragmatic language, and crucially, how these capabilities should be assessed (e.g., Tsvilodub, Wang, Grosch & Franke, 2024).
Mechanistic interpretability
—
State-of-the-art transformer language models have shown impressive performance in many domains, including tasks commonly used in cognitive psychology and pragmatics. However, the mechanisms within these models that drive this performance remain poorly understood. We investigate to what extent the computational processes in such models are (non-)human-like, or could be influenced in human-like ways, when the models are exposed to, e.g., classical cognitive reflection tasks (e.g., Hu & Franke, 2024; Hu, Lepori & Franke, 2025).
Common ground
—
Common ground is one of the key concepts in linguistics, but its precise definition, as well as computational accounts of communication that include common ground, remain highly debated. We are excited to be part of the SFB 1718 Common Ground at the University of Tübingen, and will work on projects such as the computational modeling of meaning as it may emerge in non-human communication (e.g., Franke, Bohn & Fröhlich, 2024).