Mario Giulianelli
Associate Professor of Computational Linguistics · University College London
Director of the Computational Linguistics Postgraduate Programme · UCL PaLS
Member of the European Laboratory for Learning and Intelligent Systems · UCL ELLIS Unit

My research explores the information processing principles that underlie the ability to use and learn language, in both human and artificial language processing systems. I am also increasingly interested in biological and artificial cognitive systems more broadly and, by extension, in extra-linguistic aspects of perception, action, and interaction. I am committed to advancing the science of AI evaluation, with a focus on language as well as broader aspects of agency and safety. In 2018, I co-authored a paper that, according to the very kind Aaron Mueller, introduced the first causal (mechanistic?) interpretability method for language models.

News
April 2026 Presenting goal-directedness work at two ICLR 2026 workshops

We will present our work on behavioural and representational evaluation of goal-directedness in LLM agents at the ICLR 2026 Workshop on World Models and at Agents in the Wild: Safety, Security, and Beyond.

March 2026 Call for papers: LM Playschool Workshop and Challenge

We have announced the call for papers for the LM Playschool Workshop and Challenge. We invite submissions exploring the frontier of language agents that learn, adapt, and improve through situated interaction, with a focus on conversational, collaborative, goal-oriented, and multi-turn environments.

February 2026 Two new preprints on LLM agents in extra-linguistic tasks

New work on evaluating language model agents: (1) through a combination of behavioural and representational analyses of goal-directedness; and (2) with a new active probabilistic reasoning task, inspired by cognitive neuroscience, that isolates two core elements of decision-making under uncertainty: sampling and inference.

February 2026 Paper accepted at EACL 2026

Work with collaborators from Edinburgh on extending information-theoretic models of language production to visually grounded settings has been accepted at EACL 2026.

Autumn 2025 Grants and SPAR mentorship supporting agent-evaluation work

A Cohere Labs Catalyst Grant, a Cosmos Grant, and a SPAR cohort are supporting new work on modelling, measuring, and intervening on goal-directedness and emergent self-interest in LLM agents.

Selected publications
2026
Mario Giulianelli, Sarenne Wallbridge, Ryan Cotterell, Raquel Fernández
Journal of Memory and Language
2025
Christopher Summerfield, Lennart Luettgau, Magda Dubois, Hannah Rose Kirk, Kobi Hackenburg, Catherine Fist, Katarina Slama, Nicola Ding, Rebecca Anselmetti, Andrew Strait, Mario Giulianelli, Cozmin Ududec
Preprint
2025
Eleftheria Tsipidi, Samuel Kiegeland, Franz Nowak, Tianyang Xu, Ethan Wilcox, Alex Warstadt, Ryan Cotterell, Mario Giulianelli
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (ACL 2025)
2024
Mario Giulianelli, Luca Malagutti, Juan Luis Gastaldi, Brian DuSell, Tim Vieira, Ryan Cotterell
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing (EMNLP 2024)
2024
Mario Giulianelli, Andreas Opedal, Ryan Cotterell
Findings of the Association for Computational Linguistics: EMNLP 2024
2023
Dieuwke Hupkes, Mario Giulianelli, Verna Dankers, Mikel Artetxe, Yanai Elazar, Tiago Pimentel, et al.
Nature Machine Intelligence 5, 1161-1174
2018
Mario Giulianelli, Jacqueline Harding, Florian Mohnert, Dieuwke Hupkes, Willem Zuidema
1st Workshop on Analyzing and Interpreting Neural Networks for NLP (EMNLP 2018)