Meet the team!

Researchers

Dirk Hovy

Full Professor

Debora Nozza

Assistant Professor

Maria Nawrocka

Visiting PhD Student

Tiancheng Hu

Visiting PhD Student

Alumni

Amanda Cercas Curry

Researcher, CENTAI Institute

Anne Lauscher

Associate Professor, University of Hamburg

Federico Bianchi

Postdoc, Stanford University

Tommaso Fornaciari

Senior Technical Director, Polizia di Stato (Italian State Police)

Jan Globisz

Master’s Student

Kilian Theil

Research Associate, University of Mannheim

Matthias Orlikowski

PhD Student, Bielefeld University

Nikita Soni

PhD Student, Stony Brook University

Pieter Delobelle

PhD Student, KU Leuven

Pietro Lesci

PhD Student, University of Cambridge

Projects

INDOMITA

Innovative Demographically-aware Hate Speech Detection in Online Media in Italian

PERSONAE

Personalized and Subjective approaches to Natural Language Processing

MENTALISM

Measuring, Tracking, and Analyzing Inequality using Social Media

INTEGRATOR

Incorporating Demographic Factors into Natural Language Processing Models

Twitter Healthy Conversations

Devising Metrics for Assessing Echo Chambers, Incivility, and Intolerance on Twitter

Recent Publications

Divine LLaMAs: Bias, Stereotypes, Stigmatization, and Emotion Representation of Religion in Large Language Models

September, 2024
Emotions play important epistemological and cognitive roles in our lives, revealing our values and guiding our actions. Previous work …

Compromesso! Italian Many-Shot Jailbreaks Undermine the Safety of Large Language Models

August, 2024
As diverse linguistic communities and users adopt large language models (LLMs), assessing their safety across languages becomes …

My Answer is C: First-Token Probabilities Do Not Match Text Answers in Instruction-Tuned Language Models

August, 2024
The open-ended nature of language generation makes the evaluation of autoregressive large language models (LLMs) challenging. One …

XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models

July, 2024
Without proper safeguards, large language models will readily follow malicious instructions and generate toxic content. This risk …