Meet the team!

Researchers

Dirk Hovy

Full Professor

Debora Nozza

Assistant Professor

Alumni

Anne Lauscher

Associate Professor, University of Hamburg

Federico Bianchi

Postdoc, Stanford University

Tommaso Fornaciari

Senior Technical Director, Italian State Police (Polizia di Stato)

Jan Globisz

Master’s Student

Kilian Theil

Research Associate, University of Mannheim

Matthias Orlikowski

PhD Student, Bielefeld University

Nikita Soni

PhD Student, Stony Brook University

Pieter Delobelle

PhD Student, KU Leuven

Pietro Lesci

PhD Student, Cambridge University

Projects

INDOMITA

Innovative Demographically-aware Hate Speech Detection in Online Media in Italian

MENTALISM

Measuring, Tracking, and Analyzing Inequality using Social Media

INTEGRATOR

Incorporating Demographic Factors into Natural Language Processing Models

Twitter Healthy Conversations

Devising Metrics for Assessing Echo Chambers, Incivility, and Intolerance on Twitter

Recent Publications

DADIT: A Dataset for Demographic Classification of Italian Twitter Users and a Comparison of Prediction Methods

May, 2024
Social scientists increasingly use demographically stratified social media data to study the attitudes, beliefs, and behavior of the …

The Empty Signifier Problem: Towards Clearer Paradigms for Operationalising 'Alignment' in Large Language Models

November, 2023
In this paper, we address the concept of ‘alignment’ in large language models (LLMs) through the lens of post-structuralist …

SimpleSafetyTests: a Test Suite for Identifying Critical Safety Risks in Large Language Models

November, 2023
The past year has seen rapid acceleration in the development of large language models (LLMs). For many tasks, there is now a wide range …

XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models

October, 2023
Without proper safeguards, large language models will readily follow malicious instructions and generate toxic content. This risk …

The Past, Present and Better Future of Feedback Learning in Large Language Models for Subjective Human Preferences and Values

October, 2023
Human feedback is increasingly used to steer the behaviours of Large Language Models (LLMs). However, it is unclear how to collect and …