Meet the team!

Researchers

Dirk Hovy

Full Professor

Debora Nozza

Assistant Professor

Alumni

Anne Lauscher

Associate Professor, University of Hamburg

Federico Bianchi

Postdoc, Stanford University

Tommaso Fornaciari

Senior Technical Director, Polizia di Stato (Italian State Police)

Jan Globisz

Master’s Student

Kilian Theil

Research Associate, University of Mannheim

Matthias Orlikowski

PhD Student, Bielefeld University

Nikita Soni

PhD Student, Stony Brook University

Pieter Delobelle

PhD Student, KU Leuven

Pietro Lesci

PhD Student, Cambridge University

Projects

MENTALISM

Measuring, Tracking, and Analyzing Inequality using Social Media

INTEGRATOR

Incorporating Demographic Factors into Natural Language Processing Models

Twitter Healthy Conversations

Devising Metrics for Assessing Echo Chambers, Incivility, and Intolerance on Twitter

Recent Publications

Leveraging Label Variation in Large Language Models for Zero-Shot Text Classification

July, 2023
The zero-shot learning capabilities of large language models (LLMs) make them ideal for text classification without annotation or …

A Multi-dimensional study on Bias in Vision-Language models

July, 2023
In recent years, joint Vision-Language (VL) models have increased in popularity and capability. Very few studies have attempted to …

MilaNLP at SemEval-2023 Task 10: Ensembling Domain-Adapted and Regularized Pretrained Language Models for Robust Sexism Detection

July, 2023
We present the system proposed by the MilaNLP team for the Explainable Detection of Online Sexism (EDOS) shared task. We propose an …

Respectful or Toxic? Using Zero-Shot Learning with Language Models to Detect Hate Speech

July, 2023
Hate speech detection faces two significant challenges: 1) the limited availability of labeled data and 2) the high variability of hate …

Temporal and Second Language Influence on Intra-Annotator Agreement and Stability in Hate Speech Labelling

July, 2023
Much work in natural language processing (NLP) relies on human annotation. The majority of this implicitly assumes that annotator’s …