Paul Röttger
Latest Publications
Compromesso! Italian Many-Shot Jailbreaks Undermine the Safety of Large Language Models
My Answer is C: First-Token Probabilities Do Not Match Text Answers in Instruction-Tuned Language Models
Political Compass or Spinning Arrow? Towards More Meaningful Evaluations for Values and Opinions in Large Language Models
XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models
Beyond Flesch-Kincaid: Prompt-based Metrics Improve Difficulty Classification of Educational Texts
Safety-Tuned LLaMAs: Lessons From Improving the Safety of Large Language Models that Follow Instructions
SafetyPrompts: a Systematic Review of Open Datasets for Evaluating and Improving Large Language Model Safety
The Empty Signifier Problem: Towards Clearer Paradigms for Operationalising 'Alignment' in Large Language Models
SimpleSafetyTests: a Test Suite for Identifying Critical Safety Risks in Large Language Models
The Past, Present and Better Future of Feedback Learning in Large Language Models for Subjective Human Preferences and Values
The Ecological Fallacy in Annotation: Modeling Human Label Variation goes beyond Sociodemographics
Data-Efficient Strategies for Expanding Hate Speech Detection into Under-Resourced Languages
Multilingual HateCheck: Functional Tests for Multilingual Hate Speech Detection Models
Two Contrasting Data Annotation Paradigms for Subjective NLP Tasks