Coding Aperitivo

The Coding Aperitivo is our take on a weekly seminar series. We end the working week and wind down with some relaxed academic chatter, a drink and some snacks.

Dirk’s Negroni

When in Milan, drink as the Milanese do. Though the official recipe calls for equal parts gin, Campari and red vermouth, here we opt for a punchier negroni, heavy on the gin. For a sbagliato, replace the gin with prosecco.

  • 3-4 parts gin (to taste)
  • 2 parts Campari
  • 1 part red vermouth

Pour the ingredients into a mixing glass with ice and stir until the glass feels very cold. Strain into a glass with a (very) large ice cube, rub the rim with a twist of orange, and drop the twist in.

The sessions are hybrid (online/in-person), with the guest generally being remote. As well as our guest, we also have an internal whiteboard-only talk about current work or a topic that has been tugging at our heartstrings.

We encourage our guests to try different formats with us, such as guided discussions, hands-on activities, PechaKucha, debates or just a nice academic chat. Got some research ideas and want a sounding board? We are very happy to discuss ongoing or upcoming research.

In previous sessions, some guests have:

  • Used the Jigsaw method to introduce a topic in a hands-on manner.
  • Introduced a premise for debate, split the audience into two teams to discuss among themselves, then brought the teams together to argue their points.
  • Used an existing work as a basis for questioning our assumptions about the research process.
  • Briefly presented ongoing work, highlighting challenges or potential research avenues to discuss.
  • Given us a hands-on activity (e.g. data annotation) meant to highlight the challenges of a particular task.

Past Guests

  • Emily Sheng (University of Southern California) on Biases in NLG and Dialogue Systems
  • Nedjma Ousidhoum (University of Cambridge) on Expectations vs. Reality when Working on Toxic Content Detection in NLP
  • Nils Reimers on Training State-of-the-Art Text Embedding & Neural Search Models
  • Maarten Sap (Allen AI/CMU) on Detecting and Rewriting Socially Biased Language
  • Sunipa Dev (Google Research) on Towards Interpretable, Fair and Socially-Aware Language Representations
  • Alba Curry (UC Riverside) on Philosophy of Emotion and Sentiment Detection
  • Rob van der Goot (IT University of Copenhagen) on Multi-lingual and Multi-task learning: from Dataset Creation to Modeling
  • Su Lin Blodgett (Microsoft FATE) on Social and Ethical Implications of NLP Technologies
  • Gabriele Sarti (University of Groningen) on Interpreting Neural Language Models for Linguistic Complexity Assessment
  • Paul Röttger (University of Oxford) on Two Contrasting Data Annotation Paradigms for Subjective NLP Tasks
  • Chia-Chien Hung (University of Mannheim) on Multi-domain and Multilingual Dialog
  • Anna Wegmann (Utrecht University) on Does It Capture STEL? A Modular, Similarity-based Linguistic Style Evaluation Framework
  • Abhilasha Ravichander (Carnegie Mellon University) on Probing the Probing Paradigm: Does Probing Accuracy Entail Task Relevance?
  • Samson Tan (AWS AI Research & Education) on Towards Sociolinguistically-Inclusive NLP: An Adversarial Approach
  • Christine de Kock on I Beg to Differ: A study of constructive disagreement in online conversations
  • Eliana Pastor (Politecnico di Torino) on Pattern-based algorithms for Explainable AI
  • Dave Howcroft (Napier University) on Low-Resource NLG
  • Zeerak Talat (Digital Democracies Institute) led a discussion on ethics and bias
  • Christopher Klamm (University of Mannheim) on Defining and Measuring Polarisation Across Disciplines
  • Swabha Swayamdipta (Allen Institute for AI) on Annotation Challenges in NLP
  • Carlo Schwarz (Università Bocconi) on How Polarized are Citizens? Measuring Ideology from the Ground-Up
  • Lorenzo Bertolini (University of Sussex) on Testing Language Models on Compositionality
  • Alessandro Raganato (University of Milano-Bicocca)
  • Mark Dingemanse and Andreas Liesenfeld on Language Diversity in Conversational AI Research
  • Agostina Calabrese (University of Edinburgh) on If Data Patterns is the Answer, What was the Question?
  • Aida Mostafazadeh (Google Research) on incorporating annotators' psychological profiles into modeling language classification tasks
  • Myrthe Reuver (Vrije Universiteit Amsterdam) on Viewpoint Diversity in News Recommendation: Theories, Models, and Tasks to Support Democracy
  • Tommaso Caselli (Rijksuniversiteit Groningen) on Language Resources to Monitor Abusive Language in Dutch
  • Valentin Hofmann (University of Oxford) on Semantic Diffusion: Deep Learning Sense of network
  • Beatrice Savoldi (University of Trento) on Designing a course for Ethics in NLP
  • Hannah Rose Kirk (University of Oxford) on Bias harms and mitigation
  • Juan Manuel Pérez (Instituto de Investigación en Ciencias de la Computación, UBA/CONICET) on Assessing the Impact of Contextual Information in Hate Speech Detection
  • Daryna Dementieva (Skoltech) on text detoxification
  • Fabio Tollon (Bielefeld University) on From designed properties to possibilities for action
  • Ryan Cotterell (ETH Zurich) on Some Thoughts on Compositionality
  • William Agnew (University of Washington) on Values, Ethics and NLP
  • Rami Aly (University of Cambridge) on automatic fact checking
  • Indira Sen (Leibniz Institute for the Social Sciences) on Measuring Social Constructs with NLP: Two Case Studies of Abusive Language and Workplace Depression
  • Maurice Jakesch on Assessing the Effects and Risks of Large Language Models in AI-Mediated Communication
  • Marco del Tredici led a discussion on current trends in NLP
  • Fatma Elsafoury on hate speech and toxicity.
  • Mor Geva on annotation bias sources and prevention.
  • Emanuele Bugliarello on language modelling as pixels.
  • Tess Buckley on computational creativity and the ethics of AI-generated music.
  • Marina Rizzi on Self-regulation and the Evolution of Content: A Cross-Platform Analysis.
  • Giovanni Cassani, Marco Bragoni, and Paul Schreiber on Multimodal Representations for Words that Don’t Exist Yet
  • Laura Vasquez-Rodriguez gave us a thorough introduction to text simplification with NLP.
  • Raj Ammanabrolu on Interactive Language Learning
  • Suchin Gururangan led a discussion on all things language models, open-sourcing and regulation.
  • Giada Pistilli led a discussion on ethics in NLP
  • Edoardo Ponti on Modular Deep Learning
  • Julie-Anne Meaney on Demographically-aware Computational Humour
  • Giorgio Franceschelli on creativity and machine learning.
  • Aubrie Amstutz on managing toxicity and hate speech in the private sector
  • Tom McCoy on Embers of Autoregression: Understanding Large Language Models Through the Problem They Are Trained to Solve
  • Camilo Carvajal Reyes on EthicApp, analysing and understanding how people debate ethical issues.
  • Tanvi Dinkar on safety and robustness in conversational AI