Coding Aperitivo

“Have a negroni. Have two. Be open to a world where you may not understand or agree with the person next to you, but have a drink with them anyways.” –Anthony Bourdain

The Coding Aperitivo is our take on a weekly seminar series. We end the working week and wind down with some relaxed academic chatter, a drink and some snacks.

Format

We usually host external speakers on Fridays at 4pm Milan time. Talks are mostly virtual, though sometimes in person. We encourage our guests to try different formats with us, such as guided discussions, hands-on activities, debates, or just a nice academic chat. Got some research ideas and want a sounding board? We are very happy to discuss ongoing or upcoming research.

Past Guests

2024

  • Emanuele La Malfa: “Code Simulation Challenges for Large Language Models”
  • Enrico Liscio: “Context-Specific Value Inference via Hybrid Intelligence”
  • Eve Fleisig: “When the Majority is Wrong: Modeling Annotator Disagreement for Language Tasks”
  • Vishakh Padmakumar: “Does Writing with Language Models Reduce Content Diversity?”
  • Enrico Bertino: “AI at a Milanese Chatbot Start-Up”
  • Fangru Lin: “Graph-enhanced Large Language Models in Asynchronous Plan Reasoning”
  • Xuhui Zhou: “Towards Socially Aware and Interactional NLP Systems”
  • Minje Choi: “Towards Evaluating and Measuring the Social Capabilities of Large Language Models”
  • Sachin Kumar: “Adapting Language Models to Improve Reliability: Experiments with Refusals and Diverse Preference Modeling”
  • Nino Scherrer: “Evaluating (Moral) Beliefs Encoded in LLMs”
  • Mary Sanford: “Political Discourse on Climate Change in EU Party Manifestos: A Computational Text Analysis Approach”
  • Anna Rogers, Faeze Brahman and Elman Mansimov: Workshop on LLMs in Research and Industry
  • Eugenia Stamboliev: “Can we Explain AI? On the Pitfalls of XAI”
  • Maria Antoniak: “Computational Approaches to Narratives”
  • Lucy Li: “AboutMe: Using Self-Descriptions in Webpages to Document the Effects of English Pretraining Data Filters”
  • Jasmijn Bastings: “Bits, Bats & Bots: Deconstructing Gender in Language Technology”
  • Fatma Elsafoury: “On the Sources of Bias in NLP Models: Origin, Impact, Mitigation, and the Ways Forward”
  • Caleb Ziems: “How to Use Large Language Models for Computational Social Science”
  • Rose Wang: “Scaling Expertise via Language Models with Applications to Education”
  • Amin al Hazwani: “Collaborating to Create a Language-Independent Encyclopedia”
  • Luna De Bruyne: “Emotions without Borders: Challenges in Multilingual Emotion Detection”
  • Alina Leidinger: “How are LLMs Mitigating Stereotyping Harms?”
  • Giuseppe Russo: “The Causal Impact of Content Curation Practices in Online Platforms”
  • Julie Jiang: “Social Approval and Network Homophily as Motivators of Online Hate Speech”
  • Joachim Baumann: “Fact-Checking and Music Recommendation”
  • Niklas Stöhr: “Relating Items on a Shared Scale: Making Measurements with LMs”
  • Gavin Abercrombie: “Tackling Online Gender-Based Violence”
  • Mikel Ngueajio: “Can Explainable AI Help Mitigate the Toxic Synergy between Hate Speech and Fake News?”
  • Andrew Bean: “LINGOLY: A Benchmark of Olympiad-Level Linguistic Reasoning Puzzles”
  • Sofie Labat: “Emotion Research in Interactions: Bridging NLP and Social Psychology Through Role Playing”

2023

2022

2021

  • Emily Sheng: “Biases in NLG and Dialogue Systems”
  • Nedjma Ousidhoum: “Expectations vs. Reality when Working on Toxic Content Detection in NLP”
  • Nils Reimers: “Training State-of-the-Art Text Embedding & Neural Search Models”
  • Maarten Sap: “Detecting and Rewriting Socially Biased Language”
  • Sunipa Dev: “Towards Interpretable, Fair and Socially-Aware Language Representations”
  • Alba Curry: “Philosophy of Emotion and Sentiment Detection”
  • Rob van der Goot: “Multi-lingual and Multi-task Learning: from Dataset Creation to Modeling”
  • Su Lin Blodgett: “Social and Ethical Implications of NLP Technologies”
  • Gabriele Sarti: “Interpreting Neural Language Models for Linguistic Complexity Assessment”
  • Paul Röttger: “Two Contrasting Data Annotation Paradigms for Subjective NLP Tasks”
  • Chia-Chien Hung: “Multi-domain and Multilingual Dialog”
  • Anna Wegmann: “Does It Capture STEL? A Modular, Similarity-based Linguistic Style Evaluation Framework”
  • Abhilasha Ravichander: “Probing the Probing Paradigm: Does Probing Accuracy Entail Task Relevance?”
  • Samson Tan (AWS AI Research & Education): “Towards Sociolinguistically-Inclusive NLP: An Adversarial Approach”

Dirk’s Drinks

When in Milan, drink as the Milanese. There are many excellent drink options, but they all start with a bitter and a red vermouth. The big names here are Campari and Martini, but there are plenty of other options worth exploring. Though the official recipes call for equal parts bitter and red vermouth, here we opt for a punchier taste, heavier on the bitter.

Base:

  • 3 parts bitter
  • 2 parts red vermouth

Options:

You can take this in several directions by adding different mixers:

  • 3 parts sparkling water (or just top it up) gets you an Americano (not to be confused with the coffee drink of the same name)
  • For an interesting and refreshing twist, try tonic water instead of sparkling water
  • 3 parts prosecco gets you a negroni sbagliato (the “messed up negroni”)
  • 3 parts gin gets you the original negroni
  • 3 parts bourbon gets you a boulevardier

Pour the ingredients into a mixing glass with some ice and stir until the glass feels very cold. Strain into a glass with a large ice cube (the larger the better: it will melt more slowly) and add a twist of orange or lemon peel, rubbing the rim of the glass with it first. Enjoy!