Coding Aperitivo
The Coding Aperitivo is our take on a weekly seminar series. We end the working week and wind down with some relaxed academic chatter, a drink and some snacks.
Format
We usually host external speakers on Fridays at 4pm Milan time. Talks are mostly virtual, and sometimes in person. We encourage our guests to try different formats with us, such as guided discussions, hands-on activities, debates, or just a nice academic chat. Got some research ideas and want a sounding board? We are very happy to discuss ongoing or upcoming research.
Past Guests
2021
- Emily Sheng: “Biases in NLG and Dialogue Systems”
- Nedjma Ousidhoum: “Expectations vs. Reality when Working on Toxic Content Detection in NLP”
- Nils Reimers: “Training State-of-the-Art Text Embedding & Neural Search Models”
- Maarten Sap: “Detecting and Rewriting Socially Biased Language”
- Sunipa Dev: “Towards Interpretable, Fair and Socially-Aware Language Representations”
- Alba Curry: “Philosophy of Emotion and Sentiment Detection”
- Rob van der Goot: “Multi-lingual and Multi-task Learning: from Dataset Creation to Modeling”
- Su Lin Blodgett: “Social and Ethical Implications of NLP Technologies”
- Gabriele Sarti: “Interpreting Neural Language Models for Linguistic Complexity Assessment”
- Paul Röttger: “Two Contrasting Data Annotation Paradigms for Subjective NLP Tasks”
- Chia-Chien Hung: “Multi-domain and Multilingual Dialog”
- Anna Wegmann: “Does It Capture STEL? A Modular, Similarity-based Linguistic Style Evaluation Framework”
- Abhilasha Ravichander: “Probing the Probing Paradigm: Does Probing Accuracy Entail Task Relevance?”
- Samson Tan (AWS AI Research & Education): “Towards Sociolinguistically-Inclusive NLP: An Adversarial Approach”
2022
- Christine de Kock: “I Beg to Differ: A study of constructive disagreement in online conversations”
- Eliana Pastor: “Pattern-based algorithms for Explainable AI”
- Dave Howcroft: “Low-Resource NLG”
- Zeerak Talat: “Ethics and Bias”
- Christopher Klamm: “Defining and Measuring Polarisation Across Disciplines”
- Swabha Swayamdipta: “Annotation Challenges in NLP”
- Carlo Schwarz: “How Polarized are Citizens? Measuring Ideology from the Ground-Up”
- Lorenzo Bertolini: “Testing Language Models on Compositionality”
- Alessandro Raganato
- Mark Dingemanse and Andreas Liesenfeld: “Language Diversity in Conversational AI Research”
- Agostina Calabrese: “If Data Patterns is the Answer, What was the Question?”
- Aida Mostafazadeh: “Incorporating annotators' psychological profiles into modeling language classification tasks”
- Myrthe Reuver: “Viewpoint diversity in news recommendation: Theories, Models, and Tasks to support democracy”
- Tommaso Caselli: “Language Resources to Monitor Abusive Language in Dutch”
- Valentin Hoffman: “Semantic Diffusion: Deep Learning Sense of network”
- Beatrice Savoldi: “Designing a course for Ethics in NLP”
- Hannah Rose Kirk: “Bias harms and mitigation”
- Juan Manuel Perez: “Assessing the impact of contextual information in hate speech detection”
- Daryna Dementieva: “Text detoxification”
- Fabio Tollon: “From designed properties to possibilities for action”
- Ryan Cotterell: “Some Thoughts on Compositionality”
- William Agnew: “Values, Ethics and NLP”
- Rami Aly: “Automatic fact checking”
- Indira Sen: “Measuring social constructs with NLP: Two case studies of abusive language and workplace depression”
2023
- Maurice Jakesch: “Assessing the Effects and Risks of Large Language Models in AI-Mediated Communication”
- Marco del Tredici: “Current trends in NLP”
- Fatma Elsafoury: “Hate Speech and Toxicity”
- Mor Geva: “Annotation bias sources and prevention”
- Emanuele Bugliarello: “Language modelling as pixels”
- Tess Buckley: “Computational creativity and the ethics of AI-generated music”
- Marina Rizzi: “Self-regulation and the Evolution of Content: A Cross-Platform Analysis”
- Giovanni Cassani, Marco Bragoni, and Paul Schreiber: “Multimodal Representations for Words that Don’t Exist Yet”
- Laura Vasquez-Rodriguez: “Introduction to text simplification with NLP”
- Raj Ammanabrolu: “Interactive Language Learning”
- Suchin Gururangan: “All things language models, open-sourcing and regulation”
- Giada Pistilli: “Ethics in NLP”
- Edoardo Ponti: “Modular Deep Learning”
- Julie-Anne Meaney: “Demographically-aware Computational Humour”
- Giorgio Franceschelli: “Creativity and machine learning”
- Aubrie Amstutz: “Managing toxicity and hate speech in the private sector”
- Tom McCoy: “Embers of Autoregression: Understanding Large Language Models Through the Problem They are Trained to Solve”
- Camilo Carvajal Reyes: “EthicApp: analysing and understanding how people debate ethical issues”
- Tanvi Dinkar: “Safety and robustness in conversational AI”
2024
- Emanuele La Malfa: “Code Simulation Challenges for Large Language Models”
- Enrico Liscio: “Context-Specific Value Inference via Hybrid Intelligence”
- Eve Fleisig: “When the Majority is Wrong: Modeling Annotator Disagreement for Language Tasks”
- Vishakh Padmakumar: “Does Writing with Language Models Reduce Content Diversity?”
- Enrico Bertino: “AI at a Milanese Chatbot Start-Up”
- Fangru Lin: “Graph-enhanced Large Language Models in Asynchronous Plan Reasoning”
Dirk’s Negroni
When in Milan, drink as the Milanese. Though the official recipe calls for equal parts gin, Campari and red vermouth, here we opt for a punchier negroni, heavy on the gin. For a sbagliato, substitute prosecco for the gin.
- 3-4 parts gin (to taste)
- 2 parts Campari
- 1 part red vermouth
Pour the ingredients into a mixing glass with ice and stir until the glass feels very cold. Strain into a glass with a (very) large ice cube and a twist of orange (and rub the glass rim with it).
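In keeping with the coding theme, the ratio above can be scaled to any glass size. A minimal Python sketch (the 90 ml default pour and the function name are our own assumptions, not part of the recipe):

```python
def negroni(total_ml: float = 90, gin_parts: float = 3) -> dict[str, float]:
    """Split a pour of `total_ml` across the ingredients, keeping the
    gin : Campari : vermouth ratio at `gin_parts` : 2 : 1."""
    parts = {"gin": gin_parts, "Campari": 2, "red vermouth": 1}
    total_parts = sum(parts.values())
    return {name: round(total_ml * p / total_parts, 1) for name, p in parts.items()}

print(negroni())            # {'gin': 45.0, 'Campari': 30.0, 'red vermouth': 15.0}
print(negroni(gin_parts=4)) # punchier, to taste
```

Bump `gin_parts` to 4 for the heavier pour the recipe allows.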