Keynotes

We are delighted to announce that the esteemed speakers listed below have graciously accepted our invitation to deliver keynote speeches at the main conference of ACL 2024:

Sunita Sarawagi

Does In-Context-Learning Offer the Best Tradeoff in Accuracy, Robustness, and Efficiency for Model Adaptation?

Abstract: Adapting a model trained on vast amounts of data to new tasks with limited labeled data has long been a challenging problem, and over the years a diverse range of techniques has been explored. Effective model adaptation requires achieving high accuracy through task-specific specialization without forgetting previously acquired knowledge, robustly handling the high variance that comes with limited task-relevant supervision, and doing all of this efficiently, with minimal compute and memory overhead. Recently, large language models (LLMs) have demonstrated remarkable ease of adaptation to new tasks from just a few examples provided in context, without any explicit training for such a capability. Puzzled by this apparent success, many researchers have sought to explain why in-context learning (ICL) works, but our understanding remains incomplete. In this talk, we examine this emerging phenomenon and assess its potential to meet our longstanding model-adaptation goals of accuracy, robustness, and efficiency.
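
As a concrete illustration of the in-context learning setup the abstract refers to, here is a minimal sketch in Python: the model is adapted purely by placing labeled demonstrations in the prompt, with no weight updates. The sentiment task, the example reviews, and the `chat_complete` call are all illustrative assumptions, not material from the talk.

```python
# Minimal sketch of few-shot in-context learning (ICL): the model is
# "adapted" by showing labeled demonstrations in the prompt itself,
# with no gradient updates. The task and examples are hypothetical.

def build_icl_prompt(demonstrations, query):
    """Format (input, label) demonstrations followed by the new query."""
    lines = ["Classify the sentiment of each review as positive or negative.", ""]
    for text, label in demonstrations:
        lines.append(f"Review: {text}\nSentiment: {label}")
    lines.append(f"Review: {query}\nSentiment:")
    return "\n".join(lines)

demos = [
    ("The plot was gripping from start to finish.", "positive"),
    ("I walked out halfway through.", "negative"),
]
prompt = build_icl_prompt(demos, "A forgettable, by-the-numbers sequel.")
# completion = chat_complete(prompt)  # hypothetical LLM API call
print(prompt)
```

Compared with fine-tuning, nothing about the model changes here; the adaptation lives entirely in the prompt, which is what makes the accuracy, robustness, and efficiency trade-offs the talk examines so distinctive.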

Bio: Sunita Sarawagi conducts research in databases, machine learning, and applied NLP. She received her PhD in databases from the University of California, Berkeley, and a bachelor's degree from IIT Kharagpur. She has also worked at Google Research, CMU, and the IBM Almaden Research Center. She is an ACM Fellow, was awarded the 2019 Infosys Prize in Engineering and Computer Science, and received the Distinguished Alumnus Award from IIT Kharagpur. She has published extensively in database, machine learning, and NLP venues, with notable-paper awards at ACM SIGMOD, ICDM, and NeurIPS.

Subbarao Kambhampati

Can LLMs Reason and Plan?

Abstract: Large Language Models (LLMs) are on track to reverse what seemed like an inexorable shift of AI from explicit- to tacit-knowledge tasks. Trained as they are on everything ever written on the web, LLMs exhibit “approximate omniscience”: they can provide answers to all sorts of queries, but with nary a guarantee. This could herald a new era for knowledge-based AI systems, with LLMs taking the role of (blowhard?) experts. But first, we have to stop confusing the impressive style and form of the generated knowledge for correct or factual content, and resist the temptation to ascribe powers of reasoning, planning, self-critiquing, and the like to what is approximate retrieval by these n-gram models on steroids. We have to focus instead on LLM-Modulo techniques, which complement the unfettered idea generation of LLMs with careful vetting by model-based verifiers (the models underlying which can themselves be teased out from LLMs in semi-automated fashion). In this talk, I will reify this vision, and its attendant caveats, in the context of our ongoing work on understanding the role of LLMs in planning tasks.
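
The LLM-Modulo pattern lends itself to a short sketch: a generator proposes candidates freely, and a sound external verifier vets each one, feeding critiques back until a candidate passes. In the toy Python version below, the "LLM" is simulated by a random proposer and the verifier checks a toy arithmetic target; both are illustrative assumptions, not an implementation from the talk.

```python
import random

def propose(feedback=None):
    """Stand-in for an LLM: freely generates candidate expressions.
    A real system would condition on the task and the verifier's critique."""
    a, b = random.randint(1, 9), random.randint(1, 9)
    return f"{a} {random.choice('+-*')} {b}"

def verify(expr, target):
    """Model-based verifier: soundly checks a candidate, returns a critique."""
    value = eval(expr)  # safe here: expr is two digits and one operator
    if value == target:
        return True, None
    return False, f"{expr} = {value}, not {target}"

def llm_modulo(target, max_rounds=1000):
    feedback = None
    for _ in range(max_rounds):
        candidate = propose(feedback)             # unfettered idea generation
        ok, feedback = verify(candidate, target)  # careful external vetting
        if ok:
            return candidate                      # only verified candidates escape
    return None                                   # no guarantee within budget

print(llm_modulo(12))  # e.g. "3 * 4" or "6 + 6"
```

The division of labor is the point: any correctness guarantee comes from the verifier, not from the generator, matching the abstract's caution against ascribing reasoning powers to the LLM itself.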

Bio: Subbarao Kambhampati is a professor of computer science at Arizona State University. He studies fundamental problems in planning and decision making, motivated in particular by the challenges of human-aware AI systems. He is a fellow of the Association for the Advancement of Artificial Intelligence, the American Association for the Advancement of Science, and the Association for Computing Machinery. He has served as president of the Association for the Advancement of Artificial Intelligence, a trustee of the International Joint Conference on Artificial Intelligence, chair of AAAS Section T (Information, Computing, and Communication), and a founding board member of the Partnership on AI. His research, as well as his views on the progress and societal impacts of AI, has been featured in multiple national and international media outlets. He can be followed on Twitter @rao2z.

Barbara Plank

Are LLMs Narrowing Our Horizon? Let’s Embrace Variation in NLP!

Abstract: NLP research has made significant progress, and our community’s achievements are becoming deeply integrated into society. The recent paradigm shift driven by rapid advances in Large Language Models (LLMs) offers immense potential, but it has also made NLP more homogeneous. In this talk, I will argue for the importance of embracing variation in research, which will lead to more innovation and, in turn, more trust. I will give an overview of current challenges and show how they have led to a loss of trust in our models. To counter this, I propose embracing variation in three key areas: inputs to models, outputs of models, and research itself. Embracing variation holistically will be crucial to moving our field towards more trustworthy, human-facing NLP.
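
One way to make "embracing variation in outputs" concrete is to keep annotator disagreement instead of collapsing it to a majority vote. The short Python sketch below turns several annotators' labels into a soft target distribution; the toy offensiveness task and the votes are illustrative assumptions, not data from the talk.

```python
from collections import Counter

def soft_label(annotations, label_set):
    """Turn one item's annotator votes into a probability distribution,
    preserving disagreement rather than erasing it with a majority vote."""
    counts = Counter(annotations)
    total = sum(counts.values())
    return {label: counts.get(label, 0) / total for label in label_set}

labels = ("offensive", "not_offensive")
votes = ["offensive", "not_offensive", "offensive"]  # three annotators disagree

print(soft_label(votes, labels))
# {'offensive': 0.666..., 'not_offensive': 0.333...}: the variation survives
```

Trained against such soft targets, a model's uncertainty can reflect genuine human variation rather than noise, which is one route towards the trustworthiness the abstract calls for.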

Bio: Barbara Plank is Professor and co-director of the Center for Information and Language Processing at LMU Munich. She holds the Chair for AI and Computational Linguistics at LMU Munich and is an affiliated Professor at the Computer Science department at the IT University of Copenhagen. Her MaiNLP research lab (Munich AI and NLP lab, pronounced “my NLP”) focuses on robust machine learning for Natural Language Processing with an emphasis on human-inspired and data-centric approaches. Her research has been funded by distinguished grants, including an Amazon Research Award (2018), the Danish Research Council (Sapere Aude Research Leader Grant, 2020-2024), and the European Research Council (ERC Consolidator Grant, 2022-2027). Barbara is a Scholar of ELLIS (the European Laboratory for Learning and Intelligent Systems) and regularly serves on international committees, including the Association for Computational Linguistics (ACL), the European Chapter of the ACL, and the Northern European Association for Language Technology (NEALT).