Keynote Speakers
We are proud to welcome the following keynote speakers to the 44th TABU Dag in June 2024:
- Emily Hofstetter (Linköping University)
- Andrea Ravignani (Sapienza University of Rome)
- Odette Scharenborg (Delft University of Technology)
- Martina Wiltschko (Pompeu Fabra University Barcelona)
You can read their bios and abstracts of their talks below:
Emily Hofstetter
Linköping University
Emily Hofstetter started out as an undergraduate in linguistics but switched once it became clear that there would not be any study of actual language use. Emily now researches human interaction at Linköping University, Sweden, having wound her way back to linguistics after all. Current projects include examining how the body’s physiology and sensations are made available to co-participants through vocalizations (nonlexicalvocalizations.com), and how to use interactional approaches to understand and facilitate discussions about transitioning to sustainable futures, particularly when using ‘megagames’ for pedagogy (gamesforsocialtransformation.com). Emily is also an occasional YouTuber, making videos that use popular media to explain interaction research concepts (emilyhofstetter.ca).
Abstract
As humans, we are deeply, jointly concerned with making ourselves understandable. Our lives are permeated with ‘accountability’: the requirement to (be able to) explain one’s actions to others. My goal in this talk is to introduce accountability and show how it features in what language users do in everyday talk. While public figures’ gaffes may take centre stage in the media, when we look at language in actual use, we find that humans constantly make themselves accountable to their interlocutors. I dive into the discipline of analyzing human interactions, focusing on how accountability brings structure to the apparent messiness of everyday talk. I will draw on two of my current research projects in which accountability is done with quite different communicative resources. One study looks at how players of a large-scale game about climate change hold each other accountable for antagonistic actions; these accounts are lexical. The other examines rock climbing, where participants use both non-lexical vocalizations and syntactic disruptions to account for strain in the body. Language users take advantage of all available resources to make themselves understandable to others, including their own spontaneous physiological events. I aim to open a discussion about what we include in linguistic analysis. When we include the apparent ‘chaos’ (from incomplete utterances to semantic variation, from multimodal resources to multiple participation frameworks) in our analysis, we can see how astoundingly nuanced, proficient, and, most of all, systematic interlocutors are.
Andrea Ravignani
Sapienza University of Rome
Andrea Ravignani is a Professor at the Department of Human Neurosciences at Sapienza University of Rome, Italy. He received his PhD in Biology from the University of Vienna, Austria, in 2014. Following his doctoral work, he has been affiliated with several research institutions across Europe, including the Vrije Universiteit Brussel, the Marine Science Center Rostock, and the Max Planck Institute for Psycholinguistics. Andrea conducts research on the cognitive, biological, and evolutionary connections between human music, speech, and language. To understand why humans and other species have musical and linguistic capacities, he takes a multidisciplinary approach. He combines non-invasive animal research with human experiments, neurobiological evidence, quantitative modelling of evolutionary dynamics, and agent-based simulations. Currently, he is studying the evolution of rhythm and vocal learning, funded by an ERC Starting Grant, an HFSP grant, and a Sapienza PI grant.
Abstract
The origins of rhythm and vocal learning: A comparative approach
Who’s got rhythm? And why are we such chatty animals? Human music and speech are peculiar behaviors from a biological perspective: although extremely common in humans, at first sight they do not seem to confer any direct evolutionary advantage. Many hypotheses try to explain the origins of acoustic rhythm capacities in our species, but few have been empirically tested and compared. Because music and speech do not fossilize, and lacking a time machine, the comparative approach provides a powerful tool for tapping into human cognitive history. Notably, homologous or analogous building blocks underlying human rhythm can be found across a few animal species and developmental stages. Hence, investigating rhythm across species is not only interesting in itself but also crucial for unveiling the music-like and speech-like behaviors present in early hominids. In this talk, I will discuss the major hypotheses for the evolution of vocal rhythmicity in humans and other animals, which link acoustic rhythms to vocal learning (a precursor to speech), gait, breathing, or chorusing. I will suggest that integrating approaches from ethology, psychology, neuroscience, modeling, the voice sciences, and physiology is needed to obtain a full picture. I will then zoom in on some crucial species that are key to testing alternative hypotheses on rhythm origins, with particular attention to the rhythm-vocal learning link. I will show how three strands of research, partly neglected until now, can be particularly fruitful in shedding light on the evolution of rhythm and vocal learning. First, I will present rhythm experiments in marine mammals, primates, and other species, suggesting that rhythm research in non-human animals can also benefit from ecologically relevant setups that combine strengths and knowledge from human cognitive neuroscience and behavioral ecology. Second, I will discuss the interplay between vocal anatomy, learning, and development in harbor seal pups, arguing for their importance as a model species for human speech origins. Finally, I will present human experiments in which musical rhythm is created and evolves culturally due to cognitive and motoric biases, showing the importance of the interplay between biology and cultural transmission. These results suggest that, while some species may share one or more building blocks of speech and music, the ‘full package’ may be uniquely human.
Odette Scharenborg
Delft University of Technology
Odette Scharenborg is an associate professor at the Department of Intelligent Systems, Delft University of Technology, The Netherlands. Her research focuses on automatic speech processing inspired by human speech processing, with the aim of developing inclusive speech technology, i.e., speech technology that works for everyone irrespective of how they speak or which language they speak. Odette has been a Board member of the International Speech Communication Association (ISCA) since 2017; from 2021 to 2023 she served as its Vice-President, and she currently serves as its President. From 2018 to 2022, she was a member of the IEEE SPS Speech and Language Processing Technical Committee, and from 2019 to 2023 a (Senior) Associate Editor of IEEE Signal Processing Letters. She will be the General Chair of Interspeech 2025 in Rotterdam.
Read more at: https://odettescharenborg.wordpress.com
Abstract
Inclusive speech technology: Developing automatic speech recognition for everyone
Automatic speech recognition (ASR) is increasingly used, e.g., in emergency response centers, domestic voice assistants, and search engines. Because of the paramount role spoken language plays in our lives, it is critical that ASR systems are able to deal with the variability in the way people speak (e.g., due to speaker differences, demographics, different speaking styles, and differently abled users). ASR systems promise to deliver an objective interpretation of human speech. Practice and recent evidence, however, suggest that state-of-the-art ASR systems struggle with the large variation in speech due to, e.g., gender, age, speech impairment, race, and accent. The overarching goal of our research is to uncover bias in ASR systems in order to work towards proactive bias mitigation in ASR. In this talk, I will present systematic experiments aimed at quantifying, identifying the origin of, and mitigating the bias of state-of-the-art ASR systems on speech from different, typically low-resource, groups of speakers, with a focus on bias related to gender, age, regional accents, and non-native accents.
Martina Wiltschko
Pompeu Fabra University Barcelona
Martina Wiltschko is an ICREA Research Professor at the Universitat Pompeu Fabra in Barcelona. She is a theoretical linguist focussing on syntax and its interfaces. She obtained her doctorate in 1995 at the University of Vienna and spent much of her career at the University of British Columbia, until she assumed her current position in 2019. During her tenure at UBC, she focussed mainly on language variation and fieldwork, culminating in her 2014 Cambridge University Press monograph on the universal structure of categories. She then developed an interest in the nature of language in interaction, culminating in her 2021 Cambridge University Press monograph on the grammar of interactional language. She has recently started a project on the nature of human-machine interaction, focussing on the role of interactional language.
Abstract
Emotions do not enter grammar because they are constructed (by grammar)
In this talk I explore the relation between language and emotion. While my focus is a linguistic one, I tackle this question informed by insights from theories of emotions within the affective sciences. The core empirical claim I introduce is that there are no grammatical categories dedicated to encoding emotions. This seems to be universally the case; hence it appears to be no accident and tells us something about the make-up of human cognition. The absence of grammatical categories dedicated to encoding emotions is surprising given the otherwise close connection between language and emotions, as evidenced by phylogenetic, ontogenetic, and neurological phenomena. Hence, one cannot attribute the absence of emotion categories to a complete disconnect between language and emotions (or cognition more generally). Moreover, one might expect such categories to exist based on cognitive and evolutionary considerations. The conclusion to be drawn is that emotions are not to be considered primitives that could be directly linked to grammatical categories; instead, emotions are constructed. In this way, the properties of grammar provide new evidence for the theory of constructed emotions. I further explore the idea that linguistic theory may shed light on how emotions are constructed. Specifically, I will introduce the hypothesis that the same architecture that is responsible for the construction of complex linguistic expressions (a grammar of sorts) is also responsible for the construction of emotions.