SIGMORPHON 2024 will be co-located with NAACL 2024 in Mexico City, Mexico.

SIGMORPHON aims to bring together researchers interested in applying computational techniques to problems in morphology, phonology, and phonetics. Work that addresses orthographic issues is also welcome. Papers should describe substantial, original, and unpublished research on these topics, potentially including strong work in progress. Appropriate topics include (but are not limited to) the following as they relate to the areas of the workshop:

  • New formalisms, computational treatments, or probabilistic models of existing linguistic formalisms
  • Unsupervised, semi-supervised, or machine learning of linguistic knowledge
  • Analysis or exploitation of multilingual, multi-dialectal, or diachronic data
  • Integration of morphology, phonology, or phonetics with other NLP tasks
  • Algorithms for string analysis and manipulation, including finite-state methods
  • Models of psycholinguistic experiments
  • Approaches to orthographic variation
  • Approaches to morphological reinflection
  • Corpus linguistics
  • Machine transliteration and back-transliteration
  • Morpheme identification and word segmentation
  • Speech technologies relating to phonetics or phonology
  • Speech science (both production and comprehension)
  • Instructional technologies for second-language learners
  • Tools and resources

SIGMORPHON aims to encourage interaction between work in computational linguistics and work in theoretical phonetics, phonology, and morphology, and to ensure that each of these fields profits from the interaction. Our recent meetings have been successful in this regard, and we hope to see this continue in 2024.

Many mainstream linguists studying phonetics, phonology, and morphology are employing computational tools and models that are of considerable interest to computational linguists. Similarly, models and tools developed by and for computational linguists may be of interest to theoretical linguists working in these areas. This workshop provides a forum for these researchers to interact and become exposed to each other's ideas and research.

Important Dates

January 4, 2024: First Call for Workshop Papers
March 17, 2024 (extended from March 10): Workshop Paper Due Date
April 14, 2024: Notification of Acceptance
April 24, 2024: Camera-Ready Papers Due
TBA: Pre-recorded Video Due
June 20, 2024: Workshop Date

Paper submission

Submission Link


Long papers should be original, topical, and clear. Completed work is preferable to planned work, but in either case the paper must make clear the state of completion of the reported results. We also encourage short submissions, which may either report research or describe important problems (new or old).

Submission format

The only accepted format for submitted papers is Adobe PDF. Submissions should be anonymous, with no author names or acknowledgements section, and self-citations should appear in the third person. Submissions should follow the two-column format of the ACL proceedings. Long papers may not exceed eight (8) pages and short papers may not exceed four (4) pages; unlimited additional pages are allowed for the references section in both cases, but all material other than the bibliography must fall within the first eight or four pages, respectively. The camera-ready version will be allowed one extra page to address reviewer concerns. We encourage the submission of supplementary material such as data and code, as well as appendices; however, supplementary material should not be essential to understanding the submission. Appendices and Limitations / Impact sections are not required, but if included they do not count toward the page limit. We strongly recommend using the LaTeX style files or the Microsoft Word document template from the ACL conference website. We reserve the right to reject submissions that do not conform to these styles, including font size restrictions.

Anonymity period

Following NAACL’s policy this year, we will not require an anonymity period prior to SIGMORPHON submission.


Details for our program can be found here


Our workshop proceedings are available here

Invited Talks

Naomi Feldman, University of Maryland

“Modeling speech perception at scale”

Speech processing is a perfect test case for scaling up cognitive modeling. Recent advances in speech technology provide new tools that can be leveraged to better understand how human listeners perceive speech in naturalistic settings. At the same time, building cognitive models of human speech perception can highlight capabilities that are not yet captured by standard representation learning models in speech technology. I begin by showing how incorporating unsupervised representation learning into cognitive models of speech perception can impact theories of early language acquisition. Infants’ patterns of speech perception have traditionally been interpreted as evidence that they possess certain types of knowledge, such as phonetic categories (like ’r’ and ’l’) and representations of speech rhythm, but our cognitive modeling results point toward a different interpretation. If correct, this could radically change our view of how phonetic knowledge supports infants’ acquisition of words and grammar, and could have broad implications for understanding the challenges associated with learning a new language in adulthood. I then outline ongoing work exploring the mechanisms that could support and, eventually, reproduce human listeners’ ability to flexibly adapt to different accents and listening conditions. Together, these studies illustrate how speech representations can be optimized over short and long time scales to support robust speech processing. This is joint work with Thomas Schatz, Yevgen Matusevych, Ruolan (Leslie) Famularo, Nika Jurov, Ali Aboelata, Grayson Wolf, Xuan-Nga Cao, Herman Kamper, William Idsardi, Emmanuel Dupoux, and Sharon Goldwater.

Naomi Feldman is an associate professor in the Department of Linguistics and the Institute for Advanced Computer Studies at the University of Maryland, where she is a member and former director of the Computational Linguistics and Information Processing (CLIP) Lab. Her research uses methods from machine learning and automatic speech recognition to formalize questions about how people learn and represent the structure of their language. She primarily uses these methods to study speech representations, modeling the cognitive processes that support learning and perception of speech sounds in the face of highly complex and variable linguistic input. She also computationally characterizes the strategies that facilitate language acquisition more generally, both from the perspective of learners, and from the perspective of clinicians.

Jian Zhu, University of British Columbia

“Towards crosslinguistically generalizable speech technologies”

The diversity of human speech presents a formidable challenge to multilingual speech processing systems. Recently, accumulating evidence has indicated that scaling up multilingual data and model parameters can tremendously improve the performance of multilingual speech processing. However, gathering large-scale data from every language in the world is an impossible mission. To tackle this challenge, my research group aims to develop multilingual speech processing systems that generalize to unseen and low-resource languages. Since most, if not all, human speech can be represented by around 150 phonetic symbols and diacritics, I argue that using the International Phonetic Alphabet (IPA) as modeling units, rather than orthographic transcriptions, enables speech models to process and recognize sounds in unseen languages. In recent years, leveraging IPA, large-scale multilingual corpora, and deep learning, my research team has built a series of massively multilingual speech datasets and technologies, including multilingual grapheme-to-phoneme conversion, multilingual keyword spotting, multilingual forced alignment, and multilingual phone recognition systems. In this talk, I will introduce our recent work towards crosslinguistically generalizable speech technologies and the lessons we have learned from working with a diversity of languages.
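To make the shared-unit idea concrete, here is a toy sketch in Python (our illustration for this page, not code from the talk; the miniature IPA lexicons are invented for the example): a phone bigram model trained on IPA transcriptions from one language can score words of a language it never saw during training, because both languages are written in the same universal phone inventory.

```python
import math
from collections import defaultdict

# Toy IPA pronunciation lexicons (hand-written, for illustration only).
spanish = {"casa": "k a s a", "mesa": "m e s a", "luna": "l u n a"}
italian = {"cane": "k a n e", "sole": "s o l e"}  # unseen at training time

# Train a phone bigram model on the Spanish IPA sequences.
counts = defaultdict(lambda: defaultdict(int))
for pron in spanish.values():
    phones = ["<s>"] + pron.split() + ["</s>"]
    for a, b in zip(phones, phones[1:]):
        counts[a][b] += 1

def score(pron, alpha=1.0, inventory_size=30):
    """Add-alpha-smoothed bigram log-probability of an IPA phone string."""
    phones = ["<s>"] + pron.split() + ["</s>"]
    logp = 0.0
    for a, b in zip(phones, phones[1:]):
        total = sum(counts[a].values())
        logp += math.log((counts[a][b] + alpha) / (total + alpha * inventory_size))
    return logp

# Because the model's units are IPA phones rather than orthography,
# it can score Italian words without ever having seen Italian data.
for word, pron in italian.items():
    print(word, round(score(pron), 2))
```

Real systems replace the bigram model with large neural networks, but the transfer mechanism is the same: any language transcribed into the shared phone inventory is in-vocabulary by construction.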

Jian Zhu is currently an assistant professor in the Linguistics Department at the University of British Columbia. He is primarily interested in developing multilingual speech and language technologies for low-resource and zero-resource languages. Trained as both a linguist and an engineer, he combines linguistic theories with data-driven methods in speech processing, natural language processing, network science, and machine learning. Before joining UBC, he was a postdoctoral research fellow at the Blablablab in the School of Information, University of Michigan. He obtained his Ph.D. in Linguistics and Scientific Computing from the Department of Linguistics and the Michigan Institute for Computational Discovery & Engineering at the University of Michigan.

Shared Tasks

SIGMORPHON is hosting three shared tasks this year. Please visit the respective pages for more information. (For a flavor of the third task, a toy subword tokenization sketch follows the list.)

Data-efficient Inflectional Morphology
Grapheme-to-phoneme Prediction
Subword Tokenization
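As a quick illustration of what the subword tokenization task is about, below is a minimal byte-pair-encoding (BPE) sketch, the classic subword algorithm. This is our toy example and is unrelated to the official task data or baselines; BPE repeatedly merges the most frequent adjacent symbol pair in the corpus, growing a vocabulary of subword units.

```python
from collections import Counter

def learn_bpe(word_freqs, num_merges):
    """Learn BPE merges from a word-frequency dictionary (toy version,
    without the end-of-word marker used in practice)."""
    # Start from character-level segmentations.
    vocab = {tuple(w): f for w, f in word_freqs.items()}
    merges = []
    for _ in range(num_merges):
        # Count all adjacent symbol pairs, weighted by word frequency.
        pairs = Counter()
        for symbols, freq in vocab.items():
            for pair in zip(symbols, symbols[1:]):
                pairs[pair] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        # Apply the merge everywhere it occurs.
        new_vocab = {}
        for symbols, freq in vocab.items():
            out, i = [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
                    out.append(symbols[i] + symbols[i + 1])
                    i += 2
                else:
                    out.append(symbols[i])
                    i += 1
            new_vocab[tuple(out)] = freq
        vocab = new_vocab
    return merges, vocab

# Tiny toy corpus: word -> frequency.
merges, segmented = learn_bpe({"lower": 5, "lowest": 3, "newer": 6}, 4)
print(merges)     # learned merge operations, most frequent first
print(segmented)  # the words re-segmented into subword units
```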

Program Committee

  • Garrett Nicolai, University of British Columbia
  • Eleanor Chodroff, University of York
  • Çağrı Çöltekin, University of Tübingen
  • Fred Mailhot, Dialpad, Inc.

Email address