The motivation behind this project is to reduce the friction in learning and using these models, and to pave the way for future research in phonology.
Constraint ranking via Recursive Constraint Demotion, with optional BCD bias.
Learn constraint weights from frequency data; explore the weight–probability mapping.
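The core RCD loop can be sketched in a few lines of Python. This is a minimal illustration of the algorithm from Tesar & Smolensky (2000), not the app's actual code, and it omits the BCD bias; each winner–loser pair is represented as two sets of constraint names (those preferring the winner and those preferring the loser):

```python
def rcd(constraints, pairs):
    """Recursive Constraint Demotion sketch.

    constraints: iterable of constraint names.
    pairs: list of (winner_prefs, loser_prefs) tuples, each a set of
           constraint names preferring the winner / the loser.
    Returns a ranking as a list of strata (lists of names).
    """
    strata, remaining, active = [], set(constraints), list(pairs)
    while remaining:
        # A constraint is rankable now if it prefers no remaining loser.
        stratum = {c for c in remaining
                   if all(c not in lp for _, lp in active)}
        if not stratum:
            raise ValueError("inconsistent data: no rankable constraint")
        strata.append(sorted(stratum))
        # A pair is accounted for once a ranked constraint prefers its winner.
        active = [(wp, lp) for wp, lp in active if not (wp & stratum)]
        remaining -= stratum
    return strata
```

For example, `rcd(["A", "B", "C"], [({"A"}, {"B"}), ({"B"}, {"C"})])` yields the strata `[["A"], ["B"], ["C"]]`.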
@misc{DaiPhonologyLab,
author = {Dai, Huteng},
title = {Phonology Lab},
note = {Accessed: 2026-01-28},
url = {https://hutengdai.com/phonology-lab.html}
}
@unpublished{HayesMooreCantwell2025,
author = {Hayes, Bruce and Moore-Cantwell, Claire},
title = {A Guide to Analysis in {MaxEnt} {Optimality Theory}},
year = {2025},
url = {https://ling.auf.net/lingbuzz/008790}
}
@unpublished{Prince2002,
author = {Prince, Alan},
title = {Entailed Ranking Arguments},
year = {2002},
note = {Rutgers Optimality Archive, ROA-500}
}
@incollection{PrinceTesar2004,
author = {Prince, Alan and Tesar, Bruce},
title = {Learning Phonotactic Distributions},
booktitle = {Constraints in Phonological Acquisition},
editor = {Kager, Ren\'{e} and Pater, Joe and Zonneveld, Wim},
year = {2004},
pages = {245--291},
publisher = {Cambridge University Press}
}
@book{TesarSmolensky2000,
author = {Tesar, Bruce and Smolensky, Paul},
title = {Learnability in {Optimality Theory}},
year = {2000},
publisher = {MIT Press},
address = {Cambridge, MA}
}
A MaxEnt grammar assigns each candidate x a Harmony score H(x) = Σᵢ wᵢ · vᵢ(x) from its violation profile and learned weights, then converts Harmony to probabilities via softmax: P(x) = exp(H(x)) / Σ_y exp(H(y)).
Violations vᵢ are negative integers (−1, −2, …) and we assume weights wᵢ > 0, so more violations and higher weights yield a more negative Harmony and hence a lower probability.
Two candidates with Harmony scores H = [−1, −2]:
P₁ = e⁻¹ / (e⁻¹ + e⁻²) ≈ 0.73 and P₂ ≈ 0.27; the candidate with the less negative Harmony gets the higher probability.
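A quick numerical check of this example (a standalone sketch; the function name `maxent_probs` is mine, not the app's):

```python
import math

def maxent_probs(harmonies):
    """Softmax over Harmony scores, subtracting the max for stability."""
    m = max(harmonies)
    exps = [math.exp(h - m) for h in harmonies]
    z = sum(exps)
    return [e / z for e in exps]

probs = maxent_probs([-1, -2])
# The less negative candidate (H = -1) wins: ~0.731 vs ~0.269.
```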
Upload a CSV or paste a tableau. Columns: Input, Output, Count, Constraint1, … Violation counts are negative (0, −1, −2). Set Count = 0 for rows used in prediction only.
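For instance, a two-candidate tableau in this format might look like the following (illustrative data and constraint names only):

```csv
Input,Output,Count,*p,Ident
/pa/,[pa],73,-1,0
/pa/,[wa],27,0,-1
```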
K3 and K4 come from Hayes & Moore-Cantwell (2025) — check it out and cite them!
Target: 73% [p] / 27% [w].
1. Try *p = 1.0, Ident = 2.0 → ≈73%/27%. 2. What happens with equal weights? 3. What if *p ≫ Ident?
Multiple solutions exist: *p = 3.0, Ident = 4.0 also works, because with two candidates only the difference between the weights matters under softmax.
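This shift-invariance is easy to verify. A small sketch (function name is mine) showing that the weight pairs (1.0, 2.0) and (3.0, 4.0) yield the same distribution:

```python
import math

def p_of_p(w_star_p, w_ident):
    """Probability of [p] in a two-candidate *p vs. Ident tableau."""
    # Candidate [p] incurs one *p violation (-1); candidate [w] one Ident violation.
    h_p, h_w = -w_star_p, -w_ident        # Harmony = weight * violation
    z = math.exp(h_p) + math.exp(h_w)
    return math.exp(h_p) / z
```

Both `p_of_p(1.0, 2.0)` and `p_of_p(3.0, 4.0)` come out at about 0.73, since only the difference w(Ident) − w(*p) = 1 enters the softmax.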
A perturber is context-dependent: *V[−son], for example, is active only after a vowel.
BCD (Biased Constraint Demotion; Prince & Tesar 2004) biases the choice among rankable constraints by class: an M ≫ F bias installs markedness constraints first; an F ≫ M bias does the reverse.
Columns: Input, Output, Winner, Constraint1, … Winner = 1 marks the winning candidate. Violations may be integers, stars (*), or empty.
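An illustrative fragment in the star notation (hypothetical constraint names; assuming non-winners take Winner = 0):

```csv
Input,Output,Winner,NoCoda,Max
/pat/,[pat],0,*,
/pat/,[pa],1,,*
```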