
Phonology Lab

Huteng Dai (University of Michigan)

The motivation behind this project is to reduce the friction in learning and using constraint-based models such as Optimality Theory (OT) and Maximum Entropy (MaxEnt) grammars, and to pave the way for future research in phonology.

OT Lab→

Constraint ranking via Recursive Constraint Demotion, with optional BCD bias.

MaxEnt Lab→

Learn constraint weights from frequency data; explore the weight–probability mapping.

If you find this tool useful, you are welcome to cite:
Dai, H. (2026). Phonology Lab. Retrieved January 28, 2026, from hutengdai.com/phonology-lab.html.
Feedback is welcome! If you'd like to suggest or contribute new features or improvements, feel free to email me at huteng@umich.edu.
References
  • Hayes, B. & Moore-Cantwell, C. (2025). A guide to analysis in MaxEnt Optimality Theory. Lingbuzz 008790.
  • Prince, A. (2002). Entailed ranking arguments. ROA 500.
  • Prince, A. & Tesar, B. (2004). Learning phonotactic distributions. In R. Kager, J. Pater & W. Zonneveld (Eds.), Constraints in Phonological Acquisition, 245–291. Cambridge University Press.
  • Tesar, B. & Smolensky, P. (2000). Learnability in Optimality Theory. MIT Press.
All BibTeX entries
@misc{DaiPhonologyLab,
  author       = {Dai, Huteng},
  title        = {Phonology Lab},
  year         = {2026},
  note         = {Accessed: 2026-01-28},
  url          = {https://hutengdai.com/phonology-lab.html}
}

@unpublished{HayesMooreCantwell2025,
  author = {Hayes, Bruce and Moore-Cantwell, Claire},
  title  = {A Guide to Analysis in {MaxEnt} {Optimality Theory}},
  year   = {2025},
  url    = {https://ling.auf.net/lingbuzz/008790}
}

@unpublished{Prince2002,
  author = {Prince, Alan},
  title  = {Entailed Ranking Arguments},
  year   = {2002},
  note   = {Rutgers Optimality Archive, ROA-500}
}

@incollection{PrinceTesar2004,
  author    = {Prince, Alan and Tesar, Bruce},
  title     = {Learning Phonotactic Distributions},
  booktitle = {Constraints in Phonological Acquisition},
  editor    = {Kager, Ren\'{e} and Pater, Joe and Zonneveld, Wim},
  year      = {2004},
  pages     = {245--291},
  publisher = {Cambridge University Press}
}

@book{TesarSmolensky2000,
  author    = {Tesar, Bruce and Smolensky, Paul},
  title     = {Learnability in {Optimality Theory}},
  year      = {2000},
  publisher = {MIT Press},
  address   = {Cambridge, MA}
}

MaxEnt Lab

Maximum Entropy Grammar

1. MaxEnt Grammar

A MaxEnt grammar assigns each candidate a Harmony score from its violation profile and learned weights, then converts to probabilities via softmax:

H(candidate) = violations · weights
P(candidate) = exp(H) / Σ_j exp(H_j)

Violations are negative (−1, −2, …). We assume w > 0. More violations × higher weight = more negative Harmony = lower probability.

Example

Two candidates with Harmony scores H = [−1, −2]:

exp(H) = [exp(−1), exp(−2)] = [0.368, 0.135]
Z = 0.368 + 0.135 = 0.503
P = [0.368/0.503, 0.135/0.503] = [0.731, 0.269]

The candidate with the less negative Harmony gets higher probability.
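
The same computation in a few lines of Python (a minimal sketch; the variable names are illustrative, not part of the tool):

import math

# Harmony scores from the worked example above
harmonies = [-1.0, -2.0]

# Softmax: exponentiate each Harmony score, then normalize by the sum Z
exp_h = [math.exp(h) for h in harmonies]
Z = sum(exp_h)                       # 0.368 + 0.135 = 0.503
probabilities = [e / Z for e in exp_h]

print(probabilities)                 # [0.731..., 0.268...]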

2. Your Data

Upload a CSV or paste a tableau. Columns: Input, Output, Count, Constraint1, … Violations are zero or negative (0, −1, −2). Use Count = 0 for candidates you want predictions for only.
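
For instance, a pasted tableau for the lenition pattern in the next section might look like this (the underlying and surface forms here are placeholders):

Input,Output,Count,*p,Ident
/pa/,[pa],73,-1,0
/pa/,[wa],27,0,-1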

3. K3: Function Word Lenition

K3 and K4 come from Hayes & Moore-Cantwell (2025) — check it out and cite them!

Target: 73% [p] / 27% [w].

[Interactive weight sliders: *p = 1.0, Ident = 2.0]

Exercises

1. Set *p = 1.0 and Ident = 2.0 → ≈73% / 27%.
2. What do equal weights predict?
3. What if *p ≫ Ident?

Multiple solutions exist; e.g. *p = 3.0, Ident = 4.0 also works, since with two candidates only the difference between the weights determines the probabilities.
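
A quick check in Python (a sketch assuming, as in the exercises, that faithful [p] violates *p once and lenited [w] violates Ident once):

import math

def p_faithful(w_star_p, w_ident):
    # Harmony of [p]: one *p violation; Harmony of [w]: one Ident violation.
    h_p, h_w = -w_star_p, -w_ident
    return math.exp(h_p) / (math.exp(h_p) + math.exp(h_w))

print(p_faithful(1.0, 2.0))  # ≈ 0.731
print(p_faithful(3.0, 4.0))  # ≈ 0.731: same difference, same prediction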

4. K4: Perturbers

A perturber is a context-dependent constraint: *V[−son], for example, is only active after a vowel.

[Interactive weight sliders: 1.0, 2.0, 3.0]
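
A minimal sketch of how a perturber shifts probabilities, assuming illustrative weights (*p = 1.0, Ident = 2.0, *V[−son] = 3.0) and schematic candidates that are not part of the tool:

import math

def maxent_probs(violation_rows, weights):
    # Harmony = dot product of (negative) violations and weights; softmax over candidates.
    hs = [sum(v * w for v, w in zip(row, weights)) for row in violation_rows]
    Z = sum(math.exp(h) for h in hs)
    return [math.exp(h) / Z for h in hs]

weights = [1.0, 2.0, 3.0]            # *p, Ident, *V[-son] (placeholder values)

# Context 1: /p/ not after a vowel; the perturber assigns no violations.
#              *p  Ident *V[-son]
no_vowel   = [[-1,  0,    0],        # faithful [p]
              [ 0, -1,    0]]        # lenited [w]

# Context 2: /p/ after a vowel; the perturber also penalizes the faithful candidate.
post_vowel = [[-1,  0,   -1],        # faithful [p]
              [ 0, -1,    0]]        # lenited [w]

print(maxent_probs(no_vowel, weights))    # ≈ [0.73, 0.27]
print(maxent_probs(post_vowel, weights))  # ≈ [0.12, 0.88]: more lenition after a vowel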

5. Sigmoid Curve

sigmoid(w) = 1 / (1 + exp(−w))

In a two-candidate tableau, P(winner) = sigmoid(H_winner − H_loser), so the mapping from weights to probabilities follows this S-shaped curve.

[Interactive slider: w = 0.0]
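
A quick numerical check of this equivalence, reusing the Harmony scores from the worked example above:

import math

def sigmoid(w):
    return 1.0 / (1.0 + math.exp(-w))

# Two-candidate case: the softmax probability of the winner
# equals the sigmoid of the Harmony difference.
h_winner, h_loser = -1.0, -2.0
p_softmax = math.exp(h_winner) / (math.exp(h_winner) + math.exp(h_loser))
print(p_softmax, sigmoid(h_winner - h_loser))  # both ≈ 0.731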

OT Lab

Optimality Theory · Recursive Constraint Demotion

1. Algorithm

1. Extract winner~loser pairs → comparative tableau (W / L / e).
2. RCD: place every constraint with no L into the current stratum, discard the pairs that now have a W under a ranked constraint, and repeat.
3. Verify winners are correctly selected.

BCD (Prince & Tesar 2004): bias the choice among eligible constraints by class. M ≫ F installs markedness constraints first; F ≫ M the reverse.
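
A minimal Python sketch of plain RCD (without the BCD bias), assuming each winner~loser pair is stored as a dict from constraint name to 'W', 'L', or 'e'; the constraint names in the demo data are hypothetical:

def rcd(constraints, pairs):
    # Returns a list of strata, highest-ranked first.
    remaining_constraints = list(constraints)
    remaining_pairs = list(pairs)
    strata = []
    while remaining_constraints:
        # Constraints that prefer no loser in any remaining pair are rankable now.
        stratum = [c for c in remaining_constraints
                   if all(p.get(c, 'e') != 'L' for p in remaining_pairs)]
        if not stratum:
            raise ValueError("No rankable constraint: the data are inconsistent.")
        strata.append(stratum)
        remaining_constraints = [c for c in remaining_constraints if c not in stratum]
        # Discard pairs now accounted for by a W under a just-installed constraint.
        remaining_pairs = [p for p in remaining_pairs
                           if not any(p.get(c, 'e') == 'W' for c in stratum)]
    return strata

pairs = [{'Max': 'e', '*Coda': 'W', 'Dep': 'L'},
         {'Max': 'W', '*Coda': 'L', 'Dep': 'e'}]
print(rcd(['Max', '*Coda', 'Dep'], pairs))   # [['Max'], ['*Coda'], ['Dep']]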

2. Your Data

Columns: Input, Output, Winner, Constraint1, …   Winner = 1 marks the winning candidate. Violations may be integers, stars (*), or empty.
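
For instance (the forms and constraint names are placeholders, not required by the tool):

Input,Output,Winner,Max,*Coda,Dep
/pat/,pat,1,,*,
/pat/,pa,0,*,,
/pat/,patə,0,,,*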