About
I am a Research Scientist at FAIR in Paris. I study
failure modes in large models, with a focus on contextualized
measurement for AI governance: construct validity, brittleness,
spurious correlations, and fairness under distribution shift.
Previously, I was a postdoctoral fellow at EPFL and ENS Paris
as part of the Simons Collaboration on Cracking the Glass Problem.
I received my Ph.D. in Mathematics from the Courant Institute of
Mathematical Sciences at NYU.
Research interests
- Contextualized measurement for AI governance: construct validity and value-dependence
- Failure modes in large models: brittleness, spurious correlations, and bias amplification
- Robustness and reliability of large models under distribution shift
- Optimization, over-parameterization, and inductive biases in deep learning
Selected recent work
Recent publications spanning contextualized evaluation, representational harms, and model brittleness:
- LLM Knowledge is Brittle: Truthfulness Representations Rely on Superficial Resemblance. P. Haller, M. Ibrahim, P. Kirichenko, L. Sagun, S. Bell. arXiv 2025.
- Issues in Measuring the Fairness of Social Representation in Synthetic (Speech) Data. A. Subramonian, B. Sheppard, L. Sagun. Synthetic Data Workshop at Aarhus Decennial Conference 2025.
- Learning the Wrong Lessons: Syntactic-Domain Spurious Correlations in Language Models. C. Shaib, V. Suriyakumar, L. Sagun, B. Wallace, M. Ghassemi. Spotlight at NeurIPS 2025.
- On the lack of queer voices in diverse speech datasets. B. Sheppard, E. Ovalle, A. Williams, L. Sagun. Social Science and Language Models Workshop at Weizenbaum Institute 2025 & Speech AI for All Workshop at CHI 2025.
- The Root Shapes the Fruit: On the Persistence of Gender-Exclusive Harms in Aligned Language Models. A. Ovalle, K. L. Pavasovic, L. Martin, L. Zettlemoyer, E. M. Smith, K.-W. Chang, A. Williams, L. Sagun. Queer in AI at NeurIPS 2024 & FAccT 2025.
- An Effective Theory of Bias Amplification. A. Subramonian, S. J. Bell, L. Sagun, E. Dohmatob. ICLR 2025.
- A Differentiable Rank-Based Objective For Better Feature Learning. K. Lehman Pavasovic, D. Lopez-Paz, G. Biroli, L. Sagun. ICLR 2025.
- On generated vs collected data. L. Sagun, K. Ahuja, E. Dohmatob, J. Kempe. Workshop on Global AI Cultures at ICLR 2024.
- Simplicity bias leads to amplified performance disparities. S. Bell, L. Sagun. FAccT 2023.
- Fairness Indicators for Systematic Assessments of Visual Feature Extractors. P. Goyal, A. R. Soriano, C. Hazirbas, L. Sagun, N. Usunier. FAccT 2022.
For a full and up-to-date list of publications, see Google Scholar.
Mentoring
I've been fortunate to support brilliant PhD students, postdocs, and interns working on robustness, evaluation, and the social impact of large models.
PhD interns
- Chantal Shaib (2025)
- Brooklyn Sheppard (2024)
- Arjun Subramonian (2024)
- Elia Ovalle (2023)
- Arjun Subramonian (2022)
- Sam Bell (2021)
- Berfin Şimşek (2020)
PhD students
- Nicole Osayande (starting 2026)
- Krunoslav Lehman Pavasovic (2024–2025)
- Stéphane d’Ascoli (2019–2022)
Postdocs
- Sam Bell (2022–2023)
Teaching
I have taught and assisted courses in probability, statistics, machine
learning, and data science at NYU’s Courant Institute and Center for
Data Science, and have given invited lectures and short courses on deep
learning and values in AI.