Evolution and Compression in LLMs: on the emergence of human-aligned categorization

New York University

Abstract

Converging evidence suggests that human systems of semantic categories achieve near-optimal compression via the Information Bottleneck (IB) complexity-accuracy tradeoff. Large language models (LLMs) are not trained for this objective, which raises the question: are LLMs capable of evolving efficient human-aligned semantic systems?
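In the color-naming setting, the IB complexity-accuracy tradeoff is typically formalized as minimizing complexity I(M;W) (mutual information between meanings and words) while maximizing accuracy I(W;U) (mutual information between words and referents). As a minimal illustrative sketch (not the paper's implementation; array names and shapes are assumptions), both terms can be computed directly from a stochastic encoder:

```python
import numpy as np

def mutual_info(joint):
    """I(X;Y) in bits from a joint distribution p(x, y)."""
    px = joint.sum(axis=1, keepdims=True)   # marginal p(x)
    py = joint.sum(axis=0, keepdims=True)   # marginal p(y)
    mask = joint > 0
    return float((joint[mask] * np.log2(joint[mask] / (px * py)[mask])).sum())

def ib_tradeoff(p_m, q_w_given_m, p_u_given_m):
    """Complexity I(M;W) and accuracy I(W;U) of a naming system.

    p_m:         prior over meanings, shape (M,)
    q_w_given_m: stochastic encoder (the naming system), shape (M, W)
    p_u_given_m: meaning distributions over referents, shape (M, U)
    """
    p_mw = p_m[:, None] * q_w_given_m   # joint p(m, w)
    p_wu = p_mw.T @ p_u_given_m         # joint p(w, u)
    return mutual_info(p_mw), mutual_info(p_wu)
```

A system is IB-efficient when no other encoder achieves higher accuracy at the same or lower complexity; the scalar objective at tradeoff parameter beta is then `complexity - beta * accuracy`.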

To address this question, we focus on color categorization --- a key testbed for cognitive theories of categorization with uniquely rich human data --- and replicate two influential human studies with LLMs. First, we conduct an English color-naming study, showing that LLMs vary widely in complexity and in their alignment with English speakers, with larger instruction-tuned models achieving better alignment and IB-efficiency.

Second, to test whether these LLMs simply mimic patterns in their training data or actually exhibit a human-like inductive bias toward IB-efficiency, we simulate cultural evolution of pseudo color-naming systems in LLMs via a method we refer to as Iterated In-Context Language Learning (IICLL). We find that, akin to humans, LLMs iteratively restructure initially random systems toward greater IB-efficiency. However, only the model with the strongest in-context learning capabilities (Gemini 2.0) recapitulates the wide range of near-optimal IB tradeoffs observed in humans, while other state-of-the-art models converge to low-complexity solutions. These findings demonstrate how human-aligned semantic categories can emerge in LLMs via the same fundamental principle that underlies semantic efficiency in humans.
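IICLL itself prompts LLMs in-context with data generated by the previous generation; as a self-contained illustration of the transmission-bottleneck dynamics such iterated learning relies on, here is a toy chain in which a nearest-example learner stands in for the LLM (all names and parameters here are illustrative assumptions, not the paper's procedure):

```python
import random

def learn(examples, colors):
    """Stand-in learner: each color takes the name of its nearest labeled example."""
    return {c: min(examples, key=lambda ex: abs(ex[0] - c))[1] for c in colors}

def iterated_learning(system, colors, bottleneck, generations, rng):
    """Pass a naming system through a chain of learners, each of which sees
    only `bottleneck` labeled examples of the previous generation's system."""
    for _ in range(generations):
        shown = rng.sample(sorted(system.items()), bottleneck)  # transmission bottleneck
        system = learn(shown, colors)
    return system

colors = list(range(30))                               # 1D stand-in for color space
rng = random.Random(0)
initial = {c: rng.choice("abcdefgh") for c in colors}  # random 8-name system
final = iterated_learning(initial, colors, bottleneck=6, generations=10, rng=rng)
```

Because each learner only ever reuses names it was shown, the bottleneck exerts a compression pressure: the chain drifts from the random initial system toward fewer, contiguous categories, analogous to the drop in complexity the abstract reports for most models.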

BibTeX

@inproceedings{imel2026evolution,
    title = {Evolution and compression in {LLM}s: on the emergence of human-aligned categorization},
    author = {Nathaniel Imel and Noga Zaslavsky},
    booktitle = {The Fourteenth International Conference on Learning Representations},
    year = {2026},
    url = {https://openreview.net/forum?id=s7gSTR2AqA}
}