Richard Futrell

The team, including Richard Futrell, a computational linguist at the University of California, Irvine, decided to use relatively small transformer networks modeled after GPT-2, a 2019 predecessor to the language model that powers ChatGPT. … The results show that language models, like humans, prefer to learn some linguistic patterns over others. Their preferences bear some resemblance to human preferences, but they’re not necessarily identical. … “This has the potential to be a research program that many people do,” Futrell said. “It’s supposed to be a genre, not a franchise.”

For the full story, please visit https://www.quantamagazine.org/can-ai-models-show-us-how-people-learn-impossible-languages-point-a-way-20250113/.