[Illustration: boots walking across stones marked with letters spelling out the word “words”]

Psychology, Computational Linguistics and the Evolution of Word Meaning

Barbara Malt and her collaborators examine how we talk about objects across multiple languages—and how that reflects human thought processes.

Languages, like plants and animals, are continually evolving. As part of that evolution, words gain new meanings over time. One hypothesis is that meanings are solely a reflection of culture and that we develop words to reflect new needs. As those needs pop up in unpredictable ways, new words do, too.

The work of Barbara Malt, professor of psychology, leans in another direction: that “the development of word meaning is not so arbitrary and culture-dependent. What we’re seeing is that there is an underlying predictability that is a reflection of the process of the mind as opposed to these external, more cultural forces,” she says.

By combining psychology and computational linguistics, Malt and her collaborators—Yang Xu, a computational linguist at the University of Toronto, and Mahesh Srinivasan, an associate professor of psychology at the University of California, Berkeley—are examining how we talk about objects and actions across multiple languages, and showing how the human mind works to evolve language.

In a study published in Cognitive Psychology, the team looked at common words with more than one meaning. The word “run,” for example, can describe the physical act of running, but it takes on a more metaphorical meaning when used to describe things like a tear in a stocking or the act of seeking political office.

The researchers found that, across languages, words developed senses—other shades of meaning—following the same pattern: from concrete to abstract.

“There are mental processes that give rise to these regular patterns in the development of meaning,” Malt explains. “The reason for proposing that words start with concrete senses and proceed to metaphorical ones is that there is an idea out there that we understand complex, abstract ideas by grounding them in something more concrete and easily perceived. That process of human thought seems to be reflected in the way that word meanings developed.”
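
The pattern can be made concrete with a toy check in Python. Everything below is invented for illustration, not the study's data or method: a few senses of “run” with hypothetical first-attestation dates and hypothetical concreteness ratings, ordered by date to see whether the earliest sense is also the most concrete.

```python
# Hypothetical senses of "run" with invented first-attestation dates
# and invented concreteness ratings (5 = very concrete, 1 = very abstract).
senses_of_run = [
    ("move quickly on foot",   900, 4.9),
    ("water flows",           1200, 4.2),
    ("a tear in a stocking",  1890, 3.6),
    ("seek political office", 1820, 1.8),
]

# Order the senses by the (invented) date they entered the language.
ordered = sorted(senses_of_run, key=lambda s: s[1])
for sense, year, concreteness in ordered:
    print(f"{year}: {sense} (concreteness {concreteness})")

# The concrete-to-abstract claim predicts that the earliest sense
# should also be the most concrete one.
print(ordered[0] == max(senses_of_run, key=lambda s: s[2]))  # True here
```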

In another study, published in the Proceedings of the National Academy of Sciences (PNAS), the team developed computational algorithms to predict the historical order in which the senses of a word emerged, and then tested their predictions against records of the English language over the past 1,000 years. They found that word meanings develop in small steps, moving from one stepping stone to the next rather than in a single giant leap.

“Our findings suggest that word senses emerge in ways that minimize cognitive costs, providing an efficient mechanism for expressing new ideas via a compact lexicon,” the team writes.
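
The team's actual algorithms aren't reproduced here, but the stepping-stone idea can be sketched as nearest-neighbor chaining. In this hypothetical Python illustration, each sense of a word is represented as a vector (the vectors below are invented, and cosine similarity is an assumed measure of relatedness), and each new sense attaches to whichever already-established sense it most resembles: small steps, not giant leaps.

```python
import numpy as np

def cosine(u, v):
    # Similarity between two sense vectors (an assumed representation).
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def predict_emergence_order(senses, root):
    """Nearest-neighbor chaining: start from the historically earliest
    sense and repeatedly attach the unplaced sense that is closest to
    any sense already in the chain."""
    placed, order = {root}, [root]
    while len(placed) < len(senses):
        best = max(
            (s for s in senses if s not in placed),
            key=lambda s: max(cosine(senses[s], senses[p]) for p in placed),
        )
        placed.add(best)
        order.append(best)
    return order

# Invented 2-D vectors standing in for richer representations of meaning.
senses = {
    "move on foot":   np.array([1.0, 0.0]),
    "water flows":    np.array([0.8, 0.3]),
    "machine runs":   np.array([0.5, 0.6]),
    "run for office": np.array([0.1, 0.9]),
}
print(predict_emergence_order(senses, root="move on foot"))
# -> ['move on foot', 'water flows', 'machine runs', 'run for office']
```

Under this rule, the abstract political sense is reached only by way of intermediate senses, never in one jump from the original physical meaning.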

[Illustration: a boy holding the moon, with the word “moon” written over it]

The appearance of a whole new word in the English language is rare, says Malt. She and her colleagues also determined that new word senses are more often expressed by attaching them to existing words than by coining new words.

In a forthcoming publication, the team examined how often the same items or ideas are assigned the same word across languages. For example, they found that in many languages the ideas expressed by the English words “fire” and “flame” are covered by a single word; the same goes for “moon” and “month.”

“Presumably, when meanings get labeled by the same word across many different languages, it’s a product of how we think and how we produce language,” Malt says. “Meanings that are conceptually similar will tend to be labeled by the same word.”
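
That kind of cross-linguistic counting can be illustrated with a small sketch. The mini-lexicons below are heavily simplified stand-ins chosen for illustration, not data from the study:

```python
# Toy lexicons: meaning -> word, for a few (heavily simplified) languages.
lexicons = {
    "Turkish":    {"fire": "ateş", "flame": "alev",  "moon": "ay",    "month": "ay"},
    "Indonesian": {"fire": "api",  "flame": "api",   "moon": "bulan", "month": "bulan"},
    "English":    {"fire": "fire", "flame": "flame", "moon": "moon",  "month": "month"},
}

def colexification_rate(lexicons, meaning_a, meaning_b):
    """Fraction of languages whose word for meaning_a is also
    their word for meaning_b."""
    shared = sum(1 for lex in lexicons.values()
                 if lex[meaning_a] == lex[meaning_b])
    return shared / len(lexicons)

print("fire/flame:", colexification_rate(lexicons, "fire", "flame"))  # 1/3
print("moon/month:", colexification_rate(lexicons, "moon", "month"))  # 2/3
```

Run over hundreds of languages rather than three, a tally like this reveals which pairs of meanings are most often bundled into a single word.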

The implications of this research fall into two areas. First, if we know how our brains work to shape language, that knowledge can be used to help children learn language, especially children with language delays. “It gives us a grasp of what it is they need to learn in order to be a proficient user of a language,” explains Malt.

The second implication is in improving how computers talk back to us, making artificial intelligence less artificial. However, Malt says, we won’t have a fully human-like, talking AI soon because we don’t yet completely grasp how our own minds work.

“When we fully understand how humans do it, we should be able to program it into AI,” she says.

Story by Jen A. Miller

Illustrations by Michelle Thompson

This story originally appeared as "The Evolution of Word Meaning" in the 2019 Lehigh Research Review.