The Institute for Mathematical Behavioral Sciences Colloquium Series presents

Unification as a Cognitive Process for Language Acquisition
with Sean Fulop, Professor of Linguistics, Fresno State University

Thursday, November 13, 2014
4:00–5:00 p.m.
Social Science Plaza A, Room 2112

Unification is the operation that equalizes terms containing variables by means of substitution. Ever since Robinson (1965) showed unification to be useful for logical deduction, it has played a prominent role in artificial intelligence algorithms. In spite of this, it has rarely been mentioned in the context of modeling real intelligence, i.e., Cognitive Science. Cognitive modeling of syntactic grammar learning for natural language has often invoked the idea of "distributional learning" from the phrase and sentence structures presented to a child. In the first part of the talk, Fulop will summarize work to develop algorithms that model distributional learning as unification of syntactic categories.

The dual of unification is anti-unification, which forms a common generalization from two or more distinct terms with similar structure. Once again, this has been used to model analogical reasoning in artificial intelligence, but never in Cognitive Science. Whole Word Morphology (Ford et al. 1997) uses analogical relations to represent the internal structures of words (morphology) in natural language without need for morphemes. In the second part of the talk, Fulop will summarize work with Neuvel to develop algorithms that learn morphology using both unification and anti-unification. Given their apparent utility for language learning, he proposes that such operations on terms containing variables could be cognitively real. This pertains to Marcus's (2001) suggestion that the brain must compute operations over variables in order to manipulate symbols.
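
To illustrate the two operations at the heart of the talk, here is a minimal Python sketch of first-order unification and anti-unification over terms encoded as nested tuples. This is only an expository illustration, not the algorithms Fulop will present; the term encoding, variable convention (uppercase strings), and the simplified generalization step are assumptions made for this sketch.

    # Minimal sketch (illustration only, not Fulop's algorithms).
    # Terms are nested tuples, e.g. ("likes", "X", "mary"); uppercase
    # strings are treated as variables.

    def is_var(t):
        return isinstance(t, str) and t[:1].isupper()

    def substitute(term, subst):
        """Apply a substitution (variable -> term) throughout a term."""
        if is_var(term):
            return substitute(subst[term], subst) if term in subst else term
        if isinstance(term, tuple):
            return tuple(substitute(arg, subst) for arg in term)
        return term

    def unify(s, t, subst=None):
        """Return a substitution equalizing s and t, or None if none exists.
        (Occurs check omitted for brevity.)"""
        subst = dict(subst or {})
        s, t = substitute(s, subst), substitute(t, subst)
        if s == t:
            return subst
        if is_var(s):
            subst[s] = t
            return subst
        if is_var(t):
            subst[t] = s
            return subst
        if isinstance(s, tuple) and isinstance(t, tuple) and len(s) == len(t):
            for a, b in zip(s, t):
                subst = unify(a, b, subst)
                if subst is None:
                    return None
            return subst
        return None

    def anti_unify(s, t, memo=None, fresh=None):
        """Return a common generalization of s and t (simplified)."""
        if memo is None:
            memo, fresh = {}, iter("XYZUVW")
        if s == t:
            return s
        if isinstance(s, tuple) and isinstance(t, tuple) and len(s) == len(t):
            return tuple(anti_unify(a, b, memo, fresh) for a, b in zip(s, t))
        if (s, t) not in memo:  # reuse one variable per distinct mismatch
            memo[(s, t)] = next(fresh)
        return memo[(s, t)]

    # Unification equalizes two terms by binding their variables:
    print(unify(("likes", "X", "mary"), ("likes", "john", "Y")))
    # -> {'X': 'john', 'Y': 'mary'}

    # Anti-unification abstracts two distinct terms into a shared pattern:
    print(anti_unify(("likes", "john", "mary"), ("likes", "sue", "mary")))
    # -> ('likes', 'X', 'mary')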