Speaker: Meilin Zhan (詹美琳), Massachusetts Institute of Technology
Time: December 23, 2019, 13:30–14:30
Venue: Room 204, Wennan Building (文南樓)
Title: Investigating speaker choice in language production using Chinese classifiers
Abstract:
When multiple options are available to express more or less the same meaning, what general principles govern speaker choice? Here we investigate the influence of contextual predictability on the encoding of linguistic content, as manifested in speaker choice in a classifier language.
In English, a numeral modifies a noun directly (e.g., three tables). In classifier languages such as Mandarin Chinese, a classifier (CL) must be used with the numeral and the noun (e.g., three CL.flat table, three CL.general table). While different nouns are compatible with different specific classifiers, there is a general classifier “ge” (CL.general) that can be used with most nouns. We focus on the alternation between the general classifier and a specific classifier with the same noun, where the two options are nearly semantically equivalent.
When the upcoming noun has high surprisal, using a specific classifier would reduce surprisal at the noun and thereby potentially facilitate comprehension (as predicted by the Uniform Information Density account (Levy & Jaeger, 2007)), but that specific classifier may be dispreferred from a production standpoint if accessing the general classifier requires less effort (as predicted by the Availability-Based Production account (Bock, 1987; Ferreira & Dell, 2000)). Using a combination of a corpus study and behavioral experiments, our results confirmed two predictions made by Availability-Based Production: 1) Speakers are more likely to produce the general classifier under greater time pressure; 2) Speakers are more likely to produce the general classifier when the noun is less frequent or less predictable.
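(For reference, and not taken from the talk materials themselves: surprisal here is the standard information-theoretic quantity

    S(noun) = -log2 P(noun | preceding context)

Because a specific classifier such as CL.flat narrows the set of plausible upcoming nouns, it raises P(noun | context) and so lowers surprisal at the noun, whereas the general classifier “ge” leaves the noun comparatively unconstrained.)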
Speaker bio:
Meilin Zhan (詹美琳) is a PhD candidate in Cognitive Science at the Massachusetts Institute of Technology and a member of MIT's Computational Psycholinguistics Lab. Her research seeks to understand the cognitive underpinnings of the production and comprehension of natural language through analysis of large linguistic datasets, psycholinguistic experiments, and computational modeling. She is a recipient of several awards, including the Marr Prize for Best Student Paper at CogSci 2018, a National Science Foundation Doctoral Dissertation Research Improvement Award (2019), and the Henry E. Singleton Fellowship (2017).