Abstract

All spoken languages encode syllables and constrain their internal structure. But whether these restrictions concern the design of the language system, broadly, or speech, specifically, remains unknown. To address this question, here we gauge the structure of signed syllables in American Sign Language (ASL). Like spoken syllables, signed syllables must exhibit a single sonority/energy peak (i.e., movement). Four experiments examine whether this restriction is enforced by signers and nonsigners. We first show that Deaf ASL signers selectively apply sonority restrictions to syllables (but not morphemes) in novel ASL signs. We next examine whether this principle might further shape the representation of signed syllables by nonsigners. Absent any experience with ASL, nonsigners used movement to define syllable-like units. Moreover, the restriction on syllable structure constrained the capacity of nonsigners to learn from experience. Given brief practice that implicitly paired syllables with sonority peaks (i.e., movement), a natural phonological constraint attested in every human language, nonsigners rapidly learned to rely selectively on movement to define syllables, and they also learned to partly ignore it in identifying morpheme-like units. Remarkably, nonsigners failed to learn an unnatural rule that defines syllables by handshape, suggesting they were unable to ignore movement in identifying syllables. These findings indicate that signed and spoken syllables are subject to a shared phonological restriction that constrains phonological learning in a new modality. These conclusions suggest that the design of the phonological system is partly amodal.
