Abstract

Children acquire multiple languages simultaneously from birth without explicit instruction, and the course of development in each language closely resembles the trajectory followed by monolingual children. We attempt to simulate this simultaneous multilingual syntax acquisition via a modified Visually Grounded Neural Syntax Learner (VGNSL), an unsupervised neural network model for learning syntactic representations. We incorporate a language classifier for the multilingual text corpus and introduce the predicted language label for each sentence into the training process of the VGNSL model. We train multilingual models on Flickr8k data with text in (English, French) or (English, tokenized Chinese). We find that, compared to the monolingual model, multilingual training makes the model less accurate at syntactic parsing in each language, with the degradation most pronounced for language pairs with high word overlap, such as (English, French). The modified model with the language classifier recovers some of this parsing accuracy in each language, but it still falls short of the monolingual model. Thus it cannot fully simulate the learning performance of simultaneously multilingual children.
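The abstract does not include implementation details, but the label-conditioning step can be illustrated with a minimal PyTorch sketch. Everything here is an assumption: the module names, the dimensions, and the choice to concatenate a learned label embedding onto each token embedding before the (unshown) VGNSL parser consumes the sentence are hypothetical, not the authors' actual design.

```python
# Hypothetical sketch of conditioning a VGNSL-style encoder on a predicted
# language label. All names and dimensions are assumptions; the real VGNSL
# implementation is not shown in the abstract.
import torch
import torch.nn as nn


class LanguageClassifier(nn.Module):
    """Predicts a language id (e.g. 0=English, 1=French) from token embeddings."""

    def __init__(self, embed_dim: int, num_languages: int):
        super().__init__()
        self.fc = nn.Linear(embed_dim, num_languages)

    def forward(self, word_embeddings: torch.Tensor) -> torch.Tensor:
        # Mean-pool over the token dimension, then classify the sentence.
        pooled = word_embeddings.mean(dim=1)   # (batch, embed_dim)
        return self.fc(pooled)                 # (batch, num_languages)


class LabelConditionedEncoder(nn.Module):
    """Appends a language-label embedding to every token embedding, so a
    downstream parser can condition its decisions on the predicted language."""

    def __init__(self, embed_dim: int, num_languages: int, label_dim: int = 16):
        super().__init__()
        self.classifier = LanguageClassifier(embed_dim, num_languages)
        self.label_embed = nn.Embedding(num_languages, label_dim)

    def forward(self, word_embeddings: torch.Tensor) -> torch.Tensor:
        logits = self.classifier(word_embeddings)        # (batch, num_languages)
        label = logits.argmax(dim=-1)                    # predicted language id
        lab = self.label_embed(label)                    # (batch, label_dim)
        lab = lab.unsqueeze(1).expand(-1, word_embeddings.size(1), -1)
        return torch.cat([word_embeddings, lab], dim=-1)


# Toy usage: a batch of 4 sentences, 10 tokens each, 300-dim embeddings.
enc = LabelConditionedEncoder(embed_dim=300, num_languages=2)
x = torch.randn(4, 10, 300)
print(enc(x).shape)  # torch.Size([4, 10, 316])
```

In this sketch the label is injected at the token level; injecting it once per sentence, or training the classifier jointly against gold language labels, would be equally plausible readings of the abstract's description.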
