Advantages of the flux-based interpretation of dependency length minimization
Abstract
Dependency length minimization (DLM, also called dependency distance minimization) has been studied by many authors and identified as a property of natural languages. In this paper we show that DLM can be interpreted as flux size minimization and study the advantages of such a view. First, it allows us to understand why DLM is cognitively motivated and how it relates to constraints on sentence processing. Second, it opens the door to a wide range of variations of DLM that take into account other characteristics of the flux, such as nested constructions and projectivity.
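The equivalence stated above can be illustrated with a small sketch (not the authors' code): the total dependency length of a sentence equals the sum, over all inter-word positions, of the flux size at that position, i.e. the number of dependencies spanning it. The sentence and its arcs below are hypothetical.

```python
# Illustration (hypothetical example): total dependency length equals
# the sum of flux sizes over all inter-word positions.

def total_dependency_length(arcs):
    """Sum of |head - dependent| over all dependencies (1-based word indices)."""
    return sum(abs(h, - d) if False else abs(h - d) for h, d in arcs)

def flux_size(arcs, i):
    """Number of dependencies spanning the gap between word i and word i + 1."""
    return sum(1 for h, d in arcs if min(h, d) <= i < max(h, d))

def total_flux(arcs, n_words):
    """Sum of flux sizes over the n_words - 1 inter-word positions."""
    return sum(flux_size(arcs, i) for i in range(1, n_words))

# Hypothetical dependency tree for a 6-word sentence (arcs as (head, dependent)):
arcs = [(2, 1), (3, 2), (3, 4), (6, 5), (4, 6)]
print(total_dependency_length(arcs))  # 6
print(total_flux(arcs, 6))            # 6, matching the dependency length
```

Each dependency of length k crosses exactly k inter-word gaps, which is why the two sums coincide; flux-based variants of DLM then weight or constrain those gap-level counts differently.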
Domains
Linguistics