Coarse-to-Fine Natural Language Processing by Slav Petrov (auth.)

The impact of computers that can understand natural language will be tremendous. To develop this capability we need to be able to automatically and efficiently analyze large amounts of text. Manually devised rules are not sufficient to provide coverage to handle the complex structure of natural language, necessitating systems that can automatically learn from examples. To handle the flexibility of natural language, it has become standard practice to use statistical models, which assign probabilities, for example, to the different meanings of a word or the plausibility of grammatical constructions.

This book develops a general coarse-to-fine framework for learning and inference in large statistical models for natural language processing.

Coarse-to-fine approaches exploit a sequence of models which introduce complexity gradually. At the top of the sequence is a trivial model in which learning and inference are both cheap. Each subsequent model refines the previous one, until a final, full-complexity model is reached. Applications of this framework to syntactic parsing, speech recognition and machine translation are presented, demonstrating the effectiveness of the approach in terms of accuracy and speed. The book is intended for students and researchers interested in statistical approaches to Natural Language Processing.

Slav's work Coarse-to-Fine Natural Language Processing represents a major advance in the area of syntactic parsing, and a great advertisement for the superiority of the machine-learning approach.

Eugene Charniak (Brown University)

Best AI & machine learning books

Artificial Intelligence Through Prolog

A book on artificial intelligence through Prolog.

Language, Cohesion and Form (Studies in Natural Language Processing)

As a pioneer in computational linguistics, working in the earliest days of language processing by computer, Margaret Masterman believed that meaning, not grammar, was the key to understanding languages, and that machines could determine the meaning of sentences. This volume brings together Masterman's groundbreaking papers for the first time, demonstrating the significance of her work in the philosophy of science and the nature of iconic languages.

Handbook of Natural Language Processing

This study explores the design and application of natural language text-based processing systems, based on generative linguistics, empirical corpus analysis, and artificial neural networks. It emphasizes the practical tools needed to work with the selected systems.

Extra info for Coarse-to-Fine Natural Language Processing

Sample text

The method for sampling derivations of a PCFG is given in Finkel et al. (2006). It requires a single inside-outside computation per sentence and is then efficient per sample. Note that for refined grammars, a posterior parse sample can be drawn by sampling a derivation and projecting away the subcategories. Figure 4 shows the results of the following experiment. We constructed 10-best lists from the full grammar G in Sect. 2. We then took the same grammar and extracted 500-sample lists using the method of Finkel et al.
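As a rough illustration of the sampling procedure described above (a minimal sketch, not the book's implementation), the following Python fragment runs the inside pass of the inside-outside computation over a toy CNF grammar and then draws derivations top-down in proportion to their posterior probability. The toy grammar, lexicon and function names are assumptions made for the example.

import random
from collections import defaultdict

# Toy CNF grammar: binary rules (A, B, C) -> prob and lexical rules (tag, word) -> prob.
RULES = {("S", "NP", "VP"): 1.0,
         ("NP", "DT", "NN"): 1.0,
         ("VP", "VB", "NP"): 1.0}
LEX = {("DT", "the"): 1.0, ("NN", "dog"): 0.5, ("NN", "cat"): 0.5, ("VB", "saw"): 1.0}

def inside(words):
    # chart[(i, j)][A] = total probability of A spanning words[i:j]
    n = len(words)
    chart = defaultdict(lambda: defaultdict(float))
    for i, w in enumerate(words):
        for (tag, word), p in LEX.items():
            if word == w:
                chart[(i, i + 1)][tag] += p
    for width in range(2, n + 1):
        for i in range(n - width + 1):
            j = i + width
            for k in range(i + 1, j):
                for (a, b, c), p in RULES.items():
                    score = p * chart[(i, k)][b] * chart[(k, j)][c]
                    if score > 0:
                        chart[(i, j)][a] += score
    return chart

def sample_tree(chart, words, sym="S", i=0, j=None):
    # Draw one derivation from the posterior, choosing each split and rule
    # proportionally to its inside score under the given root symbol and span.
    if j is None:
        j = len(words)
    if j - i == 1:
        return (sym, words[i])
    options, weights = [], []
    for k in range(i + 1, j):
        for (a, b, c), p in RULES.items():
            if a == sym:
                w = p * chart[(i, k)][b] * chart[(k, j)][c]
                if w > 0:
                    options.append((b, c, k))
                    weights.append(w)
    b, c, k = random.choices(options, weights=weights)[0]
    return (sym, sample_tree(chart, words, b, i, k), sample_tree(chart, words, c, k, j))

words = "the dog saw the cat".split()
print(sample_tree(inside(words), words))

Because the inside chart is computed only once per sentence, each additional sample costs only the top-down pass, which is what makes large sample lists cheap to extract.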

Like Matsuzaki et al. (2005) and Prescher (2005), we induce refinements in a fully automatic fashion. However, we use a more sophisticated split-merge approach that allocates subcategories adaptively where they are most effective, like a linguist would. The grammars recover patterns like those discussed in Klein and Manning (2003a), heavily articulating complex and frequent categories like NP and VP while barely splitting rare or simple ones (see Sect. 6 for an empirical analysis). Empirically, hierarchical splitting increases the accuracy and lowers the variance of the learned grammars.
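To make the split step of such a split-merge cycle concrete, here is a small Python sketch under assumed data structures (not the grammars or code used in the book): every category is split into two subcategories, and each rule's probability mass is spread over the resulting refined rules with a small random perturbation so that EM retraining can differentiate the halves. The merge step, which undoes the least useful splits, is omitted here.

import random

def split_grammar(rules, noise=0.01):
    # rules: {(A, B, C): prob}. Returns a refined grammar over subcategories A_0, A_1.
    split = {}
    for (a, b, c), p in rules.items():
        for ax in (a + "_0", a + "_1"):
            # each parent subcategory spreads the rule's mass over the 4 child-subcategory
            # pairs; the normalized random weights keep the refined grammar a proper PCFG
            pairs = [(b + i, c + j) for i in ("_0", "_1") for j in ("_0", "_1")]
            weights = [1 + noise * (random.random() - 0.5) for _ in pairs]
            total = sum(weights)
            for (by, cz), w in zip(pairs, weights):
                split[(ax, by, cz)] = p * w / total
    return split

grammar = {("S", "NP", "VP"): 1.0, ("NP", "DT", "NN"): 0.7, ("NP", "NP", "PP"): 0.3}
refined = split_grammar(grammar)
print(len(refined), "refined rules from", len(grammar), "original rules")  # 24 from 3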

The parameters of the refined productions Ax → By Cz, where Ax is a subcategory of A, By of B, and Cz of C, can then be estimated in various ways; past work on grammars with latent variables has investigated various estimation techniques. Generative approaches have included basic training with expectation maximization (EM) (Matsuzaki et al. 2005; Prescher 2005), as well as a Bayesian nonparametric approach (Liang et al. 2007). Discriminative approaches (Henderson 2004) and Chap. 3 are also possible, but we focus here on a generative, EM-based split and merge approach, as the comparison is only between estimation methods, since Smith and Johnson (2007) show that the model classes are the same.
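As a pointer to what the generative, EM-based estimation amounts to, the sketch below shows the M-step re-estimation of refined rules Ax → By Cz from expected counts gathered in the E-step; the toy counts and the function name are assumptions for illustration, not the book's code.

from collections import defaultdict

def m_step(expected_counts):
    # expected_counts: {(A_x, B_y, C_z): E[count]} accumulated over the treebank in the E-step.
    # Each refined rule's probability is its expected count divided by its parent's total.
    parent_totals = defaultdict(float)
    for (ax, _, _), c in expected_counts.items():
        parent_totals[ax] += c
    return {rule: c / parent_totals[rule[0]] for rule, c in expected_counts.items()}

counts = {("NP_0", "DT_0", "NN_1"): 30.0, ("NP_0", "NP_1", "PP_0"): 10.0}
print(m_step(counts))  # {('NP_0', 'DT_0', 'NN_1'): 0.75, ('NP_0', 'NP_1', 'PP_0'): 0.25}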
