Authors
Fei Sha, Fernando Pereira
Publication date
2003
Conference
Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics
Pages
213-220
Description
Conditional random fields for sequence labeling offer advantages over both generative models like HMMs and classifiers applied at each sequence position. Among sequence labeling tasks in language processing, shallow parsing has received much attention, with the development of standard evaluation datasets and extensive comparison among methods. We show here how to train a conditional random field to achieve performance as good as any reported base noun-phrase chunking method on the CoNLL task, and better than any reported single model. Improved training methods based on modern optimization algorithms were critical in achieving these results. We present extensive comparisons between models and training methods that confirm and strengthen previous results on shallow parsing and training methods for maximum-entropy models.
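The description refers to improved training based on modern optimization algorithms; the paper's comparisons include limited-memory quasi-Newton (L-BFGS) training. The following is a minimal, hypothetical sketch of that idea: fitting a linear-chain CRF on toy data by minimizing the negative conditional log-likelihood with L-BFGS. The tag set, feature dimensions, and data below are illustrative assumptions, not the authors' code, and finite-difference gradients stand in for the analytic forward-backward gradients a real trainer would use.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import logsumexp

    N_STATES = 3    # hypothetical tag set size, e.g. B/I/O chunk tags
    N_FEATS = 5     # hypothetical per-token feature dimension

    rng = np.random.default_rng(0)
    T = 6                                   # toy sequence length
    X = rng.normal(size=(T, N_FEATS))       # toy token features
    y = rng.integers(0, N_STATES, size=T)   # toy gold tags

    def unpack(theta):
        """Split flat parameters into emission (W) and transition (A) weights."""
        W = theta[:N_STATES * N_FEATS].reshape(N_STATES, N_FEATS)
        A = theta[N_STATES * N_FEATS:].reshape(N_STATES, N_STATES)
        return W, A

    def neg_log_likelihood(theta):
        """-log p(y | x) for one sequence under a linear-chain CRF."""
        W, A = unpack(theta)
        emit = X @ W.T                      # (T, N_STATES) emission scores
        # Unnormalized log-score of the gold tag path: emissions + transitions.
        gold = emit[np.arange(T), y].sum() + A[y[:-1], y[1:]].sum()
        # Log partition function via the forward algorithm in log space.
        alpha = emit[0]
        for t in range(1, T):
            # alpha[j] = emit[t, j] + logsumexp_i(alpha_prev[i] + A[i, j])
            alpha = emit[t] + logsumexp(alpha[:, None] + A, axis=0)
        return logsumexp(alpha) - gold

    theta0 = np.zeros(N_STATES * (N_FEATS + N_STATES))
    # L-BFGS drives the negative log-likelihood down; a convex objective
    # guarantees this toy problem has a single global optimum.
    res = minimize(neg_log_likelihood, theta0, method="L-BFGS-B")
    print("final -log p(y|x):", res.fun)

Because the CRF log-likelihood is convex in the parameters, quasi-Newton methods like the one sketched here converge far faster than iterative scaling, which is the kind of training-method contrast the description alludes to.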
Total citations
[Per-year citation histogram, 2003–2024; bar values not recoverable]
Scholar articles
F Sha, F Pereira - Proceedings of the 2003 human language technology …, 2003