Authors
Shai Ben-David, John Blitzer, Koby Crammer, Fernando Pereira
Publication date
2007/12
Journal
Advances in neural information processing systems
Volume
19
Pages
137
Publisher
MIT Press
Description
Discriminative learning methods for classification perform well when training and test data are drawn from the same distribution. In many situations, though, we have labeled training data for a source domain, and we wish to learn a classifier which performs well on a target domain with a different distribution. Under what conditions can we adapt a classifier trained on the source domain for use in the target domain? Intuitively, a good feature representation is a crucial factor in the success of domain adaptation. We formalize this intuition theoretically with a generalization bound for domain adaptation. Our theory illustrates the tradeoffs inherent in designing a representation for domain adaptation and gives a new justification for a recently proposed model. It also points toward a promising new model for domain adaptation: one which explicitly minimizes the difference between the source and target domains, while at the same time maximizing the margin of the training set.
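The bound described above depends on how distinguishable the source and target distributions are under a given feature representation. A common empirical proxy for this quantity (popularized by this line of work) trains a "domain classifier" to separate source from target samples; low error means the domains are hard to tell apart, so the representation is promising for adaptation. The sketch below is a minimal NumPy illustration of that idea, assuming a simple logistic-regression domain classifier trained by gradient descent; the function name `proxy_a_distance` and all hyperparameters are illustrative choices, not the authors' exact procedure.

```python
import numpy as np

def proxy_a_distance(source, target, epochs=300, lr=0.1):
    """Estimate how separable two samples are by training a linear
    logistic-regression 'domain classifier' to distinguish them.

    Returns 2 * (1 - 2 * error): near 0 when the domains look alike
    in this representation, near 2 when they are easily separable.
    """
    X = np.vstack([source, target])
    y = np.concatenate([np.zeros(len(source)), np.ones(len(target))])
    # Standardize features so gradient descent behaves well.
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-8)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
        g = p - y                                # logistic-loss gradient
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    err = np.mean((p >= 0.5) != y)
    return 2.0 * (1.0 - 2.0 * err)

rng = np.random.default_rng(0)
near = proxy_a_distance(rng.normal(0, 1, (200, 2)),
                        rng.normal(0, 1, (200, 2)))   # overlapping domains
far = proxy_a_distance(rng.normal(0, 1, (200, 2)),
                       rng.normal(5, 1, (200, 2)))    # well-separated domains
```

In practice `near` comes out close to 0 and `far` close to 2, matching the intuition in the abstract: a representation under which the domains are indistinguishable (low proxy distance) is the kind the proposed model tries to find.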
Total citations
Cited by year, 2007–2024 (per-year citation chart; counts not recoverable)
Scholar articles
S Ben-David, J Blitzer, K Crammer, F Pereira - Advances in neural information processing systems, 2006