
Minimax Bounds for Active Learning.

Authors :
Castro, Rui M.
Nowak, Robert D.
Source :
IEEE Transactions on Information Theory. May 2008, Vol. 54, Issue 5, p2339-2353. 15p. 4 Graphs.
Publication Year :
2008

Abstract

This paper analyzes the potential advantages and theoretical challenges of "active learning" algorithms. Active learning involves sequential sampling procedures that use information gleaned from previous samples in order to focus the sampling and accelerate the learning process relative to "passive learning" algorithms, which are based on nonadaptive (usually random) samples. There are a number of empirical and theoretical results suggesting that in certain situations active learning can be significantly more effective than passive learning. However, the fact that active learning algorithms are feedback systems makes their theoretical analysis very challenging. This paper aims to shed light on achievable limits in active learning. Using minimax analysis techniques, we study the achievable rates of classification error convergence for broad classes of distributions characterized by decision boundary regularity and noise conditions. The results clearly indicate the conditions under which one can expect significant gains through active learning. Furthermore, we show that the learning rates derived are tight for "boundary fragment" classes in d-dimensional feature spaces when the feature marginal density is bounded from above and below. [ABSTRACT FROM AUTHOR]
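The contrast the abstract draws between passive (nonadaptive) sampling and active (feedback-driven) sampling can be made concrete with a toy experiment. The sketch below is not from the paper; it compares passive random sampling with adaptive bisection for a noiseless one-dimensional threshold classifier, the simplest setting in which active learning's error decays exponentially in the number of queries while passive learning's decays only polynomially. The threshold value, sample sizes, and helper names are all hypothetical illustration choices.

```python
import random


def label(x, threshold):
    """Noiseless oracle: class 1 to the right of the decision threshold."""
    return 1 if x >= threshold else 0


def passive_estimate(n, threshold, rng):
    """Passive learning: n uniform random (nonadaptive) samples.

    The threshold estimate is the midpoint between the rightmost point
    labelled 0 and the leftmost point labelled 1.
    """
    left, right = 0.0, 1.0
    for _ in range(n):
        x = rng.uniform(0.0, 1.0)
        if label(x, threshold) == 0:
            left = max(left, x)
        else:
            right = min(right, x)
    return 0.5 * (left + right)


def active_estimate(n, threshold):
    """Active learning: n adaptively placed queries (bisection).

    Each query is placed at the midpoint of the current uncertainty
    interval, so the interval halves with every label observed.
    """
    left, right = 0.0, 1.0
    for _ in range(n):
        mid = 0.5 * (left + right)
        if label(mid, threshold) == 0:
            left = mid
        else:
            right = mid
    return 0.5 * (left + right)


if __name__ == "__main__":
    rng = random.Random(0)
    theta = 0.37      # hypothetical true decision boundary
    n = 20            # hypothetical sample budget
    trials = 1000

    passive_err = sum(
        abs(passive_estimate(n, theta, rng) - theta) for _ in range(trials)
    ) / trials
    active_err = abs(active_estimate(n, theta) - theta)

    print(f"passive error (avg over {trials} trials, n={n}): {passive_err:.5f}")
    print(f"active error  (bisection, n={n}):               {active_err:.7f}")
```

With the same budget of n labels, the passive estimate's error is on the order of 1/n, whereas the bisection estimate's error is on the order of 2^(-n), mirroring the kind of gap the paper quantifies more generally under boundary regularity and noise conditions.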

Details

Language :
English
ISSN :
0018-9448
Volume :
54
Issue :
5
Database :
Academic Search Index
Journal :
IEEE Transactions on Information Theory
Publication Type :
Academic Journal
Accession number :
32056319
Full Text :
https://doi.org/10.1109/TIT.2008.920189