B. Schölkopf, "SVMs—a practical consequence of learning theory"
ABSTRACT
Is there anything worthwhile to learn about the new SVM algorithm, or does it fall into the category of "yet-another-algorithm," in which case readers should stop here and save their time for something more useful? In this short overview, I will try to argue that studying support-vector learning is very useful in two respects. First, it is quite satisfying from a theoretical point of view: SV learning is based on some beautifully simple ideas and provides a clear intuition of what learning from examples is about. Second, it can lead to high performance in practical applications. In the following sense, the SV algorithm can be considered as lying at the intersection of learning theory and practice: for certain simple types of algorithms, statistical learning theory can identify rather precisely the factors that need to be taken into account to learn successfully. Real-world applications, however, often mandate the use of more complex models and algorithms, such as neural networks, that are much harder to analyze theoretically. The SV algorithm achieves both. It constructs models that are complex enough: it contains a large class of neural nets, radial basis function (RBF) nets, and polynomial classifiers as special cases. Yet it is simple enough to be analyzed mathematically, because it can be shown to correspond to a linear method in a high-dimensional feature space nonlinearly related to input space. Moreover, even though we can think of it as a linear algorithm in a high-dimensional space, in practice it does not involve any computations in that high-dimensional space: by the use of kernels, all necessary computations are performed directly in input space. This is the characteristic twist of SV methods: we are dealing with complex algorithms for nonlinear pattern recognition [1], regression [2], or feature extraction [3], but for the sake of analysis and algorithmics, we can pretend that we are working with a simple linear algorithm. I will explain the gist of SV methods by describing their roots in learning theory, the optimal hyperplane algorithm, the kernel trick, and SV function estimation. For details and further references, see Vladimir Vapnik's authoritative treatment [2], the collection my colleagues and I have put together [4], and the SV Web page at http://svm.first.gmd.de.
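
As a toy illustration of the kernel trick described above (a sketch added to this annotation, not part of the article): in Python, the homogeneous degree-2 polynomial kernel k(x, y) = (x . y)^2, evaluated directly in input space, agrees with an ordinary dot product taken after the explicit feature map Phi(x) = (x1^2, sqrt(2) x1 x2, x2^2).

import numpy as np

def phi(x):
    # Explicit degree-2 feature map for 2-d input: (x1^2, sqrt(2)*x1*x2, x2^2)
    return np.array([x[0]**2, np.sqrt(2) * x[0] * x[1], x[1]**2])

def poly_kernel(x, y):
    # Homogeneous polynomial kernel of degree 2, computed in input space
    return np.dot(x, y) ** 2

x = np.array([1.0, 2.0])
y = np.array([3.0, 4.0])

# Both print 121.0: the kernel performs the feature-space dot product
# without ever constructing the higher-dimensional vectors.
print(poly_kernel(x, y))
print(np.dot(phi(x), phi(y)))

For realistic feature spaces (high polynomial degrees, or RBF kernels with infinite-dimensional feature spaces) the explicit map becomes intractable, which is exactly why SV methods work with the kernel alone.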
ECVision indexed and annotated bibliography of cognitive computer vision publications
This bibliography was created by Hilary Buxton and Benoit Gaillard, University of Sussex, as part of ECVision Specific Action 8-1
The complete BibTeX file is available here: ECVision_bibliography.bib