
Large-scale kernel machines / [edited by] Léon Bottou [and others].

Contributor(s): Bottou, Léon
Material type: Text
Series: Neural information processing series
Publication details: Cambridge, Mass. : MIT Press, ©2007
Description: 1 online resource (xii, 396 pages) : illustrations
Content type: text
Media type: computer
Carrier type: online resource
ISBN: 0262255790; 9780262255790
Subject(s): Data structures (Computer science) | Machine learning | Support vector machines | COMPUTERS -- Data Processing | Engineering & Applied Sciences | Computer Science | COMPUTER SCIENCE/Machine Learning & Neural Networks
Genre/Form: Electronic books
Additional physical formats: Print version: Large-scale kernel machines
DDC classification: 005.7/3
LOC classification: QA76.9.D35 L38 2007eb
Other classification: ST 301

Summary: Pervasive and networked computers have dramatically reduced the cost of collecting and distributing large datasets. In this context, machine learning algorithms that scale poorly could simply become irrelevant. We need learning algorithms that scale linearly with the volume of the data while maintaining enough statistical efficiency to outperform algorithms that simply process a random subset of the data. This volume offers researchers and engineers practical solutions for learning from large-scale datasets, with detailed descriptions of algorithms and experiments carried out on realistically large datasets. At the same time, it offers researchers information that can address the relative lack of theoretical grounding for many useful algorithms.

After a detailed description of state-of-the-art support vector machine technology, an introduction to the essential concepts discussed in the volume, and a comparison of primal and dual optimization techniques, the book progresses from well-understood techniques to more novel and controversial approaches. Many contributors have made their code and data available online for further experimentation. Topics covered include fast implementations of known algorithms, approximations that are amenable to theoretical guarantees, and algorithms that perform well in practice but are difficult to analyze theoretically.

Contributors: Léon Bottou, Yoshua Bengio, Stéphane Canu, Eric Cosatto, Olivier Chapelle, Ronan Collobert, Dennis DeCoste, Ramani Duraiswami, Igor Durdanovic, Hans-Peter Graf, Arthur Gretton, Patrick Haffner, Stefanie Jegelka, Stephan Kanthak, S. Sathiya Keerthi, Yann LeCun, Chih-Jen Lin, Gaëlle Loosli, Joaquin Quiñonero-Candela, Carl Edward Rasmussen, Gunnar Rätsch, Vikas Chandrakant Raykar, Konrad Rieck, Vikas Sindhwani, Fabian Sinz, Sören Sonnenburg, Jason Weston, Christopher K. I. Williams, Elad Yom-Tov

Léon Bottou is a Research Scientist at NEC Labs America. Olivier Chapelle is with Yahoo! Research; he is editor of Semi-Supervised Learning (MIT Press, 2006). Dennis DeCoste is with Microsoft Research. Jason Weston is a Research Scientist at NEC Labs America.
Includes bibliographical references (pages 361-387) and index.

Print version record.

