Learning theory : 19th Annual Conference on Learning Theory, COLT 2006, Pittsburgh, PA, USA, June 22-25, 2006 : proceedings / Gabor Lugosi, Hans Ulrich Simon (eds.).

By: Conference on Learning Theory (19th : 2006 : Pittsburgh, Pa.)
Contributor(s): Lugosi, Gábor | Simon, Hans Ulrich, 1954-
Material type: Text
Series: Lecture notes in computer science ; 4005 (LNCS sublibrary)
Publisher: Berlin ; New York : Springer, 2006
Description: 1 online resource (xi, 656 pages) : illustrations
Content type: text
Media type: computer
Carrier type: online resource
ISBN: 9783540352969; 3540352961; 3540352945; 9783540352945
Other titles: 19th Annual Conference on Learning Theory | Nineteenth Annual Conference on Learning Theory | COLT 2006
Subject(s): Machine learning -- Congresses | Computational learning theory -- Congresses | Artificial intelligence -- Congresses | Learning theory | Algorithms | Computer analysis | Mathematics | Computer sciences | Logic | Information and Communication Technology (General)
Genre/Form: Electronic books | Conference papers and proceedings
Additional physical format: Print version: Learning theory
DDC classification: 006.31
LOC classification: Q325.5 .C665 2006eb
Other classification: O234-532
Online resources: Available online
Contents:
Invited Presentations:
  Random Multivariate Search Trees
  On Learning and Logic
  Predictions as Statements and Decisions
Clustering, Un-, and Semisupervised Learning:
  A Sober Look at Clustering Stability
  PAC Learning Axis-Aligned Mixtures of Gaussians with No Separation Assumption
  Stable Transductive Learning
  Uniform Convergence of Adaptive Graph-Based Regularization
Statistical Learning Theory:
  The Rademacher Complexity of Linear Transformation Classes
  Function Classes That Approximate the Bayes Risk
  Functional Classification with Margin Conditions
  Significance and Recovery of Block Structures in Binary Matrices with Noise
Regularized Learning and Kernel Methods:
  Maximum Entropy Distribution Estimation with Generalized Regularization
  Unifying Divergence Minimization and Statistical Inference Via Convex Duality
  Mercer's Theorem, Feature Maps, and Smoothing
  Learning Bounds for Support Vector Machines with Learned Kernels
Query Learning and Teaching:
  On Optimal Learning Algorithms for Multiplicity Automata
  Exact Learning Composed Classes with a Small Number of Mistakes
  DNF Are Teachable in the Average Case
  Teaching Randomized Learners
Inductive Inference:
  Memory-Limited U-Shaped Learning
  On Learning Languages from Positive Data and a Limited Number of Short Counterexamples
  Learning Rational Stochastic Languages
  Parent Assignment Is Hard for the MDL, AIC, and NML Costs
Learning Algorithms and Limitations on Learning:
  Uniform-Distribution Learnability of Noisy Linear Threshold Functions with Restricted Focus of Attention
  Discriminative Learning Can Succeed Where Generative Learning Fails
  Improved Lower Bounds for Learning Intersections of Halfspaces
  Efficient Learning Algorithms Yield Circuit Lower Bounds
Online Aggregation:
  Optimal Oracle Inequality for Aggregation of Classifiers Under Low Noise Condition
  Aggregation and Sparsity Via ℓ1 Penalized Least Squares
  A Randomized Online Learning Algorithm for Better Variance Control
Online Prediction and Reinforcement Learning I:
  Online Learning with Variable Stage Duration
  Online Learning Meets Optimization in the Dual
  Online Tracking of Linear Subspaces
  Online Multitask Learning
Online Prediction and Reinforcement Learning II:
  The Shortest Path Problem Under Partial Monitoring
  Tracking the Best Hyperplane with a Simple Budget Perceptron
  Logarithmic Regret Algorithms for Online Convex Optimization
  Online Variance Minimization
Online Prediction and Reinforcement Learning III:
  Online Learning with Constraints
  Continuous Experts and the Binning Algorithm
  Competing with Wild Prediction Rules
  Learning Near-Optimal Policies with Bellman-Residual Minimization Based Fitted Policy Iteration and a Single Sample Path
Other Approaches:
  Ranking with a P-Norm Push
  Subset Ranking Using Regression
  Active Sampling for Multiple Output Identification
  Improving Random Projections Using Marginal Information
Open Problems:
  Efficient Algorithms for General Active Learning
  Can Entropic Regularization Be Replaced by Squared Euclidean Distance Plus Additional Linear Constraints?

Includes bibliographical references and index.

Print version record.


English.

University staff and students only. Requires University Computer Account login off-campus.
