Learning from dependent data (Record no. 374025)

000 - LEADER
fixed length control field 02152ntm a22003137a 4500
003 - CONTROL NUMBER IDENTIFIER
control field AT-ISTA
005 - DATE AND TIME OF LATEST TRANSACTION
control field 20190822082631.0
008 - FIXED-LENGTH DATA ELEMENTS--GENERAL INFORMATION
fixed length control field 190822s2018 au ||||| m||| 00| 0 eng d
040 ## - CATALOGING SOURCE
Transcribing agency IST
100 ## - MAIN ENTRY--PERSONAL NAME
Personal name Zimin, Alexander
9 (RLIN) 4470
245 ## - TITLE STATEMENT
Title Learning from dependent data
260 ## - PUBLICATION, DISTRIBUTION, ETC. (IMPRINT)
Name of publisher, distributor, etc. IST Austria
Date of publication, distribution, etc. 2018
500 ## - GENERAL NOTE
General note Thesis
505 ## - FORMATTED CONTENTS NOTE
Formatted contents note Abstract
505 ## - FORMATTED CONTENTS NOTE
Formatted contents note Acknowledgements
505 ## - FORMATTED CONTENTS NOTE
Formatted contents note About the Author
505 ## - FORMATTED CONTENTS NOTE
Formatted contents note List of Figures
505 ## - FORMATTED CONTENTS NOTE
Formatted contents note 1 Introduction
505 ## - FORMATTED CONTENTS NOTE
Formatted contents note 2 Background
505 ## - FORMATTED CONTENTS NOTE
Formatted contents note 3 Theory of Conditional Risk Minimization
505 ## - FORMATTED CONTENTS NOTE
Formatted contents note 4 Conditional Risk Minimization in Practice
505 ## - FORMATTED CONTENTS NOTE
Formatted contents note 5 Online Multi-task Learning
505 ## - FORMATTED CONTENTS NOTE
Formatted contents note 6 Conclusion and Future Work
505 ## - FORMATTED CONTENTS NOTE
Formatted contents note Bibliography
505 ## - FORMATTED CONTENTS NOTE
Formatted contents note A Proofs from Chapter 3
520 ## - SUMMARY, ETC.
Summary, etc. The most common assumption in statistical learning theory is that the data are independent and identically distributed (i.i.d.). While mathematically convenient, this assumption is often clearly violated in practice. The disparity between machine learning theory and its applications has created growing demand for algorithms that learn from dependent data, and for theory that provides generalization guarantees comparable to those available in the independent setting. This thesis is dedicated to two kinds of dependence that arise in practice. The first is dependence at the level of samples within a single learning task. The second arises in the multi-task setting, where the tasks depend on each other even though the data within each task may be i.i.d. In both cases we model the data (samples or tasks) as stochastic processes and introduce new algorithms for both settings that take into account and exploit the resulting dependencies. We prove theoretical guarantees on the performance of the introduced algorithms under different evaluation criteria and, in addition, complement the theoretical study with an empirical one, evaluating some of the algorithms on two real-world datasets to highlight their practical applicability.
856 ## - ELECTRONIC LOCATION AND ACCESS
Uniform Resource Identifier https://doi.org/10.15479/AT:ISTA:TH1048
942 ## - ADDED ENTRY ELEMENTS (KOHA)
Source of classification or shelving scheme
Holdings
Lost status Not Lost
Permanent Location Library
Current Location Library
Date acquired 2019-08-22
Barcode AT-ISTA#001881
Date last seen 2019-08-22
Price effective from 2019-08-22
Koha item type Book