Continuous-Time Markov Decision Processes [electronic resource] : Theory and Applications / by Xianping Guo, Onésimo Hernández-Lerma.

By: Guo, Xianping [author.]
Contributor(s): Hernández-Lerma, Onésimo [author.] | SpringerLink (Online service)
Material type: Text
Series: Stochastic Modelling and Applied Probability: 62
Publisher: Berlin, Heidelberg : Springer Berlin Heidelberg, 2009
Description: XVIII, 234 p. online resource
Content type: text
Media type: computer
Carrier type: online resource
ISBN: 9783642025471
Subject(s): Mathematics | Mathematical optimization | Operations research | Management science | Probabilities | Optimization | Probability Theory and Stochastic Processes | Operations Research, Management Science
Additional physical formats: Printed edition: No title
DDC classification: 519.6
LOC classification: QA402.5-402.6
Online resources: Click here to access online
Contents:
Introduction and Summary -- Continuous-Time Markov Decision Processes -- Average Optimality for Finite Models -- Discount Optimality for Nonnegative Costs -- Average Optimality for Nonnegative Costs -- Discount Optimality for Unbounded Rewards -- Average Optimality for Unbounded Rewards -- Average Optimality for Pathwise Rewards -- Advanced Optimality Criteria -- Variance Minimization -- Constrained Optimality for Discount Criteria -- Constrained Optimality for Average Criteria.
In: Springer eBooks
Summary: Continuous-time Markov decision processes (MDPs), also known as controlled Markov chains, are used for modeling decision-making problems that arise in operations research (for instance, inventory, manufacturing, and queueing systems), computer science, communications engineering, control of populations (such as fisheries and epidemics), and management science, among many other fields. This volume provides a unified, systematic, self-contained presentation of recent developments in the theory and applications of continuous-time MDPs. The MDPs in this volume include most of the cases that arise in applications, because they allow unbounded transition and reward/cost rates. Much of the material appears for the first time in book form.
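
For orientation, the following is a minimal sketch of the kind of model the summary describes: a finite continuous-time MDP simulated under a fixed stationary policy, with a crude Monte Carlo estimate of its expected discounted reward. The two-state model, its transition and reward rates, the policy, and the discount rate alpha are all illustrative assumptions, not taken from the book.

    import math
    import random

    # Hypothetical two-state continuous-time MDP (rates and rewards are made up
    # for illustration). For each (state, action):
    #   q[state][action] = {next_state: transition rate}   (off-diagonal rates)
    #   r[state][action] = reward rate earned while occupying `state`.
    q = {
        0: {"a": {1: 2.0}, "b": {1: 0.5}},
        1: {"a": {0: 1.0}, "b": {0: 3.0}},
    }
    r = {
        0: {"a": 1.0, "b": 0.2},
        1: {"a": -0.5, "b": 0.8},
    }
    policy = {0: "a", 1: "b"}   # a fixed stationary deterministic policy
    alpha = 0.1                 # discount rate for the discounted-reward criterion


    def simulate_discounted_reward(horizon=200.0, seed=0):
        """One trajectory: accumulate discounted reward up to time `horizon`."""
        rng = random.Random(seed)
        t, state, total = 0.0, 0, 0.0
        while t < horizon:
            action = policy[state]
            rates = q[state][action]
            exit_rate = sum(rates.values())
            sojourn = rng.expovariate(exit_rate)   # exponential holding time
            stay = min(sojourn, horizon - t)
            # Reward accrues at rate r while in `state`, discounted at rate alpha:
            # integral over [t, t+stay] of r * exp(-alpha * s) ds.
            total += (r[state][action] * math.exp(-alpha * t)
                      * (1.0 - math.exp(-alpha * stay)) / alpha)
            t += stay
            if t >= horizon:
                break
            # Jump to the next state with probability proportional to its rate.
            u, acc = rng.random() * exit_rate, 0.0
            for nxt, rate in rates.items():
                acc += rate
                if u <= acc:
                    state = nxt
                    break
        return total


    # Average over independent runs to estimate the policy's discounted value.
    if __name__ == "__main__":
        estimates = [simulate_discounted_reward(seed=s) for s in range(1000)]
        print(sum(estimates) / len(estimates))

The book's theory covers far more general settings (unbounded rates, constrained and average criteria); this sketch only illustrates the basic dynamics of exponential sojourn times and reward rates under a stationary policy.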
Holdings:
Item type: eBook
Current location: e-Library
Collection: Electronic Book@IST
Status: Available
Total holds: 0

