Qualifying orders ship free.
$120.25
New Hardcover
Ships in 1 to 3 days
Available for shipping or prepaid in-store pickup (in-store pickup in 7 to 12 days)
Qty: 25 | Store: Remote Warehouse | Section: Computers Reference - General

Automation and Control Engineering #39: Reinforcement Learning and Dynamic Programming Using Function Approximators

by Lucian Busoniu, Robert Babuska, Bart De Schutter, and Damien Ernst

Synopses & Reviews

Publisher Comments:

From household appliances to applications in robotics, engineered systems involving complex dynamics can only be as effective as the algorithms that control them. While Dynamic Programming (DP) has provided researchers with a way to optimally solve decision and control problems involving complex dynamic systems, its practical value was limited by algorithms that lacked the capacity to scale up to realistic problems.

However, in recent years, dramatic developments in Reinforcement Learning (RL), the model-free counterpart of DP, changed our understanding of what is possible. Those developments led to the creation of reliable methods that can be applied even when a mathematical model of the system is unavailable, allowing researchers to solve challenging control problems in engineering, as well as in a variety of other disciplines, including economics, medicine, and artificial intelligence.

Reinforcement Learning and Dynamic Programming Using Function Approximators provides a comprehensive and unparalleled exploration of the field of RL and DP. With a focus on continuous-variable problems, this seminal text details essential developments that have substantially altered the field over the past decade. In its pages, pioneering experts provide a concise introduction to classical RL and DP, followed by an extensive presentation of the state-of-the-art and novel methods in RL and DP with approximation. Combining algorithm development with theoretical guarantees, they elaborate on their work with illustrative examples and insightful comparisons. Three individual chapters are dedicated to representative algorithms from each of the major classes of techniques: value iteration, policy iteration, and policy search. The features and performance of these algorithms are highlighted in extensive experimental studies on a range of control applications.
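
To give a flavor of the "value iteration" class of techniques the book covers, the following is a minimal sketch of fitted Q-iteration over a fixed batch of transitions with a linear approximator. The function name, the feature map phi(s, a), and the discrete action set are illustrative assumptions, not code from the book or its companion website.

import numpy as np

def fitted_q_iteration(transitions, phi, actions, gamma=0.95, iters=50):
    """Rough sketch: transitions is a list of (s, a, r, s_next) tuples,
    phi(s, a) returns a feature vector, actions is a finite action set."""
    dim = phi(transitions[0][0], transitions[0][1]).shape[0]
    theta = np.zeros(dim)                     # Q(s, a) ~ phi(s, a) . theta
    X = np.array([phi(s, a) for s, a, _, _ in transitions])
    for _ in range(iters):
        # Bellman targets: r + gamma * max over a' of the current Q estimate
        y = np.array([
            r + gamma * max(phi(s_next, a2) @ theta for a2 in actions)
            for _, _, r, s_next in transitions
        ])
        # Regress the targets back onto the features (least squares)
        theta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return theta

In the continuous-variable setting the book focuses on, the choice of approximator and the maximization over actions are the delicate parts; this sketch sidesteps both by assuming a small discrete action set.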

The recent development of applications involving complex systems has led to a surge of interest in RL and DP methods and the subsequent need for a quality resource on the subject. For graduate students and others new to the field, this book offers a thorough introduction to both the basics and emerging methods. And for those researchers and practitioners working in the fields of optimal and adaptive control, machine learning, artificial intelligence, and operations research, this resource offers a combination of practical algorithms, theoretical analysis, and comprehensive examples that they will be able to adapt and apply to their own work.

Access the authors' website at www.dcsc.tudelft.nl/rlbook/ for additional material, including computer code used in the studies and information concerning new developments.

Book News Annotation:

Dynamic programming is an approach to optimal control designed for situations in which a model of the system to be controlled is available; when no model is available, reinforcement learning divines control policies solely from the knowledge of transition samples or trajectories that are collected beforehand or by online interaction with the system. Otherwise, the two are closely related, and Lucian Busoniu, Robert Babuska, Bart De Schutter (all systems and control, Delft U. of Technology, the Netherlands), and Belgian researcher Damien Ernst consider them together. They cover their application in large and continuous spaces, approximate value iteration with a fuzzy representation, approximate policy iteration for online learning and continuous-action control, and approximate policy search with cross-entropy optimization of basis functions. Annotation ©2010 Book News, Inc., Portland, OR (booknews.com)
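
The last of these topics, policy search with cross-entropy optimization, can be illustrated with a rough sketch of the generic cross-entropy method applied to policy parameters. Here evaluate_return is a hypothetical stand-in for rolling out the parameterized policy on the system, and the Gaussian search distribution and hyperparameters are assumptions made for the example, not the book's own setup.

import numpy as np

def cross_entropy_search(evaluate_return, dim, iters=30, pop=100,
                         elite_frac=0.1, seed=0):
    """Sample policy parameters, keep the best-performing fraction,
    and refit the sampling distribution to that elite set."""
    rng = np.random.default_rng(seed)
    mean, std = np.zeros(dim), np.ones(dim)
    n_elite = max(1, int(pop * elite_frac))
    for _ in range(iters):
        samples = rng.normal(mean, std, size=(pop, dim))
        returns = np.array([evaluate_return(theta) for theta in samples])
        elite = samples[np.argsort(returns)[-n_elite:]]  # highest returns
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mean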

Synopsis:

Reinforcement learning and dynamic programming are key research areas in machine learning and artificial intelligence. This monograph describes algorithms from each of the three major classes of these techniques: value iteration, policy iteration, and direct policy search. Using illustrative examples, the text discusses the performance of these algorithms in a variety of applications, starting with simple examples such as servo-system control and then continuing on to more challenging problems such as robotic manipulator stabilization and bicycle balancing. The book concludes with the highly challenging problem of optimizing human immunodeficiency virus treatment.
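
The policy iteration class can be sketched in a similar spirit: evaluate the current greedy policy by least-squares temporal differences over linear features, then improve it, in the style of least-squares policy iteration. As above, phi, the action set, and the small regularization term are assumptions for this sketch rather than the book's implementation.

import numpy as np

def approximate_policy_iteration(transitions, phi, actions, gamma=0.95, iters=20):
    """Rough LSPI-style loop over a fixed batch of (s, a, r, s_next) samples."""
    dim = phi(transitions[0][0], transitions[0][1]).shape[0]
    w = np.zeros(dim)
    for _ in range(iters):
        A = np.zeros((dim, dim))
        b = np.zeros(dim)
        for s, a, r, s_next in transitions:
            f = phi(s, a)
            # greedy action of the current policy at the next state
            a_next = max(actions, key=lambda a2: phi(s_next, a2) @ w)
            A += np.outer(f, f - gamma * phi(s_next, a_next))
            b += r * f
        # policy evaluation step, lightly regularized for invertibility
        w = np.linalg.solve(A + 1e-6 * np.eye(dim), b)
    return w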

Product Details

ISBN: 9781439821084
Authors: Busoniu, Lucian; Babuska, Robert; De Schutter, Bart; Ernst, Damien
Publisher: CRC Press
Subjects: Machine Theory; Electricity; Electronics - General; Digital control systems; Dynamic programming; Computers Reference - General
Series: Automation and Control Engineering
Series Volume: 39
Publication Date: April 2010
Binding: Hardcover
Language: English
Pages: 280

Related Subjects

Computers and Internet » Computers Reference » General
Reference » Science Reference » Technology
Science and Mathematics » Electricity » General Electricity
Science and Mathematics » Electricity » General Electronics

Automation and Control Engineering #39: Reinforcement Learning and Dynamic Programming Using Function Approximators New Hardcover
0 stars - 0 reviews
$120.25 In Stock
Product details 280 pages CRC Press - English 9781439821084 Reviews:
"Synopsis" by , Reinforcement learning and dynamic programming are key research areas in machine learning and artificial intelligence. This monograph describes algorithms from each of the three major classes of these techniques: value iteration, policy iteration, and direct policy search. Using illustrative examples, the text discusses the performance of these algorithms in a variety of applications, starting with simple examples such as servo-system control and then continuing on to more challenging problems such as robotic manipulator stabilization and bicycle balancing. The book concludes with the highly challenging problem of optimizing human immunodeficiency virus treatment.
spacer
spacer
  • back to top

FOLLOW US ON...

     
Powell's City of Books is an independent bookstore in Portland, Oregon, that fills a whole city block with more than a million new, used, and out of print books. Shop those shelves — plus literally millions more books, DVDs, and gifts — here at Powells.com.