Dynamic Programming and Optimal Control (2 Vol Set)
- Binding: Hardcover
- Author: Dimitri P. Bertsekas
- Publish Date: 2012-06-18
A two-volume set consisting of the latest editions of the two volumes (4th edition (2017) for Vol. I, and 4th edition (2012) for Vol. II). Much supplementary material can be found at the book's web page. The first volume is oriented towards modeling, conceptualization, and finite-horizon problems, but also includes a substantive introduction to infinite horizon problems that is suitable for classroom use, as well as an up-to-date account of some of the most interesting developments in approximate dynamic programming. The second volume is oriented towards mathematical analysis and computation, treats infinite horizon problems extensively, and provides a detailed account of approximate large-scale dynamic programming and reinforcement learning.

This is a textbook on the far-ranging algorithmic methodology of Dynamic Programming, which can be used for optimal control, Markovian decision problems, planning and sequential decision making under uncertainty, and discrete/combinatorial optimization. The treatment focuses on basic unifying themes and conceptual foundations. It illustrates the versatility, power, and generality of the method with many examples and applications from engineering, operations research, and other fields. It also addresses extensively the practical application of the methodology, possibly through the use of approximations, and provides an introduction to the methodology of Neuro-Dynamic Programming, which is the focus of much recent research.
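The core technique the blurb refers to, finite-horizon dynamic programming, can be sketched as a backward-induction recursion J_k(s) = min_a [g_k(s,a) + J_{k+1}(f_k(s,a))]. The sketch below uses a tiny invented deterministic instance (the states, costs, and transitions are illustrative assumptions, not from the book):

```python
# Minimal sketch of finite-horizon DP (backward induction) on an
# invented deterministic problem: 2 states, 2 actions, 3 stages.
N = 3
states = [0, 1]
actions = [0, 1]

# Illustrative stage cost g_k(s, a) and transition f_k(s, a) = a.
cost = {(k, s, a): abs(s - a) + k
        for k in range(N) for s in states for a in actions}
next_state = {(k, s, a): a
              for k in range(N) for s in states for a in actions}
terminal = {0: 0.0, 1: 5.0}  # terminal cost J_N(s)

# Backward recursion: J_k(s) = min_a [ g_k(s,a) + J_{k+1}(f_k(s,a)) ],
# recording the minimizing action as the optimal policy.
J = dict(terminal)
policy = {}
for k in reversed(range(N)):
    Jk = {}
    for s in states:
        best_a = min(actions,
                     key=lambda a: cost[k, s, a] + J[next_state[k, s, a]])
        Jk[s] = cost[k, s, best_a] + J[next_state[k, s, best_a]]
        policy[k, s] = best_a
    J = Jk

print(J[0])  # optimal cost-to-go from state 0 at stage 0 → 3.0
```

The recursion runs in O(N · |states| · |actions|) time, which is the basic computational fact that motivates the approximation methods (approximate DP, neuro-dynamic programming) the two volumes develop for large-scale problems.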