Reinforcement learning is surveyed from the perspective of optimization and control, with a focus on continuous control applications. Benjamin Recht (Department of Electrical Engineering and Computer Sciences, University of California, Berkeley) presents the general formulation, terminology, and typical experimental implementations of reinforcement learning and reviews competing solution paradigms.
To compare the relative merits of various techniques, a case study is presented of the Linear Quadratic Regulator (LQR) with unknown dynamics, perhaps the simplest and best-studied problem in optimal control.
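As a point of reference for the case study, the discrete-time, infinite-horizon LQR problem with known dynamics can be solved by iterating the discrete algebraic Riccati equation to a fixed point. The sketch below is a minimal illustration of that baseline; the double-integrator system and cost matrices are hypothetical choices, not taken from the survey.

```python
# Minimal sketch of infinite-horizon discrete-time LQR with known dynamics.
# The system (A, B) and costs (Q, R) are illustrative assumptions.
import numpy as np

def solve_lqr(A, B, Q, R, iters=500):
    """Return the optimal feedback gain K (for u = -K x) and value matrix P
    by fixed-point iteration of the discrete algebraic Riccati equation."""
    P = Q.copy()
    for _ in range(iters):
        BtP = B.T @ P
        K = np.linalg.solve(R + BtP @ B, BtP @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K, P

# Hypothetical example: a discretized double integrator.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q = np.eye(2)
R = np.array([[1.0]])

K, P = solve_lqr(A, B, Q, R)
# The closed-loop matrix A - B K should have spectral radius below 1.
rho = max(abs(np.linalg.eigvals(A - B @ K)))
```

The "unknown dynamics" setting studied in the survey asks how well one can do when A and B must instead be inferred from data, which is where the learning-theoretic analysis enters.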
The manuscript describes how merging techniques from learning theory and control can provide non-asymptotic characterizations of LQR performance and shows that these characterizations tend to match experimental behavior.
In turn, when more complex applications are revisited, many of the phenomena observed in LQR persist. In particular, theory and experiment demonstrate the role and importance of models and the cost of generality in reinforcement learning algorithms.
The survey concludes with a discussion of some of the challenges in designing learning systems that safely and reliably interact with complex and uncertain environments, and of how tools from reinforcement learning and control might be combined to approach these challenges.