This talk discusses the evolution of modern control and estimation theory from its beginning (1956) and concludes with recent research trends. We discuss the early breakthroughs in optimal control, namely the birth of state-space methods, Bellman's dynamic programming, and Pontryagin's maximum principle. We also address the evolution of optimal estimation theory, including the Kalman filter, the extended Kalman filter, and the blending of multiple-hypothesis testing with estimation theory. Finally, we survey recent trends in robust identification and control, adaptive estimation and control, hybrid systems, and decentralized decision-making and control for large-scale dynamic systems.