Explore a comprehensive lecture on the challenges and potential of integrating AI tools into safety-critical systems. Delve into the work of Adam Wierman of the California Institute of Technology as he addresses the crucial question of how to provide guarantees for black-box AI tools in critical applications. Examine the structure of constraints in sequential decision-making and discover projects aimed at developing robust, localizable tools that combine model-free and model-based approaches. Learn about efforts to create AI tools with formal guarantees on performance, stability, safety, and sample complexity for use in data centers, electricity grids, transportation, and other vital sectors. Gain insight into the limitations of machine-learned algorithms, such as the lack of worst-case performance guarantees and the difficulties they face in distributed, networked settings. Understand why local controllers must cope with distribution shift and limited access to global information if AI is to be deployed reliably in safety-critical networked systems.
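One common pattern behind combining a model-free learned policy with a model-based controller that carries worst-case guarantees is a safety filter: the black-box action is accepted only when a simple system model predicts it keeps the state inside a safe set, and a known-stabilizing baseline takes over otherwise. The sketch below is illustrative only and is not taken from the lecture; the scalar dynamics, gains, and policy names are all assumptions chosen for clarity.

```python
# Minimal safety-filter sketch (illustrative assumptions, not the lecture's method).
# Scalar linear model: x' = A*x + B*u, with A > 1 so the open loop is unstable.

A, B = 1.1, 1.0          # assumed model parameters
X_MAX = 5.0              # safe set: |x| <= X_MAX
K = 0.6                  # baseline gain; A - B*K = 0.5, so the baseline is stable

def learned_policy(x: float) -> float:
    """Stand-in for an untrusted black-box policy (here it does nothing)."""
    return 0.0

def baseline_policy(x: float) -> float:
    """Model-based stabilizing controller with a worst-case guarantee."""
    return -K * x

def safe_action(x: float) -> float:
    """Use the learned action only if the model predicts it stays safe."""
    u = learned_policy(x)
    if abs(A * x + B * u) <= X_MAX:
        return u               # black-box action passes the model-based check
    return baseline_policy(x)  # otherwise fall back to the safe controller

def rollout(x0: float, steps: int = 50) -> float:
    """Simulate the filtered closed loop; return the peak |x| seen."""
    x, peak = x0, abs(x0)
    for _ in range(steps):
        x = A * x + B * safe_action(x)
        peak = max(peak, abs(x))
    return peak
```

Because every accepted learned action is pre-checked against the model and the fallback contracts the state, the filtered loop keeps |x| within the safe set for any start inside it, regardless of how poor the learned policy is.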
Syllabus
Learning to Control Safety-Critical Systems
Taught by
Simons Institute