The course "Artificial Intelligence Industrial Control Systems Security" explores the intersection of artificial intelligence (AI) and industrial control systems (ICS) security, focusing on the safety, trust, and privacy of AI technologies within critical infrastructures. Learners will gain a comprehensive understanding of the key cybersecurity challenges faced by ICS and the role AI can play in mitigating these risks. Through the exploration of large language models (LLMs), regulatory frameworks, and advanced ICS protocols, students will learn how to implement robust security measures for AI systems and industrial control environments.
The course stands out by providing hands-on learning experiences in critical areas such as supply chain risks, cybersecurity for PLCs, and OT protocols. By combining AI principles with real-world ICS security practices, learners will be equipped to analyze and respond to emerging threats in both AI and ICS sectors. This unique approach ensures a deeper, more integrated understanding of how AI can be applied to enhance cybersecurity in industrial environments. Whether you're a professional or a beginner, this course will prepare you to tackle the most pressing security challenges at the intersection of AI and industrial control systems.
Overview
Syllabus
- Course Introduction
- This course provides a comprehensive exploration of the safety, cybersecurity, and privacy implications of AI systems and large language models (LLMs). Students will evaluate the accountability of AI under the law and investigate governance frameworks. The course also focuses on the security challenges specific to Operational Technology (OT) and Industrial Control Systems (ICS), highlighting key vulnerabilities and operational differences from IT. Practical skills in using tools like Wireshark for cybersecurity analysis and implementing protective measures for PLCs will be emphasized. Through discussions and hands-on activities, learners will gain a robust understanding of the complex interplay between technology, security, and regulatory requirements.
- Artificial Intelligence (AI): Safety, Trust, Security and Privacy
- This module explores the safety and cybersecurity aspects of AI systems, focusing on evaluating their security, privacy, and autonomy concerns. It also covers the applicability and functionality of large language models (LLMs), including how algorithms and learning patterns enhance their performance.
- Artificial Intelligence: Policy and Governance
- This module explores the legal accountability of AI systems, examining how they are regulated and governed. It covers human factors in computing systems and the policies that AI systems fall under. Students will assess how ChatGPT contributes to or detracts from value and analyze the impact of chatbots on large language models (LLMs).
- Industrial Control Systems I
- This module explores the Purdue Enterprise Reference Architecture Model and the Unified Facilities Criteria, providing a framework for industrial control systems (ICS). It covers basic control systems such as PLC, SCADA, and DCS, and examines the operational differences between IT and OT environments. Students will learn about key cybersecurity control elements that enhance ICS security, identify top vulnerabilities from a cyber perspective, and understand the usage and implementation of PLC application firewalls and software whitelisting.
- Industrial Control Systems II
- In this module, we continue our discussion of Industrial Control System (ICS) networks, focusing on what constitutes supply chain risk, the role of Original Equipment Manufacturers and vendors, and techniques for mitigating these risks. We also examine several types of ICS/OT protocols, covering their characteristics, purposes and implementations, the cybersecurity concerns with proprietary and open OT protocols, and the differences between OT and IT environments.
Taught by
Jason Crossland