

Adversarial Search

via Udacity

Overview

Learn how to search in multi-agent environments, including decision making in competitive settings, using the minimax theorem from game theory. Then build an agent that can play games better than any human.

Syllabus

  • Introduction to Adversarial Search
    • Extend classical search to adversarial domains and build agents that make good decisions without any human intervention, such as the DeepMind AlphaGo agent.
  • Search in Multiagent Domains
    • Search in multi-agent domains, using the minimax theorem to solve adversarial problems and build agents that make better decisions than humans (a minimax sketch follows this syllabus).
  • Optimizing Minimax Search
    • Examine some of the limitations of minimax search and introduce optimizations and changes that make it practical in more complex domains.
  • Build an Adversarial Game Playing Agent
    • Apply adversarial search to build a game-playing agent that makes good decisions without any human intervention.
  • Extending Minimax Search
    • Extend minimax search to support more than two players and non-deterministic domains.
  • Additional Adversarial Search Topics
    • Introduce Monte Carlo Tree Search, a highly successful search technique in game domains, along with a reading list for other advanced adversarial search topics (an MCTS sketch also follows the syllabus).
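
The "Search in Multiagent Domains" and "Optimizing Minimax Search" lessons revolve around depth-limited minimax and the pruning that makes it affordable. As a rough illustration of those ideas only, not the course's project code, here is a Python sketch of minimax with alpha-beta pruning; the GameState interface and its actions, result, terminal_test, utility, and evaluate methods are hypothetical names chosen to keep the example self-contained.

```python
# Illustrative sketch only, not the course's project code. The GameState
# interface (actions, result, terminal_test, utility, evaluate) is a
# hypothetical assumption used to keep the example self-contained.
import math


def alphabeta(state, depth, alpha=-math.inf, beta=math.inf, maximizing=True):
    """Depth-limited minimax value of `state` with alpha-beta pruning."""
    if state.terminal_test():
        return state.utility()          # exact value at a terminal state
    if depth == 0:
        return state.evaluate()         # heuristic estimate at the depth cutoff
    if maximizing:
        value = -math.inf
        for action in state.actions():
            value = max(value, alphabeta(state.result(action), depth - 1,
                                         alpha, beta, maximizing=False))
            alpha = max(alpha, value)
            if alpha >= beta:           # beta cutoff: MIN would never allow this branch
                break
        return value
    value = math.inf
    for action in state.actions():
        value = min(value, alphabeta(state.result(action), depth - 1,
                                     alpha, beta, maximizing=True))
        beta = min(beta, value)
        if beta <= alpha:               # alpha cutoff: MAX already has a better option
            break
    return value


def best_action(state, depth=3):
    """Choose the root action with the highest alpha-beta value."""
    return max(state.actions(),
               key=lambda a: alphabeta(state.result(a), depth - 1, maximizing=False))
```

In practice, alpha-beta search is usually combined with good move ordering and an iterative-deepening time budget; refinements of that sort are what make minimax workable in larger games.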
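
The final lesson introduces Monte Carlo Tree Search. The sketch below shows the standard UCT variant rather than anything specific to the course: the four phases of selection, expansion, simulation, and backpropagation, with UCB1 guiding the descent. The Node class, the exploration constant, and the reuse of the hypothetical GameState interface are all assumptions made for the example.

```python
# Minimal Monte Carlo Tree Search (UCT) sketch, illustrative only. It reuses
# the same hypothetical GameState interface as above and, for brevity, scores
# every rollout from one fixed player's perspective; a two-player agent would
# alternate the sign of the reward as it backs values up the tree.
import math
import random


class Node:
    def __init__(self, state, parent=None, action=None):
        self.state, self.parent, self.action = state, parent, action
        self.children = []
        self.untried = [] if state.terminal_test() else list(state.actions())
        self.visits = 0
        self.value = 0.0

    def uct_child(self, c=1.4):
        # UCB1: average reward plus an exploration bonus for rarely visited children.
        return max(self.children,
                   key=lambda n: n.value / n.visits
                   + c * math.sqrt(math.log(self.visits) / n.visits))


def mcts(root_state, iterations=1000):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # 1. Selection: walk down fully expanded nodes using UCB1.
        while not node.untried and node.children:
            node = node.uct_child()
        # 2. Expansion: add one child for an action not yet tried here.
        if node.untried:
            action = node.untried.pop()
            node.children.append(Node(node.state.result(action), node, action))
            node = node.children[-1]
        # 3. Simulation: random playout from the new node to a terminal state.
        state = node.state
        while not state.terminal_test():
            state = state.result(random.choice(state.actions()))
        reward = state.utility()
        # 4. Backpropagation: update visit counts and rewards back to the root.
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    # Recommend the most explored move at the root.
    return max(root.children, key=lambda n: n.visits).action
```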

Taught by

Thad Starner

