Microsoft

Operationalize AI responsibly with Azure AI Studio

Microsoft via Microsoft Learn

Overview

  • Module 1: Learn how to choose and build a content moderation system in Azure AI Studio.

    By the end of this module, you'll be able to:

    • Configure filters and threshold levels to block harmful content.
    • Perform text and image moderation for harmful content.
    • Analyze and improve Precision, Recall, and F1 Score metrics.
    • Detect groundedness in a model’s output.
    • Identify and block AI-generated copyrighted content.
    • Mitigate direct and indirect prompt injections.
    • Output filter configurations to code.
  • Module 2: Learn how to choose and build a content moderation system with Azure AI Content Safety.

    By the end of this module, you'll be able to:

    • Perform text and image moderation for harmful content.
    • Detect groundedness in a model's output.
    • Identify and block AI-generated copyrighted content.
    • Mitigate direct and indirect prompt injections.
  • Module 3: Learn how to measure and mitigate risks for a generative AI app using the responsible AI tools and features within Azure AI Studio.

    By the end of this module, you'll be able to:

    • Upload data and create an index.
    • Set up a system message.
    • Create and add a content filter.
    • Execute a manual evaluation.
    • Execute and analyze an AI-assisted evaluation.
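
The Precision, Recall, and F1 Score metrics named in the Module 1 objectives can be computed directly from a content filter's decisions. A minimal sketch (the counts below are illustrative, not taken from the course):

```python
def moderation_metrics(true_positives: int, false_positives: int, false_negatives: int):
    """Compute precision, recall, and F1 for a content filter's block decisions.

    true_positives:  harmful items correctly blocked
    false_positives: safe items wrongly blocked
    false_negatives: harmful items that slipped through
    """
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Illustrative counts: 40 harmful items blocked, 10 safe items
# wrongly blocked, 10 harmful items missed.
p, r, f1 = moderation_metrics(40, 10, 10)
print(f"Precision={p:.2f} Recall={r:.2f} F1={f1:.2f}")
```

Raising a filter's severity threshold typically trades recall for precision; the F1 score balances the two, which is why the module has you analyze all three together.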

Syllabus

  • Module 1: Moderate content and detect harm in Azure AI Studio with Content Safety
    • Introduction
    • Content Safety
    • Prepare
    • Harm categories and severity levels
    • Exercise - Text moderation
    • Exercise - Image moderation
    • Exercise - Groundedness detection
    • Exercise - Prompt shields
    • Exercise - Integrate with the Contoso Camping Store platform
    • Knowledge check
    • Summary
  • Module 2: Moderate Content and Detect Harm with Azure AI Content Safety
    • Introduction
    • Azure AI Content Safety
    • Prepare
    • Harm categories and severity levels
    • Exercise - Text moderation
    • Exercise - Image moderation
    • Exercise - Groundedness detection
    • Exercise - Prompt shields
    • Exercise - Integrate with the Contoso Camping Store platform
    • Knowledge check
    • Summary
  • Module 3: Measure and mitigate risks for a generative AI app in Azure AI Studio
    • Introduction
    • Prepare
    • Choose and deploy a model
    • Upload data and create an index
    • Create a system message
    • Create a content filter
    • Run a manual evaluation
    • Run and compare automated evaluations
    • Knowledge check
    • Summary
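
The "Harm categories and severity levels" units in Modules 1 and 2 rest on a simple mechanism: the service scores content per harm category (hate, self-harm, sexual, violence), and a filter blocks the content when any category's severity meets its configured threshold. A hedged sketch of that logic (the category names follow Azure AI Content Safety, but the threshold values and the `should_block` helper are illustrative, not the service's API):

```python
# Illustrative per-category thresholds; Azure AI Content Safety reports
# severity levels per harm category, and a filter blocks content when any
# category's severity meets or exceeds its configured threshold.
THRESHOLDS = {"hate": 4, "self_harm": 2, "sexual": 4, "violence": 4}

def should_block(severities: dict, thresholds: dict = THRESHOLDS) -> bool:
    """Return True if any reported severity meets or exceeds its threshold."""
    return any(severities.get(cat, 0) >= level for cat, level in thresholds.items())

print(should_block({"hate": 2, "violence": 0}))  # below every threshold
print(should_block({"self_harm": 2}))            # meets the self-harm threshold
```

Lowering a category's threshold makes the filter stricter for that category only, which is the per-category tuning the "Configure filters and threshold levels" objective refers to.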
