

Operational Excellence for LLMs Using Amazon Bedrock - Conf42 LLMs 2024

Conf42 via YouTube

Overview

Explore operational excellence for Large Language Models (LLMs) using Amazon Bedrock in this conference talk by Suraj Muraleedharan at Conf42 LLMs 2024. Delve into the design principles of operational excellence, compare DevOps, MLOps, and LLMOps, and understand the model lifecycle. Learn about LLM selection criteria, customization options, and architectural patterns for Amazon Bedrock. Discover how to implement knowledge bases, privately customize models, and integrate the Amazon Bedrock API with various AWS services. Gain insights into invocation logging, metrics, model evaluation, and implementing guardrails for generative applications. Examine real-world examples and best practices for achieving operational excellence with Amazon Bedrock in LLM deployments.
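Several of the topics above revolve around calling the Amazon Bedrock API programmatically. As a rough sketch of what that looks like with the AWS SDK for Python (boto3) — the model ID, prompt, and helper names here are illustrative assumptions, not taken from the talk, and an AWS account with Bedrock model access is required for the actual call:

```python
import json


def build_claude_request(prompt, max_tokens=256):
    """Shape a request body for an Anthropic model on Bedrock
    (Messages API format, per Bedrock's documented request schema)."""
    return {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }


def invoke_model(prompt, model_id="anthropic.claude-3-haiku-20240307-v1:0"):
    """Call Bedrock's InvokeModel API. Requires AWS credentials and
    model access enabled in the account; not executed offline."""
    import boto3  # imported lazily so payload shaping is testable offline

    client = boto3.client("bedrock-runtime")
    response = client.invoke_model(
        modelId=model_id,
        body=json.dumps(build_claude_request(prompt)),
    )
    # response["body"] is a streaming body; read and decode the JSON result
    return json.loads(response["body"].read())


if __name__ == "__main__":
    print(json.dumps(build_claude_request("What is LLMOps?"), indent=2))
```

Separating request construction from the network call keeps the payload logic testable without AWS credentials, which is itself a small operational-excellence habit.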

Syllabus

intro
preamble
agenda
what is operational excellence?
what are the design principles?
devops vs mlops vs llmops
people, process and technology
model lifecycle
llmops can be different for each type of user
llm selection criteria
comparison of llm customizations
customizing model responses for your business
amazon bedrock
knowledge bases for amazon bedrock
privately customize models with your data
amazon bedrock api with amazon api gateway
amazon api gateway models
aws lambda invoking amazon bedrock api
amazon bedrock api from a generic application
using aws sdk
invocation logging
metrics
model evaluation
building generative apps brings new challenges
using nvidia/nemo-guardrails
amazon bedrock examples
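The syllabus items on Amazon API Gateway and AWS Lambda point at a common serving pattern: an API Gateway endpoint fronting a Lambda function that calls Bedrock. A minimal handler sketch under assumed names — the event shape follows the API Gateway proxy integration, while the model ID and environment variable are illustrative:

```python
import json
import os

# Illustrative default; in practice set MODEL_ID via Lambda env vars.
MODEL_ID = os.environ.get("MODEL_ID", "anthropic.claude-3-haiku-20240307-v1:0")


def parse_prompt(event):
    """Extract the prompt from an API Gateway proxy event body."""
    body = json.loads(event.get("body") or "{}")
    return body.get("prompt", "")


def lambda_handler(event, context):
    """API Gateway -> Lambda -> Bedrock. The function's execution role
    must allow bedrock:InvokeModel; the call is not made offline."""
    import boto3  # imported lazily so parsing stays testable without AWS

    prompt = parse_prompt(event)
    client = boto3.client("bedrock-runtime")
    response = client.invoke_model(
        modelId=MODEL_ID,
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 256,
            "messages": [{"role": "user", "content": prompt}],
        }),
    )
    payload = json.loads(response["body"].read())
    return {"statusCode": 200, "body": json.dumps(payload)}
```

Keeping Bedrock behind your own API Gateway route lets you add authentication, throttling, and request validation in front of the model, which ties back to the talk's operational-excellence theme.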

Taught by

Conf42

