Overview
Learn how to use text-embedding-ada-002, the embedding model OpenAI released in the GPT-3.5 era, for semantic search in this 16-minute video tutorial. Discover how to generate language embeddings with the OpenAI Embedding API and index them in the Pinecone vector database for efficient, scalable vector search. Explore how the combination of these tools supports semantic search, question answering, threat detection, and other NLP applications. Gain hands-on experience with OpenAI's embedding model, which offers improved performance, lower cost, and the capacity to encode roughly 10 pages of text in a single vector embedding. Follow along as the tutorial covers initializing the OpenAI API connection, creating embeddings, setting up a Pinecone vector index, populating the index with embeddings from a Hugging Face dataset, and performing semantic search queries. Conclude with instructions on deleting the environment and final notes on this technology.
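The workflow described above can be outlined in a short Python script. The sketch below is illustrative only: it assumes the pre-1.0 openai package and the v2 pinecone-client package that were current when the video was published, and the API keys, Pinecone environment, index name, and sample texts are placeholders rather than details taken from the video. Newer releases of both SDKs expose different interfaces.

```python
# Rough sketch of the end-to-end workflow: embed text with ada-002,
# index the vectors in Pinecone, query, then tear down.
# Assumes the pre-1.0 openai and v2 pinecone-client packages.
import openai
import pinecone

openai.api_key = "YOUR_OPENAI_API_KEY"          # placeholder
pinecone.init(api_key="YOUR_PINECONE_API_KEY",  # placeholder
              environment="us-east1-gcp")       # placeholder environment

MODEL = "text-embedding-ada-002"
INDEX_NAME = "openai-ada-demo"                  # placeholder index name

# Create embeddings for a few example passages (one API call, many inputs).
passages = [
    "Pinecone is a managed vector database.",
    "text-embedding-ada-002 returns 1536-dimensional embeddings.",
]
res = openai.Embedding.create(input=passages, engine=MODEL)
embeddings = [record["embedding"] for record in res["data"]]

# Create a Pinecone index sized for ada-002 vectors and upsert the embeddings
# together with their source text as metadata.
if INDEX_NAME not in pinecone.list_indexes():
    pinecone.create_index(INDEX_NAME, dimension=1536, metric="cosine")
index = pinecone.Index(INDEX_NAME)
index.upsert(vectors=[
    (str(i), emb, {"text": text})
    for i, (emb, text) in enumerate(zip(embeddings, passages))
])

# Embed a query with the same model and retrieve the most similar passages.
query = "Which database stores the vectors?"
xq = openai.Embedding.create(input=[query], engine=MODEL)["data"][0]["embedding"]
results = index.query(vector=xq, top_k=2, include_metadata=True)
for match in results["matches"]:
    print(match["score"], match["metadata"]["text"])

# Tear down the index when finished (the "deleting the environment" step).
pinecone.delete_index(INDEX_NAME)
```

The index dimension of 1536 matches the output size of text-embedding-ada-002, which is why the index is created with that setting.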
Syllabus
Semantic search with OpenAI GPT architecture
Getting started with OpenAI embeddings in Python
Initializing connection to OpenAI API
Creating OpenAI embeddings with ada
Initializing the Pinecone vector index
Getting dataset from Hugging Face to embed and index
Populating vector index with embeddings (a batching sketch follows the syllabus)
Semantic search querying
Deleting the environment
Final notes
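The dataset and population steps in the syllabus reduce to a batching loop: pull text from a Hugging Face dataset, embed each batch with a single API call, and upsert the vectors along with their source text as metadata. The sketch below is an assumption-laden outline rather than a transcript of the video: the TREC dataset, batch size, and index name are illustrative choices, and the same legacy openai and pinecone-client interfaces as above are assumed.

```python
# Hypothetical batching loop for the "getting dataset" and "populating the
# vector index" steps; dataset choice, batch size, and index name are assumptions.
import openai
import pinecone
from datasets import load_dataset

openai.api_key = "YOUR_OPENAI_API_KEY"
pinecone.init(api_key="YOUR_PINECONE_API_KEY", environment="us-east1-gcp")
index = pinecone.Index("openai-ada-demo")  # index created as in the earlier sketch

# Load a small text dataset from the Hugging Face Hub (TREC used as an example).
data = load_dataset("trec", split="train[:1000]")
texts = data["text"]

BATCH_SIZE = 64
for start in range(0, len(texts), BATCH_SIZE):
    batch = texts[start:start + BATCH_SIZE]
    # One API call embeds the whole batch.
    res = openai.Embedding.create(input=batch, engine="text-embedding-ada-002")
    embeds = [record["embedding"] for record in res["data"]]
    # Upsert (id, vector, metadata) tuples; keeping the original text as
    # metadata lets query results come back as readable passages.
    ids = [str(start + i) for i in range(len(batch))]
    metadata = [{"text": t} for t in batch]
    index.upsert(vectors=list(zip(ids, embeds, metadata)))
```

Once the index is populated, semantic search queries run exactly as in the earlier sketch.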
Taught by
James Briggs