Explore how fine-tuning in large language models can amplify privacy risks, focusing on the novel Janus attack that recovers forgotten personal information from pre-training data.
Explore knowledge circuits in pretrained transformers, uncovering computational mechanisms behind language models' articulation of specific knowledge. Gain insights into AI's inner workings.
Explore a universal evaluation framework for large language models using Hierarchical Prompting Taxonomy. Gain insights into assessing dataset complexity and model capabilities.
Explore EvalGen, an interface for automated assistance in generating evaluation criteria and implementing assertions for LLM outputs aligned with human preferences.
Explore PromptEval, a novel method for estimating LLM performance across multiple prompts, enhancing evaluation accuracy within practical budgets. Gain insights from University of Michigan researcher Felipe Polo.
Explore RapidIn, a scalable framework for estimating training data influence in large language models. Learn about token-wise retrieval and its two-stage approach.
Explore advanced agentic RAG systems with expert Atita Arora, overcoming traditional limitations and revolutionizing information retrieval for AI and machine learning applications.
Supercharge LLM deployment by integrating Baseten model endpoints into Unify Platform. Learn dynamic routing, open-source model usage, and practical demonstrations for optimized AI workflows.
Explore YOCO, a decoder-decoder architecture for LLMs that caches key-value pairs only once, improving inference memory, prefill latency, and throughput across context lengths and model sizes.
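The cache-size argument behind YOCO can be seen in a toy NumPy sketch (not code from the talk; layer counts and dimensions are invented for illustration): a standard decoder keeps a separate key/value cache per layer, while a YOCO-style decoder-decoder emits one shared cache that every cross-decoder layer reuses.

```python
import numpy as np

def attention(q, k, v):
    """Scaled dot-product attention over a cached key/value store."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

rng = np.random.default_rng(0)
T, d, n_layers = 128, 64, 12           # context length, head dim, layer count

# Standard decoder: each of the n_layers keeps its own K and V cache.
standard_cache_floats = n_layers * 2 * T * d

# YOCO-style decoder-decoder: the self-decoder half emits ONE shared cache...
shared_k = rng.standard_normal((T, d))
shared_v = rng.standard_normal((T, d))
yoco_cache_floats = 2 * T * d

# ...and every cross-decoder layer attends to the same cached K/V.
x = rng.standard_normal((1, d))        # current query token
for _ in range(n_layers // 2):
    x = x + attention(x, shared_k, shared_v)

print(standard_cache_floats // yoco_cache_floats)  # KV memory shrinks n_layers-fold
```

The ratio printed equals the layer count, which is why the savings compound at long context lengths.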
Explore monosemanticity in neural networks through sparse autoencoders. Learn how extracting interpretable features enhances understanding of language model behavior and improves reasoning capabilities.
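The sparse-autoencoder recipe the description refers to can be sketched minimally in NumPy (the data, dictionary sizes, and learning rate below are made up for illustration, not taken from the talk): learn an overcomplete dictionary over activations with an L1 penalty so each activation decomposes into a few features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "model activations": 2 sparse ground-truth features superposed in 4 dims.
true_feats = rng.standard_normal((2, 4))

def sample_acts(n):
    codes = (rng.random((n, 2)) < 0.3) * rng.random((n, 2))  # sparse coefficients
    return codes @ true_feats

d_model, d_hidden = 4, 8                    # overcomplete hidden dictionary
W_enc = 0.1 * rng.standard_normal((d_model, d_hidden))
b_enc = np.zeros(d_hidden)
W_dec = 0.1 * rng.standard_normal((d_hidden, d_model))

lam, lr = 1e-3, 0.1                         # L1 weight, step size
x_eval = sample_acts(256)                   # fixed batch to track progress
mse = lambda X: float(((np.maximum(X @ W_enc + b_enc, 0) @ W_dec - X) ** 2).mean())
mse_before = mse(x_eval)

for _ in range(2000):
    x = sample_acts(64)
    h = np.maximum(x @ W_enc + b_enc, 0.0)      # sparse feature activations
    err = (h @ W_dec - x) * (2.0 / len(x))      # grad of mean squared error w.r.t. x_hat
    g_h = (err @ W_dec.T + lam * np.sign(h)) * (h > 0)  # backprop through ReLU + L1
    W_dec -= lr * (h.T @ err)
    W_enc -= lr * (x.T @ g_h)
    b_enc -= lr * g_h.sum(axis=0)

print(mse_before, mse(x_eval))              # reconstruction error drops
```

After training, each hidden unit tends to fire for one underlying feature, which is the "monosemantic" behavior the talk explores at scale.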
Explore ReFT, a novel approach to fine-tuning language models by modifying internal representations, achieving efficiency with fewer parameters than traditional methods.
Explore LayerSkip, an LLM acceleration method that speeds up inference by strategically restricting model layers, achieving 2x speed-ups on various tasks through innovative techniques.
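The early-exit idea behind this kind of acceleration can be illustrated with a toy sketch (this is not the actual LayerSkip code; the synthetic "layers", exit head, and threshold are invented): a shared exit head checks confidence after each layer, and decoding stops as soon as it is confident, skipping the remaining layers.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_layers, vocab = 16, 12, 50
layers = [rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(n_layers)]
exit_head = rng.standard_normal((d, vocab)) / np.sqrt(d)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def forward_early_exit(h, threshold=0.9):
    """Run layers until the shared exit head is confident, then stop."""
    for i, W in enumerate(layers, start=1):
        h = h + np.tanh(h @ W)            # toy residual "transformer layer"
        p = softmax(h @ exit_head)        # exit head over the vocabulary
        if p.max() >= threshold:          # confident enough: exit early
            return int(p.argmax()), i
    return int(p.argmax()), n_layers      # fall back to the full stack

h0 = rng.standard_normal(d)
token, layers_used = forward_early_exit(h0)
print(token, layers_used)                 # layers_used <= n_layers
```

An unreachable threshold recovers the full forward pass, so the exit rule only ever saves work, never adds it.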
Explore DSPy, a programming model for LM pipelines that enables self-improving AI systems through declarative modules and computational graphs. Learn its potential to enhance AI performance.
Explore knowledge distillation techniques for large language models, focusing on reverse KLD to improve student model precision and response quality.
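The forward-vs-reverse KL distinction this description alludes to shows up even on a toy distribution (the numbers below are invented for illustration): forward KL rewards a student that covers all of the teacher's modes, while reverse KL rewards a student that commits precisely to mass the teacher actually has, which is why it tends to improve precision.

```python
import numpy as np

def kl(p, q):
    """KL(p || q), with a small epsilon for numerical safety."""
    eps = 1e-12
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# Toy teacher: bimodal distribution over 4 tokens.
teacher = np.array([0.45, 0.05, 0.05, 0.45])

# Student A spreads mass over everything (mode-covering).
covering = np.array([0.25, 0.25, 0.25, 0.25])
# Student B commits to a single teacher mode (mode-seeking).
seeking = np.array([0.85, 0.05, 0.05, 0.05])

# Forward KL(teacher || student) prefers the covering student...
print(kl(teacher, covering) < kl(teacher, seeking))   # True
# ...while reverse KL(student || teacher) prefers the mode-seeking one,
# since it heavily penalizes mass placed where the teacher has little.
print(kl(seeking, teacher) < kl(covering, teacher))   # True
```

For generative students that cannot match the teacher's full capacity, the reverse direction avoids hallucinating low-probability teacher regions, at the cost of mode coverage.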
Learn to build an interactive chatbot using Unify, exploring synchronous and asynchronous clients and integrating with various LLMs for dynamic AI conversations.