Explore systematic prompting techniques through an in-depth analysis of template structures, zero-shot methods, emotion prompting, and thought generation approaches for enhanced AI interactions.
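As a minimal illustration of the template-based, zero-shot style this course covers, the sketch below assembles a prompt from a reusable template with role, task, and output-format slots. The field names and the example task are illustrative assumptions, not material from the course itself.

```python
# Minimal sketch of a zero-shot prompt template (illustrative field names).
ZERO_SHOT_TEMPLATE = (
    "You are {role}.\n"
    "Task: {task}\n"
    "Constraints: answer in {output_format}. Do not add commentary.\n"
    "Input: {user_input}\n"
)

def build_prompt(role: str, task: str, output_format: str, user_input: str) -> str:
    """Fill the template slots; no worked examples are included, so this stays zero-shot."""
    return ZERO_SHOT_TEMPLATE.format(
        role=role, task=task, output_format=output_format, user_input=user_input
    )

if __name__ == "__main__":
    print(build_prompt(
        role="a careful data analyst",
        task="classify the sentiment of the input as positive, negative, or neutral",
        output_format="a single word",
        user_input="The checkout flow is confusing but support was helpful.",
    ))
```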
Dive into the technical architecture and methodology behind Flux, exploring rectified flow transformers, latent diffusion models, and the innovative approaches that led to superior image generation results.
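For readers who want the core rectified-flow idea in code before watching: the toy training step below (a generic sketch, not Flux's actual training loop) interpolates linearly between data and Gaussian noise and regresses a small network onto the constant velocity between them.

```python
# Toy rectified-flow training step: the model learns the velocity field
# v = noise - data along the straight path x_t = (1 - t) * data + t * noise.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(3, 64), nn.SiLU(), nn.Linear(64, 2))  # tiny stand-in network
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

data = torch.randn(256, 2)                  # stand-in for encoded image latents
noise = torch.randn_like(data)
t = torch.rand(data.shape[0], 1)            # random time in [0, 1]

x_t = (1 - t) * data + t * noise            # point on the straight path
target_v = noise - data                     # constant velocity along that path
pred_v = model(torch.cat([x_t, t], dim=1))  # condition the network on t

loss = ((pred_v - target_v) ** 2).mean()
loss.backward()
opt.step()
```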
Dive into Meta's groundbreaking Llama 3 architecture, exploring pre-training techniques, model capabilities, synthetic data quality, and implementation strategies for advanced AI development.
Dive into the mechanics of Samba, a hybrid state space model built on Mamba that enables efficient unlimited context language modeling for advanced AI applications.
Dive into efficient text-to-image generation using PixArt-α, exploring fine-tuning techniques, design principles, and practical implementation steps for diffusion transformers.
Dive into implementing text generation with discrete diffusion modeling, covering everything from basic concepts to training scripts for building models competitive with GPT-2.
Dive into discrete diffusion modeling for text generation, exploring probability distributions, score-based modeling, and how the resulting generative models rival GPT-2's capabilities.
Dive into the revolutionary concept of 1-bit Large Language Models, exploring how weights can be represented with the ternary values -1, 0, and 1 instead of full-precision floating-point numbers.
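To make the ternary-weight idea concrete, here is a small hedged sketch of absmean-style quantization in the spirit of the BitNet b1.58 report: weights are scaled by their mean absolute value and rounded into {-1, 0, 1}. The helper name and per-tensor scaling choice are illustrative.

```python
# Sketch of ternary (1.58-bit) weight quantization: round scaled weights to {-1, 0, 1}.
import numpy as np

def quantize_ternary(w: np.ndarray, eps: float = 1e-8):
    """Return ternary weights and the per-tensor scale used to dequantize."""
    scale = np.abs(w).mean() + eps              # absmean scale
    w_q = np.clip(np.round(w / scale), -1, 1)   # values in {-1, 0, 1}
    return w_q.astype(np.int8), scale

w = np.random.randn(4, 4).astype(np.float32)
w_q, scale = quantize_ternary(w)
w_approx = w_q * scale                          # dequantized approximation
print(w_q)
print(np.abs(w - w_approx).mean())              # average quantization error
```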
Dive into Meta AI's I-JEPA framework for self-supervised image learning, exploring its architecture, methodology, and advantages over traditional approaches in computer vision and neural networks.
Dive into implementing Meta's Self-Rewarding Language Model with Mistral 7B, covering fine-tuning techniques, data preparation, prompt generation, and practical demonstrations.
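The sketch below outlines one iteration of the self-rewarding loop in simplified Python: the model generates candidate responses to a prompt, scores them with an LLM-as-a-judge instruction, and keeps the best and worst as a preference pair for later DPO-style fine-tuning. The `generate` and `score` callables are hypothetical placeholders for calls into a Mistral 7B inference stack.

```python
# One self-rewarding iteration (conceptual sketch; `generate` and `score` are
# hypothetical stand-ins for actual Mistral 7B generation and judging calls).
from typing import Callable, List, Tuple

JUDGE_PROMPT = (
    "Review the response below and rate it from 0 to 5 for helpfulness and accuracy. "
    "Reply with only the number.\n\nPrompt: {prompt}\n\nResponse: {response}"
)

def self_reward_step(
    prompt: str,
    generate: Callable[[str], str],
    score: Callable[[str], float],
    num_candidates: int = 4,
) -> Tuple[str, str]:
    """Return a (chosen, rejected) preference pair built from self-judged candidates."""
    candidates: List[str] = [generate(prompt) for _ in range(num_candidates)]
    scored = [(score(JUDGE_PROMPT.format(prompt=prompt, response=c)), c) for c in candidates]
    scored.sort(key=lambda pair: pair[0])
    rejected, chosen = scored[0][1], scored[-1][1]
    return chosen, rejected
```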
Dive into the technical foundations behind OpenAI's Sora, exploring diffusion transformers, U-Nets, and latent diffusion models that power this groundbreaking video generation technology.
Dive into the mechanics of Medusa, a framework that accelerates LLM inference through parallel token prediction and tree-based attention, enhancing AI model performance and efficiency.
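As a rough mental model of the parallel prediction step (not the official Medusa implementation), the sketch below attaches extra lightweight heads to the final hidden state, each guessing a token one step further ahead; the real framework then verifies these candidates against the base model using tree-based attention. The module and sizes here are assumptions for illustration.

```python
# Conceptual Medusa-style draft heads: each head predicts the token k steps ahead
# from the same final hidden state; actual Medusa verifies drafts via tree attention.
import torch
import torch.nn as nn

class DraftHeads(nn.Module):
    def __init__(self, hidden_size: int, vocab_size: int, num_heads: int = 3):
        super().__init__()
        self.heads = nn.ModuleList(
            nn.Linear(hidden_size, vocab_size) for _ in range(num_heads)
        )

    def forward(self, last_hidden: torch.Tensor):
        """last_hidden: (batch, hidden) -> one logits tensor per lookahead position."""
        return [head(last_hidden) for head in self.heads]

hidden = torch.randn(1, 512)                 # stand-in for the base model's last hidden state
draft = DraftHeads(hidden_size=512, vocab_size=32000)
candidate_tokens = [logits.argmax(-1) for logits in draft(hidden)]
print(candidate_tokens)                      # one draft token per future position
```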
Explore Lumiere's innovative space-time diffusion model for video generation, covering its architecture, solutions to common problems, and practical applications in coherent motion synthesis.
Explore neural network techniques for generating accurate depth maps from single images, covering architecture, evaluation methods, and real-world applications of the Depth Anything model.
Explore how language models can self-learn to utilize tools through API calls, covering architecture, training data generation, filtering mechanisms, and experimental results from Meta's Toolformer paper.
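To ground the API-call idea, here is a small hedged sketch of the inference-time side: generated text contains inline call markers such as `[Calculator(3 * 7)]`, which are parsed, executed, and replaced by their results. The marker syntax mirrors the style used in the Toolformer paper; the regex and the tool registry are illustrative assumptions.

```python
# Sketch of executing inline tool calls of the form [ToolName(argument)] in model output.
import re

TOOLS = {
    "Calculator": lambda expr: str(eval(expr, {"__builtins__": {}}, {})),  # demo only
    "Upper": lambda text: text.upper(),
}

CALL_PATTERN = re.compile(r"\[(\w+)\((.*?)\)\]")

def execute_tool_calls(text: str) -> str:
    """Replace each [Tool(arg)] marker with '[Tool(arg) -> result]'."""
    def run(match: re.Match) -> str:
        name, arg = match.group(1), match.group(2)
        if name not in TOOLS:
            return match.group(0)             # leave unknown tools untouched
        result = TOOLS[name](arg)
        return f"[{name}({arg}) -> {result}]"
    return CALL_PATTERN.sub(run, text)

print(execute_tool_calls("The total is [Calculator(3 * 7)] items, says [Upper(alice)]."))
```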