Explore the groundbreaking Generative Adversarial Networks paper, covering its innovative approach, theoretical foundations, and practical implications for AI-driven image generation and deep learning advancements.
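The paper's central idea fits in one formula: the generator G and the discriminator D play a two-player minimax game over the value function

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\text{data}}}\left[\log D(x)\right]
  + \mathbb{E}_{z \sim p_z}\left[\log\left(1 - D(G(z))\right)\right]
```

D is trained to tell real samples from generated ones, while G is trained to produce samples that D misclassifies as real.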
Explore Word2Vec, a groundbreaking technique for creating word vectors. Learn about distributed representations, skip-gram model, negative sampling, and subsampling methods to improve vector quality and training efficiency.
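The negative-sampling objective the video covers can be sketched in a few lines of numpy. This is a toy illustration (vocabulary size, dimensions, and word indices are arbitrary), not the original C implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB, DIM = 10, 4
W_in = rng.normal(scale=0.1, size=(VOCAB, DIM))   # "input" (center-word) vectors
W_out = rng.normal(scale=0.1, size=(VOCAB, DIM))  # "output" (context-word) vectors

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgns_step(center, context, negatives, lr=0.1):
    """One SGD step of skip-gram with negative sampling: raise the score of
    the true (center, context) pair, lower it for sampled negative words."""
    v = W_in[center].copy()
    grad_v = np.zeros_like(v)
    for word, label in [(context, 1.0)] + [(n, 0.0) for n in negatives]:
        u = W_out[word]
        g = sigmoid(u @ v) - label        # gradient of the logistic loss
        grad_v += g * u
        W_out[word] = u - lr * g * v
    W_in[center] -= lr * grad_v

# toy usage: word 1 co-occurs with word 2; words 5 and 7 are sampled negatives
before = sigmoid(W_out[2] @ W_in[1])
for _ in range(200):
    sgns_step(center=1, context=2, negatives=[5, 7])
after = sigmoid(W_out[2] @ W_in[1])
```

Repeated steps drive the score of the observed pair up while pushing the negatives down, which is exactly what lets Word2Vec avoid the full softmax over the vocabulary.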
Explore deep residual learning for image recognition, covering the problem of depth, residual connections, and their impact on neural network performance in computer vision tasks.
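The residual connection itself is a one-line idea: the block computes y = x + F(x), so the layers only have to learn a correction F to the identity. A minimal numpy sketch (layer sizes and weight scales are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, W1, W2):
    """y = relu(x + F(x)): the block learns a residual F, not the full mapping."""
    return relu(x + relu(x @ W1) @ W2)

# with near-zero weights, F(x) ~ 0 and the block approximates the identity,
# which is what makes very deep stacks trainable in the first place
x = rng.normal(size=(2, 8))
W1 = rng.normal(scale=1e-3, size=(8, 8))
W2 = rng.normal(scale=1e-3, size=(8, 8))
y = residual_block(x, W1, W2)
```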
Explore deep ensembles' effectiveness in neural networks, their superiority over Bayesian models, and how they capture non-convex loss landscapes to improve generalization and robustness in AI.
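The deep-ensemble prediction rule is simply an average of the members' predictive distributions. A toy numpy sketch, with random linear classifiers standing in for independently trained networks (in practice each member is trained from a different random initialization):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# hypothetical stand-ins for M independently trained networks
members = [rng.normal(size=(4, 3)) for _ in range(5)]

def ensemble_predict(x):
    """Average the members' predictive distributions (the deep-ensemble rule)."""
    probs = np.stack([softmax(x @ W) for W in members])
    return probs.mean(axis=0)

p = ensemble_predict(rng.normal(size=(2, 4)))
```

Because each member lands in a different mode of the non-convex loss landscape, the averaged distribution tends to be better calibrated than any single member's.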
Detailed explanation of NVAE, a deep hierarchical variational autoencoder for high-resolution image generation, covering architecture, training techniques, and state-of-the-art results.
Explore supermasks in neural networks for lifelong learning, tackling catastrophic forgetting and automatic task identification. Learn about mask superpositions, entropy minimization, and innovative extensions.
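The task-identification trick can be sketched simply: apply each task's binary mask to a shared weight matrix and pick the mask whose output distribution has the lowest entropy. The paper does this via gradients over a superposition of masks; the toy numpy version below evaluates the masks one at a time, and all sizes and weights are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

W = rng.normal(size=(4, 3))                            # one shared weight matrix
masks = [rng.random(W.shape) < 0.5 for _ in range(3)]  # one binary supermask per task

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def entropy(p):
    return -np.sum(p * np.log(p + 1e-12))

def infer_task(x):
    """Pick the mask whose masked network is most confident (lowest output
    entropy) -- the paper's criterion for automatic task identification."""
    ents = [entropy(softmax(x @ (W * m))) for m in masks]
    return int(np.argmin(ents))

task = infer_task(rng.normal(size=4))
```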
Explores SpineNet, a novel CNN architecture using scale-permuted features and cross-scale connections, outperforming traditional models in object detection and classification tasks.
Explore a novel approach to transformers using linear attention, reducing computational complexity and revealing connections to RNNs. Gain insights into faster, more efficient deep learning models for long sequences.
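The core trick is replacing softmax(QKᵀ)V with a kernel feature map φ, so Kᵀ and V can be summed once over the sequence and attention costs O(N) instead of O(N²). A toy numpy sketch using the paper's elu(x)+1 feature map (sequence length and dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def feature_map(x):
    # elu(x) + 1: a positive feature map, as used in the linear-attention paper
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(Q, K, V):
    """O(N) attention: phi(Q) @ (phi(K)^T V) instead of softmax(Q K^T) V."""
    Qf, Kf = feature_map(Q), feature_map(K)
    KV = Kf.T @ V                        # (d, d_v): summed once over the sequence
    Z = Qf @ Kf.sum(axis=0)              # per-query normalizer
    return (Qf @ KV) / Z[:, None]

N, d = 6, 4
out = linear_attention(rng.normal(size=(N, d)),
                       rng.normal(size=(N, d)),
                       rng.normal(size=(N, d)))
```

Because the (d, d_v) summary KV can be updated one position at a time, the causal version of this computation behaves like an RNN, which is the connection the video explores.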
Explores how BERT models trained on protein sequences learn biological properties, revealing insights into protein structure, binding sites, and biophysical characteristics through attention mechanism analysis.
Detailed explanation of Google's 600-billion-parameter transformer for multilingual translation, focusing on scaling techniques, mixture-of-experts, and automatic sharding implementation on TPUs.
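The mixture-of-experts idea behind this scaling can be sketched in a few lines: a gating network routes each token to an expert, so adding experts adds capacity without proportionally adding compute per token. The toy numpy version below uses top-1 routing for brevity (GShard routes each token to its top-2 experts); all sizes and weights are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

d, n_experts = 4, 3
Wg = rng.normal(size=(d, n_experts))                 # gating network weights
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]

def moe_layer(x):
    """Route each token to its top-scoring expert, scaled by the gate value."""
    gates = softmax(x @ Wg)                          # (n_tokens, n_experts)
    top = gates.argmax(axis=-1)                      # top-1 routing
    out = np.empty_like(x)
    for t, e_idx in enumerate(top):
        out[t] = gates[t, e_idx] * (x[t] @ experts[e_idx])
    return out

y = moe_layer(rng.normal(size=(5, d)))
```

In GShard the experts live on different TPU cores, so this routing decision is also a sharding decision.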
Explanation of a novel deep learning module for object-centric visual scene understanding. Covers architecture, algorithm, experiments, and implications of Slot Attention for improved object recognition and reasoning.
Explores a novel framework for generating and encoding sets of images, introducing Set Distribution Networks that can create new object/identity sets while preserving key attributes.
Explores Direct Feedback Alignment as a biologically plausible alternative to backpropagation, demonstrating its effectiveness in training modern deep learning architectures for various challenging tasks.
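The DFA substitution is small but consequential: the hidden layer's error signal is the output error projected through a fixed random matrix B rather than through the transposed forward weights W2ᵀ. A toy numpy regression sketch (task, sizes, and learning rate are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# toy regression task
X = rng.normal(size=(32, 5))
y = X @ rng.normal(size=(5, 1))

W1 = rng.normal(scale=0.1, size=(5, 8))
W2 = rng.normal(scale=0.1, size=(8, 1))
B = rng.normal(scale=0.1, size=(1, 8))   # fixed random feedback matrix

def loss():
    return float(np.mean((np.tanh(X @ W1) @ W2 - y) ** 2))

init_loss = loss()
lr = 0.05
for _ in range(300):
    h = np.tanh(X @ W1)
    e = h @ W2 - y                        # output error
    # DFA: route the error through fixed random B instead of W2.T
    delta_h = (e @ B) * (1 - h ** 2)
    W2 -= lr * h.T @ e / len(X)
    W1 -= lr * X.T @ delta_h / len(X)

final_loss = loss()
```

Using B avoids the "weight transport" problem, which is why DFA is considered more biologically plausible than backpropagation.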
Explores how Graph Neural Networks and symbolic regression can derive accurate symbolic equations from observational data, combining deep learning with physics-inspired inductive biases.
Explore the process of understanding Facebook AI's DETR paper through a step-by-step analysis, from title to conclusions, with insights on interpreting research effectively.