Explore RepNet's innovative approach to video repetition counting using temporal self-similarity matrices, synthetic data, and a constrained model for class-agnostic period prediction in real-world videos.
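The core object here, the temporal self-similarity matrix, can be sketched in a few lines. This is an illustrative NumPy version (the function name and the negative-squared-distance similarity are assumptions; RepNet computes it over learned per-frame embeddings and feeds it to a period-prediction head):

```python
import numpy as np

def temporal_self_similarity(embeddings):
    """Pairwise similarity between per-frame embeddings.

    embeddings: (T, D) array, one row per video frame.
    Returns a (T, T) matrix of negative squared L2 distances; repeated
    motions appear as periodic stripe patterns that a downstream model
    can read the period off of.
    """
    diff = embeddings[:, None, :] - embeddings[None, :, :]  # (T, T, D)
    return -np.sum(diff ** 2, axis=-1)
```

Because the matrix depends only on distances between frames, it is invariant to what is repeating, which is what makes the approach class-agnostic.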
Explore SIREN neural networks for representing complex signals like images and 3D shapes. Learn about their unique periodic activation functions, initialization, and applications in solving differential equations.
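A minimal sketch of a single SIREN layer: a linear map followed by a sine activation, with the uniform initialization scheme the paper describes (bounds scaled by the frequency factor `w0`, and a wider bound for the first layer). The closure-based layer construction is an illustrative choice, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def siren_layer(in_dim, out_dim, w0=30.0, first=False):
    """One SIREN layer: x -> sin(w0 * (W x + b)).

    Initialization keeps sine pre-activations well-distributed through
    depth: U(-1/in_dim, 1/in_dim) for the first layer, and
    U(-sqrt(6/in_dim)/w0, ...) for hidden layers.
    """
    bound = 1.0 / in_dim if first else np.sqrt(6.0 / in_dim) / w0
    W = rng.uniform(-bound, bound, size=(out_dim, in_dim))
    b = rng.uniform(-bound, bound, size=out_dim)
    return lambda x, W=W, b=b: np.sin(w0 * (x @ W.T + b))
```

The periodic activation is what lets these networks fit high-frequency detail in images and signed-distance fields, and its derivatives are again sinusoids, which is why SIRENs can be supervised through differential operators.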
Explore SimCLRv2, a powerful semi-supervised learning approach combining self-supervised pre-training, fine-tuning, and distillation to achieve state-of-the-art results with minimal labeled data.
Explore the relationship between intelligence, generality, and prior knowledge, focusing on human priors as a basis for comparing human and AI intelligence. Discusses optimizing for generality and the role of experience in skill acquisition.
Explore generative pretraining for image processing, adapting NLP techniques to visual tasks. Discusses model architecture, experiments, and potential applications in computer vision.
Explore BYOL, a novel self-supervised learning approach that outperforms baselines without negative samples. Learn about its innovative use of online and target networks for image representation.
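The target network in BYOL is not trained by gradient descent; it tracks the online network as an exponential moving average. A minimal sketch, with parameters represented as plain dicts of floats (the dict representation and function name are assumptions for illustration):

```python
def ema_update(target, online, tau=0.996):
    """BYOL-style target-network update.

    Each target parameter moves slightly toward its online counterpart:
    target <- tau * target + (1 - tau) * online.
    No negative pairs are needed; the slowly-moving target is what
    prevents the representations from collapsing.
    """
    return {k: tau * target[k] + (1 - tau) * v for k, v in online.items()}
```

With `tau` close to 1, the target changes slowly, giving the online network a stable regression objective.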
Explore SynFlow, a novel data-agnostic algorithm for pruning neural networks at initialization, avoiding layer collapse and achieving maximum compression capacity without training or data.
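SynFlow's saliency score needs no data: feed an all-ones input through the network with absolute-valued weights, and score each weight by |∂R/∂θ · θ|, where R is the summed output. A sketch for a chain of linear layers (the function name and the restriction to linear chains are simplifying assumptions):

```python
import numpy as np

def synflow_scores(weights):
    """SynFlow saliency for a chain of linear layers, computed data-free.

    weights: list of (out_dim, in_dim) matrices.
    Runs the all-ones input through the absolute-valued network, then
    backpropagates from R = sum(outputs); score_ij = (dR/d|W_ij|) * |W_ij|.
    Iteratively pruning the lowest scores avoids layer collapse.
    """
    absW = [np.abs(W) for W in weights]
    acts = [np.ones(absW[0].shape[1])]          # forward pass, ones input
    for W in absW:
        acts.append(W @ acts[-1])
    grad = np.ones(absW[-1].shape[0])           # backward from R = sum(out)
    scores = []
    for W, a in zip(reversed(absW), reversed(acts[:-1])):
        scores.append(np.outer(grad, a) * W)    # gradient times |weight|
        grad = W.T @ grad
    return list(reversed(scores))
```

For this linear chain the per-layer score totals are all equal to R itself, a conservation property the paper uses to explain why the method never zeroes out an entire layer.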
Explore VirTex, a novel approach to visual representation learning using high-quality image captions, outperforming traditional methods with fewer images for various computer vision tasks.
Detailed explanation of the Linformer model, which reduces self-attention complexity in transformers from O(n²) to O(n), improving efficiency while maintaining performance in NLP tasks.
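The O(n²) → O(n) reduction comes from projecting the length-n key and value sequences down to a fixed length k before attention. An illustrative single-head NumPy sketch (random matrices stand in for Linformer's learned projections `E` and `F`; the function name is an assumption):

```python
import numpy as np

def linformer_attention(Q, K, V, k=32, rng=None):
    """Sketch of Linformer self-attention.

    Q, K, V: (n, d) arrays. Keys and values are projected from length n
    down to k, so the softmax attention map is (n, k) instead of (n, n)
    and cost grows linearly in sequence length.
    """
    rng = rng or np.random.default_rng(0)
    n, d = Q.shape
    E = rng.normal(size=(k, n)) / np.sqrt(n)   # stand-ins for learned E, F
    F = rng.normal(size=(k, n)) / np.sqrt(n)
    K_proj, V_proj = E @ K, F @ V              # (k, d)
    scores = Q @ K_proj.T / np.sqrt(d)         # (n, k) -- linear in n
    P = np.exp(scores - scores.max(axis=-1, keepdims=True))
    P /= P.sum(axis=-1, keepdims=True)
    return P @ V_proj                          # (n, d)
```

The paper's justification is that the attention matrix is approximately low-rank, so little is lost by compressing the key/value axis.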
Explore unsupervised neural machine translation for code migration between Python, C++, and Java. Learn about shared embeddings, objectives, evaluation, and results of this innovative approach.
Explore BLEURT, a BERT-based evaluation metric for text generation, its pre-training on synthetic data, and its ability to model human judgments from only a few examples.
Explore a novel approach to Neural Architecture Search using a Synthetic Petri Dish model, which evaluates architectural motifs in small networks with synthetic data to accelerate performance prediction and optimization.
Explore CornerNet, a novel object detection approach using paired keypoints. Covers corner pooling, heatmap and embedding outputs, loss functions, and experimental results.
Explore alternative attention mechanisms in Transformer models, including the proposed Synthesizer model, which learns synthetic attention weights without token-token interactions and shows competitive performance across various NLP tasks.
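Corner pooling is simple to state: for a top-left corner, each location takes the maximum of the feature map to its right plus the maximum below it, aggregating evidence from the object's extent. A NumPy sketch for a single-channel map (function name is an assumption; CornerNet applies this inside a learned detection head):

```python
import numpy as np

def top_left_corner_pool(f):
    """CornerNet-style corner pooling for top-left corners.

    f: (H, W) feature map. For each (i, j), output is
    max over columns >= j in row i, plus max over rows >= i in column j.
    """
    right = np.maximum.accumulate(f[:, ::-1], axis=1)[:, ::-1]
    below = np.maximum.accumulate(f[::-1, :], axis=0)[::-1, :]
    return right + below
```

A bottom-right corner uses the mirrored directions (leftward and upward maxima).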
Comprehensive analysis of GPT-3's capabilities, exploring its performance on various NLP tasks and discussing its potential impact on AI research and applications.