Overview
Watch a 38-minute AutoML seminar presentation exploring a novel approach to Neural Architecture Search (NAS) built from fundamental operations. Learn about 'einspace', an expressive search space based on a parameterized probabilistic context-free grammar that enables the discovery of diverse network architectures. Understand how this framework supports networks of varying size and complexity while expressing components such as convolutions and attention mechanisms. Review experimental results on the Unseen NAS datasets, demonstrating the ability both to discover competitive architectures from scratch and to improve existing baseline models. Gain insight into how this NAS paradigm, presented by Linus Ericsson, combines search space expressivity with strategic initialization to advance neural architecture development. Access supplementary materials including the project webpage, the implementation code on GitHub, and the complete research paper on arXiv.
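As a rough illustration of the core idea only (not the paper's actual grammar or implementation), the sketch below samples nested architecture descriptions from a small, hypothetical probabilistic context-free grammar; the nonterminals, production rules, weights, and operation names are invented for this example.

```python
import random

# A toy probabilistic context-free grammar over architecture-building
# operations (hypothetical productions and weights, not the grammar used
# in the einspace paper). Uppercase symbols are nonterminals; lowercase
# symbols are terminal operations.
GRAMMAR = {
    "Module": [
        (("sequential", "Module", "Module"), 0.35),
        (("branch", "Module", "Module"), 0.15),
        (("Op",), 0.50),
    ],
    "Op": [
        (("conv3x3",), 0.4),
        (("self_attention",), 0.3),
        (("identity",), 0.3),
    ],
}


def sample(symbol="Module", depth=0, max_depth=6):
    """Recursively expand a nonterminal into a nested architecture tuple."""
    if symbol not in GRAMMAR:                      # terminal operation
        return symbol
    if symbol == "Module" and depth >= max_depth:  # cap recursion depth
        return sample("Op", depth + 1, max_depth)
    productions, weights = zip(*GRAMMAR[symbol])
    production = random.choices(productions, weights=weights, k=1)[0]
    expanded = [sample(s, depth + 1, max_depth) for s in production]
    return expanded[0] if len(expanded) == 1 else tuple(expanded)


if __name__ == "__main__":
    random.seed(0)
    for _ in range(3):
        print(sample())  # e.g. ('sequential', 'conv3x3', 'self_attention')
```

Sampling repeatedly from such a grammar yields architectures of very different depths and topologies, which is the kind of diversity the seminar attributes to the einspace search space.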
Syllabus
einspace: Searching for Neural Architectures from Fundamental Operations
Taught by
AutoML Seminars