The 7 Lines of Code You Need to Run Faster Real-time Inference

MLOps.community via YouTube


YouTube videos curated by Class Central.

Classroom Contents

  1. Introduction to Adrian Boguszewski
  2. Introduction to the OpenVINO Toolkit
  3. The 7 Lines of Code You Need to Run Faster Real-time Inference
  4. Demo
  5. The challenge
  6. "But running on CPU is slow..."
  7. The solution
  8. OpenVINO
  9. Installation methods
  10. Model Optimizer
  11. Neural Network (any format)
  12. Intermediate Representation (IR)
  13. Post-training Optimization Tool (POT)
  14. OpenVINO Runtime
  15. Supported Devices
  16. Auto device
  17. Performance hints
  18. Auto Batching
  19. Input data with different shapes
  20. Static shape
  21. Dynamic shape
  22. Open Model Zoo
  23. Performance
  24. Demo
  25. Run live object detection
  26. Quantize NLP Model with Post-training Optimization
  27. OpenVINO Notebooks
  28. Intel Developer Cloud for the Edge
  29. OpenVINO Ecosystem Adoption
  30. Intel AI Software Portfolio
  31. Main Takeaways
  32. Win $50 in Swag!
  33. Adrian's start with OpenVINO
  34. Deploying OpenVINO outside the local system
  35. Developing tricks with OpenVINO
  36. OpenVINO Hosted Service
  37. Wrap up
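
The seven lines referenced in the talk's title are not reproduced on this page. As a rough illustration only, the sketch below shows what a minimal OpenVINO inference snippet in Python typically looks like, assuming the openvino.runtime API (2022.1 or later), a placeholder IR file `model.xml`, and a dummy input tensor; the exact code shown in the talk may differ.

```python
# Minimal OpenVINO inference sketch (assumes openvino>=2022.1 and an IR model
# produced by Model Optimizer; "model.xml" and the input shape are placeholders).
import numpy as np
from openvino.runtime import Core

core = Core()                                        # 1. create the runtime core
model = core.read_model("model.xml")                 # 2. read the IR (or ONNX) model
compiled_model = core.compile_model(model, "AUTO")   # 3. compile for an auto-selected device
output_layer = compiled_model.output(0)              # 4. get a handle to the output
input_data = np.random.rand(1, 3, 224, 224).astype(np.float32)  # 5. dummy input
results = compiled_model([input_data])               # 6. run synchronous inference
predictions = results[output_layer]                  # 7. extract the output tensor
```

The "AUTO" device string corresponds to the "Auto device" chapter above; performance hints and auto batching (chapters 17-18) are likewise passed as configuration to compile_model in the OpenVINO API.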
