Course Overview
This course introduces OpenVINO, an open-source toolkit for optimizing and deploying AI inference. OpenVINO lets users convert, optimize, and deploy AI models across Intel and third-party hardware. The course covers the three-step workflow of convert, optimize, and deploy, and explores the tools OpenVINO provides, including the Neural Network Compression Framework (NNCF) and OpenVINO Model Server (OVMS). Students will learn how to use OpenVINO to accelerate inference, reduce model footprint, and improve hardware utilization while maintaining accuracy. The course also discusses OpenVINO's key benefits of performance, usability, and versatility, and gives examples of how it is used across industries such as healthcare, retail, and finance.