Program Overview
Acceleration
5 Courses
The Intel® Distribution of OpenVINO™ toolkit enables you to optimize, tune, and run comprehensive AI inference using the included model optimizer and the runtime and development tools. This course will introduce you to the components and features of the toolkit and walk you through the workflow of using it to deploy AI-based workloads across Intel® hardware. Find out how you can accelerate applications with high-performance AI and deep learning inference deployed from edge to cloud.
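For orientation, the sketch below shows that basic workflow in Python; the model path, device name, and static input shape are illustrative assumptions, not part of the course material.

```python
# Minimal OpenVINO inference sketch; "model.xml", the device name, and the
# static input shape are illustrative assumptions.
import numpy as np
import openvino as ov

core = ov.Core()

# Read a model already converted to OpenVINO IR (e.g., by the model optimizer).
model = core.read_model("model.xml")

# Compile for a specific Intel device ("CPU", "GPU", ...) or let "AUTO" choose.
compiled_model = core.compile_model(model, device_name="CPU")

# Run inference on dummy data shaped like the model's first input
# (assumes the model has a static input shape).
input_data = np.random.rand(*compiled_model.input(0).shape).astype(np.float32)
result = compiled_model([input_data])[compiled_model.output(0)]
print(result.shape)
```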
The Intel® Geti™ platform enables teams to rapidly create computer vision models and accelerate business automation with AI.
This course provides an in-depth overview of the 4th Gen Intel® Xeon® Scalable processor, including the built-in accelerators and the DDR5 memory and I/O capabilities that address your most rigorous workload challenges. Partners will also learn about the benefits of AI model performance optimizations on Intel architecture. We’ll explore heterogeneous compute using the Intel® Distribution of OpenVINO™ toolkit on the latest 4th Gen Intel® Xeon® processors and the Intel® Data Center GPU Flex Series. With OpenVINO’s Auto-Device Plugin, you can experience parallel execution of the same network on multiple devices.
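As a rough illustration of the Auto-Device Plugin, the Python sketch below compiles one network against both a GPU and the CPU; the device list, model path, and performance hint are assumptions for a system that actually has both devices available.

```python
# Sketch of OpenVINO's Auto-Device Plugin; the device list, model path, and
# performance hint are assumptions, not a prescribed configuration.
import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")

# "AUTO:GPU,CPU" lets the plugin choose among the listed devices; the
# CUMULATIVE_THROUGHPUT hint asks it to use them together, so inference
# requests for the same network can run on multiple devices in parallel.
compiled_model = core.compile_model(
    model,
    device_name="AUTO:GPU,CPU",
    config={"PERFORMANCE_HINT": "CUMULATIVE_THROUGHPUT"},
)

# Report which devices AUTO actually selected for execution.
print(compiled_model.get_property("EXECUTION_DEVICES"))
```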
This session provides an in-depth overview of our Intel® Xeon® Scalable, Intel® Xeon® D, and Intel® Atom® processors for Networking and Edge (IoT) usages ramping in 2023-2024. Learn how integrated accelerators and enhanced memory and I/O capabilities maximize performance and power efficiency for Networking and Edge workloads. We’ll highlight key market areas that can benefit from Xeon processors and dive into the technical capabilities that make these processors uniquely positioned to serve today's network and edge use cases.
Optimize and deploy AI inference solutions using this open source AI toolkit. Accelerate AI inference and optimize deployment on popular hardware platforms by maximizing the available compute across accelerators while using a common API. Choose from a wide range of pre-trained models that provide flexibility for your use case and preferred framework, such as TensorFlow* or PyTorch*. Retrain or fine-tune models, or optimize them further with post-training quantization techniques.
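A hedged sketch of post-training quantization with NNCF (the compression library commonly used alongside OpenVINO) follows; the model path, input shape, and random calibration samples are placeholders standing in for a real model and a representative dataset.

```python
# Post-training quantization sketch with NNCF; "model.xml", the input shape,
# and the random calibration samples are placeholders, not real data.
import nncf
import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")

# A few hundred representative samples are usually enough for calibration.
calibration_samples = [
    np.random.rand(1, 3, 224, 224).astype(np.float32) for _ in range(300)
]
calibration_dataset = nncf.Dataset(calibration_samples)

# Quantize weights and activations to INT8 without retraining the model.
quantized_model = nncf.quantize(model, calibration_dataset)

# Save the compressed model back to OpenVINO IR for deployment.
ov.save_model(quantized_model, "model_int8.xml")
```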