
Principles of AI

Program Overview

Principles of AI Certification

20 Courses

Ushering in the AI PC era takes leadership products at scale coupled with a broad software ecosystem to deliver the best AI experiences. Come immerse yourself in the AI PC journey. Learn about the Intel® products, ISV partners, and how Intel is leading this new era of client computing.

You have likely heard that AI runs well on Intel® Xeon® processors. Customers are looking to deploy AI on the general computing systems they already have, but they need solutions that they can deploy within their time, resource, and expertise constraints. Learn about AI reference solutions that are optimized on Intel Xeon CPUs based on both the infrastructure and software stacks that are predominantly used by enterprise customers.

AI is the defining workload of our time, and organizations are racing to adopt AI into their businesses and services. At the same time, governments around the world are passing new regulations to help ensure AI evolves in a way that is secure, trustworthy, and respectful of individual privacy. Confidential AI is a method to protect the data and model while they are actively in use, helping organizations stay compliant with regulations and protect their IP.

We are witnessing the convergence and use of Artificial Intelligence (AI) in High-Performance Computing (HPC) due to the availability of large amounts of data and the rapid development and use of AI frameworks and models. This convergence has begun to reshape the landscape of scientific computing, enabling scientists to address problems in ways that were not possible before. A large number of HPC workloads are governed by the laws of physics, medical science, materials science, and so on; consider, for example, simulating fluid flow by solving partial differential equations numerically.

A lot of AI preparation, development, prototyping, and increasingly, deployment is happening on workstations. Workstations liberate the AI developer and data scientist from negotiating server time while also providing the increased memory capacity and cores to handle larger AI datasets that would cripple a consumer PC or laptop. With an AI Workstation from Intel, organizations benefit from a robust platform for AI experimentation, avoiding costly production infrastructure. Finally, with the growth of Generative AI and Small Language Models (SLMs), the AI Workstation from Intel offers a compelling solution for enterprises to maximize their AI investments. By using industry-specific and proprietary data with SLMs on a workstation, enterprises can achieve multiple objectives: efficiency, accuracy, customization, and security.

ChatGPT and other massive models represent an amazing step forward in AI that is moving at light speed. This course will survey how the AI ecosystem has worked non-stop to take these all-purpose multi-task models and optimize them so they can be used by organizations to address domain-specific problems. Learn how Intel can help you become a trusted thought leader who can demystify this topic for your partners and customers.

Intel® Gaudi® AI Accelerator is our AI-specific training and inference solution covering the largest foundation models and is competitive with Nvidia H100. The primary objective of this course is to provide comprehensive education to experienced deep learning engineers and data scientists who possess prior expertise in PyTorch* and DeepSpeed*. Throughout this course, participants will gain insights into the numerous advantages and exceptional capabilities offered by the Intel Gaudi AI accelerator for PyTorch* training and inference workloads.

This module will guide users through the initial steps of using Intel® Gaudi® AI accelerators and address model migration, making it accessible to a wide audience.

Learn where to find and how to deploy Intel's open-source AI software optimizations for 4th and 5th Gen Intel® Xeon® Scalable processors to ensure you're maximizing AI performance on the systems that are running your business.

Learn how you can build computer vision models with the Intel® Geti™ platform and deploy them with the OpenVINO™ toolkit using cross-platform solutions. The OpenVINO toolkit has seen a 111% year-over-year increase in downloads... learn why. It is an easy-to-use toolkit for applying compression and optimization techniques to your deep learning inference needs. Most recently, in the 2023.1 release, the toolkit expanded its support for LLMs, making generative AI workloads more accessible on client and edge platforms. Learn how customers can benefit from this toolkit.

oneAPI AI Toolkit, Intel® Extension for PyTorch* (IPEX), OpenVINO™ - Learn the basics of positioning Intel® AI Software.

Understand how Intel® Data Center GPU Flex Series is a viable solution for your growing demand for AI and general-purpose workloads. In this course, you will learn the basics of the technology, its software and framework readiness, and examples of where to use it effectively.

Understand how Intel® Data Center GPU Max Series is a viable solution for your growing demand for AI and general-purpose workloads. In this course, you will learn the basics of the technology, its software and framework readiness, and examples of where to use it effectively.

The matrix multiplication acceleration provided by Intel® Advanced Matrix Extensions (Intel® AMX) in Intel® Xeon® CPU Max makes it an exceptional value for AI. Pairing that acceleration with the increased memory bandwidth of the Intel® Xeon® CPU Max provides even better performance on many workloads and can greatly speed up workflows where AI is used to augment HPC as well as in LLM Inference. This submodule will provide a summary of the technical characteristics, their benefits, and performance results to show how customers and users can make use of these technologies to solve their problems within the Intel® Xeon® CPU ecosystem they already know and love.
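
The Intel AMX acceleration described above is only available when the processor exposes the corresponding CPU feature flags. As a minimal illustrative sketch (assuming a Linux-style /proc/cpuinfo layout; the flag names `amx_tile`, `amx_int8`, and `amx_bf16` are the ones advertised by AMX-capable Xeon processors), you can check for them with nothing but the standard library:

```python
# Sketch: detect Intel AMX support by inspecting /proc/cpuinfo flags.
# Assumes a Linux-style cpuinfo layout; on AMX-capable Xeon processors
# the "flags" line includes amx_tile, amx_int8, and amx_bf16.

AMX_FLAGS = {"amx_tile", "amx_int8", "amx_bf16"}

def amx_flags_present(cpuinfo_text: str) -> set:
    """Return the subset of AMX flags found in a /proc/cpuinfo dump."""
    found = set()
    for line in cpuinfo_text.splitlines():
        # Each logical CPU has a "flags" line listing its feature bits.
        if line.startswith("flags"):
            found |= AMX_FLAGS & set(line.split(":", 1)[1].split())
    return found

if __name__ == "__main__":
    try:
        with open("/proc/cpuinfo") as f:
            print("AMX flags:", amx_flags_present(f.read()) or "none")
    except FileNotFoundError:
        print("/proc/cpuinfo not available on this platform")
```

Frameworks that use AMX through oneDNN (such as PyTorch and TensorFlow builds for Xeon) pick the instructions up automatically, so a check like this is mainly useful for confirming that a given node can benefit from the acceleration at all.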

This module will provide a foundational overview of Intel® Gaudi® AI accelerators, ensuring that everyone can grasp the core concepts, including MLPerf benchmark results.

Intel® Xeon® processors can be a great fit for AI, from ML/DL applications to Generative AI. Note that while inference is the primary target AI use case, we can also sell into retraining and fine-tuning with Intel Xeon processors.

AI is enabling business transformations everywhere across the network and edge. Vision, language, and other use cases deploy AI at the edge, across a broad array of locations, in manufacturing, smart cities, transportation, and networking. Learn about the tools and enablement for edge deployments in this module.

It is of huge importance for Intel to establish that AI runs on PCs. Intel® Core™ processors and Intel® Arc™ GPUs enable many inference use cases on client systems. This module will educate you on the AI applications and how Intel Core CPUs with a neural processing unit (NPU), and Intel Arc GPUs, enable these use cases.

Contrary to popular belief, Nvidia GPUs are not the only viable AI solution in the data center. Intel delivers outstanding solutions ranging from Intel® Xeon® CPUs to GPUs (Intel® Data Center GPU Max and Flex Series) to Intel® Gaudi® 2 AI accelerators. This module highlights how Intel solves business challenges with AI and offers compelling alternatives to Nvidia.

What does AI Everywhere mean? What product should I consider for what applications or AI development stage? Learn all this and get sales guidance around AI from client to edge and cloud.