Intel® Cloud.U: Cloud Solution Architect (CSA)

Program Overview

Cloud Solution Architect (CSA) Tech Talks: Learn about related technical topics across a variety of courses.

88 Courses

This course covers the application of DevOps for the Google Cloud Platform. It introduces the GCP developer tools, including Google Cloud Build and Google Kubernetes Engine, to manage the software development life cycle. The course demonstrates how to build an environment and deploy a pipeline into the GCP cloud using these tools. It also covers the use of Infrastructure as Code (IaC) and continuous integration and continuous delivery (CI/CD) methodologies. By the end of the course, learners will be able to identify the tools used to build a CI/CD pipeline, create and manage applications using CI/CD methodologies, and model their solution using IaC with Google Cloud Build and Google Kubernetes Engine.

This course covers the application of DevOps for the Azure Cloud Platform. Students will learn how to identify tools used to build a continuous integration and continuous delivery (CI/CD) pipeline, build a serverless static web application using the Azure Developer CLI, and create, manage, and deploy applications using CI/CD methodologies. The course will also cover Azure development tools, such as the Azure Developer CLI, AZD Templates, Application Insights, and Bicep files. By the end of the course, students will be able to manage the entire Software Development Life Cycle (SDLC), model their application using Infrastructure as Code (IaC) with Azure, and deploy applications using CI/CD methodologies.

This course covers the application of DevOps in the AWS Cloud Platform, focusing on managing the software development life cycle using the AWS Cloud Development Kit (AWS CDK). Students will learn how to build and deploy Infrastructure as Code into the AWS cloud, utilizing various AWS developer tools such as AWS Cloud9, CodeCommit, CodeBuild, and CodePipeline. The course demonstrates how to create a CI/CD pipeline using AWS CDK, automating the deployment of serverless applications and infrastructure updates. By the end of the course, students will understand how to leverage AWS services to improve the speed of development, security, and reliability of their applications.

This course explores the practical applications of DevOps, a combination of cultural philosophies, tools, and practices that enables improved communication, collaboration, product development, and release acceleration. It describes how DevOps leads to faster innovation and is responsive to business needs, such as improved software quality and more frequent software releases. The course also examines the hardware and software ecosystem required to support DevOps and the value of leveraging Intel hardware features and cloud services. Additionally, it discusses the importance of automation, continuous integration, and continuous delivery in a DevOps environment. By the end of this course, learners will be able to describe the benefits of DevOps, explain the DevOps ecosystem, and articulate the Intel value proposition for DevOps environments.

This course covers the CI/CD pipeline, a crucial aspect of DevOps, and its benefits over traditional delivery processes. Students will learn to differentiate between traditional delivery and CI/CD, identify stages within the CI/CD pipeline, and recognize tools that support various stages. The course also emphasizes the importance of security within the CI/CD pipeline, including best practices such as source code scanning, security testing, and runtime security. Through demonstrations and lessons, students will gain hands-on experience with building and executing a CI/CD pipeline in the Google Cloud Platform, using tools like GitHub, Jenkins, and Docker. By the end of the course, students will be able to create a secure CI/CD pipeline, automate testing and deployment, and ensure the quality and reliability of their applications.

This course introduces the concept of Infrastructure as Code (IaC) and its benefits, including increased deployment speed, reduction in human errors, and cost savings. Students will learn about the different tools and services available for IaC, such as Terraform, AWS CloudFormation, and Azure Resource Manager. The course covers the basics of IaC, including configuration management, server templating, and infrastructure provisioning. It also discusses the security aspects of IaC, including best practices for securing IaC code and preventing common security risks. Through hands-on demonstrations and examples, students will learn how to set up infrastructure in the cloud using IaC tools and how to implement security measures to protect their infrastructure. By the end of the course, students will have a solid understanding of IaC and how to apply it in real-world scenarios.

This course covers the importance of containers in DevOps, explaining their benefits and uses within a DevOps environment. It defines what artifact registries are and their role in the CI/CD pipeline. The course also explores DevSecOps tools and scan tests, including vulnerability scanning, source code analysis, and dependency analysis. Additionally, it discusses the need for scanning and testing tools and their common implementations. By the end of the course, learners will understand the value of containers, artifact registries, and scan/test tools in a DevOps pipeline.

This course covers the basics of source control systems, including the definition, importance, and leading technologies such as Git and GitHub. Students will learn how to practice and apply Git's concepts, examine the commit approach from different team perspectives, and understand the benefits of hosted Git solutions. The course includes demonstrations of basic Git commands, branching with multiple users, and strategies for team collaboration. By the end of the course, students will be able to define what a source control system is, recognize the need for it, and leverage important Git features and concepts.

This course introduces the concept of DevOps, its evolution, and its importance in software development. It covers the components of DevOps, its benefits, and the risks of not implementing it properly. The course also explores the future trends in DevOps, including emerging jobs, improved collaboration, and developer-driven observability. Through case studies and video transcripts, learners will gain a deeper understanding of DevOps and its applications in real-world scenarios. By the end of the course, learners will be able to recognize the importance of DevOps, define its components, and predict its future direction.

This course explores the culture of DevOps, a crucial aspect of implementing DevOps in an organization. It delves into the importance of culture in DevOps, discussing how cultural changes are essential for realizing the goals of DevOps. The course covers the Three Ways of DevOps: system flow, feedback loops, and high-trust teams. It explains how understanding the system's flow, shortening feedback loops, and building high-trust teams are critical to a successful DevOps environment. The course also discusses the characteristics of high-trust teams, including high levels of cooperation, shared responsibilities, and a culture that encourages experimentation and learning from failure. By the end of the course, learners will be able to explain why culture is critical to implementing DevOps, recognize the importance of feedback loops, and identify the characteristics of high-trust teams.

This course introduces students to DevOps, a software engineering practice that aims to align IT resources towards the common goal of deploying and running better solutions. The course covers the foundation of DevOps, including its definition, cultural changes required to adopt it, and the tools used in a DevOps environment such as source control systems, infrastructure as code, containers, artifact registries, and the CI/CD pipeline. The course also explores practical applications of DevOps and how to get started using it with hyperscaler CSPs like AWS, Azure, and GCP. By the end of the course, students will be able to explain what DevOps is, describe its various components, and compare development tools offered by different cloud providers.

This course, Workload Analysis for Cloud Migration or Repatriation, prepares learners to effectively present and support their recommendations for migrating workloads to the cloud or repatriating them to on-premises. The course covers key aspects such as understanding business vision and program charter, identifying key stakeholders, and analyzing the impact of public cloud services on current data, application, and technology architectures. Learners will also learn how to prepare and present their analysis, including creating a preparation checklist, supporting their recommendation with KPIs and program costs, and engaging their audience with a strong presentation. The course emphasizes the importance of considering business needs and objectives when making migration decisions and provides guidance on how to navigate complex and challenging programs. By the end of the course, learners will be able to develop a comprehensive recommendation for workload migration or repatriation and effectively communicate it to business leadership.

This course focuses on developing and analyzing key performance indicators (KPIs) to inform decisions about migrating workloads to or from the cloud. Students will learn how to identify business goals, develop and score KPIs, and interpret data using a weighted scorecard. The course covers topics such as good KPIs, interviewing stakeholders, identifying KPIs, aligning KPIs with solutions, and designing and testing solutions. By the end of the course, students will be able to make informed recommendations about workload placement using KPIs and a weighted scorecard. The course is designed for professionals who need to make decisions about cloud migration and want to learn a structured approach to evaluating options. The course includes video transcripts, knowledge checks, and interactive elements to engage students and reinforce learning.
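
As a rough illustration of the weighted-scorecard approach described above, the short Python sketch below scores two hypothetical placement options against made-up KPIs and weights; every name and number is illustrative rather than course material.

    # Weighted scorecard for workload placement (all KPIs, weights, and scores are hypothetical).
    KPI_WEIGHTS = {"monthly_cost": 0.4, "latency": 0.3, "compliance": 0.2, "ops_effort": 0.1}

    # Scores are normalized 1-5 (higher is better) for each candidate placement.
    candidates = {
        "public_cloud": {"monthly_cost": 3, "latency": 4, "compliance": 3, "ops_effort": 5},
        "on_premises":  {"monthly_cost": 4, "latency": 5, "compliance": 5, "ops_effort": 2},
    }

    def weighted_score(scores):
        # Sum of score * weight; weights are assumed to sum to 1.0.
        return sum(scores[k] * w for k, w in KPI_WEIGHTS.items())

    for name, scores in candidates.items():
        print(f"{name}: {weighted_score(scores):.2f}")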

This course focuses on architecture alignment, which is crucial for supporting an organization's business goals. It covers three primary domains of architecture: data, application, and technology. The course begins with an introduction to architecture alignment, explaining its importance and the three domains. It then delves into each domain, starting with data architecture, which involves documenting an organization's data assets and managing data flow. The application architecture domain is discussed next, emphasizing the importance of understanding how software solutions interact to meet business requirements. Finally, the technology architecture domain is explored, highlighting the need for a technical blueprint to deliver the target architecture. Throughout the course, real-world examples and use cases, such as TeraByte Clothing Company's digital transformation, are used to illustrate key concepts and best practices. By the end of the course, learners will understand how to align these architectures to support their organization's business goals and facilitate a successful cloud migration.

This course focuses on the importance of having a well-defined business vision and architecture strategy before moving workloads to the cloud. It covers key aspects such as workload identification and assessment, application data flow, software licensing issues, dependencies on other applications, and supportability. The course also discusses common strategies and methodologies for successful migration, including the 6 R's of migration (Retain, Replatform, Retire, Repurchase, Rehost, and Re-architect) and the importance of cloud cost and skills assessment. By the end of the course, learners will be able to identify the business vision and key performance indicators driving the decision to move to or from the cloud and follow a process for creating a well-informed business case for migration.

This course provides an introduction to cloud migration workload analysis, covering the structure of the course, architectural frameworks, leadership, and governance. Students will learn how to evaluate the migration of an enterprise workload into the public cloud or its repatriation back on-premises. The course will cover key concepts, practical tools, and basic processes to help enterprises make informed decisions. It will also explore the use of The Open Group Architecture Framework (TOGAF) and other frameworks, as well as the importance of leadership and governance in the migration process. By the end of the course, students will be able to develop a migration recommendation for a hypothetical company, TeraByte Clothing Company, and understand the key factors that impact the success or failure of a project.

This course covers the importance of multi-cloud management tools, including their benefits, areas of management, and considerations for selection. It explores six areas of multi-cloud management: provisioning and orchestration, cost management and resource optimization, cloud migration, backup and disaster recovery, identity, security and compliance, and monitoring and observability. The course also discusses key factors to consider when selecting multi-cloud management tools, such as collaboration, CSP integration, enterprise identity system integration, ease of use, scale, and hosting. Additionally, it covers inventory and classification in multi-cloud environments, including the use of tags and automation. By the end of the course, learners will be able to review the importance of multi-cloud management tools, identify areas of multi-cloud management, and describe considerations for selecting multi-cloud tools.

This course explores ISV multi-cloud strategies, focusing on Infrastructure as Code (IaC) components and multi-cloud strategies from Red Hat and VMware. It discusses the benefits of IaC, such as lower cost, higher speed, and consistent environments. The course also delves into Red Hat's open hybrid cloud strategy, which is rooted in Red Hat Enterprise Linux, Red Hat OpenShift, and Ansible Automation Platform. Additionally, it covers VMware's hybrid and multi-cloud strategy, which provides a consistent infrastructure for applications across all cloud environments. The course highlights the partnership between Intel and these providers, enabling businesses to innovate quickly and maintain performance. By the end of the course, learners will understand how to automate deployment and operation in multiple clouds using IaC and how Intel is the core component for these platforms.

This course explores OEM cloud strategies and management, focusing on Intel's role in OEM infrastructure and the cloud strategies of HPE and Dell. It covers the importance of consistent underlying architecture, the benefits of hybrid and multi-cloud environments, and the various services offered by HPE and Dell, such as HPE GreenLake and Dell APEX. The course also discusses the advantages of using Intel-based clouds, including optimized performance, cost savings, and simplicity. Additionally, it highlights the need for businesses to consider factors such as latency, user experience, data gravity, and security when choosing cloud offerings. By the end of the course, learners will be able to explain Intel's role in OEM infrastructure, HPE's and Dell's cloud strategies and management, and the importance of consistent underlying architecture in cloud deployments.

This course covers Google Cloud Platform's (GCP) hybrid and multi-cloud strategies, including tiered hybrid, partitioned multi-cloud, analytics hybrid multi-cloud, edge hybrid cloud, environment hybrid cloud, business continuity hybrid, and cloud bursting approaches. Students will learn how to evaluate GCP hybrid and multi-cloud solutions, describe the Distributed Cloud Edge architecture, and understand the best practices and advantages of each approach. The course also covers the importance of business continuity in a hybrid multi-cloud scenario and how to implement cloud bursting to reuse existing investments in data centers and private computing environments.

This course covers AWS multi-cloud strategies, including extending AWS services and infrastructure into on-premises and edge locations. It explores the role of VMware Cloud on AWS in extending hybrid strategies and applies AWS multi-cloud strategies to run container-based applications on-premises and in the cloud. The course discusses various AWS services such as AWS Outposts, AWS Local Zones, AWS Wavelength, and AWS Snow Family, as well as Amazon Elastic Kubernetes Services (EKS) Anywhere and Amazon Elastic Container Services (ECS) Anywhere. By the end of the course, learners will be able to identify how AWS services can be extended into on-premises and edge locations, understand the role of VMware Cloud on AWS, and apply AWS multi-cloud strategies to run container-based applications.

This course covers Microsoft Azure Multi-Cloud Strategies, focusing on extending hybrid and multi-cloud environments. It discusses Azure infrastructure solutions such as Azure VNet, Azure VNet Gateway, Azure ExpressRoute, Azure Front Door, and Azure Traffic Manager. The course also explores Azure Arc and VMware solutions on Azure, including Azure Stack, Azure IoT, and Azure Edge Zones. Students will learn about the various Azure Edge options, including Azure Stack HCI, Azure Hybrid Cloud, and Azure Edge Zones, and their use cases. The course aims to provide a comprehensive understanding of Azure's hybrid and multi-cloud capabilities, enabling students to identify and implement the best solutions for their organization's needs.

This course covers containerization strategies for multi-cloud environments, exploring how containers support multi-cloud strategies and the value Intel adds to these strategies. It delves into the benefits and challenges of using containers in hybrid and multi-cloud deployments, including workload portability, governance, and security. The course also discusses best practices for utilizing containers, such as CI/CD workflows and DevSecOps, and examines use cases like Azure Hybrid, Multi-Cloud with Red Hat OpenShift Container Platform, and Multi-Cloud Innovations with VMware. Additionally, it highlights Intel's enablements, including the Network and Cloud Edge Reference System Architectures Portfolio and software components like Node Feature Discovery and Multus. By the end of the course, learners will understand how to apply containerization strategies in multi-cloud environments effectively.

This course focuses on improving Total Cost of Ownership (TCO) in a multi-cloud environment. It explores how workload placement, data gravity, tooling, governance, automation, and people skills impact TCO. The course discusses strategies for optimizing TCO, including creating an architecture cost analysis, selecting cloud-agnostic tools, and developing a single enterprise governance framework. It also examines the importance of considering data gravity, network egress costs, and data access latency when making decisions about workload placement and cloud services. Additionally, the course covers the impact of expanding to multi-cloud on people and skills TCO, and how to optimize skills for multi-cloud environments. By the end of the course, learners will be able to describe how workload placement and data gravity influence TCO, recognize how tooling, governance, and automation choices impact TCO, and understand how to optimize people and their skills for multi-cloud environments.
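
To make the data-gravity point concrete, the sketch below compares two made-up monthly TCO scenarios in which only the egress volume changes; all dollar figures are placeholders, not CSP list prices.

    # Illustrative monthly TCO comparison: compute placed near vs. far from its data.
    # All dollar figures are placeholders, not actual cloud prices.
    def monthly_cost(compute, storage_gb, storage_per_gb, egress_gb, egress_per_gb):
        return compute + storage_gb * storage_per_gb + egress_gb * egress_per_gb

    near_data = monthly_cost(compute=900, storage_gb=5000, storage_per_gb=0.02,
                             egress_gb=200, egress_per_gb=0.09)
    far_from_data = monthly_cost(compute=700, storage_gb=5000, storage_per_gb=0.02,
                                 egress_gb=20000, egress_per_gb=0.09)
    print(f"near data: ${near_data:,.2f}/mo   far from data: ${far_from_data:,.2f}/mo")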

This course covers cloud workload placement strategies, including business and technical considerations. Students will learn about the factors that drive workload placement decisions, such as executive-level mandates, cost, capabilities, and controls. The course also explores the 3 C's (cost, capabilities, and controls) and how they define business needs. Additionally, students will learn about technical considerations, including compute, network, storage, and licensing costs, as well as capabilities and controls. The course also touches on emerging technologies and the importance of upskilling talent pools. By the end of the course, students will be able to explain the business and technical considerations for workload placement and make informed decisions about where to place workloads.

This course explores the importance of multi-cloud and hybrid cloud, covering the drivers of hybrid and multi-cloud, market dynamics and trends, and best practices for implementation. It discusses the fundamentals of hybrid and multi-cloud, workload placement, total cost of ownership, and multi-cloud offerings from various vendors. The course also delves into the challenges of digital transformation, workload modernization, and cloud migration, and provides guidance on developing a cohesive cloud strategy and operating model. Additionally, it covers the potential barriers to cloud implementation and the importance of continuous improvement. By the end of the course, learners will understand the significance of multi-cloud and hybrid cloud and be able to apply a structured approach to respond to business challenges and objectives.

This course covers the use of Intel VTune Profiler for profiling production Java workloads in the cloud. It provides an overview of VTune Profiler, its capabilities, and how to configure a cloud instance for profiling. The course also covers the setup of a Spark workload and how to attach to a Java service to get a collection. Additionally, it discusses the different ways to profile a remote target, including using the VTune web server capability, running the VTune GUI on a local system, and installing VTune on the target instance. The course also explores the configuration of a cloud instance for Java-based sampling and the use of the VTune Profiler server. By the end of the course, learners will be able to profile a production Java workload in the cloud using Intel VTune Profiler and understand how to configure their cloud instance for optimal performance.

This course provides an overview of PerfSpect, a telemetry tool based on Linux perf, and its application in characterizing workloads and detecting performance anomalies. The course covers the challenges that PerfSpect solves, its support for 3rd Gen Intel Xeon Scalable processors, and new features such as workload similarity analysis. Students will learn how to apply PerfSpect to compare and analyze real-world benchmarks, and how to use its output to optimize application performance. The course also covers the architecture of PerfSpect, its key features, and its validation process. Additionally, the course provides real-world use cases where PerfSpect is invaluable, such as identifying memory bottlenecks, debugging application performance, and detecting performance anomalies.

This course covers the performance methodology in the cloud, focusing on performance characterization for full-stack workload profiling. It introduces the PerfSpect tool and discusses the importance of characterization in identifying performance issues. The course also covers profiling, tracing, and the use of tools such as perf and flame graphs to analyze performance data. Additionally, it highlights the need for widespread observability from silicon to applications and the importance of a disciplined approach to performance analysis and optimization.

This course covers the map and zoom methodology for full-stack workload profiling in the cloud. The methodology consists of four parts: platform and infrastructure health check, map, zoom, and corrective action. The course explains how to use the map and zoom methodology to optimize performance by identifying latency, communication activity, and utilization hotspots. It also discusses the importance of a platform and infrastructure health check to ensure that the hardware and software are properly configured. The course includes examples of how to use tools such as the Intel Memory Latency Checker to analyze memory bandwidth and latency. By the end of the course, students will be able to describe the four parts of the map and zoom methodology and apply it to optimize performance in the cloud.

This course covers the methodology for full-stack workload profiling in the cloud, including the use of performance characterization and flame graphs to resolve real-world problems. The course uses the example of FlowGo, an open-source microservice-based blockchain application, to demonstrate how to identify and resolve performance issues. Students will learn how to use tools such as tcplife and PMU counters to trace TCP communications and analyze performance data. The course also covers the concept of CPI (cycles per instruction) and how it can be used to measure performance. By the end of the course, students will be able to apply the full-stack workload profiling methodology to resolve performance issues in their own applications.
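
As a minimal sketch of the CPI idea, the snippet below derives cycles per instruction from Linux perf stat counters; it assumes perf is installed, the user is permitted to count events, and the CSV field layout of the installed perf version matches the parsing shown.

    # Derive CPI (cycles per instruction) from `perf stat` CSV output (Linux only).
    import subprocess

    cmd = ["perf", "stat", "-x,", "-e", "cycles,instructions", "--", "sleep", "1"]
    out = subprocess.run(cmd, capture_output=True, text=True).stderr  # perf stat writes to stderr

    counts = {}
    for line in out.splitlines():
        fields = line.split(",")
        # Expected layout: value, unit, event name, ...; skips "<not counted>" and similar.
        if len(fields) > 2 and fields[0].replace(".", "", 1).isdigit():
            counts[fields[2]] = float(fields[0])

    if "cycles" in counts and "instructions" in counts:
        print(f"CPI = {counts['cycles'] / counts['instructions']:.2f}")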

This course is designed for cloud solutions architects who want to develop a deeper understanding of performance optimization. The course consists of multiple lessons that cover the methods and best practices of performance optimization, including tools to profile and analyze performance bottlenecks in applications. The course focuses on a holistic performance approach when tuning software specifically for cloud environments and covers topics such as full-stack workload profiling, performance methodology, and the use of tools like PerfSpect and VTune. By the end of the course, students will be able to understand the methods and best practices of performance optimization, implement a holistic performance approach, and use various tools to analyze and optimize application performance in the cloud.

This course covers emerging network security protocols, including TLS, IPsec, QUIC, and WireGuard. It explains the application of these protocols in networks, their limitations, and the key features of each. The course also discusses the impact of these protocols on Secure Access Service Edge (SASE) and how they are used to provide encryption and authentication in networks. Students will learn about the evolution of QUIC and WireGuard, their advantages over traditional protocols, and how they are being adopted in various applications and network infrastructure. The course provides a comprehensive overview of the current state of network security protocols and their role in providing secure communication over networks.
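
As a small hands-on companion to the TLS material, the Python sketch below uses the standard ssl module to report the protocol version and cipher negotiated with a server; the hostname is an arbitrary example.

    # Report the TLS version and cipher suite negotiated with a server.
    import socket, ssl

    hostname = "www.intel.com"  # arbitrary example host
    context = ssl.create_default_context()
    with socket.create_connection((hostname, 443), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            print("protocol:", tls.version())   # e.g. 'TLSv1.3'
            print("cipher:  ", tls.cipher())    # (name, protocol, secret bits)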

This course covers the SASE Point of Presence (POP) reference architecture for vendor POPs. It describes the use case for a SASE POP, creating an Intel-centric POP architecture, and understanding how key Intel components fit together to enable a SASE POP. The course explores the design of a POP rack architecture, including performance objectives, platform performance requirements, data center constraints, and rack configuration. It also discusses the Intel components that make up the SASE POP, including the Intel Xeon Scalable Processors, Ethernet connectivity, and open-source software offerings. Additionally, the course covers base deployment models, extensions, and software contributions. By the end of the course, learners will be able to describe the use cases for a SASE POP, create an Intel-centric POP architecture, and understand how Intel components enable a SASE POP.

This course covers the concept of Zero Trust and its evolution in replacing older security solutions. It describes the challenges of Zero Trust implementation and identifies key Intel technologies that support Zero Trust implementation, such as Intel SGX and WireGuard Acceleration. The course also covers the design and implementation of an Intel Zero Trust reference architecture, including user authentication, service authorization, and secure network tunnels. Additionally, it discusses the system workflow for Zero Trust solutions and the role of the controller in enforcing policy rules. The course provides a comprehensive overview of Zero Trust and its applications, as well as the benefits of using Intel-optimized Zero Trust solutions.

This course covers the adoption of network security AI in Secure Access Service Edge (SASE) to prevent cyberattacks. It discusses the challenges of traditional methods in detecting cyberattacks and how AI can help. The course also explores the role of deep learning in preventing cyberattacks and introduces Intel's solutions, including Intel oneDNN and Intel Neural Compressor, to improve deep learning performance. Additionally, it covers the importance of major neural networks and Intel's offerings in detail. The course provides a comprehensive understanding of how to adopt network security AI in SASE without sacrificing performance or increasing total cost of ownership.

This course covers the topic of Optimized Software-Defined WAN Solutions for the Edge. It discusses the importance of SASE and its application in the cloud and enterprise, Zero Trust, key management, and next-generation networking protocols. The course also explains the difference between traditional WAN and SD-WAN, defines SD-WAN building blocks, and designs a uCPE solution based on performance requirements. Additionally, it identifies the essential ingredients to architect an SD-WAN solution for optimal performance and discusses the transformation of uCPE solutions to cloud-native. The course is designed for those involved in edge networking and security solutions, including solutions architects, developers, and technical experts.

The course Run Intel Tools in the Cloud: Intel AMX & Intel AVX-512 Demonstration is designed to help users understand how to take advantage of hardware optimizations to get optimal AI model performance. The course focuses on the performance difference with and without Intel AVX-512 and evaluates the additional gains from Intel AVX-512 with VNNI and from Intel AMX. Through a demonstration, users will learn how to benchmark AI models using different instruction sets and data types, including FP32 and INT8. The course covers the use of Intel Model Zoo, a GitHub repository that provides optimized deep learning models, and shows how to use NUMA control to bind workloads to physical cores. By the end of the course, users will be able to set a baseline on a system, enable higher instruction sets, and compare performance to show the additional performance gained with those instruction sets.
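
The demonstration itself uses Intel Model Zoo scripts; as a rough standalone sketch of the same idea, the snippet below reruns a placeholder benchmark command with the oneDNN instruction-set cap varied and the process pinned to one NUMA node. The ONEDNN_MAX_CPU_ISA values and the benchmark command are assumptions to verify against the oneDNN documentation for your framework version, and numactl must be installed.

    # Rerun the same (placeholder) benchmark with different oneDNN ISA caps,
    # pinned to NUMA node 0. ISA values are assumptions; check the oneDNN docs.
    import os, subprocess

    BENCH = ["python", "run_inference.py"]  # placeholder benchmark script
    for isa in ["AVX512_CORE", "AVX512_CORE_VNNI", "AVX512_CORE_AMX"]:
        env = dict(os.environ, ONEDNN_MAX_CPU_ISA=isa)
        print(f"--- {isa} ---")
        subprocess.run(["numactl", "--cpunodebind=0", "--membind=0", *BENCH],
                       env=env, check=False)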

This course introduces the concept of federated learning, a technique that enables the training of AI models on distributed data without compromising data privacy. The course covers the basics of federated learning, its applications, and the benefits it offers. It also explores the challenges associated with implementing federated learning and how Intel's OpenFL tool can help address these challenges. The course includes real-world examples of federated learning in action, such as its use in medical research and financial fraud detection. By the end of the course, learners will understand how federated learning can be used to add value to AI applications while respecting data privacy and security regulations.
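
To make the core idea tangible, here is a toy federated-averaging round in plain NumPy; it deliberately does not use the OpenFL API, and the "model" is just a weight vector.

    # Toy federated averaging: each site trains locally, only weights leave the site.
    import numpy as np

    rng = np.random.default_rng(0)
    global_weights = np.zeros(4)

    def local_update(weights, site_data_size):
        # Stand-in for local training: each site nudges the weights differently.
        return weights + rng.normal(0, 0.1, size=weights.shape), site_data_size

    updates = [local_update(global_weights, n) for n in (1200, 300, 800)]  # three sites
    total = sum(n for _, n in updates)
    # Aggregate by data-size-weighted average; raw data never leaves the sites.
    global_weights = sum(w * (n / total) for w, n in updates)
    print(global_weights)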

This course covers the basics of distributed AI in the cloud, focusing on deep learning training and parallelism. It explores the different types of distributed training models and topologies, including data parallelism and model parallelism. The course also discusses the challenges and communication overhead associated with deep learning, highlighting the importance of considering compute, memory, and communication. Additionally, it introduces Intel's Habana Gaudi platform, an ASIC-based platform designed for deep learning training with flexible topologies. The course also compares CPU, GPU, and XPU architectures for model parallelism, discussing their strengths and weaknesses. By the end of the course, learners will understand the fundamentals of deep learning and parallelism, as well as the importance of considering multiple factors when designing distributed AI systems.

This course covers the basics of distributed AI in the cloud, focusing on deep learning and parallelism. It introduces the concept of distributed training models and topologies, including data parallelism and model parallelism. The course also explores the challenges and communication overhead associated with deep learning, highlighting the importance of considering compute, memory, and communication. Additionally, it discusses Intel's Habana Gaudi platform and its flexible topologies, as well as the comparison between CPU, GPU, and XPU on model parallelism. By the end of the course, learners will understand the fundamentals of deep learning and parallelism, the importance of network topology, and the benefits and challenges of using different types of parallelism.

The course provides an overview of the Intel Gaudi AI accelerator, a purpose-built deep learning acceleration processor for both deep learning training and inference at scale. The course covers the hardware and software architecture of the Intel Gaudi AI accelerator, including its matrix multiplication engine, Tensor processing cores, and 96 gigabytes of onboard HBM2E memory. It also discusses the SynapseAI software stack, which is designed for performance and ease of use, and supports PyTorch and TensorFlow models. The course explains how to migrate models from GPUs to the Intel Gaudi AI accelerator using the GPU Migration Toolkit and how to run optimized generative AI and large language models on the Intel Gaudi 2. Additionally, it covers rack-level integration for the Intel Gaudi 2 AI accelerator, including the reference server, connectivity, and administrative tools needed to manage the platform.

This course provides an overview of the key AI services and platforms offered by the three largest cloud service providers: Amazon Web Services, Microsoft Azure, and Google Cloud. Students will learn about the major categories of AI services and tools, including turnkey services, AI tools and platform services. The course covers the importance of optimized software stacks and images, as well as the impact of careful hardware or instance selection. By the end of the course, students will be able to identify the major categories of AI services and tools, understand the importance of optimized software stacks and images, and recognize the impact of careful hardware or instance selection.

This course introduces OpenVINO, an open-source toolkit for optimizing and deploying AI inference. OpenVINO enables users to convert, optimize, and deploy AI models across Intel and third-party hardware without restrictions. The course covers the three-step process of model, optimize, and deploy, and explores the tools available in OpenVINO, including the Neural Network Compression Framework (NNCF) and OpenVINO Model Server (OVMS). Students will learn how to use OpenVINO to accelerate inference, reduce footprint, and optimize hardware utilization while maintaining accuracy. The course also discusses the benefits of OpenVINO, including performance, usability, and versatility, and provides examples of how OpenVINO is used across various industries, such as healthcare, retail, and finance.
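
For readers who want a feel for the deploy step, here is a minimal OpenVINO inference sketch using the Python API; the model path and input shape are placeholders rather than artifacts from the course.

    # Minimal OpenVINO inference sketch; substitute a real IR/ONNX model and input shape.
    import numpy as np
    from openvino.runtime import Core

    core = Core()
    model = core.read_model("model.xml")         # placeholder model path
    compiled = core.compile_model(model, "CPU")  # deploy on an Intel CPU device

    dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)  # assumed shape
    results = compiled([dummy_input])
    print(results[compiled.output(0)].shape)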

This course covers the concept of running AI end-to-end in the cloud, focusing on understanding what constitutes an end-to-end AI pipeline, the importance of looking at the problem/performance holistically, and applying end-to-end AI optimization strategies. The course explores the various phases of an AI pipeline, including data collection, data ingestion, feature engineering, model training, and deployment. It also discusses the importance of optimizing AI workflows, including data+AI software acceleration, system-level tuning, runtime parameter optimizations, workload scaling, and learning optimizations. The course provides real-world examples of optimization strategies and techniques, including model quantization, pruning, and knowledge distillation, and demonstrates how to achieve significant performance improvements using optimized software libraries and frameworks. By the end of the course, learners will be able to understand the end-to-end AI pipeline, apply optimization strategies, and improve the efficiency and performance of AI applications in the cloud.
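
As a back-of-the-envelope illustration of the model quantization the course mentions, the NumPy sketch below maps FP32 values to INT8 with a scale and zero point, then dequantizes and measures the error; it shows the arithmetic only, not any specific toolkit.

    # Post-training INT8 quantization arithmetic: quantize, dequantize, measure error.
    import numpy as np

    x = np.random.randn(1000).astype(np.float32)
    scale = (x.max() - x.min()) / 255.0
    zero_point = np.round(-x.min() / scale) - 128          # map x.min() to -128

    q = np.clip(np.round(x / scale + zero_point), -128, 127).astype(np.int8)
    x_hat = (q.astype(np.float32) - zero_point) * scale     # dequantize
    print("max abs error:", np.abs(x - x_hat).max(), "(storage is 4x smaller than FP32)")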

This course, Running AI End-to-End in the Cloud, covers the importance of optimizing AI pipelines for better performance and efficiency. It introduces the concept of an end-to-end AI pipeline, which includes data collection, data ingestion, feature engineering, model training, and deployment. The course highlights the need to look at AI problems holistically, rather than focusing on individual components. It also discusses various optimization strategies, including AI software acceleration, system-level tuning, runtime parameter optimizations, workload scaling, and learning optimizations. The course provides real-world examples of AI workflows and demonstrates how to apply these optimization strategies to achieve significant performance boosts. By the end of the course, learners will be able to understand what constitutes an end-to-end AI pipeline, comprehend the importance of holistic optimization, and apply various optimization strategies to achieve efficient AI performance.

This course covers the process of choosing the best public instance for AI workloads on the cloud. It explores the different types of services available for public cloud instances, including Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). The course delves into the nuances of each service, including the differences between instances with and without VNNI, and how hardware considerations, data considerations, model development stages, and optimized software can affect instance selection. Students will learn how to identify AI workload requirements, including hardware, data, and software considerations, and how to choose the right instance size and type for their specific needs. The course also covers methods for deploying AI workloads, including the use of pre-optimized containers and the importance of selecting the right image for Intel processors.

This course is designed for cloud solutions architects who want to gain a deeper understanding of Artificial Intelligence (AI) in the cloud. The course covers the basics of AI in the cloud, including the different classes of training and inference, and the distinct price/performance tradeoffs. Students will learn about the four main types of AI inference, including server application inference, client application inference, batch or streaming inference, and edge inference. The course also explores how Intel hardware can make a difference in AI workloads, including the use of Intel Xeon scalable processors and Intel accelerators. Through a combination of video lessons, demos, and hands-on labs, students will gain practical experience with AI pipeline, benchmarking, instance selection, and federated learning. By the end of the course, students will be able to identify which parts of the AI pipeline run best in the cloud, benchmark instances for best AI performance, and run AI workloads in the cloud using Intel tools and optimizations.

This course introduces learners to Intel Software Guard Extensions (Intel SGX) in a container, specifically focusing on Azure confidential computing on AKS. It covers how Gramine and Gramine Shielded Containers allow unmodified applications to run inside an enclave and how to set up Gramine secret provisioning. The course includes a lab where learners practice creating server and client images, deploying them in an AKS confidential compute cluster, and checking SGX quote generation and verification. The course is designed for learners with a basic understanding of Azure confidential computing, Kubernetes, Gramine, and Gramine Shielded Containers. By the end of the course, learners will be able to explain Azure confidential computing on AKS, demonstrate how Gramine supports unmodified applications, and practice setting up Gramine secret provisioning.

This course covers advanced networking concepts for containers, including Multiple Network Interfaces (Multus), SR-IOV, and DPDK in container-based network functions. It also explores Intel's optimized service mesh for container networking and how to implement a secure container network with an ISV Firewall. The course delves into the benefits and usage of Multus, including network segregation for functional and non-functional purposes, and link aggregation/bonding for network interface redundancy. Additionally, it discusses the UserSpace CNI plugin and its role in implementing UserSpace networking. The course also covers the Zero Trust service mesh model and how Intel technology enables crypto acceleration through Intel Advanced Vector Extensions 512 and Intel QuickAssist Technology. By the end of the course, learners will be able to relate how Multus is deployed, demonstrate how to use SR-IOV and DPDK in container-based network functions, and express the benefits of using Intel's optimized service mesh for container networking.

This course covers the concept of containerized applications on the Edge, explaining why emerging Edge applications need containerization, how containerized Edge applications work, and exploring Intel solutions for Edge containerization. The course delves into the architecture for Edge, including examples and requirements, and discusses the benefits of using containerized architecture for Edge compute solutions. It also introduces Intel enablements such as Intel Edge Insights for Industrial and Intel Edge Controls for Industrial, which provide a modular software for industrial Edge inference IoT use cases and a versatile, interoperable platform for industrial control, respectively. By the end of the course, learners will understand the importance of containerization for Edge applications, the requirements for running containerized applications on Edge compute, and the Intel solutions available for Edge containerization.

This course covers the concept of Ingress for microservices, including its definition, purpose, and implementation. It introduces Envoy as an Ingress API controller and explores its key features, such as dynamic configuration, filters, and observability. The course also discusses Intel's contributions to Envoy, including performance improvements, security enhancements, and compression acceleration. By the end of the course, learners will understand how to use Ingress in practice, including configuring an Ingress API and deploying an Ingress controller. The course also touches on the relationship between Ingress and service mesh, and how Ingress is used in conjunction with service mesh implementations. Overall, the course provides a comprehensive overview of Ingress for microservices and its role in cloud-native applications.

This course covers the security challenges and privacy concerns associated with public cloud container deployment. It explores how to secure data and secrets in multi-cloud environments using Intel SGX containers and the hybrid trust model for container deployment. The course also delves into confidential container services and solutions offered by public cloud and open-source projects, including Azure Enclave Aware Container, Inclavare Containers, and SCONE. Students will learn about the benefits and challenges of public cloud container deployment, how to secure sensitive data in a cloud environment, and how to use Intel SGX technology to create isolated enclave environments. The course also covers the use of Occlum, a memory-safe, multi-process library OS for Intel SGX, and the Enclave Attestation Architecture (EAA) for remote attestation. By the end of the course, students will have a comprehensive understanding of how to secure containers for privacy in a multi-cloud environment.

This course provides knowledge and tactical feedback on Intel container technologies throughout multiple markets and pipelines. It covers container security, isolation, and privacy, as well as Ingress, applications at the edge, and advanced networking functions. The course includes lessons, demos, and hands-on labs to make students familiar with performance methods, tools, and workloads in the cloud. It also explores Intel technologies for containers, including Intel Kata Containers and Intel Software Guard Extensions (Intel SGX). By the end of the course, students will be able to explain key container fundamentals, demonstrate container ingress details, and examine Intel technologies for containers.

This course provides a demonstration of Azure File Sync, a service that enables users to extend their on-premises file server storage capacity into the cloud. The course covers the evaluation of server compatibility, deployment, testing, and troubleshooting of Azure File Sync. It also includes a step-by-step guide on how to install the Azure File Sync agent, register the server, and create a sync group. The course is designed to provide a simple example of how to leverage Azure File Sync to extend an HCI environment into the cloud, and it highlights the benefits of using a hybrid cloud approach for storage. By the end of the course, users will be able to evaluate their server's compatibility with Azure File Sync, deploy the service, test and troubleshoot it, and understand how to use it to extend their on-premises storage capacity into the cloud.

This course explores the future of cloud storage, focusing on the technological advancements and innovations driving its evolution. It delves into the impact of AWS Nitro on cloud computing, Intel's role in shaping the future of cloud computing, and the significance of the Compute Express Link (CXL) standard. The course discusses how these developments will transform the way data is processed, stored, and managed in the cloud, enabling more efficient, scalable, and secure cloud infrastructure. By the end of the course, learners will understand the key benefits of AWS Nitro, Intel's offload capability, and the CXL standard, as well as their roles in revolutionizing cloud storage.

This course covers the evolution of modern hyper-converged infrastructure (HCI) and its role in driving more services into the ecosystem. It explains the components, key features, and benefits of HCI, as well as Intel's affinity for HCI options. The course also explores three use cases for HCI in a multi-cloud environment and discusses the transformation of on-premises infrastructures. Additionally, it delves into hybrid cloud infrastructure business solutions, including Azure Stack HCI and Nutanix, and highlights the value Intel brings to these solutions. By the end of the course, learners will be able to explain the concepts and benefits of HCI, identify its use cases, and understand the role of Intel in the HCI market.

This course covers the economics of cloud storage, including the three rules of storage: capacity, availability, and performance. It also explores the concept of data temperature, which categorizes data into hot, warm, and cold classifications. The course discusses technical value vectors such as latency, IOPS, and throughput, and how they impact storage decision-making. Students will learn about the tradeoffs between different storage solutions and how to choose the most cost-effective and user-centered solutions. The course also covers the evolution of object storage and its role in cloud storage, as well as the importance of considering workloads and data temperature when designing storage architectures.
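
As a quick illustration of how data temperature feeds the economics, the sketch below compares keeping a data set entirely in a hot tier against tiering most of it to a cold tier with occasional retrievals; all prices are placeholders.

    # Hot-only vs. tiered storage cost, with made-up $/GB-month and retrieval prices.
    hot_per_gb, cold_per_gb, retrieval_per_gb = 0.020, 0.004, 0.01
    total_gb, cold_fraction, monthly_retrieval_gb = 50_000, 0.8, 500

    all_hot = total_gb * hot_per_gb
    tiered = (total_gb * (1 - cold_fraction) * hot_per_gb
              + total_gb * cold_fraction * cold_per_gb
              + monthly_retrieval_gb * retrieval_per_gb)
    print(f"all hot: ${all_hot:,.0f}/mo   tiered: ${tiered:,.0f}/mo")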

This course covers the key features and benefits of using MinIO for private, public, and hybrid implementations as part of a well-architected solution. It compares and contrasts MinIO with other contenders, such as AWS and Ceph storage, and explains how the MinIO high-performance storage solution works. The course also describes typical deployment models for MinIO clusters and walks through a real-world use case for implementing a private cloud with MinIO, factoring in requirements for fault tolerance, storage sizing, performance, scalability, and securing access to the data store. By the end of the course, learners will be able to explain the key features and benefits of MinIO and apply it to real-world scenarios.

This course covers the key features and benefits of using MinIO for private, public, and hybrid implementations as part of a well-architected solution. It compares and contrasts MinIO with other contenders in the space, such as AWS and Ceph storage. The course explains how the MinIO high-performance storage solution works and describes typical deployment models for MinIO clusters. It also covers the process of deploying MinIO, including minimum viable small-scale deployments, eight-node clusters, and multi-site deployments. The course provides an in-depth look at the considerations for deploying MinIO clusters, including fault tolerance, storage sizing and performance requirements, scalability constraints, and security. By the end of the course, students will be able to explain the key features and benefits of MinIO, compare and contrast it with other object storage solutions, and describe typical deployment models for MinIO clusters.
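
For a sense of what client access to a MinIO cluster looks like, here is a minimal sketch using the minio Python SDK against the S3-compatible API; the endpoint, credentials, bucket, and file names are placeholders.

    # Connect to a MinIO cluster, create a bucket if needed, upload and list objects.
    from minio import Minio

    client = Minio("minio.example.internal:9000",           # placeholder endpoint
                   access_key="ACCESS_KEY", secret_key="SECRET_KEY", secure=True)

    bucket = "telemetry-archive"
    if not client.bucket_exists(bucket):
        client.make_bucket(bucket)
    client.fput_object(bucket, "2024/run-001.parquet", "run-001.parquet")  # local file must exist
    print([obj.object_name for obj in client.list_objects(bucket, recursive=True)])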

This course provides a deep dive into object storage, covering its fundamentals, protocols, and APIs. Students will learn about the differences between block, file, and object storage, and how object storage is useful and scalable. The course also explores traditional object workloads, emerging drivers, and new use cases, including the use of object storage in cloud-native applications and microservices. Additionally, the course covers the Simple Storage Service (S3) and its extensions, such as S3A and S3 Select, and how they are used in various scenarios. By the end of the course, students will have a thorough understanding of object storage and its applications in modern cloud-based systems.
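
To show what an S3 Select call looks like in practice, the sketch below pushes a SQL filter down to the object store with boto3 so only matching rows are returned; the bucket, key, and column names are placeholders.

    # S3 Select: filter a CSV object server-side and stream back only matching rows.
    import boto3

    s3 = boto3.client("s3")
    resp = s3.select_object_content(
        Bucket="example-bucket",
        Key="logs/requests.csv",
        ExpressionType="SQL",
        Expression="SELECT s.path, s.latency_ms FROM s3object s "
                   "WHERE CAST(s.latency_ms AS INT) > 500",
        InputSerialization={"CSV": {"FileHeaderInfo": "USE"}},
        OutputSerialization={"CSV": {}},
    )
    for event in resp["Payload"]:
        if "Records" in event:
            print(event["Records"]["Payload"].decode(), end="")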

This course covers the concepts of cloud storage, including the evolution of block, file, and object storage. It explains how local vs remote storage works in the cloud and the importance of core storage features. The course also discusses the taxonomy of cloud storage, data storage types, and storage-oriented applications. Additionally, it covers containers for storage, storage architecture features and accelerators, and future trends in storage infrastructure. By the end of the course, students will understand the fundamentals of cloud storage and its applications.

This course on cloud storage is designed to help cloud solutions architects understand the underlying technologies of cloud storage, critical in designing and developing demanding complex solutions. The course covers key concepts, trends, and architectures related to cloud storage services, including public cloud and hybrid cloud solutions. It also explores the increasing need to support advances in high-performance computing, artificial intelligence, and machine learning. Through hands-on labs and additional resources, participants will gain practical experience and develop their knowledge base with cloud-centric storage products. The course aims to enable participants to excel as cloud solutions architects and understand the immense potential of the cloud.

This course summarizes Intel's position on cloud monitoring, specifically focusing on cloud telemetry based on hardware-level PMUs. It explores when to use these tools, how they differ from native tools provided by Cloud Service Providers (CSPs), and compares PMUs exposed by Intel CPUs to those exposed by non-Intel architectures. The course also covers how telemetry metrics drive workload and infrastructure efficiency and ways to instrument these telemetry tools in the cloud. By the end of the course, learners will understand Intel's perspective on cloud telemetry and its application in optimizing cloud environments.

This course covers Intel's position on cloud monitoring, focusing on the use of cloud telemetry tools, Intel Performance Monitoring Units (PMUs), and their differentiation from others. It explores how telemetry metrics drive workload and infrastructure efficiency, the availability of PMU-based telemetry across major cloud service providers, and the differences between on-premises and cloud PMU-based telemetry usage. The course also delves into selecting the right telemetry tools, integrating them with DevOps processes, and using Intel tools such as VTune Profiler and PerfSpect for performance optimization and analysis. By the end of the course, learners will be able to recognize when to use cloud telemetry tools, differentiate between various PMUs, predict how telemetry metrics can drive efficiency, and integrate telemetry tools with common DevOps processes.

This course covers the Data Pipeline for Telemetry, focusing on the key requirements driving data platform design, comparing data ingestion pipelines, and explaining how end users can access data to provide value for their projects. The course delves into the data platform architecture, including the data lake, data mart, and user sections, and discusses the importance of a unified data model. It also explores the various ingestion pipelines, such as batch and real-time pipelines, and the tools available for accessing data, including SQL and Jupyter. Through a demonstration, the course shows how to submit data sets to the data platform and use the available analytics tools to derive insight. By the end of the course, learners will be able to describe the key requirements for the data platform, compare the data ingestion pipelines, and explain how to access data to provide value for their projects.

This course covers the concept of observability with microservices, focusing on OpenTelemetry (OTel), an open-source observability framework. Students will learn about the importance of telemetry, its types, and its role in observability. The course delves into OTel architecture, its capabilities, and how to use them for debugging modern applications via distributed tracing and metric correlation. It also explores system root cause analysis and system monitoring. Through demonstrations and hands-on activities, students will understand how to apply OTel in real-world scenarios, including instrumenting microservices, collecting and processing telemetry data, and visualizing traces with tools like Jaeger. By the end of the course, students will be able to apply the context of telemetry to observability, recognize the importance of OTel, and demonstrate how to debug and monitor modern applications effectively.
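
As a minimal illustration of the instrumentation the course demonstrates, the Python sketch below creates two nested spans with the OpenTelemetry SDK and exports them to the console; the service, span, and attribute names are invented, and a real deployment would export to a collector and a backend such as Jaeger.

    # Create nested OTel spans and print them with the console exporter.
    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

    provider = TracerProvider()
    provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
    trace.set_tracer_provider(provider)

    tracer = trace.get_tracer("checkout-service")            # invented service name
    with tracer.start_as_current_span("handle_order"):
        with tracer.start_as_current_span("charge_card") as span:
            span.set_attribute("order.total", 42.50)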

This course covers the topics of power telemetry, sustainability, and power capping in data centers. It discusses Intel's role in data center sustainability, how to use Intel technology and software to create a sustainable ecosystem, and how to implement sustainability practices that meet tomorrow's carbon emissions standards. The course also explores key features and telemetry that allow for granular control, such as using ipmitool and Intel Data Center Manager (DCM) to monitor and manage power consumption, thermal output, and performance. Additionally, the course provides demonstrations of how to use these tools to optimize data center efficiency and reduce carbon emissions.
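
For a flavor of the granular power telemetry involved, here is a hedged sketch that wraps the standard `ipmitool dcmi power reading` command and flags a node that exceeds a power budget. It assumes ipmitool is installed, the local BMC supports DCMI power readings, and the output format shown; the budget value is hypothetical, and Intel DCM itself is managed through its own console and APIs, not shown here.

```python
# Sketch: read instantaneous node power via ipmitool's DCMI interface and flag
# hosts over a power budget. Assumes ipmitool is installed and the BMC supports
# DCMI power readings; the exact output format can vary by platform.
import subprocess

POWER_BUDGET_WATTS = 350  # hypothetical per-node cap

out = subprocess.run(
    ["ipmitool", "dcmi", "power", "reading"],
    capture_output=True, text=True, check=True,
).stdout

for line in out.splitlines():
    if "Instantaneous power reading" in line:
        watts = float(line.split(":")[1].strip().split()[0])
        status = "over budget" if watts > POWER_BUDGET_WATTS else "ok"
        print(f"node power: {watts:.0f} W ({status})")
```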

This course covers advanced telemetry use cases, including the function of performance monitoring units (PMUs) and how they drive decision-making. It also explores how Intel's most advanced customers are using telemetry in their production environments and how to replicate these use cases in your own environment. The course discusses the importance of perfmon-metrics, a hardware feature available on Intel Xeon Scalable processors, and how it is used to measure software performance and profile workloads. Additionally, the course delves into Intel's collaboration with Google to create improved toolsets and the development of open-source perfmon-metrics for tooling. The course concludes with a demonstration of how to use the perf metric files released in Intel's perfmon open-source repository.
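
To make the PMU idea concrete, here is a small sketch that derives one basic metric, instructions per cycle, by wrapping Linux `perf stat`, which reads the same hardware counters the perfmon metric files describe. It assumes perf is installed and that the instance exposes hardware counters (event names and availability vary by platform and cloud instance type).

```python
# Sketch: derive a basic PMU metric (instructions per cycle) by wrapping
# `perf stat`. Assumes Linux perf is installed and counters are exposed;
# `perf stat` writes its counts to stderr, here in CSV form via -x.
import subprocess

cmd = ["perf", "stat", "-x", ",", "-e", "cycles,instructions", "--", "sleep", "1"]
result = subprocess.run(cmd, capture_output=True, text=True)

counts = {}
for line in result.stderr.splitlines():
    fields = line.split(",")
    # Field 0 is the count, field 2 the event name; skip "<not supported>" rows.
    if len(fields) >= 3 and fields[0].strip().isdigit():
        counts[fields[2].strip()] = int(fields[0])

if {"cycles", "instructions"} <= counts.keys():
    ipc = counts["instructions"] / counts["cycles"]
    print(f"IPC over 1 s: {ipc:.2f}")
else:
    print("PMU counters unavailable on this instance")
```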

This course provides an overview of global cybersecurity regulations and policy developments with a focus on the United States, Europe, and Asia. Led by Dr. Amit Elazari, it explores foundational cyber law principles, national security directives, product security regulations, and sector-specific laws. It emphasizes the evolution of cloud security standards, Zero Trust, supply chain transparency, and confidential computing as essential strategies for compliance and resilience.

This course offers a comparative exploration of key confidential computing technologies across leading cloud service providers—AWS, Azure, and Google Cloud. It covers AMD SEV, AWS Nitro Enclaves, and NVIDIA Confidential GPUs, and contrasts these with Intel's advanced security technologies such as Intel SGX, TME-MK, and TDX. The course highlights deployment options, architectural differences, and hardware-enforced isolation strategies used to secure sensitive data in cloud-native workloads.

This course provides a comprehensive guide to Intel® Microcode Updates (MCUs), focusing on their importance in server security, stability, and performance. Designed for system administrators, the course explains how to check, fetch, apply, and verify MCU updates using both automated and manual methods in Linux environments. Learners will understand how to mitigate firmware vulnerabilities, minimize downtime, and adopt advanced strategies like seamless firmware updates. Real-world demonstrations using tools like lscpu, dmesg, grep, and the Intel microcode utility are included for practical learning.
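
As a small illustration of the verification step, the sketch below reads the loaded microcode revision per logical CPU from /proc/cpuinfo (the same value the course inspects with dmesg and grep) and confirms that every core reports the same revision after an update. This is an illustrative check, not the course's own lab procedure.

```python
# Sketch: read the loaded microcode revision per logical CPU from /proc/cpuinfo
# and confirm every core reports the same revision after an update.
revisions = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("microcode"):
            revisions.add(line.split(":")[1].strip())

if len(revisions) == 1:
    print(f"all cores at microcode revision {revisions.pop()}")
else:
    print(f"inconsistent microcode revisions detected: {sorted(revisions)}")
```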

This course provides a comprehensive overview of Intel® QuickAssist Technology (Intel® QAT) and how it accelerates compute-intensive workloads such as cryptographic ciphers, public key cryptography, and lossless compression/decompression. Participants will learn how Intel QAT enhances performance in networking, storage, cloud, and big data environments. The course dives into cryptographic concepts, AES encryption, public key infrastructure, TLS/QUIC protocols, and compression algorithms like Deflate.
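
To illustrate the lossless compression side in software terms, here is a minimal sketch using Python's zlib (a Deflate implementation) to show the kind of compression ratio that Intel QAT can offload to hardware. Actual QAT offload goes through dedicated drivers and libraries such as QATzip and is not shown here; the payload is made up.

```python
# Sketch: software-only Deflate compression with zlib, illustrating the
# compression ratios that Intel QAT can offload to hardware (real QAT offload
# uses libraries such as QATzip, not shown here).
import zlib

payload = b"telemetry,host-01,cpu_util,42\n" * 10_000  # repetitive, compresses well

compressed = zlib.compress(payload, level=6)  # level 6 is the common default trade-off
ratio = len(payload) / len(compressed)

print(f"original:   {len(payload):,} bytes")
print(f"compressed: {len(compressed):,} bytes")
print(f"ratio:      {ratio:.1f}x")
```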

This is the second part of the Cryptography Lab on Crypto Acceleration, building on Part One. The course offers a hands-on lab using WordPress with TLS to demonstrate the performance and security benefits of Intel® Crypto Acceleration on 3rd Gen Intel® Xeon® CPUs. Learners explore how SSL and TLS processes are handled more efficiently, improving both speed and protection. The course emphasizes the value of Crypto-NI instructions and how system administrators or developers can prioritize security without compromising performance.

This course introduces the concept of cryptographic acceleration using 3rd Gen Intel® Xeon® Scalable processors. It emphasizes the role of Intel® Crypto Acceleration in enhancing performance for cryptography-heavy applications such as websites using WordPress, NGINX, PHP, and MariaDB. Participants will explore new cryptographic instruction sets, understand their performance benefits, and apply this knowledge in a hands-on lab involving TLS load testing with Siege. The course is intended for those with a basic understanding of website infrastructure, security, and performance optimization.
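
Before running a TLS load test of the kind the lab describes, it can help to confirm that the host actually exposes the relevant crypto instruction extensions. The sketch below checks /proc/cpuinfo for a few flag names as Linux reports them; the exact set available depends on the CPU generation, and this check is an illustrative aside rather than part of the lab itself.

```python
# Sketch: check /proc/cpuinfo for instruction-set flags behind Intel Crypto
# Acceleration before a TLS load test. Flag names are as Linux reports them;
# availability depends on the CPU generation.
WANTED = ["aes", "vaes", "vpclmulqdq", "sha_ni", "avx512f"]

flags = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            flags = set(line.split(":")[1].split())
            break

for flag in WANTED:
    print(f"{flag:12s} {'present' if flag in flags else 'missing'}")
```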

(Chapter 1 out of 1)

This course introduces learners to Confidential Computing with a focus on Intel® technologies such as SGX (Software Guard Extensions) and TDX (Trust Domain Extensions). Participants will gain a foundational understanding of how these technologies enable secure execution environments, and how they are supported in modern cloud platforms like Microsoft Azure. The course also includes hands-on labs using the Gramine Library OS and walks through application deployment and configuration for secure enclaves in Intel environments.
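
As a quick readiness check before working with enclave runtimes such as Gramine, the sketch below looks for the SGX device nodes on the host. It covers the in-kernel Linux driver paths (/dev/sgx_enclave, /dev/sgx_provision) and the older out-of-tree driver path (/dev/isgx); this is an illustrative check, not part of the course labs.

```python
# Sketch: check whether this host exposes the SGX device nodes that enclave
# runtimes such as Gramine rely on.
import os

devices = ["/dev/sgx_enclave", "/dev/sgx_provision", "/dev/isgx"]
found = [d for d in devices if os.path.exists(d)]

if found:
    print("SGX device nodes present:", ", ".join(found))
else:
    print("no SGX device nodes found; enclave workloads cannot run on this host")
```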

(Chapter 1 out of 1)

This course covers the basics of OpenJDK, including its components, performance features, and how to select the right JDK for your processor. It explores the latest performance features in OpenJDK, such as vectorization, string array operations, and math libraries, and provides guidance on optimizing Java applications for Intel-based instances in the cloud. The course also discusses the benefits of upgrading to the latest JDK and how to choose the right JDK for your workload. Additionally, it covers the different Intel generations of processors and how to determine which one to use for optimal performance. By the end of the course, learners will be able to articulate the basics of OpenJDK, describe the latest performance features, and select the right JDK for their processor.

(Chapter 1 out of 1)

This course provides an overview of Apache Spark, a unified analytics engine for large-scale data processing. It covers the basics of Hadoop and its limitations, and how Apache Spark improves the performance of data processing. The course also explores the components of Apache Spark, its uses, and the added value that Intel offers to Spark. Students will learn how to distinguish between Hadoop and Spark, differentiate between Spark and other processing engines, and describe how Intel adds value and improves efficiency in Spark. The course includes topics such as data integration, stream processing, machine learning, and interactive analytics, and provides an introduction to BigDL, Intel's distributed deep learning library for Apache Spark, as well as Spark tuning.
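
To show the contrast with classic Hadoop in code, here is a minimal PySpark sketch: an aggregation that would require a full MapReduce job is expressed as a few DataFrame operations. The input path and column names are hypothetical, and the pyspark package is assumed to be installed.

```python
# Minimal PySpark sketch: a grouped aggregation expressed as DataFrame
# operations instead of a hand-written MapReduce job. Path and columns are
# hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sales-summary").getOrCreate()

sales = spark.read.csv("s3://example-bucket/sales.csv", header=True, inferSchema=True)

summary = (
    sales.groupBy("region")
         .agg(F.sum("amount").alias("total_amount"))
         .orderBy(F.desc("total_amount"))
)
summary.show()
spark.stop()
```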

(Chapter 1 out of 1)

This course provides an introduction to Databricks, a cloud-based data engineering platform that combines the best of data lakes and data warehouses. The course covers the benefits of adopting the lakehouse paradigm, differentiating the capabilities of Databricks, and selecting the right compute nodes to accelerate data processing. It also explores how to leverage optimized libraries to accelerate AI processing and define Intel goodness that supports Databricks. The course includes a demonstration of the Databricks platform and discusses how Intel technologies, such as the latest generation Intel Xeon Scalable processors, accelerate Databricks' capabilities. By the end of the course, learners will be able to identify the benefits of the lakehouse architecture, explain the different capabilities in Databricks, and leverage optimized libraries to accelerate AI processing.
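
To hint at what the lakehouse paradigm looks like in practice, here is a short sketch assumed to run in a Databricks notebook, where a `spark` session is predefined and managed tables default to the Delta format that underpins the lakehouse. The table and column names are hypothetical.

```python
# Sketch, assuming a Databricks notebook where `spark` is predefined and
# managed tables default to Delta. Table and column names are hypothetical.
from pyspark.sql import Row

events = spark.createDataFrame([
    Row(user="alice", action="login"),
    Row(user="bob", action="purchase"),
])

# Write a managed lakehouse table (Delta by default on Databricks).
events.write.mode("overwrite").saveAsTable("user_events")

# Read it back with SQL, as a downstream analytics job might.
spark.sql("SELECT action, COUNT(*) AS n FROM user_events GROUP BY action").show()
```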

(Chapter 1 out of 1)

This course covers the optimization of NoSQL databases, with a focus on MongoDB. It begins by introducing the history and evolution of NoSQL databases, discussing their key features and architectures. The course then delves into the specifics of MongoDB, including its advantages, architectural design, and deployment architectures. Additionally, it explores how Intel technology can be leveraged to benchmark and optimize MongoDB performance. The course concludes by summarizing the key takeaways and providing guidance on how to test and validate MongoDB optimization using benchmarking tools like YCSB.
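
As a concrete example of the kind of optimization check that underlies such benchmarking, here is a small pymongo sketch: it adds an index and asks the query planner whether it is used instead of a collection scan. The connection string, database, and field names are hypothetical, and the pymongo package and a running MongoDB instance are assumed.

```python
# Sketch: a common MongoDB optimization step, adding an index and confirming the
# query planner uses it rather than a collection scan. Connection string,
# database, and field names are hypothetical; requires pymongo and a running server.
from pymongo import MongoClient, ASCENDING

client = MongoClient("mongodb://localhost:27017")
orders = client["shop"]["orders"]

orders.create_index([("customer_id", ASCENDING)])

plan = orders.find({"customer_id": 12345}).explain()
winning = plan["queryPlanner"]["winningPlan"]
print(winning.get("stage"), winning.get("inputStage", {}).get("stage"))
# Expect a FETCH over an IXSCAN once the index exists, rather than a COLLSCAN.
```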

This course covers the optimization of relational databases, including the evolution of databases, differentiating between OLTP and OLAP workloads, and techniques to optimize relational databases in four aspects: hardware, architecture, application, and database engine. The course also explores Intel's capabilities in databases, including reference architecture, proof of concept validation, and hardware features. Students will learn how to optimize relational databases using various methods, such as decoupling storage and compute, using in-memory cache, and leveraging Intel technologies like Optane and SPDK. The course also discusses database engine optimization, including parallel execution, multi-version concurrency control, and row and column storage.
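
To make the indexing point concrete, here is a self-contained sketch using the standard-library sqlite3 module as a stand-in for a larger OLTP engine: it compares the query plan before and after creating an index. The table and column names are made up, but the same explain-before-and-after habit carries over to MySQL or PostgreSQL tuning.

```python
# Sketch: how an index changes a query plan, using stdlib sqlite3 as a stand-in
# for a larger OLTP engine. Table and column names are made up.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 500, i * 1.0) for i in range(10_000)],
)

def show_plan(label):
    rows = conn.execute(
        "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42"
    ).fetchall()
    print(label, [r[3] for r in rows])  # column 3 holds the plan description

show_plan("before index:")  # expect a full table scan
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
show_plan("after index:")   # expect a search using idx_orders_customer
conn.close()
```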

(Chapter 1 out of 1)

This training module introduces the features and benefits of Granulate's tools — gProfiler, gAgent, and gCenter — used for real-time optimization of cloud-based workloads. Through video-based instruction and real-world use cases, learners will gain a deep understanding of flame graphs, profiling strategies, real-time system optimization, and security considerations. The course also includes a step-by-step walkthrough of the Proof of Value (POV) process and highlights customer success stories showcasing tangible performance and cost benefits.

(Chapter 1 out of 1)

This course introduces learners to Granulate's optimization tools—gProfiler and gAgent—offering insights into performance profiling and continuous optimization. Learners will explore differences between profiling and root cause analysis, gain hands-on experience with the gProfiler tool in a lab environment, and understand how gAgent enables real-time optimization without code changes. Case studies highlight cost savings and performance improvements in real-world applications.

Intel Cloud Optimizer (ICO) by Densify is an advanced cloud optimization tool designed to help organizations optimize their public cloud infrastructure across AWS, Azure, and GCP, as well as Kubernetes environments. This course provides an in-depth look at how ICO analyzes workload performance, generates actionable recommendations, and integrates with CI/CD pipelines and ITSM tools to drive cloud cost efficiency and infrastructure performance. Through guided video lessons and hands-on demos, participants learn to navigate the ICO portal, interpret optimization reports, and apply automated recommendations to both virtual instances and containerized workloads.

This course introduces cloud workload solutions available on Google Cloud Platform (GCP) and highlights how Intel technologies enhance performance across three key GCP services: Compute Engine, SAP HANA, and VMware Engine. Learners will understand how GCP differentiates from other hyperscalers in terms of scalability, security, and networking, and how Intel's optimized processor architectures improve workload performance in GCP environments.

(Chapter 1 out of 1)

This course covers Azure Workload Solutions, including an overview of Microsoft Azure, Intel processor series, and specific workloads such as Confidential Compute, SQL Server, and Azure Virtual Desktop. Students will learn about the features and benefits of each workload, as well as how to optimize performance and choose the right instances for their workloads. The course also covers the value of the partnership between Intel and Azure, and how it can help organizations achieve their goals. By the end of the course, students will be able to understand the different instance types available in Azure, identify the correct instances for their workloads, and optimize their workloads for better performance and cost-effectiveness.

This course focuses on Amazon Web Services (AWS) workload solutioning and how Intel's processor technologies play a critical role in optimizing performance, resiliency, and cost efficiency in the public cloud. It walks learners through key workloads such as Elastic Beanstalk, Kafka, and Splunk, and connects Intel's processor evolution to actionable strategies within AWS. The course aims to deepen understanding of how to align workload architecture with business goals using the best compute infrastructure, and explores how Intel's processors enhance performance and cost-effectiveness in cloud-native environments.