
Document Library

Reference architectures, white papers, and solution briefs to help you build and enhance your network infrastructure at any level of deployment.


Displaying Documents For: All Categories

White Paper

This white paper provides an overview of the artificial intelligence (AI) capabilities of Intel® Core™ Ultra Processors for the edge, delving into hardware architecture, technical specifications, and software enablement, and including a detailed case study. The paper begins by introducing the family of Intel® Core™ Ultra Processors, highlighting the key differences between the various SKUs of the processor family, and outlining the hardware capabilities in the processors that provide AI acceleration.
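As a quick, hedged companion to the hardware discussion above, the following Python sketch lists the AI accelerators (CPU, integrated GPU, NPU) that the OpenVINO™ Runtime can see on a Core Ultra system. It assumes a recent OpenVINO release with the NPU plugin installed; the device names shown in the comment are those OpenVINO commonly reports, not values taken from the paper.

    # Minimal sketch: enumerate the AI-capable devices OpenVINO can see on an
    # Intel Core Ultra system (assumes openvino >= 2023.2 with the NPU plugin).
    from openvino.runtime import Core

    core = Core()
    for device in core.available_devices:  # e.g. ['CPU', 'GPU', 'NPU']
        full_name = core.get_property(device, "FULL_DEVICE_NAME")
        print(f"{device}: {full_name}")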
Tags: 
OpenVINO™, 5G, Containers, Virtualization, Healthcare, Utilities, Industrial, Smart Cities, Media Analytics, Video Analytics, Visual Cloud, Intel® Smart Edge

White Paper

Large Language Models (LLMs) are deep learning models that have gained significant attention in recent years due to their impressive performance on natural language processing (NLP) tasks. However, deploying LLM applications in production poses several challenges, ranging from hardware-specific limitations to software toolkit support for LLMs and software optimization on specific hardware platforms. In this white paper, we demonstrate how you can perform hardware platform-specific optimization to improve the inference speed of your LLaMA2 model on the llama.cpp framework.
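Purely as an illustration of the kind of platform tuning described above, here is a minimal Python sketch using the llama-cpp-python bindings to llama.cpp. The GGUF model path, thread count, and context size are placeholder assumptions, not values from the paper.

    # Rough sketch: run a quantized LLaMA2 model through llama.cpp's Python
    # bindings and pin the thread count to the target CPU's physical cores.
    # The model path and parameters below are illustrative assumptions.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./llama-2-7b-chat.Q4_K_M.gguf",  # hypothetical quantized model file
        n_ctx=2048,    # context window
        n_threads=8,   # tune to the physical core count of the platform
    )

    output = llm("Summarize edge AI in one sentence.", max_tokens=64)
    print(output["choices"][0]["text"])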
Tags: 
OpenVINO™, 5G

White Paper

This document presents the best-known methods (BKMs) for optimizing and quantizing the YOLOv7 model using the Intel® Distribution of OpenVINO™ Toolkit. The object detection use case based on the optimized YOLOv7 model is evaluated on Intel platforms to demonstrate the improved throughput. Furthermore, the potential cost savings from the optimized model in cloud deployment are illustrated with the real-time use case of an ISV.
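For orientation only, the following Python sketch shows one common way to apply post-training INT8 quantization to an OpenVINO model with NNCF. The model path, calibration images, and preprocessing are placeholder assumptions and may not match the BKMs detailed in the paper.

    # Hedged sketch: INT8 post-training quantization of an OpenVINO IR model
    # with NNCF. Paths, image size, and preprocessing are illustrative only.
    import glob
    import cv2
    import numpy as np
    import nncf
    from openvino.runtime import Core, serialize

    core = Core()
    model = core.read_model("yolov7.xml")  # hypothetical FP32 IR of YOLOv7

    def transform_fn(image_path):
        # Basic preprocessing; the real BKM may differ (e.g. letterboxing, RGB order).
        img = cv2.imread(image_path)
        img = cv2.resize(img, (640, 640))
        img = img.transpose(2, 0, 1)[np.newaxis].astype(np.float32) / 255.0
        return img

    calibration_data = nncf.Dataset(glob.glob("calib_images/*.jpg"), transform_fn)
    quantized_model = nncf.quantize(model, calibration_data)
    serialize(quantized_model, "yolov7_int8.xml")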
Tags: 
OpenVINO™, 5G, Containers

White Paper

Large language models (LLMs) have enabled breakthroughs in natural language understanding, conversational AI, and diverse applications such as text generation and language translation. At the same time, large language models are massive, often exceeding 100 billion parameters and still growing. A recent study published in Scientific American by Lauren Leffer articulates the challenges with large AI models, including scaling down to small devices, accessibility when disconnected from the internet, and power consumption and environmental concerns.
Tags: 
Artificial Intelligence

White Paper

With processing capability increasing year over year, along with additional compute accelerators such as the Neural Processing Unit (NPU), a single system is now capable of running multiple end-to-end workloads such as QnA (Question and Answer), video surveillance analytics, and retail applications. Intel® Core™ Ultra Processors have up to 14 CPU cores with 20 threads, plus an additional 2 low-power efficiency cores, which can be allocated to each of the workloads using containerization. The CPU is suitable for low-latency workloads, the integrated GPU (iGPU) for high-throughput tasks, and the NPU for low-power sustained workloads.
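As a rough, hedged illustration of this CPU/iGPU/NPU split, the Python sketch below compiles three OpenVINO models onto different devices of the same system. The model filenames are placeholders, and the "NPU" device name assumes a recent OpenVINO release with the NPU plugin installed.

    # Hedged sketch: pin different workloads to CPU, integrated GPU, and NPU
    # through OpenVINO device selection. Model files below are placeholders.
    from openvino.runtime import Core

    core = Core()
    print("Available devices:", core.available_devices)  # e.g. ['CPU', 'GPU', 'NPU']

    # Low-latency workload (e.g. QnA) on the CPU.
    qna_model = core.compile_model(core.read_model("qna.xml"), "CPU")

    # High-throughput workload (e.g. video analytics) on the integrated GPU.
    video_model = core.compile_model(core.read_model("video_analytics.xml"), "GPU")

    # Low-power, sustained workload on the NPU (requires the NPU plugin).
    retail_model = core.compile_model(core.read_model("retail.xml"), "NPU")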
Tags: 
OpenVINO™, uCPE, Containers, Video Analytics

White Paper

This document presents two methods for optimizing the YOLOv4 and YOLOv4-tiny models with customized classes and anchors using the Intel® Distribution of OpenVINO™ Toolkit. The object detection use case based on the optimized models is used to verify the corresponding inference results. Furthermore, the decreased inference time of the optimized model is demonstrated with the real-time use case of an ISV.
Tags: 
OpenVINO™

White Paper

Intel has architected an end-to-end, network edge-based smart city solution that uses the latest Intel technologies, best-in-class artificial intelligence (AI) models, and private 5G networks from industry leaders. This solution provides the necessary compute and network edge platform to support an expanded portfolio of digital services from various solution providers, and it can be centrally managed to optimize the Total Cost of Ownership (TCO). The solution is validated in Intel labs for system stability, functionality, and performance, and delivers an improved TCO.
Tags: 
Government

White Paper

Artificial Intelligence (AI) at the Edge is the deployment of AI applications on or close to the device where data is generated or located, as opposed to centralized cloud computing facilities. The edge can be any device outside a central data center, such as network video recorders, security cameras, industrial robots, ultrasound machines, on-prem servers, etc. Instead of sending data to the cloud for processing, AI algorithms are run on the edge device.
Tags: 
Intel® Xeon® Scalable Processors, OpenVINO™, 5G, Healthcare, Smart Cities

White Paper

The OpenVINO™-based C++ demos in Open Model Zoo are built using CMake, which allows developers to build anything from simple to complex software across multiple platforms with a single set of input files. In real business use cases, OpenVINO™ applications are integrated into the end user’s target application, and the build steps may vary depending on the build tool used in the production setup. End users who already build their applications with the g++ compiler may not want to change their build tool. To address this, this paper presents the steps to build the C++ object detection demo with the g++ compiler on Ubuntu 20.04.
Tags: 
OpenVINO™, 5G

White Paper

This guide helps users install and run Ollama with Open WebUI on Intel hardware platforms running Windows* 11 and Ubuntu* 22.04 LTS.
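Once the installation described in the guide is complete, a quick way to confirm that Ollama is serving locally is to query its HTTP API. The Python sketch below assumes Ollama's default localhost:11434 endpoint and a placeholder model name; neither is taken from the guide itself.

    # Hedged sketch: verify a local Ollama install by querying its HTTP API.
    # Assumes the default endpoint (localhost:11434) and an already-pulled model.
    import requests

    response = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "llama3",  # placeholder; use whichever model you pulled
            "prompt": "Say hello in one short sentence.",
            "stream": False,    # return a single JSON object instead of a stream
        },
        timeout=120,
    )
    response.raise_for_status()
    print(response.json()["response"])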
Tags: 
OpenVINO™, 5G

White Paper

This document introduces the best-known method (BKM) for optimizing and implementing unconstrained automatic license plate recognition (ALPR) using OpenVINO™ Toolkit 2022.1. Unlike conventional methods, unconstrained ALPR can detect distorted license plates (LPs) captured from non-frontal views. This is achieved by resizing the input vehicle image according to its aspect ratio before it is fed to the license plate detection model.
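To make the aspect-ratio-based resizing step concrete, here is a minimal Python sketch. The minimum and maximum dimension bounds, and the exact scaling rule, are illustrative assumptions and may differ from the values used in the document.

    # Hedged sketch of aspect-ratio-aware resizing of a vehicle crop before
    # license plate detection. The dimension bounds below are assumptions.
    import cv2

    def resize_keep_aspect(image, min_dim=288, max_dim=608):
        h, w = image.shape[:2]
        ratio = float(max(h, w)) / min(h, w)
        # Scale with the aspect ratio, but cap the larger side at max_dim.
        side = min(int(ratio * min_dim), max_dim)
        scale = side / max(h, w)
        new_w, new_h = int(w * scale), int(h * scale)
        return cv2.resize(image, (new_w, new_h))

    vehicle = cv2.imread("vehicle_crop.jpg")  # hypothetical input crop
    resized = resize_keep_aspect(vehicle)
    print("Resized to:", resized.shape[:2])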
Tags: 
Government

White Paper

In the era of multi-workload consolidation, customers need both agile resource allocation and improved security. These requirements ensure that business-critical workloads are protected and that adequate compute resources are allocated to the various workloads.
Tags: 
Retail