Qualcomm has unveiled an AI On-Prem Appliance Solution and AI Inference Suite at the Consumer Electronics Show (CES) in Las Vegas.

The chip firm said the new offerings will let organizations run custom and off-the-shelf AI applications on-premises, reducing operational costs and total cost of ownership (TCO) compared with renting third-party AI infrastructure.

Qualcomm On-Prem AI Appliance Solution – Qualcomm

Powered by Qualcomm’s cloud AI accelerators, the AI On-Prem Appliance Solution is a hardware offering designed for generative AI inference and computer vision workloads. The companion AI Inference Suite provides software and services, including applications, agents, tools, and libraries, for deploying AI across on-premises systems and cloud offerings. The suite supports integration with generative AI models and frameworks, and deployment via Kubernetes or bare containers.

Aetina, Honeywell, and IBM have already collaborated with Qualcomm to deploy AI models and workflow automation use cases across the new offerings.

The new product “changes the TCO economics of AI deployment by enabling processing of generative AI workloads from cloud-only to a local, on-premises deployment,” claimed Nakul Duggal, group general manager for automotive, industrial IoT, and cloud computing at Qualcomm Technologies.

“Enterprises can now accelerate deployment of generative AI applications leveraging their own models, with privacy, personalization, and customization while remaining in full control, with confidence that their data will not leave their premises.”