Lenovo has unveiled its new ThinkEdge SE100 AI inferencing server at Mobile World Congress.

In a statement, the company said the offering was the “first-to-market, entry-level AI inferencing server,” designed to make Edge AI accessible and affordable.

The Lenovo ThinkEdge SE100 server – Lenovo

The SE100 forms part of Lenovo’s family of new ThinkSystem V4 servers and supports hybrid cloud deployments and machine learning for AI tasks like object detection and text recognition.

Powered by Intel Xeon 6 processors, the server also features the company’s new Neptune liquid cooling technology, the Neptune Core Compute Complex Module, which, according to Lenovo, supports faster workloads at reduced fan speeds, resulting in quieter operation and lower power consumption.

The company added that the technology has been specifically engineered to reduce airflow requirements whilst lowering fan speed and power consumption, and to keep components cooler in order to extend system health and lifespan.

The SE100 is also 85 percent smaller than a standard 1U server, making it the “most compact AI-ready Edge solution on the market,” and has been designed to stay under 140W, even in its fullest GPU-equipped configuration.

“Lenovo is committed to bringing AI-powered innovation to everyone with continued innovation that simplifies deployment and speeds the time to results,” said Scott Tease, VP of Lenovo infrastructure solutions group, products. “The Lenovo ThinkEdge SE100 is a high-performance, low-latency platform for inferencing. Its compact and cost-effective design is easily tailored to diverse business needs across a broad range of industries. This unique, purpose-driven system adapts to any environment, seamlessly scaling from a base device, to a GPU-optimized system that enables easy-to-deploy, low-cost inferencing at the Edge.”
