OpenAI is working with TSMC and Broadcom to build its own AI inference chip.

According to Reuters, OpenAI has, for now at least, dropped its plans to establish its own foundries due to cost and time constraints, and is instead focusing its efforts on in-house chip design.


Citing sources with knowledge of the situation, the report added that OpenAI has secured manufacturing capacity with TSMC and hoped to have its first custom AI chips manufactured by 2026, although that timeline is subject to change.

It was first reported in July 2024 that OpenAI had been in talks with chip designers to discuss the possibility of developing a new AI server chip, although rumors that it was considering making its own AI chips first emerged in late 2023.

The company’s CEO, Sam Altman, has long pushed for OpenAI to develop its own AI chips, and had been seeking investment for an artificial intelligence chip venture codenamed ‘Tigris’ before he was abruptly fired and then rehired by OpenAI in November 2023.

The company currently has approximately 20 people working in its chip team. Last year, DCD exclusively reported that OpenAI had hired former Lightmatter chip engineering lead and Google TPU head Richard Ho as the head of hardware at the generative artificial intelligence company.

Broadcom was heavily involved in the development of Google's TPU AI chip.

Reuters also noted that, in addition to designing its own AI chips, the generative AI giant has reportedly started to diversify its hardware deployments, now using AMD chips alongside Nvidia GPUs to train its AI models.

One of the reasons originally cited by sources as to why OpenAI was considering entering the custom chip design space was to reduce its reliance on Nvidia, whose GPUs are expensive and have previously struggled to meet demand.

In May 2024, Microsoft – which has invested nearly $14 billion in OpenAI, making it the company's largest investor – announced it would be making AMD’s MI300X accelerators available to customers through its Azure cloud computing service.

It’s through Microsoft’s Azure cloud platform that OpenAI will access the AMD chips.

At the time, EVP of Microsoft’s Cloud and AI group, Scott Guthrie, described the MI300X as the “most cost-effective GPU out there right now for Azure OpenAI.”
