OpenAI is adapting its strategy, relying on AMD chips alongside Nvidia's to distribute its AI workloads more efficiently
OpenAI may be working on its own AI hardware and has enlisted Broadcom to develop custom chips capable of handling its large AI workloads. The partnership gives OpenAI access to TSMC’s secure, state-of-the-art factories, Reuters reports. The chips are expected to go into production in 2026.
AMD strengthens OpenAI’s infrastructure
OpenAI remains one of Nvidia’s largest customers, but shortages and rising costs mean the company is also using AMD’s MI300X chips. With this strategic shift, OpenAI joins other technology companies, such as Microsoft and Meta, in seeking to limit their dependence on Nvidia. This diversification is essential for OpenAI, which faces high computational costs and hopes to rein them in this way.
Despite partnering with AMD and Broadcom, OpenAI continues to invest in its relationship with Nvidia, including work with Nvidia’s latest Blackwell chips. That relationship makes it possible to keep training AI models like ChatGPT. However, competitors such as Google, Microsoft and Amazon are already several years further along in their chip development, so OpenAI may need additional funding to become a full-fledged chip maker.
Good news for AMD
The interest in AMD hardware is good news for AMD itself. It wants to compete with Nvidia, but faces a chicken-and-egg problem: Nvidia is the biggest player, so it has the most extensive ecosystem, so it is the most popular, so it remains the biggest. When parties like OpenAI use AMD’s Instinct accelerators alongside Nvidia’s chips, those accelerators gain visibility and the ecosystem around them matures. This, in turn, can encourage further adoption.