Cloudflare is introducing a new feature that enables developers to deploy AI applications globally with one click through an integration with Hugging Face.
Cloudflare announced that developers can now deploy AI applications directly to its global network through Hugging Face, an open platform for AI developers. This capability is powered by Workers AI, making Cloudflare the first serverless inference partner on the Hugging Face Hub.
This initiative simplifies the global deployment of AI applications by removing the need for developers to manage infrastructure or pay for unused compute capacity.
The collaboration between Cloudflare and Hugging Face addresses companies' need to experiment and iterate on AI applications quickly and cost-effectively. With GPU support now available in more than 150 cities worldwide, Workers AI lets developers deploy and scale AI models efficiently. This supports the development of specialized, domain-specific applications and improves global access to low-latency AI inference.
By integrating Hugging Face with Cloudflare’s Workers AI, developers can choose from a range of popular open-source models and deploy them with one click. This not only makes AI models easier to access and use, but also supports rapid, global scaling of AI applications.
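Once a model is deployed, Workers AI exposes it over Cloudflare's REST API at `accounts/{account_id}/ai/run/{model}`. The sketch below builds such a request in Python; the account ID, model name, and prompt are placeholders, and the actual POST (which needs an `Authorization: Bearer <token>` header) is left out so the example stays self-contained.

```python
import json

API_BASE = "https://api.cloudflare.com/client/v4"


def build_inference_request(account_id: str, model: str, prompt: str):
    """Build the URL and JSON body for a Workers AI inference call.

    The real call is an HTTP POST with a bearer-token header; it is not
    sent here so this sketch has no network dependency.
    """
    url = f"{API_BASE}/accounts/{account_id}/ai/run/{model}"
    payload = {"messages": [{"role": "user", "content": prompt}]}
    return url, json.dumps(payload)


# Placeholder values -- substitute your own account ID and a model
# available on Workers AI (the model name below is illustrative).
url, body = build_inference_request(
    "YOUR_ACCOUNT_ID",
    "@cf/meta/llama-2-7b-chat-int8",
    "Summarize Workers AI in one sentence.",
)
print(url)
```

Sending `body` to `url` with a valid API token returns the model's completion as JSON, so the same request shape works for any chat-style model deployed through the integration.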
Earlier this week, Cloudflare announced three new tools to simplify full-stack development: D1, Hyperdrive, and Workers Analytics Engine.