AMD announced a series of launches today focused on improving performance in data centers, artificial intelligence applications and high-performance systems. Standouts among the new products are the 5th generation EPYC processors, the new AMD Instinct MI325X accelerators and the Ryzen AI PRO 300 Series processors. These solutions are designed to respond to growing industry demands in areas such as cloud computing, artificial intelligence and business productivity.
The new 5th generation EPYC processors, based on the Zen 5 architecture, offer up to 192 cores per processor and are aimed at optimizing performance in data centers and AI-intensive workloads. Alongside them, the AMD Instinct MI325X accelerators, built on the CDNA 3 architecture, are designed to speed up AI model training and inference tasks thanks to their HBM3E memory capacity and high bandwidth.
The Ryzen AI PRO 300 Series processors, meanwhile, are aimed at equipping business PCs with advanced artificial intelligence capabilities such as real-time transcription and translation, while improving energy efficiency and security. With these launches, AMD continues to expand its portfolio of solutions to deliver greater performance and efficiency in business and technology environments.
Let’s take a look at the main new features of each of these announcements below.

EPYC
Among today’s announcements, AMD introduced its new 5th generation EPYC processors, aimed at improving performance and efficiency in data centers and enterprise applications. Based on the Zen 5 architecture that already debuted in the Ryzen 9000 desktop chips, these processors offer up to 192 cores per unit, making them a strong choice for environments and workloads that require high computing power, such as artificial intelligence, cloud services, and mission-critical business applications. This new generation combines increased performance with a focus on energy efficiency, an increasingly important feature in modern data centers.
Among the new features of these processors are support for the widely used SP5 platform and the ability to handle up to 12 channels of DDR5 memory per processor. These enhancements enable increased bandwidth and optimized performance for demanding workloads. Additionally, they include support for PCIe Gen5 and AVX-512, features that enhance their ability to handle compute-intensive applications. These specifications can improve operational efficiency and reduce latency in data centers that support multiple concurrent workloads.
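As an aside, software only benefits from an instruction-set feature like AVX-512 if the host CPU actually advertises it. A minimal sketch of how one might verify this on a Linux system (this is an illustrative check of the standard `/proc/cpuinfo` flags, not an AMD tool; the `avx512f` flag name is the kernel's identifier for the AVX-512 Foundation extension):

```python
# Illustrative sketch: check whether the host CPU advertises a given
# feature flag via /proc/cpuinfo (Linux only). "avx512f" is the kernel's
# flag for the AVX-512 Foundation instructions.

def has_cpu_flag(flag: str) -> bool:
    """Return True if /proc/cpuinfo lists the given CPU feature flag."""
    try:
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    # The "flags" line is "flags : fpu vme ... avx512f ..."
                    return flag in line.split(":", 1)[1].split()
    except OSError:
        pass  # non-Linux systems do not expose /proc/cpuinfo
    return False

if __name__ == "__main__":
    print("AVX-512 foundation supported:", has_cpu_flag("avx512f"))
```

On machines without the feature (or outside Linux) the function simply returns False, so it degrades gracefully rather than raising.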
The 5th generation EPYC processors have shown remarkable performance compared to the competition. In AMD’s tests, they provide up to 3.9x higher performance in high-performance computing (HPC) applications and up to four times faster video transcoding compared to Intel Xeon Platinum 8592+ processors. These numbers highlight the new EPYC’s ability to drive mission-critical tasks while providing significant improvements in energy efficiency, making them an option to consider when optimizing infrastructure.

In the field of artificial intelligence, AMD has developed specific models such as the EPYC 9575F, which offers a 28% increase in processing power compared to alternatives. This capability is crucial in environments where AI solutions require real-time data processing and high computing capacity. Additionally, these processors are designed to perform optimally in workloads that combine CPU and GPU, maximizing performance in advanced AI systems.
Finally, AMD has confirmed support from server manufacturers such as Dell, Lenovo and HPE, which should facilitate the adoption of this new series in data centers. Compatibility with existing infrastructure and a focus on energy efficiency and performance make the 5th generation EPYC processors a competitive option for organizations looking to improve the performance of their technology operations while optimizing their operating costs.

Ryzen Pro
Another important announcement today was AMD’s new line of Ryzen AI PRO 300 processors, designed to improve the performance of business machines with advanced artificial intelligence (AI) capabilities. Based on the Zen 5 architecture and backed by XDNA 2, these processors integrate NPUs to handle AI tasks such as real-time transcription and translation, among other business applications that require intensive processing and, of course, the privacy that comes with running AI on the client.
The Ryzen AI PRO 300 series includes models that reach up to 50 TOPS of AI processing, enabling them to meet the requirements of advanced enterprise software. According to AMD, the new processors offer up to three times the performance in AI tasks compared to the previous generation. In addition, they are designed to provide greater energy efficiency and optimize battery life without compromising performance in the most demanding applications.
Among the models presented, the Ryzen AI 9 HX PRO 375 stands out, offering, according to AMD’s tests, up to 40% more performance than competing products such as the Intel Core Ultra 7 165U. This new line of processors also includes enhanced security features such as secure boot and cloud recovery, enabling more efficient device management in enterprise environments. Manufacturers like HP and Lenovo have already announced systems that will use these new processors, expanding the options available to business users.

AMD Instinct Accelerators
Aware of the enormous weight that artificial intelligence and its specialized hardware have assumed, AMD today introduced its new AMD Instinct MI325X accelerators, designed to improve the performance of artificial intelligence applications and high-performance workloads in the data center. Based on the CDNA 3 architecture, these accelerators are optimized for tasks that require intensive parallel processing, such as training artificial intelligence models.
The AMD Instinct MI325X features HBM3E memory with a capacity of 256 GB and a bandwidth of 6.0 TB/s, which enables large volumes of data to be processed quickly and efficiently; this is key in environments that require real-time information management or large-scale model training. According to AMD, the new accelerators offer up to 1.8x more capacity and 1.3x more bandwidth than the competition.
In terms of performance, the MI325X delivers significant improvements in AI tasks such as inference on models like Mistral 7B and Llama 3.1. These accelerators excel at FP16 and FP8 precision compute and outperform other products on the market in some tests. In addition, AMD announced that these accelerators will be available in systems from manufacturers such as Dell, Lenovo and Supermicro, facilitating their adoption in data centers.

AMD Pensando
Last but not least, today also brought the presentation of two families of AMD Pensando DPUs (data processing units): Salina and Pollara.
AMD Pensando Salina
The AMD Pensando Salina data processing unit delivers a two-fold increase in performance compared to previous generations. This model is designed to manage the front-end of artificial intelligence networks, optimizing the way data is transferred to AI clusters. Supporting transfer rates of up to 400 Gbps, the Salina DPU aims to improve efficiency and security in networks that process large volumes of real-time data, which is critical for AI applications and large-scale data centers.
AMD Pensando Salina is also designed to improve the scalability and performance of systems, making it a faster and more secure solution for managing data streams. In addition, it enables data center operators to optimize their infrastructure and reduce the load on other processors, freeing them up for other critical tasks. This new DPU is undergoing customer testing and is scheduled to be available in the first half of 2025.
AMD Pensando Pollara 400
The AMD Pensando Pollara 400 is the first network interface card (NIC) compatible with the standards of the Ultra Ethernet Consortium (UEC), a group of manufacturers working to develop advanced technologies for high-performance networks. This model is designed to improve data transmission in the back-end of artificial intelligence systems, enabling more effective management of communication between accelerators and clusters. The Pollara 400 features AMD’s programmable P4 engine, developed to support next-generation networks that require high performance and low latency in AI-intensive environments.
The Pollara 400 network card is aimed at meeting the growing needs of data centers working with infrastructures based on artificial intelligence. With support for UEC standards and advanced data management capabilities, this network card promises to improve the performance and scalability of operations, ensuring greater efficiency when managing large volumes of data. This model should also be available in the first half of 2025.
More information