A notable move came from US-based technology giant AMD. At a recent event, the company announced a small language model called AMD-135M. Built on Meta's "Llama" architecture, AMD-135M appears to be aimed more at private companies.
AMD-135M comes in two versions: AMD-Llama-135M and AMD-Llama-135M-code. According to AMD's statement, AMD-Llama-135M was trained on 670 billion tokens of public data, with four AMD Instinct MI250 accelerators used during the training process. AMD-Llama-135M-code, on the other hand, was fine-tuned specifically for coding with 20 billion additional tokens.
Can be optimized for specific tasks

AMD's small language models can be optimized and used for specific tasks. Naturally, AMD-Llama-135M-code will mainly be used for coding-related work. According to AMD's statement, the new language model uses speculative decoding technology, which allows it to generate output very quickly.
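To illustrate the idea, here is a minimal sketch of speculative (assisted) decoding using the Hugging Face transformers library, where a small model drafts tokens and a larger model verifies them. The model identifiers used here (amd/AMD-Llama-135m as the draft model and codellama/CodeLlama-7b-hf as the target) are assumptions for the example, not details confirmed in AMD's statement.

```python
# Sketch: speculative decoding with a small draft model and a larger target model.
# Model names below are assumptions for illustration purposes.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("amd/AMD-Llama-135m")
draft_model = AutoModelForCausalLM.from_pretrained("amd/AMD-Llama-135m")
target_model = AutoModelForCausalLM.from_pretrained("codellama/CodeLlama-7b-hf")

inputs = tokenizer("def fibonacci(n):", return_tensors="pt")

# The small model proposes several tokens ahead; the large model checks the
# draft in a single forward pass and keeps only the tokens it agrees with,
# which is what makes generation faster than decoding token by token.
outputs = target_model.generate(
    **inputs,
    assistant_model=draft_model,
    max_new_tokens=64,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because both models share the same tokenizer family, the large model can validate the small model's draft tokens directly, which is the main requirement for this kind of pairing.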
According to AMD's statement, AMD-135M is still in its early stages. The company plans to develop its small language model further, aiming for better results in terms of both performance and speed. We will see whether AMD, which is trying to establish itself in the artificial intelligence industry, can achieve the success it hopes for with this small language model.