NVIDIA updates ChatRTX with new LLM and other features
- May 1, 2024
This week, ChatRTX takes center stage in AI Decoded, the weekly series in which NVIDIA explains, in plain language, what artificial intelligence is made of, drawing on the company's extensive development work in the field. The series is particularly recommended for anyone interested in AI, regardless of their level of knowledge, as it ranges from general topics, such as the company's offerings around LLMs, to specific implementations, as in the case of Instant NeRF.
If you're not familiar with it, ChatRTX lets us run a chatbot locally, i.e. without having to rely on an online service. For this it leverages the AI-specific features of the Ampere and Ada architectures (the RTX 30 and RTX 40 series in the consumer market). There's no need to go into detail here about the benefits of running a chatbot locally, but if you want more information about ChatRTX and those benefits, I recommend the detailed article we dedicated to it when its first version was published in February.
Well, as I hinted at the beginning, this week's edition of AI Decoded will be of particular interest to those who have already tried that version, as it announces that NVIDIA has updated ChatRTX, adding more LLMs and new features that considerably increase its usefulness compared to the previous release. Before we look at these new features, recall that the original version lets you use the Llama 2 13B INT4 and Mistral 7B INT4 models, and that we can also “retrain” it with a YouTube video of our choice, so that we can later ask it about the video's content.
The first piece of news is that two new LLMs join those two models: Gemma, Google's open-source model built on the same technology as Gemini, and ChatGLM3, an open, bilingual LLM (English and Chinese). This expands (in fact doubles) the chances of finding the answer that best matches what we are looking for, because different LLMs can give very different answers to the same prompt.
A particularly interesting feature of ChatRTX is, as I already mentioned, the ability to add a video of our choice to the model's knowledge, adapting its responses to a specific context. And now, with this new version, we can also point the model at our own images and photos. And no, they won't need to be labeled, because the app already includes OpenAI's Contrastive Language-Image Pretraining (CLIP).
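NVIDIA hasn't published the internals of ChatRTX's photo search, but the core idea behind CLIP-style retrieval is simple: text and images are embedded into a shared vector space, and a photo matches a query when their embeddings are close, so no manual labels are needed. A minimal sketch of that retrieval step (with made-up stand-in embeddings; a real pipeline would get them from CLIP's text and image encoders):

```python
import numpy as np

def cosine_similarity(query: np.ndarray, matrix: np.ndarray) -> np.ndarray:
    """Cosine similarity between one query vector and each row of a matrix."""
    query = query / np.linalg.norm(query)
    matrix = matrix / np.linalg.norm(matrix, axis=1, keepdims=True)
    return matrix @ query

def search_photos(query_emb, photo_embs, photo_names, top_k=1):
    """Return the names of the photos whose embeddings best match the query."""
    scores = cosine_similarity(query_emb, photo_embs)
    order = np.argsort(scores)[::-1][:top_k]
    return [photo_names[i] for i in order]

# Stand-in 4-dimensional embeddings; CLIP would produce e.g. 512-dim vectors.
photos = ["beach.jpg", "dog.jpg", "mountain.jpg"]
photo_embs = np.array([
    [0.9, 0.1, 0.0, 0.1],   # pretend embedding of a beach photo
    [0.1, 0.9, 0.1, 0.0],   # pretend embedding of a dog photo
    [0.0, 0.1, 0.9, 0.1],   # pretend embedding of a mountain photo
])
query = np.array([0.8, 0.2, 0.1, 0.0])  # pretend embedding of "sunny beach"

print(search_photos(query, photo_embs, photos))  # -> ['beach.jpg']
```

The same nearest-neighbor lookup scales to thousands of photos, which is why a contrastively trained encoder is enough to make a personal photo library searchable by free-form text.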
We stay with the creators of ChatGPT for the third piece of ChatRTX news: this new version also incorporates Whisper, an AI-based automatic speech recognition system that, as you may have already guessed, allows us to talk to the chatbot by voice instead of always having to resort to text prompts. This is a great advance in terms of usability, and when we talk about accessibility, it is exceptional.
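The article doesn't detail how ChatRTX wires Whisper into its interface, but the general pattern of voice-driven prompting is straightforward: transcribe the audio, clean it up, and hand the text to the chatbot as if it had been typed. A hedged sketch of that pipeline, assuming the open-source `openai-whisper` package and a hypothetical audio file:

```python
# Sketch of a voice-to-prompt pipeline. The transcription step uses the
# open-source `openai-whisper` package (pip install openai-whisper); the
# final LLM call is whatever backend answers the prompt, so it is omitted.

def clean_transcript(text: str) -> str:
    """Normalize a raw transcript into a tidy chatbot prompt."""
    return " ".join(text.split()).strip()

def voice_prompt(audio_path: str, model) -> str:
    """Transcribe an audio file and return the cleaned prompt text."""
    result = model.transcribe(audio_path)  # Whisper returns {"text": ...}
    return clean_transcript(result["text"])

if __name__ == "__main__":
    import whisper                       # assumes openai-whisper is installed
    model = whisper.load_model("base")   # small multilingual Whisper model
    prompt = voice_prompt("question.wav", model)  # hypothetical audio file
    print(prompt)  # this text would then be sent to the chosen LLM
```

Because the transcription model is injected as a parameter, the text-handling half of the pipeline can be exercised without loading Whisper at all, which is handy for quick experiments.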
ChatRTX is a completely free application that requires an RTX 30 or 40 series graphics card (including mobile versions) with at least 8 GB of VRAM, 16 GB of system RAM and no less than 100 GB of free storage space. If your system meets these requirements, you can download it from its website.
Source: Muy Computer
Donald Salinas is an experienced automobile journalist and writer for Div Bracket. He brings his readers the latest news and developments from the world of automobiles, offering a unique and knowledgeable perspective on the latest trends and innovations in the automotive industry.