
Nvidia steps into the conscience of chatbots with NeMo Guardrails

  • April 25, 2023

Nvidia launches NeMo Guardrails: an open-source toolkit that keeps chatbots on track, admitting when they don’t know and remaining silent when they shouldn’t.

Nvidia presents NeMo Guardrails, a toolkit that makes it relatively easy to program behavioral rules for chatbots. Nvidia is releasing NeMo Guardrails as open source from day one, with the code available via GitHub. The company hopes developers everywhere will build on it so that chatbots become as secure and reliable as possible, as quickly as possible.

“As a user, you don’t normally speak directly to a large language model, but through a toolkit in between,” explains Jonathan Cohen, Vice President of Applied Research at Nvidia, who introduced us to NeMo Guardrails ahead of the launch. “LangChain is such a popular toolkit. NeMo Guardrails sits between the user and the LLM or a similar toolkit.”

Three domains

NeMo Guardrails is designed to steer chatbots in three areas:

  • Topics: NeMo can restrict which topics a chatbot is allowed to talk about. For example, a company may prohibit an LLM from delving into competing products.
  • Safety: NeMo Guardrails counters hallucinations, where a chatbot invents an answer with no factual basis. The toolkit is also designed to block dangerous answers and misinformation.
  • Security: the open-source toolkit ensures that an LLM cannot simply be linked to arbitrary external applications.

Colang

NeMo Guardrails is easy to program using Colang, a language developed by Nvidia. It closely resembles English and is used to tell NeMo what a chatbot can and cannot do. Nvidia gives the example of an HR chatbot within its own company. It can only answer personnel questions. To prevent the chatbot from providing information about Nvidia’s financial results, which may even be incorrect, it is enough to provide a few example questions on the subject and link them to an example of a refusal.

If you ask a question about the quarterly results, the chatbot will say something like, “Sorry, I’m an HR bot, I can’t answer this question.” The same goes for all kinds of unauthorized or malicious questions. You don’t have to spell out every unauthorized prompt in Colang; a couple of examples suffice.
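The HR-bot example above can be sketched in Colang roughly as follows. This is a hedged illustration: the intent, message, and flow names are invented for this article rather than taken from Nvidia’s actual configuration, and the exact syntax may differ between NeMo Guardrails versions.

```
# A user intent, defined by a couple of example questions.
define user ask about financial results
  "What were the quarterly results?"
  "How much revenue did Nvidia make last quarter?"

# The canned refusal the bot should give.
define bot refuse financial question
  "Sorry, I'm an HR bot, I can't answer this question."

# The flow linking the intent to the refusal.
define flow financial results rail
  user ask about financial results
  bot refuse financial question
```

The point of the design is that the two example questions stand in for the whole topic: paraphrases of them are matched automatically, so the rule writer never has to enumerate every forbidden prompt.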

An LLM for your LLM

So NeMo is quite intelligent. That’s because the toolkit itself works through an LLM trained for the job. “Right now, one of the best ways to verify an LLM is to use another LLM to check the answers,” clarifies Cohen. “If a chatbot has an answer ready for a question, NeMo can ask a fact-check LLM whether that answer is correct based on the available company information.” According to Cohen, this system works quite well.
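The fact-check pattern Cohen describes can be sketched in a few lines of Python. This is a minimal, hypothetical sketch: `check_llm` stands in for a real call to a second LLM, and the prompt wording and refusal text are invented for illustration.

```python
def fact_check(question: str, draft_answer: str, check_llm, company_info: str) -> str:
    """Return the draft answer only if a second LLM says it is grounded.

    check_llm is any callable that takes a prompt string and returns a
    string verdict -- a stand-in for a real fact-check LLM.
    """
    prompt = (
        "Based only on the company information below, is the answer to the "
        "question factually supported? Reply YES or NO.\n\n"
        f"Information: {company_info}\n"
        f"Question: {question}\n"
        f"Answer: {draft_answer}"
    )
    verdict = check_llm(prompt).strip().upper()
    if verdict.startswith("YES"):
        return draft_answer
    # The unverified draft never reaches the user.
    return "Sorry, I can't verify that answer."
```

The key design point is that the user only ever sees output that survived the second model's check; an unverified draft is replaced by a refusal.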

As a user, you speak to NeMo and not directly to the LLM. NeMo verifies the answers, but also ensures that malicious questions never reach the LLM. In the HR chatbot example, you as an employee can ask whether Nvidia can support employees who are looking to adopt. This legitimate question goes to the LLM, which returns a factual response. If you ask how many Nvidia employees have already adopted, NeMo blocks the question. You are told that the chatbot cannot share the answer. In fact, it is NeMo communicating the rejection, not the underlying chatbot itself.
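That gatekeeping flow can be sketched as follows. This is a simplified stand-in: real NeMo Guardrails matches incoming questions against the example prompts using embeddings and an LLM, whereas this sketch uses plain `difflib` string similarity, and the blocked examples and refusal text are invented for the illustration.

```python
from difflib import SequenceMatcher

# Invented example questions for blocked topics -- as in the article,
# a few examples stand in for the whole topic.
BLOCKED_EXAMPLES = [
    "What were Nvidia's quarterly results?",
    "How many Nvidia employees have already adopted?",
]

def is_blocked(question: str, threshold: float = 0.6) -> bool:
    """Return True if the question resembles any blocked example."""
    return any(
        SequenceMatcher(None, question.lower(), example.lower()).ratio() >= threshold
        for example in BLOCKED_EXAMPLES
    )

def answer(question: str, llm) -> str:
    """Gatekeeper: blocked questions never reach the underlying LLM."""
    if is_blocked(question):
        # The refusal comes from the guardrail layer, not the chatbot.
        return "Sorry, I'm an HR bot, I can't answer this question."
    return llm(question)
```

Note that the refusal is produced by the guardrail layer itself; the underlying LLM never even sees the blocked question, mirroring the behavior the article describes.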

Not selfless, but very relevant

Cohen thinks NeMo Guardrails is a good system to protect chatbots of all kinds. “We’re making the toolkit open source because we believe better chatbots are a good thing for everyone,” he says. But we also see a bit of self-interest. Cohen: “The NeMo engine is modest and consumes little processing power, but if NeMo Guardrails has to request something from an LLM itself, there is a cost.” In other words, NeMo itself relies on inference by LLMs. And who would be the market leader in hardware for such workloads?

NeMo Guardrails seems to be a pretty complete solution that solves the biggest problems of today’s chatbots. By verifying LLM responses, preventing hallucinations, and narrowing down conversations to topics for which the underlying LLM has the right information, the technology suddenly becomes much more reliable. Also, it’s not a big hassle for programmers to implement the right tools for a chatbot via Colang.

NeMo Guardrails is now available via GitHub. Nvidia provides various templates that developers can use as a starting point.

Source: IT Daily
