May 12, 2025
Trending News

Microsoft announces new Azure AI tools to improve LLM quality and security

  • March 29, 2024
  • 0

Microsoft is launching five new Azure AI tools to improve the security and quality of LLMs. Microsoft recently unveiled its latest Azure AI tools in a blog to

Microsoft is launching five new Azure AI tools to improve the security and quality of LLMs.

Microsoft recently unveiled its latest Azure AI tools in a blog to help users build safer and more reliable generative AI applications. To this end, Microsoft has introduced five new AI tools, including Prompt Shields, Groundedness Detection, Security System Messages, Security Assessments, and Risk and Security Monitoring. These features aim to improve the security and quality of LLMs. Most of the new tools are currently in preview and will soon be generally available.

New AI tools

In addition to the significant challenge of prompt injection attacks, companies also have concerns about the security and reliability of their LLMs. Microsoft wants to eliminate these security risks and is bringing five new Azure AI tools onto the market. These new programs include:

  • Shields prompts: Prompt injection attacks bypass an AI system’s security measures to allow intruders to access sensitive information. To this end, Microsoft has introduced Prompt Shields, which detect and block suspicious inputs in real time before they reach the base model. This feature is currently in preview for Azure AI Content Safety and will be generally available soon.
  • Ground detection: Hallucinations in generative AI indicate erroneous output of the AI ​​model, ranging from minor inaccuracies to incorrect output. The new Ground detection identifies text-based hallucinations to support output quality.
  • Security system messages: These messages can guide your model’s behavior toward safe and responsible output. Microsoft will soon offer standard templates for such security system messages in the Azure AI Studio and Azure OpenAI Service playgrounds.
  • Safety ratings: These assessments assess the vulnerability of applications to jailbreak attacks and the emergence of content risks. This feature is available in preview.
  • Risk and security monitoring: This measures the sensitivity of applications to, for example, jailbreak attempts. This function also provides explanations of the evaluation results and provides appropriate solutions. This feature is currently in preview in the Azure OpenAI Service and will be generally available soon.

Source: IT Daily

Leave a Reply

Your email address will not be published. Required fields are marked *

Exit mobile version