
OpenAI and Google Bard will have a seal to identify AI-generated content

  • July 21, 2023


An important milestone has been reached in the field of artificial intelligence (AI): leading companies in the industry have made a voluntary commitment to ensure the safety and reliability of AI-generated content. Giants such as OpenAI, Alphabet and Meta lead the movement, alongside Anthropic, Inflection, Amazon and Microsoft, an OpenAI partner.



The announcement was made by the administration of US President Joe Biden, which sees the initiative as an important step towards regulating and mitigating the risks associated with artificial intelligence, a technology that has seen rapid growth in investment and popularity in recent years.

Generative AI, which uses data to create new content that can appear strikingly human, has caught the attention of policymakers around the world, who are looking for ways to prevent threats to national security and the economy arising from the misuse of this new technology.

In June, U.S. Senate Majority Leader Chuck Schumer called for “comprehensive legislation” to provide safeguards for the development and use of artificial intelligence. Along the same lines, the US Congress is considering a bill that would require disclosure of the use of AI in the creation of images and other content in political advertising.



To contribute to these regulatory efforts, President Joe Biden met with executives from seven companies at the White House, where they discussed the development of an executive order and bipartisan legislation focused on AI technology.

Watermark system

One of the major commitments made by the companies was to implement a watermarking system on all AI-generated content, including text, images, audio and video. This technical watermark should allow users to easily identify deepfake content that might, for example, depict fabricated scenes of violence, enable more sophisticated fraud schemes, or misrepresent public figures in a defamatory way.
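The companies have not said how these watermarks will work technically; approaches range from signed provenance metadata (such as the C2PA standard) to statistical marks embedded in a model's output. Purely as an illustration of the general idea, the Python sketch below hides a short, hypothetical provenance tag in the least-significant bits of an image's red channel using Pillow; the tag, function names and scheme are invented for this example and do not reflect any company's actual implementation.

```python
# Illustrative sketch only: a toy "invisible watermark" that hides a short,
# hypothetical provenance tag in the least-significant bits of an image's
# red channel. Real provenance schemes (signed metadata, model-side
# statistical watermarks) are far more robust than this.

from PIL import Image

TAG = "ai-generated:v1"   # hypothetical provenance tag, not a real standard
END = "\x00"              # terminator so the reader knows where the tag stops


def embed_tag(src_path: str, dst_path: str, tag: str = TAG) -> None:
    """Write each bit of the tag into the red-channel LSB of consecutive pixels."""
    img = Image.open(src_path).convert("RGB")
    pixels = img.load()
    bits = "".join(f"{byte:08b}" for byte in (tag + END).encode("utf-8"))
    if len(bits) > img.width * img.height:
        raise ValueError("image too small to hold the tag")
    for i, bit in enumerate(bits):
        x, y = i % img.width, i // img.width
        r, g, b = pixels[x, y]
        pixels[x, y] = ((r & ~1) | int(bit), g, b)   # overwrite the red LSB
    img.save(dst_path, "PNG")                        # lossless format keeps the bits


def read_tag(path: str) -> str:
    """Recover the tag by reading red-channel LSBs until the terminator byte."""
    img = Image.open(path).convert("RGB")
    pixels = img.load()
    out = bytearray()
    byte = 0
    for i in range(img.width * img.height):
        x, y = i % img.width, i // img.width
        byte = (byte << 1) | (pixels[x, y][0] & 1)
        if i % 8 == 7:
            if byte == 0:          # hit the terminator
                break
            out.append(byte)
            byte = 0
    return out.decode("utf-8", errors="replace")
```

A naive scheme like this is easily destroyed by re-encoding or cropping the image, which is precisely why the robustness and visibility questions discussed below matter.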

How visible these watermarks will remain once content is shared is still an open question. Even so, the initiative aims to ensure transparency in the use of AI and to increase users' confidence in determining the origin of such content.



The companies also pledged to focus on protecting user privacy during AI development and to ensure that the technology is impartial and non-discriminatory towards vulnerable groups. Other commitments include developing AI solutions for scientific challenges such as medical research and climate action.

This collaborative effort by leading AI companies represents a significant step towards self-regulation and an ethical approach to the development and use of AI. The impact of these measures on the future of AI and cybersecurity will be watched closely as the technology continues to shape our world in ever more striking ways.


Source: Reuters.


Source: Mundo Conectado
