Meta is taking a big step in AI imaging

November 20, 2023

Two new AI tools are available at Meta. Both are based on the Emu Foundation model, which was launched earlier this year.

As one of the best-known names in technology, Meta cannot afford to be left behind in the AI race (just think of the virtual training rooms for AI robots). The company has now launched two tools based on the Emu foundation model introduced in September. That model also powers the AI assistant Meta shared with the world at the time.

Emu Video

Emu Video lets users generate videos from text prompts using diffusion models. This video-generation infrastructure responds to several forms of input:

  • Text
  • Image
  • Text and image together

Meta’s research team split the process into two parts: first an image is generated from a text prompt, then a video is generated conditioned on both the text and that image. This factorized approach lets researchers train video generation models more effectively.

Only two diffusion models are now required to produce a four-second video at 16 frames per second. The model can also animate existing images via a text prompt.
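The factorized pipeline described above can be sketched in a few lines. This is a toy illustration only: the function names, the `Frame` type, and the stubbed "diffusion" steps are assumptions for clarity, not Meta's actual API. The frame count follows the figures from the article (four seconds at 16 fps).

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Frame:
    """Stand-in for one generated image/frame (illustrative only)."""
    source_text: str
    index: int

def generate_image(prompt: str) -> Frame:
    """Stage 1: text-to-image diffusion (stubbed, hypothetical)."""
    return Frame(source_text=prompt, index=0)

def generate_video(prompt: str, key_image: Frame,
                   seconds: int = 4, fps: int = 16) -> List[Frame]:
    """Stage 2: condition on both the text and the stage-1 image
    to produce the video frames (stubbed, hypothetical)."""
    return [Frame(source_text=prompt, index=i)
            for i in range(seconds * fps)]

image = generate_image("a red panda surfing")
video = generate_video("a red panda surfing", image)
print(len(video))  # 64 frames = 4 s x 16 fps
```

The point of the two-stage split is that each stage is a simpler, separately trainable problem than direct text-to-video generation.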

Emu Edit

This tool was designed around the observation that text prompts often take a while to achieve exactly what you envision: it is common to adjust a prompt several times before an AI image generator shows you what you want.

With Emu Edit, Meta wants to streamline this process. You can make general or very specific adjustments to an image: change backgrounds, adjust geometric shapes, or play with colors.

Meta’s goal is to adjust only the pixels relevant to the requested edit, which should make Emu Edit significantly more precise than other models. For example, if you add text to an object, the model leaves the object’s own pixels completely untouched.

Meta trained its model with more than ten million synthesized samples, each containing:

  • An input image
  • The task in question
  • The intended image
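The triplet above maps naturally onto a simple record type. A minimal sketch, assuming hypothetical field names (Meta has not published its actual schema):

```python
from dataclasses import dataclass

@dataclass
class EditSample:
    """One synthesized training sample, as described in the article.
    Field names are illustrative assumptions, not Meta's schema."""
    input_image: bytes   # the input image
    instruction: str     # the editing task in question
    target_image: bytes  # the intended (edited) image

sample = EditSample(
    input_image=b"<input pixels>",
    instruction="add a hat to the dog",
    target_image=b"<edited pixels>",
)
print(sample.instruction)
```

Training on (input, instruction, target) triplets is what lets an instruction-following editor learn which pixels an edit should and should not touch.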

Responsibly creative

According to Meta, the possibilities span a spectrum of creativity, from personal animated stickers to polished GIFs.

Editing videos and images requires no technical knowledge, and even animating photos is possible. Meta stresses that professional graphic designers need not worry: this technology is meant to support them, not replace them.

The question remains how this research will develop. Over the weekend it was announced that Meta had disbanded its RAI (Responsible AI) team and redistributed its members across other AI projects. That team was tasked with monitoring the negative impact of the technology during development.

Source: IT Daily
