Google has a plan to improve Meet’s portrait mode: use your PC’s GPU

  • September 5, 2022

If you often hold meetings in Google Meet, an application that has returned to its pre-pandemic limits, you surely know the different effects it offers. Among them: applying portrait mode in real time, or isolating yourself against an artificial background as if you were in front of a chroma key. Google knows how to leverage its artificial intelligence for this kind of function, but it is not just a matter of software.

With the latest version of Google Meet, Google wants to draw more power from your GPU to improve the result. It achieves this by combining its own artificial intelligence with WebGL, a standard that lets the browser render graphics on the GPU.
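As a rough illustration (a minimal sketch, not Meet's actual code), any web app that wants to take this route first has to ask the browser for a WebGL2 context, its doorway to the GPU:

```ts
// Minimal sketch: request a WebGL2 context. The names and the fallback
// message are illustrative, not taken from Meet.
const canvas = document.createElement("canvas");
const gl = canvas.getContext("webgl2");
if (gl === null) {
  // Without WebGL2, segmentation would have to stay on the CPU,
  // the more limited path the article describes below.
  console.warn("WebGL2 unavailable; falling back to CPU segmentation");
}
```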

Your GPU, at the service of Google Meet

Google wants better image-segmentation quality in Meet, and it is using your computer's GPU to get it. The latest Meet update changes the model accordingly: from now on, a new segmentation model works on high-definition (HD) input images instead of the low-resolution images used previously.

In order not to consume excessive resources, low-performance cores of the GPU are used.

To avoid consuming excessive resources on your computer, Meet relies on the GPU's low-power cores, which are ideal for running high-resolution convolutional models. Until now, the PC's CPU was used in conjunction with Google's real-time AI models, but that component was more limited for HD segmentation calculations.

As Google explains, getting the GPU to deliver its full performance for image segmentation is not that easy. It says that, right now, it can only reach about 25% of the raw performance that native OpenGL offers. This is because WebGL, the standard that lets websites draw with your GPU, was designed for basic image rendering, not for the heavy workloads generated by a machine learning model.
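To picture why a rendering API can run a neural network at all, here is a hedged sketch of the usual trick: each layer becomes a fragment shader pass that reads the previous layer from a texture and writes its result into another texture. The 3x3 kernel and uniform names below are hypothetical; Google has not published Meet's shaders.

```ts
// Hypothetical 3x3 convolution expressed as a WebGL2 fragment shader
// (GLSL ES 3.00), embedded in TypeScript as a string.
const conv3x3FragmentShader = `#version 300 es
precision highp float;
uniform sampler2D u_input;  // previous layer's activations, packed into a texture
uniform float u_kernel[9];  // 3x3 convolution weights (assumed for brevity)
out vec4 outColor;
void main() {
  ivec2 p = ivec2(gl_FragCoord.xy);
  vec4 sum = vec4(0.0);
  // Accumulate the 3x3 neighborhood (border handling omitted).
  for (int dy = -1; dy <= 1; dy++) {
    for (int dx = -1; dx <= 1; dx++) {
      sum += texelFetch(u_input, p + ivec2(dx, dy), 0)
           * u_kernel[(dy + 1) * 3 + (dx + 1)];
    }
  }
  outColor = max(sum, vec4(0.0)); // ReLU activation
}`;
```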

The key to overcoming this limitation is called MRT (Multiple Render Targets), a feature of current GPUs. It reduces the bandwidth Google's neural network requires and delivers, as promised by Google, up to 90% of OpenGL's native performance.
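In WebGL2 terms, MRT means attaching several textures to one framebuffer so that a single shader pass writes all of them at once, rather than re-running (and re-reading its inputs) once per output. A minimal sketch, with hypothetical texture names:

```ts
// Minimal MRT setup in WebGL2; texA/texB are assumed to be already-created
// textures holding intermediate tensors of the network.
function setupMRT(
  gl: WebGL2RenderingContext,
  texA: WebGLTexture,
  texB: WebGLTexture
): WebGLFramebuffer {
  const fbo = gl.createFramebuffer();
  if (!fbo) throw new Error("failed to create framebuffer");
  gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
  // Attach two textures as parallel color outputs of the same framebuffer.
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, texA, 0);
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT1, gl.TEXTURE_2D, texB, 0);
  // One draw call now fills both attachments in a single pass.
  gl.drawBuffers([gl.COLOR_ATTACHMENT0, gl.COLOR_ATTACHMENT1]);
  return fbo;
}
```

In the fragment shader, each output is then declared with `layout(location = N) out vec4 ...`, one declaration per attachment.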

Source: Xataka
