Artificial intelligence technologies have made significant progress and recently gained serious popularity. Works created with systems like DALL-E and Midjourney have taken over internet media. On top of that, there has been an explosion of apps that create avatars of people, and no matter which platform we visit, we see these AI-generated images appearing.
One of them is Lensa AI, made by Prisma Labs. This paid application has a feature called ‘Magic Avatars’, which allows users to create avatars that can truly be described as magical. However, it has also caused some problems that point to a broader issue with artificial intelligence. Let’s take a look at that problem.
First of all, what is this Lensa AI, and how does it work?

Lensa AI is an app made by Prisma Labs, as we mentioned. Although Lensa has been topping the app stores lately, it has actually been around since 2018; it was the “Magic Avatars” feature, launched earlier this month, that made it popular. Available in Turkey at prices starting from 88 TL, this AI-powered feature turns the photos you upload into unique avatars with different themes (such as futuristic or anime characters).
Lensa is powered by Stable Diffusion, an open-source deep learning model that generates images from text and draws on a database of works scraped from the internet. This database, called LAION-5B, contains 5.85 billion text-image pairs filtered by a neural network called CLIP. Stable Diffusion, which was released in 2022, also received a new version last month.
If you would like to learn more about Lensa AI and how to use it, you can check out our content below:
Lensa AI reportedly creates avatars that objectify women

Although artificial intelligence systems are advanced, they still have some problems. The best-known example is that text-to-image AIs struggle to render hands and fingers. According to Polygon, however, the situation with Lensa is somewhat different.
According to recent reports, Lensa AI sexualizes women and produces some racist results. Many social media users reported that the app added large breasts and cleavage to images of women even though nothing of the sort was requested, and that nude images were even generated.
Olivia Snow of the University of California also wrote in Wired that even when she uploaded photos from her own childhood, the app continued to turn her into a sexual object. When the app’s owners were asked about this, the team stated that Snow had deliberately violated the terms of use: Lensa’s policy prohibits the use of images of children and prohibits nudity, and users must also certify that they are over the age of 18.
According to users, there are even racist results and avatars containing nudity
In addition, the app reportedly reproduces racist stereotypes in its avatars. For example, in the images above, created by a user from their own photos, it can be seen that the artificial intelligence has a hard time with Asian people.

Beyond this, it has also been reported that Asian women are fetishized. Melissa Heikkilä, an Asian journalist writing in MIT Technology Review, shared her experience with the Lensa app: the avatars created from her own photos contained a large amount of nudity. She noted that her colleagues were far less likely to find themselves in such a situation.
It has also recently been revealed that this artificial intelligence can be tricked into generating nude images. A report shared by TechCrunch showed that users can easily generate nude photos with the app, and even do so with images of celebrities. On December 13, Prisma Labs stated that new features would make such problematic avatars less common.
“Artificial intelligence is a reflection of our society”

The events around Lensa show that artificial intelligence still has a long way to go. As is well known, such negative situations have occurred many times before in these systems. A few years ago, Microsoft shut down its AI chatbot Tay over racist and misogynistic remarks, and Meta recently shut down a language model for similar reasons.
A study conducted in June found that systems trained with the CLIP network, which we also saw in Lensa, reproduce gender and racial stereotypes. It also emphasized that these systems are worse at recognizing women and people of color.
We will see how artificial intelligence systems, including Lensa, deal with this problem in the future. Prisma Labs, the maker of the app, emphasizes that Stable Diffusion is trained on unfiltered data from the internet and says that neither they nor Stability AI (the maker of Stable Diffusion) can do much about it. The company also said this data introduces humanity’s existing biases into the model, adding that artificial intelligence holds up a mirror to our society.