Google recently announced changes to its Google Play policies: apps that allow content creation with artificial intelligence will face much stricter rules. The aim is to prevent misuse of these applications' visual and audio capabilities, especially in apps that open the door to deepfakes.
The company warns developers that, in addition to preventing their apps from generating restricted content, they must give users a way to report or flag offensive content they encounter.
The rules cover AI chatbot apps with text-to-text interactions, text-to-image apps, voice-to-image and image-to-image tools, image generators, AI-created voice recordings... in short, any application with generative AI capabilities.
Apps that merely host AI-generated content without the ability to generate it themselves, as well as apps that only summarize content that is not AI-generated, will not be subject to these terms.
Google also wanted to clarify what kinds of content these apps must not generate:
- Audio or video recordings of real people that facilitate fraud.
- Content created to encourage harmful behavior (e.g. dangerous activities, self-harm).
- Election-related content that is clearly misleading or false.
- Content created to facilitate bullying and harassment.
- Generative AI applications whose primary purpose is sexual gratification.
- AI-generated official documents that enable dishonest behavior.
- Creation of malicious code.
While these may seem like obvious rules, a little extra order never hurts in the Play Store jungle. Apps will need to comply with Play Store regulations, be mindful of what they can produce, and fulfill their obligation to let users report inappropriate content.
Image | Xataka
In Xataka | Google Play's new section: how can you delete your data and user accounts from your Android applications?