YouTube wants to improve comment moderation, spam and live chat bots
- December 16, 2022
YouTube has announced a plan to crack down on spam and offensive content in comments and live chats. The video service will warn users when it determines that comments they posted on the platform violate the Community Guidelines and have therefore been removed.
Google believes that its comment removal warnings will deter users from posting abusive comments and reduce repeat violations of its guidelines. If a user continues the same behavior, however, they may be blocked from posting comments for up to 24 hours, until the timeout expires.
According to the company, the results of comment removal notices and timeouts during testing were encouraging, helping to protect creators from users trying to harass them. The new notification system is currently only available for English-language comments; Google plans to bring it to more languages in the coming months. The company is also asking users to provide feedback if they believe their comments were flagged in error.
In addition to this change, Google has also improved spam detection in comments and says it removed more than 1.1 billion spam comments in just the first six months of 2022. It has also improved its detection of spam bots to keep them out of live chats.
Of course, YouTube will do it all with bots, which will now have the power to issue time limits to users and immediately remove comments deemed offensive. And it won’t be easy. Moderating comments on YouTube often seems like an impossible task, to the point that many sites simply turn off comments altogether because they don’t want to deal with it. Moderating live chat is even more difficult because even if you catch an offensive message quickly after it’s sent, the very nature of the medium means the damage has likely already been done.
Bots are a scalable way to solve this problem, but Google's auto-moderation record on YouTube and the Play Store is pretty rough, and there are several well-known examples. It labeled a horror channel as "kid-friendly" because it featured animation, and disabled the Google Play video player because the subtitle files used the ".ass" extension, which is also a swear word. The Play Store also regularly bans chat apps, Reddit apps, and podcast apps because, like a browser, they have access to user-generated content, and sometimes that content is inappropriate.
YouTube does not appear to involve channel owners in any of these moderation decisions. Note that YouTube's announcement of improvements to comment moderation says the platform will notify the comment's author (not the channel owner) of an automatic removal, and that users can "send feedback" to YouTube if they disagree with an automatic comment removal.
The "send feedback" link on many Google services is a black hole of help pages with no visible moderation queue, so it is unclear whether a human will be on the other end to respond to a dispute. YouTube notes that this automatic moderation will only remove comments that violate the Community Guidelines, a list of relatively straightforward content bans. We'll see.
Source: Muy Computer
Alice Smith is a seasoned journalist and writer for Div Bracket. She has a keen sense of what’s important and is always on top of the latest trends. Alice provides in-depth coverage of the most talked-about news stories, delivering insightful and thought-provoking articles that keep her readers informed and engaged.