TikTok will automate video removal for nudity and violence
TikTok will use automation to detect and remove many videos that violate its policies. Over the past year, the service has tested and refined the system to find and take down such content, and it will roll the system out in the US and Canada over the next few weeks.
To start, the algorithm will look for posts that violate policies related to minor safety, violence, graphic content, nudity, sex, illegal activities and regulated goods. If the system detects a violation, it will remove the video immediately, and users who posted it can appeal. Users can still flag videos for manual review too.
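The flow described above can be sketched as a small state machine: automatic removal on a detected violation, with an appeal path for the poster and a manual-review flag for other users. This is a hypothetical illustration; the category names and all function names are assumptions, not TikTok's actual system.

```python
# Hypothetical sketch of the moderation flow described in the article.
# Nothing here reflects TikTok's real implementation.
from dataclasses import dataclass

# Policy categories the article says the automated system targets
POLICY_CATEGORIES = {
    "minor_safety", "violence", "graphic_content",
    "nudity", "sex", "illegal_activity", "regulated_goods",
}

@dataclass
class Video:
    video_id: str
    detected_category: str = ""   # set by an (assumed) upstream classifier
    removed: bool = False
    appealed: bool = False
    flagged_for_manual_review: bool = False

def auto_moderate(video: Video) -> Video:
    """Remove the video immediately if a violation was detected."""
    if video.detected_category in POLICY_CATEGORIES:
        video.removed = True
    return video

def appeal(video: Video) -> None:
    """The poster can appeal an automatic removal."""
    if video.removed:
        video.appealed = True

def flag_for_review(video: Video) -> None:
    """Other users can still flag any video for manual review."""
    video.flagged_for_manual_review = True

clip = auto_moderate(Video("v1", detected_category="nudity"))
appeal(clip)  # removed automatically, then appealed by the poster
```

The key property the article describes is that removal happens before any human looks at the video, with the appeal serving as the human check after the fact.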
Automatic removal will be “reserved for content categories where our technology has the highest degree of accuracy,” TikTok said. Only 1 in 20 videos removed automatically is a false positive that should have remained on the platform, according to the company. TikTok hopes to improve the algorithm’s accuracy further, and notes that “requests to appeal a video’s removal have remained consistent.”
TikTok says automation will free its safety staff to focus on content that requires a more nuanced approach, such as videos involving bullying, harassment, misinformation and hateful behavior. Just as importantly, the system can reduce the number of potentially distressing videos the safety team has to watch, such as those containing extreme violence or child exploitation. Facebook, for one, has been accused of not doing enough to protect the wellbeing and mental health of the content moderators tasked with reviewing often-disturbing material.
Elsewhere, TikTok is changing how it notifies users when they break the rules. The platform now tracks the number, severity and frequency of violations. Users will see details about their account standing in their inbox, along with information about the consequences of their actions, such as how long they are suspended from posting or engaging with other people’s content.
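The account-standing system described above, tallying the number, severity and recency of violations and deriving a consequence, could be sketched like this. The severity weights, time window and thresholds are all invented for illustration; TikTok has not published how its scoring works.

```python
# Hypothetical sketch of account-level violation tracking: tally number,
# severity, and recency of violations, then map the total to a consequence.
# All weights and thresholds below are assumptions for illustration only.
from datetime import datetime, timedelta

SEVERITY_WEIGHTS = {"minor": 1, "moderate": 3, "severe": 10}

class AccountStanding:
    def __init__(self) -> None:
        self.violations: list[tuple[datetime, str]] = []

    def record(self, when: datetime, severity: str) -> None:
        """Log a violation with its timestamp and severity level."""
        self.violations.append((when, severity))

    def score(self, now: datetime, window_days: int = 90) -> int:
        """Sum severity weights over a recent window, so frequency
        and recency both raise the score."""
        cutoff = now - timedelta(days=window_days)
        return sum(SEVERITY_WEIGHTS[s] for t, s in self.violations if t >= cutoff)

    def consequence(self, now: datetime) -> str:
        """Map the current score to a restriction (thresholds invented)."""
        s = self.score(now)
        if s >= 10:
            return "posting suspended"
        if s >= 4:
            return "commenting restricted"
        return "no restriction"

now = datetime(2021, 7, 1)
account = AccountStanding()
account.record(now - timedelta(days=5), "moderate")
account.record(now - timedelta(days=2), "moderate")
print(account.consequence(now))  # two recent moderate violations
```

Keeping timestamps rather than a bare counter is what lets a system like this weigh frequency and recency, not just the raw number of violations.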