TikTok Is Trying a New Tactic to Punish Policy Offenders


The TikTok app is planning to create an ‘Account status’ page that lets creators and users see what kinds of violations they’ve received.
Image: TikTok

TikTok has potential bans weighing heavily on its mind, so much so that it's planning to completely remodel how it decides to ban accounts that violate its policies.

Julie de Bailliencourt, TikTok’s global head of policy, wrote in a Thursday blog post that the platform was shaking up its content moderation policy, letting creators and users know what sorts of violations they might have on record.

Under the new system, users accrue strikes for posting, commenting, or otherwise acting on the platform in a way that violates TikTok's guidelines. Getting a strike means the offending content is removed, but it doesn't necessarily mean a temporary or permanent ban. There are separate strike thresholds for comments, livestreams, and posts. Users who reach a threshold of strikes "will be permanently banned," de Bailliencourt wrote.

The strike threshold depends on the kind of violation. Sharing spam, for instance, is considered less harmful than promoting hate speech on TikTok. The blog post does not specify the threshold for each kind of violation, though more severe violations will still earn users permabans. Gizmodo reached out to TikTok parent company ByteDance for clarification.

Promoting violence or spreading child sexual abuse material will get you an instant ban, just as before. Any accounts that accrue strikes across multiple policies will also get permabans. Users will be notified if they’re getting close to any kind of temporary or permanent ban.

Users will soon be able to see what kinds of violations they’ve received via an upcoming Safety Center tab with an “Account status” page viewable in the app. This will also allow users to appeal strikes. Strikes expire from an account after 90 days.

Social media moderation is never easy, but TikTok's moderation policy has long been a confusing mess. The platform would temporarily ban some users from posting or commenting for certain infractions, though in the past the AI systems meant to detect wrongdoers have been overzealous. A couple of years ago, TikTok blocked accounts that used "Black Lives Matter" in their bios, which the company blamed on changes to the TikTok Creator Marketplace. More recently, the social app has also had to deal with a huge wave of pornographic content being recommended in users' feeds. Last year, the company introduced an "adults-only" option and restricted livestreaming to those 18 or older to try to cut down on the amount of inappropriate content seen by young people.

On the flip side, repeat offenders often break the same rules over and over. According to TikTok, almost 90% of them violate using the same feature, and over three quarters of those violate the same policy category.

YouTube, one of TikTok's biggest competitors in short-form video, also uses a strike system. Facebook and Instagram have a similar strike system that increases account restrictions the more strikes a user racks up, and it weighs some infractions more heavily than others. That system has proved problematic, as users have often been allowed to break Facebook Marketplace's own rules on gun sales multiple times before seeing any sort of ban.

De Bailliencourt wrote that TikTok's current moderation system has caused consternation among some creators, some of whom have complained that they don't know whether they violated a policy. Users have also complained about not knowing which of their videos are ineligible for recommendation. The policy head said TikTok is testing a new feature that will tell creators when a video is no longer being recommended in feeds.

Of course, TikTok itself has been staring down the barrel of its own bans as of late, due to its alleged affiliations with China’s ruling party. In December, the U.S. House of Representatives banned the app from government devices. While congressional Republicans have been the most zealous proponents of wider bans for the ByteDance-owned app, on Tuesday Democratic Senator Michael Bennet also called on Apple and Google to remove the app from their app stores, citing national security risks.


