Instagram announced this week that it would expand its approach to content moderation to cover more types of bad user behavior and to warn users earlier about the penalties for their misconduct.
The social network’s current policy bans accounts with “a certain percentage of violating content,” such as nude photos, abuse of other users, libel, or posts depicting illegal activity. The updated policy will instead ban accounts “with a certain number of violations within a window of time.”
Instagram declined to share specific numbers on the timing and number of posts that would provoke a ban, citing a desire to avoid giving bad actors roadmaps of how to skirt the rules.
The new moderation system aims to close a loophole Instagram employees noticed: an account with 10 posts, eight of which violated Instagram’s terms, would be penalized the same as an account with 1,000 posts, 800 of which were in violation, an Instagram spokesperson told The Daily Beast. Under that system, prolific users could essentially dilute their bad behavior and skirt the social network’s rules.
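The dilution problem can be sketched with a toy comparison. The threshold values below are hypothetical; Instagram has declined to disclose its actual numbers:

```python
# Toy comparison of a percentage-based policy vs. a count-in-a-window policy.
# Both thresholds here are made-up illustrations, not Instagram's real values.

def banned_by_percentage(total_posts, violating_posts, threshold=0.5):
    """Old-style policy: ban when the share of violating posts crosses a threshold."""
    return violating_posts / total_posts >= threshold

def banned_by_count(violations_in_window, max_violations=5):
    """New-style policy: ban after a fixed number of violations in a time window."""
    return violations_in_window >= max_violations

# A small account with 8 violations out of 10 posts trips the percentage rule,
# but a large account can bury the same 8 violations under 1,000 posts.
print(banned_by_percentage(10, 8))    # True: 80% violating content
print(banned_by_percentage(1000, 8))  # False: 0.8% violating content
# Under the count rule, 8 violations in the window is a ban either way.
print(banned_by_count(8))             # True
```

The count-based rule makes the penalty depend only on how often a user violates the rules, not on how much other content they post alongside the violations.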
The new system will send users in-app notifications when their accounts are at risk of being disabled and allow them to appeal content deletions from that same notification. The alert will display the user’s history of removed posts. Under the old terms and conditions, users filed appeals through Instagram’s help center website, but many complained that both the ban and appeal processes were opaque.
The new moderation policy brings Instagram closer to that of Facebook, its parent company. The content warning system follows news earlier this month that Instagram would deploy an algorithmic tool to detect offensive comments and warn users before they are posted.
“From early tests of this feature, we have found that it encourages some people to undo their comment and share something less hurtful once they have had a chance to reflect,” Head of Instagram Adam Mosseri said in a blog post on that new anti-bullying feature.