Instagram Will Warn Users Before Posting Offensive Captions

In an effort to reduce cyberbullying on the app, Instagram has been rolling out new ways to moderate content. Users commonly see a screen covering photos and videos that warns the content may be sensitive and lets them choose whether or not to view it.

Earlier this year, Instagram also began to censor comments. Instagram states, “we started rolling out a new feature powered by AI that notifies people when their comment may be considered offensive before it’s posted. This intervention gives people a chance to reflect and undo their comment and prevents the recipient from receiving the harmful comment notification. From early tests of this feature, we have found that it encourages some people to undo their comment and share something less hurtful once they have had a chance to reflect.”

More recently, Instagram told Mashable that the app will be applying a similar feature to post captions. The article states, “Instagram confirmed to Mashable that even if they get a warning about a potentially offensive caption, they can still post it. But if it violates the platform’s terms of service, the post would be removed.” Instagram also states, “when someone writes a caption for a feed post and our AI detects the caption as potentially offensive, they will receive a prompt informing them that their caption is similar to those reported for bullying. They will have the opportunity to edit their caption before it’s posted.”

While this new feature does not restrict users from posting whatever caption they want, it does prompt them to reconsider before posting. The feature is currently available only in select countries and will be expanding globally.