Twitter is experimenting with a new moderation tool that will warn users before they post replies that contain what the company says is “harmful” language.
Twitter describes it as a limited experiment, and it will only appear for iOS users. In certain situations, a prompt will now pop up giving "you the option to revise your reply before it's published if it uses language that could be harmful," reads a message from the official Twitter Support channel.
The approach has been used by quite a few other social platforms before, most prominently Instagram. The Facebook-owned app now warns users before they post a caption with a message that says the caption "looks similar to others that have been reported." Prior to that change, Instagram rolled out a similar warning system for comments last summer.
It’s not exactly clear how Twitter is labeling harmful language, but the company does have hate speech policies and a broader Twitter Rules document that outlines its stances on everything from threats of violence and terrorism-related content to abuse and harassment.
Twitter says it won't remove something simply because it is offensive: "People are allowed to post content, including potentially inflammatory content, as long as they're not violating the Twitter Rules," the company says. But those rule sets do allow it to carve out exceptions to its otherwise broad speech policies.