NEW YORK: Twitter is planning an experiment in which users will see a prompt when they reply to a tweet using "offensive or hurtful language".
Twitter has been working on the change in an effort to clean up conversations on the social media platform, which has long faced pressure to curb hateful and abusive content.
Sunita Saligram, Twitter’s global head of site policy for trust and safety, said, “We are trying to encourage people to rethink their behaviour and rethink their language before posting because they often are in the heat of the moment and they might say something they regret.”
Under the change, when users hit "send" on a reply, they will be told if the words in their tweet are similar to those in posts that have been reported, and asked whether they would like to revise it. The platform's policies do not allow users to target individuals with slurs, racist or sexist tropes, or degrading content.
Twitter took action against almost 396,000 accounts under its abuse policies and more than 584,000 accounts under its hateful conduct policies between January and June of last year, according to its transparency report. The company will run the experiment globally, but only for English-language tweets.