Twitter experiment warns users to self-edit harmful replies
New Delhi: Twitter is experimenting with giving users a second chance to revise tweets and replies that contain harmful, abusive or hateful language before posting them, so they can avoid violating the platform’s policies.
Twitter said the feature is not the long-requested Edit button but a self-editing prompt aimed at curbing rampant harassment on the platform.
“When things get heated, you may say things you don’t mean. To let you rethink a reply, we’re running a limited experiment on iOS with a prompt that gives you the option to revise your reply before it’s published if it uses language that could be harmful,” Twitter Support (@TwitterSupport) tweeted on May 5, 2020.
Twitter describes it as a limited experiment, currently available only to iOS users.
The prompt will appear as a pop-up on replies that contain potentially harmful language, which Twitter’s AI/ML tools will attempt to detect before the reply is posted.
Twitter users have long been demanding an Edit button so they can correct tweets after they have been posted.
Twitter CEO Jack Dorsey first addressed the possibility of adding an edit feature for tweets in December 2016, in response to user suggestions.
Back in 2018, while visiting India for Twitter’s pre-election campaign, Dorsey was asked why Twitter does not have an edit button. He replied: “The reason Twitter does not have an ‘edit’ button is because people may change their opinions by editing the original tweet, and people who don’t agree with the original view may have already retweeted the tweet, which is not an accurate representation of what they believe.”