Glitch’s Response to Twitter’s Updated Hateful Conduct Policy

What’s new

Last September, Twitter expanded its Hateful Conduct Policy with the introduction of a Dehumanisation Policy. This new set of rules aims to prohibit “language that treats others as less than human”, denying their human nature or their human qualities, and it includes an extensive list of the identifiable groups that could be victimised. However, the policy only protects individuals when they have been dehumanised based on “a membership in an identifiable group”. While we welcome Twitter’s effort to seek feedback from global perspectives and to acknowledge the different impacts on cultures and communities around the world, an issue arises when a person’s group membership is not clear from the nature of the dehumanising insult: for example, when someone directly compares another person to an animal.

Twitter’s consultation lasted just 14 days, a window too short for many to take part in and one that limited the scope of the feedback received. Without an adequate opportunity to participate, the purpose of the consultation is undermined. While we believe the policy is a step forward in ending online abusive behaviour, it is not enough.

Our recommendations

    • Publish more information on the implementation of Twitter’s current hateful conduct and new dehumanisation policies
    • Greater engagement with the community, and communication of results and statistics
    • Clarity on the definition of an “abusive tweet”
    • Transparency in enforcement, e.g. sharing the guidance given to moderators
    • Implementation of an accountability mechanism to secure enforcement
    • Expansion of the regulation for a more consistent policy and higher protection of individuals

Final Thoughts

There is still a large policy gap that social media companies must address. We can all help to fix the glitch, and we want to see real efforts to tackle online abuse. Read more of our recommendations here:


Glitch is a UK registered charity. Charity number: 1187714