Why India must be the centre for content moderation reform

17 May, 2020
Earlier this month, Facebook settled a lawsuit that requires the company to provide $52 million in funds to content moderators experiencing mental health issues. In that light, the next time you flick through your Facebook or Instagram feed, take a moment to realise what a miracle it really is that the content there tends to be fit for consumption. When social media platforms reached hundreds of millions of monthly active users (credit to Chamath Palihapitiya for discovering the metric), they brought a whole lot of humanity’s worst instincts online.

Think child sexual abuse, cannibalism, animal cruelty, and violence towards infants. Thanks to easy access to the web and the cheap affordances of smartphones, all of this is posted to platforms like 8chan, Facebook, and Instagram. There are two significant points of failure here. Firstly, most of this content has to be moderated by humans. This brings with it a whole host of mental health issues, best documented by Casey Newton of The Verge. In a detailed report on the issue, Newton writes, “It is an environment where workers cope by telling dark jokes about committing suicide, then smoke weed during breaks to numb their emotions”. In addition, desperate for “a dopamine rush amid the misery, (people) … have been found having sex inside stairwells”. A significant number of moderators for some platforms are contracted through companies such as Cognizant, Genpact, and Accenture. As standard practice, they are asked to sign non-disclosure agreements that require them not to reveal details about their work even to their own families. Moderators making hundreds of content decisions a day tend to suffer from post-traumatic stress disorder (PTSD). They often need a therapist, something that is neither provided nor insured.

Secondly, it is hard to maintain standards of free speech when platforms operate in more than a hundred countries. Not all countries are democracies, and even in democracies, there are differences in what is acceptable under the free speech umbrella. As a platform, if you maintain American standards of expression, you may well be blamed for not adapting sufficiently to your surroundings. At the same time, it is hard to build capacity and comply with speech standards in over 100 languages.

As a result, setting standards of expression is a dynamic process, with guidelines for what is acceptable being constantly updated. As an aside, this is partly why a Facebook ‘Supreme Court’ was constituted.

In light of all of this, let us try to make sense of the $52 million settlement, since there is a lot of fine print to cover. While the sum is the most significant acknowledgement yet by Facebook of how damaging content moderation can be for ‘employees’, it does not apply to moderators in all countries. Specifically, the lawsuit covers only people who have worked for Facebook through third-party vendors in the United States from 2015 until today (estimated to be 11,250 people).

While the sum itself is significant, I would argue that changes to how content moderation takes place are worth more. Learnings from this settlement should not be limited to the US, but instead applied globally, starting with India.

The reason I emphasise India is that two forces make content moderation in India a significant pain point. Firstly, India is one of the global capitals of the BPO industry. The critical reasons for that are its massive user base along with a population that speaks English as a second language. You may think that content moderators generally work out of dingy basements lit by computer screens. In reality, they are located in big corporate buildings in Gurugram.

Secondly, going to therapy is taboo in India, even among urban elites. So moderators facing mental health challenges may find it hard to speak about them, and their pleas may fall on deaf ears at home and at work.

According to The Verge, the settlement makes meaningful changes to content moderation tools that could help mitigate the mental health issues caused by the job. And it is these changes that India’s content moderation industry should be banking on. Some of these tools include changing videos to black and white and muting audio by default. Furthermore, the settlement includes increasing moderators’ access to mental health professionals. The latter includes not just counsellors (who are seen as more concerned with getting employees back to work than with their mental health), but also individual and group therapy sessions.

You could put a price tag on what it costs to keep platforms clean of harmful content. $52 million is a good starting point (and an underestimate). But the learnings that come out of this experience have the potential to be priceless, not only in terms of the money platforms could save in counselling costs, but in terms of protecting the people who undertake content moderation from the mental harm it causes.