Following years of accusations, Facebook-owner Meta has released its first annual human rights report.
The report covers how the company is addressing its human rights impacts including insights and actions from its due diligence on products, countries and responses to emerging crises.
Meta adopted its Human Rights Policy in 2021, and this inaugural report covers the period from 2020 through 2021.
In its summary, Meta noted ‘the potential for Meta’s platforms to be connected to salient human rights risks caused by third parties’, including ‘advocacy of hatred that incites hostility, discrimination, or violence’.
The assessment, it added, did not cover ‘accusations of bias in content moderation’.
The report highlights the role of end-to-end encryption on WhatsApp in protecting people’s privacy — particularly that of journalists and human rights defenders — and the company’s plans to extend encryption to its other messaging apps.
In the report, Meta said it was studying the recommendations from its human rights assessment of India but, unlike with other rights assessments, did not commit to implementing them.
India is Meta’s largest market globally by number of users and for years rights groups have raised alarms about anti-Muslim hate speech stoking tensions in the country.
Human rights groups including Amnesty International and Human Rights Watch have demanded the release of the India assessment in full, accusing Meta of stalling.
In 2020, Facebook’s top public policy executive in India stepped down following reports that she opposed applying the company’s rules to Hindu nationalist figures flagged internally for promoting violence.
Meta’s Human Rights team now comprises eight people, while about 100 others work on human rights issues across related teams.
Following its rebrand last year, the social media giant has prioritised its bet on the ‘metaverse’ this year, and said analysis of augmented and virtual reality technologies will be discussed in subsequent reports.