Fresh pledge to tackle online hate speech
The European Commission and four of the world's largest internet companies have agreed a new code of conduct to help halt the spread of hate speech online.
Twitter, Facebook, YouTube and Microsoft have all been involved in the creation of the code with the EC, which establishes protocols on how they should respond to - and help prevent - illegally inflammatory content.
Though the code isn't legally binding, it establishes a series of "public commitments", such as the requirement that companies review reports of hate speech in less than 24 hours and remove or disable access to the content if necessary.
What it covers
Vera Jourova, the EU Commissioner for Justice, Consumers and Gender Equality, says the code arose specifically as a response to the use of social media by terrorist groups hoping to recruit people to their causes.
However, its overarching purpose is to combat racism and xenophobia throughout Europe, following a rise in antisemitic, anti-immigrant and radical religious commentary online, such as that from Islamic State.
The code defines hate speech as "all conduct publicly inciting to violence or hatred directed against a group of persons or a member of such a group defined by reference to race, colour, religion, descent or national or ethnic origin".
This limited definition leaves out various other forms of online abuse, such as harassment on gender grounds or more generalised cyber bullying.
Indeed, the companies involved in the code's creation say that they already monitor and remove illegal content extending beyond its definition.
Facebook and Google both told Reuters that they reviewed the vast majority of reported content within 24 hours as standard.
Monika Bickert, head of global policy management at Facebook, says that they review millions of pieces of reported content each week, which are checked against Facebook's Community Standards.
Regarding hate speech, these standards specify that any content "that directly attacks people based on their race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, gender identity, or serious disabilities or diseases" will be removed.
Only a joke
Nevertheless, Facebook also state that they "allow humour, satire, or social commentary related to these topics", which can lead to difficult and sometimes upsetting decisions about what counts as offending content.
What one person may perceive as a joke, another may feel is a direct attack on their way of life.
The EC code makes no reference to this quandary, though; Twitter's Karen White says that "there is a clear distinction between freedom of expression and conduct that incites violence and hate."
In effect, the code makes official the internet companies' responsibility for deciding whether content is legal.
Some worry that the companies could become overzealous in the task, erring on the side of caution rather than risking criticism.
This could have implications for freedom of speech - our right to give our opinion publicly without fear of censorship or punishment.
The code recognises this issue, stating that freedom of expression covers "not only... 'information' or 'ideas' that are favourably received or regarded as inoffensive or as a matter of indifference, but also to those that offend, shock or disturb the state or any sector of the population".
Although the public are able to complain about content directly to the companies themselves - typically by using "report" buttons - others may choose to flag up content to civil society organisations, such as the Community Security Trust in the UK and France's SOS Racisme.
The code requires the four companies named above - Facebook, Twitter, Microsoft, and YouTube - to co-operate more fully with such organisations, though it doesn't specify which are involved.
The True Vision website - operated by the Association of Chief Police Officers - points out that we can also report illegal content to the hosting company and to the police.
Crimes on the internet are thought of as being committed at the location where the material was posted - but it is the "responsibility of the force where the complainant lives to commence enquiries".
And while the police can act in cases where a law has been broken - for example inciting hatred based on a person's race, religion or sexual orientation - they cannot act where it hasn't.
For example, it isn't currently illegal in England and Wales to incite hatred based on someone being disabled or transgender.
The code recognises that although some content isn't illegal, it can still be hugely offensive.
To this end, participating companies will be required to do more to "educate and raise awareness with their users about the types of content not permitted under their rules and community guidelines".
Whether this will inspire users to vet their own content before posting remains to be seen.