Opinions

How Social And Commerce Platforms Can Manage Self-Expression And Harmful Content

Companies are working in different ways to manage the thorny issue.

Maintaining freedom of self-expression while minimising harmful content is an ongoing challenge for social and commerce platforms. Whether you take down content or leave it up, someone, somewhere will have a different opinion.

Emotions run high, and the organisation’s attempt to maintain a balance gets lost in the passionate arguments for and against. To hold that balance, management must be robust and prepared to stand by their considered decisions.

At Spread Group, we have seen trolling, rude direct messaging, a couple of petitions and even major campaigns against decisions we have made. For that reason, it is important to have clear guidelines and a solid process.

So what can organisations do to manage this balance and create a procedure? We’d love to have the definitive answer, but it’s more of an ongoing process.

At Spread Group, we manage content on our ecommerce platform through the ‘many eyes’ of our community, a team of in-house trend specialists and new technologies. All of this also feeds into our community standards, which aim to prevent harmful content from appearing on the platform in the first place.

Social and ecommerce platforms are all facing similar challenges and deploying these tactics in various ways with different levels of success.

Twitter has used its teams along with trusted partners to try to stop the spread of harmful content, rather than taking it all down. With the American election approaching, the platform updated its approach in the summer of 2020. It started by adding ‘unsubstantiated’ labels to some tweets about COVID-19 conspiracy theories.

By the end of the year, it was adding ‘this claim is disputed’ labels to Donald Trump’s tweets about a stolen election. It also tried ‘pre-bunking’ false claims with a series of messages, and added more steps to the retweet process. The company believes this friction resulted in a 20% decrease in tweet sharing.

Facebook has turned to AI to identify and remove hateful content, introducing an automated detection system for harmful posts in 2020.

It reported a 134% increase in the number of hate posts it removed between the first and second quarters of 2020. It’s a similar story over on Instagram, where AI systems reportedly detected 84% of hate speech in Q2, compared to 45% in Q1.

The aim for Twitch is to make decisions about online behaviour based on content, rather than judgement calls about intention to cause harm. The company has introduced new policies this year to help moderators address harassment and hate speech, and has changed its guidelines to shape gamer behaviour on its platform.

Amazon seems to be doing less well. It has faced calls from users this year to take down products, including t-shirts carrying offensive messages about disability, ‘Blue Lives Murder’ products (still up) and face masks combining Trump, the American flag and Nazi symbols. The masks were taken down overnight after being highlighted on Twitter.

The balance they are all struggling with is to avoid stopping or delaying legitimate creations and comments while still preventing the unwanted and harmful. This is an issue for us at Spread Group too.

We’re a €131m ecommerce platform that enables users to express themselves on clothing and accessories. Users upload an average of 25,000 designs a day (roughly 10m in 2019).

Our standards and filters help to weed out harmful content before it is uploaded, and our in-house trends team also makes proactive and reactive checks on designs.
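
To make this concrete, here is a minimal sketch of what a pre-upload text filter can look like. It is a hypothetical illustration, not our actual system: the phrase lists and the check_design helper are invented for the example, and a production filter would also cover images, evasion tactics and far longer lists.

import re
import unicodedata

# Hypothetical phrase lists for illustration only; in practice these
# would be maintained centrally and updated as the trends team and
# the community flag new phrases.
BLOCKED_PHRASES = {"blue lives murder"}
REVIEW_PHRASES = {"confederate flag"}

def normalise(text: str) -> str:
    # Lower-case, strip accents and collapse spacing so simple
    # evasions ("BLuE   LiVeS ...") still match the lists.
    text = unicodedata.normalize("NFKD", text)
    text = "".join(c for c in text if not unicodedata.combining(c))
    return re.sub(r"\s+", " ", text.lower()).strip()

def check_design(design_text: str) -> str:
    # Returns 'reject', 'review' or 'accept' for a design's text.
    text = normalise(design_text)
    if any(p in text for p in BLOCKED_PHRASES):
        return "reject"   # clear community-standards violation
    if any(p in text for p in REVIEW_PHRASES):
        return "review"   # ambiguous: route to human reviewers
    return "accept"       # the vast majority of uploads

print(check_design("Team Strache 2020"))       # accept
print(check_design("BLUE  lives murder tee"))  # reject

A pass like this only catches the obvious cases; everything ambiguous still needs human judgement, which is why the trends team and the vigilance of the community remain central.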

Of the 10m designs a year, only a tiny fraction are rejected because they violate our Community Standards. Most designs are quickly uploaded and allow the campaign or shop to get on with spreading their message and the business of selling their merch.

In a year of elections, however, we responded to alerts from our community and took down the online shop of the prospective Mayor of Vienna, Heinz-Christian Strache.

Although the campaign’s Spreadshop was mostly bland ‘Team Strache’ t-shirts, his offline comments were considered to incite hatred and violence. (He didn’t win).

Similarly, we proactively remove designs in our Marketplace that support the German right-wing political party, the AfD (which does not have a shop). This is in response to statements their leaders have made in real life and in their manifesto.

Like Twitch, in 2020 Spread Group removed designs that included the American Confederate flag. Many regions across the world have local symbols of identity, and these emblems can be fairly neutral for years.

Then real-world events change and they become associated with a toxic viewpoint or violence. Our in-house team monitored this trend and saw that the violent events of 2020 had shifted the flag from inflammatory to harmful.

Reviewing words and images is one challenge; symbols, or even letters like Q, are another level of complexity. New technology, including AI, can sometimes help with this, as Facebook is finding.

Our internal team had previously decided that the QAnon theories were odd but didn’t contravene our community standards. However, in 2020 there were violent attacks, and we have taken down obvious designs referring to this group. We have not, though, banned the letter Q in general.
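
A toy example shows why banning a single letter is unworkable. In the same hypothetical sketch style as above, a rule that flags any ‘Q’ would catch countless innocent designs while missing a well-known QAnon slogan that contains no Q at all, so matching has to target whole phrases instead:

import re

# A naive rule: flag anything containing the letter Q.
NAIVE_Q = re.compile(r"q", re.IGNORECASE)

# A targeted rule: flag known slogans as whole phrases.
SLOGANS = re.compile(r"\bwwg1wga\b|\bqanon\b", re.IGNORECASE)

for text in ["BBQ Champion", "Quiz Night 2020", "WWG1WGA"]:
    print(text, bool(NAIVE_Q.search(text)), bool(SLOGANS.search(text)))

# BBQ Champion    -> naive: True (wrongly flagged), targeted: False
# Quiz Night 2020 -> naive: True (wrongly flagged), targeted: False
# WWG1WGA         -> naive: False (missed),         targeted: True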

This balancing act is an increasing challenge for social and ecommerce platforms. The UK and EU have been discussing laws that would require sites hosting user-generated content to ‘remove and limit the spread of illegal content’ or risk penalties of £18m or 10% of turnover, whichever is the greater[6]. For a business of our size, 10% of turnover comes to roughly €13m, so the flat £18m figure would be the binding one.

So we have to keep up if we don’t want to incur a fine. Shapes shift, letters change their association, flags become offensive. The only way to keep on top of the content our users generate is through a combination of our in-house trends team, the vigilance of our community and applying new technology.

The line between controversial ideas and malicious intent will always need patrolling. But the right to freedom of self-expression is a fundamental part of our platform. We are firm believers in the power of freedom of expression and the strength it has to lift, enrich, and fortify the global community.

We are working towards being the company we want to be but it’s definitely an ongoing process for all platforms.

Philip Rooke is CEO of Spread Group.
