Online abuse has been a growing problem since the rise of social media. To date, moderators have been the main solution, but they are becoming increasingly overwhelmed.
Arwen.AI – a company with a solution to minimise online hate – provides a new way to automatically remove unwanted content from these channels using artificial intelligence. The company has previously worked with Burnley FC; since its technology was implemented, negative content on the club’s social profiles has fallen by 70%.
The firm’s CEO and Co-Founder, Matthew McGrory, spoke to Insider Sport about the impact that such toxicity can have on an athlete’s wellbeing, as well as the damage it does to a brand’s advertising strategy.
Insider Sport – Could you give us an overview of how Arwen was established and its activities in sports since?
Matthew McGrory – Arwen was established in 2020. David Cole, the CTO, and I were seeing the growing levels of toxicity on social media and were frustrated by the lack of solutions available. The well-publicised death of Caroline Flack, and then the abuse that followed the Euros in the summer of 2021, cemented the need for action-oriented solutions rather than more research on the topic.
The problem was obvious, but solutions allowing individuals and brands to take control were not. Elite sport was an obvious place for us to start, though we’ve not been limited to that sector. I used to work at Fulham FC and at Brands Hatch (MotorSport Vision), so I knew the challenges in the sector.
We’ve now got a host of clients in professional football and Formula One. They range from governing bodies (FIA) to teams (Mercedes F1, Alpine etc) and individuals (Lewis Hamilton, George Russell, Michael Owen etc). We’re now being approached by others globally.
IS – With online toxicity and dangerous content having grown in recent years, how does this impact athletes, moderators and brands hoping to engage audiences via social media?
MM – There’s a range of impacts. Obviously, the athletes themselves report the negative impact on their mental health and wellbeing, which negatively affects their performance and confidence. They also describe how it impacts their family members, some of whom actively manage their social media, which is something we can also help with.
Then there are the fans and followers.
One in five people in the UK witness online abuse each year, so it’s not an insignificant number. Everyone has an example. Seeing it turns people off – research shows that 38% of people who feel an online community is unsafe switch off and disengage. If you’re trying to promote or advertise to these communities, which is clearly a major goal for rights holders in sports, then that’s more than a third of followers who aren’t going to engage with your brand, which is a major problem. Brands don’t want to advertise next to racism, sexism and toxic language, and when they do, many followers won’t engage with it.
Moderators have been the main solution to date, but they’re increasingly overwhelmed. Social media toxicity volumes have increased by 40% since 2019, and spam volumes (pornbots and cryptobots) are growing at more than 350% year on year.
This is a tidal wave of bad content that moderators must review and remove. It is bad for their mental health, with 83% of people in the industry reporting burnout. It’s also bad for productivity to have people deleting content when their job is to create engaging content and nurture communities.
The spam issue is particularly pernicious. Every time one of our clients posts, 20 spam bots comment on their channel. Some of this is pornographic material that is not safe or appropriate for those communities. Some of it is cryptobots, which try to defraud people. In the US, 95,000 people lost $770 million to fraud initiated on social media. We don’t even have data for the UK, but one can assume it’s comparable.
You’ve got increasing volumes of toxic and bad content, which is affecting everyone negatively, leading to big financial and commercial implications. And the solutions aren’t working, so you can see how Arwen is needed.
IS – What are some of the commercial implications these toxic online environments can have for brands and athletes?
MM – Clearly losing a third of your audience to toxic and bad content is a massive commercial implication. It’s the dirty secret of the industry. Research shows that toxicity causes a 34% drop in ROAS (Return on Advertising Spend). This is in a marketing world where every 1% improvement in reach is bitterly fought over. Our customers are coming to us because they see through this dirty secret – they see that improving the quality of your online community is vital if you want to attract the best brands and athlete ambassadors.
What we have shown is that once the community is safe and inclusive for all – once you have better quality – quantity goes up too! We brought 29.4% of Mercedes AMG F1 followers back into active engagement because Arwen reduced spam by 93% and toxicity by 70%. Quality leads to quantity – and quality plus quantity is what brands and athletes want.
IS – Building on this, how can sports stakeholders benefit commercially from countering dangerous online content? Is this just limited to public image or do the benefits go beyond that?
MM – Undoubtedly brands and athletes want to show they are aligned with their values – and they won’t accept their online environments being full of toxic and abusive content. So we do help our clients to communicate the change they’re making – partly for public image, but also because 95% of people welcome it. They deserve credit for making the change, but as I’ve set out there are lots of additional benefits. It really is a win-win.
IS – Focusing on football, how did the partnership with Burnley FC come about and what objectives have been achieved so far?
MM – Burnley FC came to us because they were seeing more and more toxic content, and their social media team were looking for better solutions to manage it. Since Arwen was implemented we’ve reduced toxicity on their social media profiles by 70% – we’ve removed the severely toxic content so consistently that people have stopped posting it.
We’ve de-normalised that behaviour. So now Arwen does two things: it’s a smoke alarm to prevent that behaviour breaking out again, and it’s a spambot remover, making sure that pornbots and pirate streams don’t continually pollute their posts. We continue to remove 93% of their bots, day in, day out.
IS – What are the company’s plans on rolling out the technology across the sports industry – in terms of athletes and rights holders as well as brands and potential sponsors?
MM – We have three approaches to that:
- Partnerships – most recently with FIA, starting with protection for their key social media accounts, with moves now to grow our relationship out to support the FIA’s multi-year commitment to improving trust and safety in motorsports, both online and offline. The recent abuse of volunteers and individual drivers at the end of the 2022 season, in Mexico and Brazil, is a clear target for us to fix in the 2023 season. We are engaged with other sporting regulators across world sports to try and forge the same partnerships.
- Growth – we’re actively marketing Arwen globally in elite sports and beyond, with engagements under way in the USA and Australia, as well as beyond our starting point of motorsport and football. We’re now a recognised brand with a good track record, so people are coming to us.
- Results-oriented – we’re developing a global marketing metric so that brands and advertisers can objectively assess quality from a trust and safety perspective. We want a world where a brand can say “this social channel has a safety score of 3/10 and that channel 9/10, so I should put my advertising money with the 9/10 because that’s where I’ll get both best return and best values alignment.”
IS – Social media is a vast digital landscape – how significant is the use of data in analysing and extracting information from these platforms and how does Arwen accomplish this?
MM – Data is the lifeblood of Arwen. We protect 450m online users from toxic and unwanted content on behalf of our clients, every single minute. That’s a huge amount of data to process, especially when we have a sub-second removal commitment. It’s useful to look at this in four ways:
- Platform logic – underneath Arwen is our data platform. On it we host 24 different Artificial Intelligence (A.I.) models, each one an expert in detecting a form of unwanted content – racism, sexism, homophobia, profanity, insult, identity attack etc. Some of these are proprietary to Arwen (we built them because we were unhappy with what was available on the market, or there wasn’t anything available). Others we hire in from the market. We mapped the market early on, identifying all the best A.I. models and their providers.
For instance, the A.I. model we use for sexual aggression is hired from a US dating platform, because that’s been trained on billions of lines of dating data making it good at detecting sexual aggression and misogyny. But for spam, we have built our own models, because the success rates of A.I. models on the market weren’t good enough for us. It also means we can add new A.I. models and languages super quickly when our clients need them. Arwen’s secret is that we then blend all these 24 A.I. models to score each comment and decide on the action required in under a second.
- Personalised – Arwen doesn’t set any blanket rules. Instead, each client personalises Arwen to act according to their own unique values. We have some clients who are happy to allow some profanity and some who aren’t. This is important because freedom of speech and trust and safety are highly contextual – each individual and organisation knows best how to set the rules for their community to adhere to their own brand values.
- Cloud-based – Arwen is entirely cloud based, so we scale to meet demand. On certain weekends of the year, we may have multiple clients having simultaneous events globally, so comment volumes shoot up. Arwen just scales automatically to meet demand, with no loss in quality or performance.
- Data-driven decisions – Arwen now understands a great deal about patterns of toxicity and unwanted content globally. This is a rich source of data to help us, and our clients make better decisions. For instance, we can pinpoint which users are persistently racist and let our clients know about them. They can then choose to sanction them online as well as offline (banning them from events, season tickets etc).
We can also help their marketing teams better understand their audiences. Every June our clients promote Pride month, and many learn from Arwen that they need to be more educational about why diversity and inclusion are important, rather than assuming this is understood. We want to expand access to this data – suitably anonymised and protected – so that regulators and academics can use it to make better decisions on how to improve online safety in the long term.
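The blending of per-category model scores and per-client rules described above can be illustrated with a minimal sketch. The model names, scores, thresholds and the two toy classifiers below are assumptions for illustration only, not Arwen’s actual models or implementation:

```python
# A minimal sketch of blending per-category A.I. model scores into one
# moderation decision, with thresholds personalised per client.
# All names and numbers here are illustrative assumptions.

def blend_and_decide(comment: str, models: dict, client_thresholds: dict) -> str:
    """Score a comment with every category model; remove it if any
    category score crosses that client's configured threshold."""
    scores = {category: model(comment) for category, model in models.items()}
    for category, score in scores.items():
        if score >= client_thresholds.get(category, 1.0):
            return "remove"
    return "keep"

# Toy stand-ins for trained classifiers, each returning a 0-1 score.
models = {
    "profanity": lambda text: 0.9 if "damn" in text.lower() else 0.1,
    "spam":      lambda text: 0.8 if "http://" in text.lower() else 0.0,
}

# One client tolerates mild profanity, another does not.
relaxed_club = {"profanity": 0.95, "spam": 0.5}
strict_club  = {"profanity": 0.7,  "spam": 0.5}

print(blend_and_decide("That was a damn good save!", models, relaxed_club))  # keep
print(blend_and_decide("That was a damn good save!", models, strict_club))   # remove
```

The same comment is kept for one client and removed for another, which is the "no blanket rules" point: the models score, but each client’s values decide the action.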
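The persistent-offender reporting mentioned above amounts to aggregating removal events per user. A sketch of that idea, assuming a hypothetical log format and a three-strike threshold (neither is from the source):

```python
from collections import Counter

# A minimal sketch of surfacing persistent offenders from removal logs.
# The log format and three-strike threshold are illustrative assumptions.

def persistent_offenders(removal_log, min_removals=3):
    """Return the set of users with at least min_removals removed comments."""
    counts = Counter(user for user, _category in removal_log)
    return {user for user, count in counts.items() if count >= min_removals}

log = [
    ("user_a", "racism"),
    ("user_a", "racism"),
    ("user_a", "identity_attack"),
    ("user_b", "spam"),
]

print(persistent_offenders(log))  # {'user_a'}
```

A client could feed a list like this into online sanctions or offline ones such as event bans, as described in the interview.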