
Britain’s enemies don’t need tanks or missiles to undermine our democracy. All they need is a smartphone and a social media account. The algorithms do the rest — amplifying lies, polarising debate, and pushing users down rabbit-holes of radicalisation. It’s cheap, effective, and happening right now – last year, inauthentic Iranian accounts were found to have posted 250,000 pro-Scottish independence tweets on X since late 2021, while MI5 has warned that state actors are using AI to manipulate democratic processes.
We don’t tolerate this on TV or radio. Broadcasters have to aim for factual accuracy and balance, so even when they don’t get it perfectly right, most of us know that what we hear or see on our airwaves is kosher, even when we don’t agree with it.
But online is the Wild West in comparison. Social media is designed to maximise engagement, which means showing each of us more of what we already like, or hate, to keep us scrolling. The algorithms don’t care if something is factually accurate, a lie, or a scam, as long as we keep looking and don’t log off.
This creates big opportunities for organised criminals to scam and con honest Brits out of our hard-earned cash. And for foreign governments, religious extremists, or anyone with a bit of money and an axe to grind to spread lies or one-sided content to smear individuals, brands, or democratic institutions. The goal? To erode trust and weaken us through polarisation, radicalisation, intolerance, and violence. And the rise of AI makes cheap, high-quality fakery much easier and faster, so the arms race is escalating fast.
The answer isn’t a Ministry of Truth. We don’t need censors, but tools that put free citizens in control instead. Social media platforms should offer each of us simple, user-friendly controls over how factually inaccurate content appears in our timelines, so we can spot it easily as we’re scrolling, with minimum fuss. Some of us will decide we don’t want to see it at all, while others will prefer to see it with a clear label that something’s a scam. But in a free society, the crucial thing is that citizens should be making their own decisions, rather than being told what they can see or hear.
Nor should factual accuracy be defined by politicians, tech bosses, or anyone else with a vested interest. There are well-established, independent, international journalistic standards for what counts as fact or falsehood, so we should use those. That will keep everyone from President Putin to Elon Musk at arm's length, and give high-quality citizen-journalists a level playing field so their reporting can compete toe-to-toe with more traditional mainstream media outlets as well.
The same goes for tolerance, too. Citizens need to be free to choose who they follow online and what shows they watch, but the algorithms should never take that as a cue to feed each of us a pure and increasingly violent or extreme diet of more of the same. That's a recipe for creating radicalisation rabbit-holes which encourage political moderates to become ultra-left or alt-right extremists. Or which push mainstream religious believers to become jihadists and sectarian haters. Or which tell single young men they should be incels.
Fortunately, there’s already a well-proven ‘Duty of Balance’ which has been helping broadcasters deal with this problem for decades. It isn’t fancy: it just says that they have to show both sides of an argument. It’s common sense, and it works, so let’s apply it to social media algorithms when they post things we haven’t chosen into our timelines, too.
As the saying often attributed to Winston Churchill goes, "A lie gets halfway around the world before the truth has a chance to get its pants on." In today's digital age, that's truer than ever. So let's give citizens the tools to choose wisely. Let them speak free — but tell them no lies.
If you like this idea, you'll find more details, soundbites and rebuttals about it under Let Me Speak Free, But Tell Me No Lies in the Policy Thumbnail section of our website.
This article was originally published in the online blog of the Council for Countering Online Disinformation (CCOD), where John is a member of the Advisory Board.
