
The Telegraph was recently forced to issue the following statement:
The Telegraph is aware of an image circulating on X which purports to be a Telegraph article about ‘emergency detainment camps’. No such article has ever been published by the Telegraph.
In response, John Penrose said (via Twitter):
“This shows the power of mis/disinformation used to fan hate & radicalise. How do we make social media safer & more trustworthy, without killing free speech? There’s a good & proven solution.
First, The Telegraph is right to put the record straight. But that’s too slow – the fake headline had already been shared everywhere, including by Elon Musk. So the fake’s authors had already won by the time The Telegraph could react.
So we need to know clearly if something is factually accurate or made up as soon as it appears in our timelines. If we have to take time to check, most of us won’t & many of us will accept it as true.
So social media platforms need to check ‘provenance’ (where the original post came from; whether it has been doctored since then; who authored each step) so there’s an easily-checkable audit trail for each post. There are independent international standards being developed on how to do this safely. Then platforms can tell if something was generated by AI, altered by a Russian bot, or genuinely published by @Telegraph. And then they should tell us immediately when it turns up in our timelines, so we can decide whether to trust it or not.
But untruths are only 1 kind of misinformation: there’s a 2nd version which uses accurate facts to present very 1-sided or biased stories. Fortunately there’s a proven old-world antidote to this: UK broadcasters have had to deliver balanced reporting for over 50 years.
And, broadly, it works OK to stop the kind of radicalising echo-chambers & rabbit-holes which infest social media. Doing the same for online platforms would mean someone whose timeline is full of far right racism (or far left, or incels, or religious hate) would also see things which put the other side of the same argument.
The platforms will hate it (their business model focuses on sending each of us more of what they know we already want, or what other people like us want) but we would all be safer.
Some people say Ministers & Ofcom can do all this using the new Online Safety Act, while others say they will need extra powers. Whichever one is right, this approach forges consensus & rational debate instead of sowing division and spreading lies.
Why wouldn’t we do it?”
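The ‘easily-checkable audit trail’ Penrose describes can be sketched as a hash chain: each editing step commits to a hash of the step before it, so doctoring any earlier step breaks verification of everything downstream. This is a minimal illustration of the idea, not an implementation of any real provenance standard; the field names and functions below are invented for the example.

```python
import hashlib
import json

def step_hash(step: dict) -> str:
    """Deterministically hash one provenance step (sorted keys for stability)."""
    return hashlib.sha256(json.dumps(step, sort_keys=True).encode()).hexdigest()

def append_step(chain: list, author: str, action: str) -> list:
    """Append a step that commits to the hash of the previous step."""
    prev = step_hash(chain[-1]) if chain else None
    chain.append({"author": author, "action": action, "prev_hash": prev})
    return chain

def verify_chain(chain: list) -> bool:
    """Recompute each link; a doctored step invalidates every later one."""
    for i, step in enumerate(chain):
        expected = step_hash(chain[i - 1]) if i > 0 else None
        if step["prev_hash"] != expected:
            return False
    return True

chain = []
append_step(chain, "@Telegraph", "published article")
append_step(chain, "some_user", "shared screenshot")
assert verify_chain(chain)

# Doctor the original step: verification now fails downstream.
chain[0]["action"] = "published fake headline"
assert not verify_chain(chain)
```

Real systems additionally sign each step cryptographically so authorship can be verified, not just integrity; the hash chain alone only shows that nothing was altered after the fact.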