Imagine there’s a reliability index (R) for what a politician says.
An R value of 100 would mean that a politician has an excellent track record: there is no evidence of them having said anything false.
An R value of 0 would mean that nothing they said can be trusted.
Imagine that R values are updated regularly and published in real time by a transparent process, one that pulls together diverse sets of data from multiple spheres of discourse, using criteria agreed by people from all sides of politics.
Then, next time we hear a politician passing on some claim – some statistic about past spending, about economic performance, about homelessness, about their voting record, or about what they have previously said – we could use their current R value as a guide to whether to take the claim seriously.
Ideally, R values would be calculated for political commentators too.
My view is that truth matters. A world where lies win, and where politicians are expected to routinely bend the truth, is a world in which we are all worse off. Much worse off.
Far better is a world where politicians no longer manufacture or pass on claims simply because those claims cause consternation among their opponents, sow confusion, or distract attention. Far better if, any time a politician did such a thing, their R value visibly dropped. Far better if politicians cared far more than at present about always telling the truth.
R values would play a role broadly similar to that of credit scores: someone known to be a bad credit risk faces more barriers to obtaining loans.
Another comparison is with the PageRank idea at the heart of online search. Pages that receive incoming links from pages already judged important grow in importance in turn.
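That circular-sounding definition resolves through iteration. As a minimal sketch only, here is a toy power-iteration version of the PageRank idea, with an illustrative graph and parameter choices of my own (a production system involves far more):

```python
# Toy PageRank: a page's importance is the sum of shares of importance
# passed along by the pages that link to it, iterated until stable.
# The damping factor and iteration count below are illustrative defaults.

def page_rank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with equal importance
    for _ in range(iterations):
        # everyone keeps a small baseline share of importance
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            # a page splits its current importance among its outgoing links
            share = rank[page] / len(outgoing) if outgoing else 0.0
            for target in outgoing:
                new_rank[target] += damping * share
        rank = new_rank
    return rank

# Hypothetical graph: "c" is linked to by both "b" and "d",
# so it ends up with the highest rank.
graph = {"a": ["b"], "b": ["c"], "c": ["a"], "d": ["c"]}
ranks = page_rank(graph)
```

The analogy to an R value is suggestive: a politician's reliability could, in principle, be weighted by the reliability of the sources who vouch for their claims, rather than by raw popularity.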
Consider also the Klout score, which is (sometimes) used as a measure of the influence of social media users or brands.
Evidently, many questions arise. Would a reliability index be possible? Is the reliability of a politician’s statements a single quantity, or should it vary from subject to subject? How should the influence of older statements decline over time? How could the index avoid being gamed? How should satire be accommodated?
Then there are questions not just of practicality but of desirability. Would the reliability index result in better politics, or worse? Would it impede honest conversation, or usher in new types of implicit censorship? Would the “cure” be worse than the “disease”?
My view is that a good reliability index will be hard to achieve, but it’s by no means impossible. It will require clarity of thinking, an amalgamation of insights from multiple perspectives, and a great deal of focus and diligence. It will presumably need to evolve over time, from simpler beginnings into a more rounded calculation. That’s a project we should all be willing to get behind.
The reliability index will need to be created outside any commercial framework. It deserves public funding, administered in a non-political way, akin to the operation of judges and juries. And it will need to be resistant to howls of outrage from those politicians (and journalists) whose R values plummet on account of exposure of their untruths and distortions.
If done well, I believe the reliability index would soon have a positive impact on political discourse. It would help ensure discussions are objective and open-minded, rather than dominated by loud, powerful voices. It’s part of what I see as the better politics that is possible in the not-so-distant future.
There’s a lot more to say about the topic, but for now, I’ll finish with just one more question. Has such a proposal been pursued before?