An idea of decentralised truth
I share the values of metamodernism: the development of philosophy, aesthetics, and culture that emerges from and reacts to postmodernism. In some sense, metamodernism is a renaissance of sincerity, educated naivety, hope, and the desire for universal truths. It is opposed to postmodernism, but not in the same sense that postmodernism is opposed to modernism.
The very existence of metamodernism suggests that opposing postmodernism is reasonable. Maybe we should start building upon its developments. That is hard to do, mostly because of how difficult it is to validate information nowadays and how much metamodernism relies on truth. I suggest decentralising trustworthy information and constructive discourse by means of decentralised fact-checking and proofreading. This can be achieved with the help of modern decentralised systems, such as a blockchain or, for that matter, any suitable ledger algorithm.
“Veritas is a free distributed network for us to collaborate with each other in pursuit of truth.”
We have long been thinking about the true value of truth. Twentieth-century postmodernists questioned all kinds of truth, going as far as rendering the very concept of universal truth worthless. Deconstruction. This idea later became an integral part of postmodern thought. Certain aspects of our life are postmodern now, with all the good and bad that brings, and there are, of course, consequences. Metamodernism is a term that reflects the development of philosophy, aesthetics, and culture emerging from and reacting to postmodernism. In a certain sense it is a renaissance of sincerity, educated naivety, hope, and the desire for universal truths. It is naturally opposed to postmodernism, yet it doesn’t exactly suggest going back to the naive positions of modernism. Instead, it assumes our society actively oscillates between modernism and postmodernism. Metamodernism relies on enlightened naivety, conscientiousness and, of course, truth. The metamodernist sees truth as immensely desirable.
Now, take a close look at how we distribute information and what it actually looks like: it is largely incoherent, and it cannot be consistently verified either. Information providers, such as websites, newspapers or any social media, are often inconsistent and irresponsible. Published data can be modified without notice, and there is no way to prove it. Nor is there a consistent way to tell whether you should trust the information in the first place; that is what I mean when I say we can’t consistently verify it. And that is exactly the problem: the medium we currently use to distribute information is simply not flexible enough. Harvesting truth from multiple incoherent sources is both hard and counter-productive. A postmodernist might even question whether it’s reasonable to try: “What if truth, in fact, just doesn’t matter?” The very existence of metamodernist ideas suggests the contrary: truth has to survive. And for that, I suggest we decentralise the medium. In some sense, truth itself has to become decentralised: directly or indirectly consumed from decentralised systems. Such systems can be transparent and resistant to censorship, and, most importantly, they all assume a consensus algorithm. Consensus means every party can agree on the exact state of the network. Blockchains, for example, use proof-of-work (or simply, mining); there are others. Whatever algorithm powers the infrastructure, there is virtually no way to manipulate information the network has reached consensus upon, and consensus will be reached again and again. Information becomes consistently verifiable, and that is a good thing. But consensus only means we can agree on the chronological order of information, not on the information itself. Moreover, I suggest it’s fair to assume we never will.
“But how can we know what’s true?” There has to be some way to prove certain kinds of information and to disprove others. I suggest securing the value of truth by means of collaborative proofreading and fact-checking. If you’re familiar with blockchain terminology, think proof-of-truth.
“Veritas is a free distributed network for us to collaborate with each other in pursuit of truth.” To me, Veritas, the goddess of truth, seems like a surprisingly fitting name for this concept network, so I’ll call it that. I’ll refer to actively participating users of the network as actors. So what is it all about? Actors create messages and reply to existing ones. You can think of messages as Bitcoin transactions: they are immutable (can’t be tampered with once committed) and verifiable (see digital signatures). Now consider a news piece, a perfectly suitable kind of information for Veritas. Both individuals and conventional news providers could benefit from directing their fact-checking efforts toward a trustworthy decentralised network; that is one example of the immediate practical value Veritas could offer. But back to the news. From the metamodernist perspective, what constitutes a good news message? It is relevant, clever in its writing, doesn’t peddle any subjective ideological viewpoint, and follows logic and reasoning. It has its facts intact. It might deconstruct certain ideas, but never for the sake of deconstruction itself. It is good. Actors can react to messages in a transparent and verifiable way: by submitting a reply (a child message). There is no way to manipulate the discussion with misleading discourse, considering that someone reasonable will eventually come along to deconstruct it. Another thing to keep in mind is that consistent and reliable fact-checking is a very hard problem to solve. “But how can you tell what’s true anyway?” You back it up with evidence. Harder evidence naturally provokes more confidence. Coincidentally, the very notion of confidence is exactly what centralised truth gets wrong. Centralised parties are known to easily manipulate, distort and modify information, sometimes with malicious intent and sometimes without.
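To make the message model concrete, here is a minimal sketch in Python. All names are hypothetical, and where a real network would use digital signatures, this sketch uses content-addressed SHA-256 hashes to illustrate the same property: changing any field of a committed message changes its identity, breaking every reply that points to it.

```python
import hashlib
import json
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Message:
    """An immutable Veritas-style message; frozen=True forbids mutation."""
    author: str
    body: str
    parent_id: Optional[str]  # None for a root message

    @property
    def id(self) -> str:
        # Content-addressed ID: tampering with any field yields a new ID,
        # so a reply's parent_id pins the exact content it responded to.
        payload = json.dumps(
            [self.author, self.body, self.parent_id], sort_keys=True
        ).encode()
        return hashlib.sha256(payload).hexdigest()

news = Message("reuters", "Probe enters orbit around Europa.", None)
reply = Message("alice", "Checks out against the agency bulletin.", news.id)
assert reply.parent_id == news.id  # the reply references that exact content
```

A tampered copy of `news` would hash to a different `id`, so `reply` would no longer acknowledge it as its parent; that is the sense in which messages are verifiable without trusting their host.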
Either way, such information simply cannot be verified. Decentralised truth, on the other hand, is not prone to this. In Veritas, the confidence of information can be used to verify it, and confidence has to be measurable. Discourse by itself isn’t a quantitative metric, so what are we going to do about that? I suggest breaking confidence into base and acquired components.
The idea behind base confidence is that the probability of newly submitted information being true is not random: trustworthy actors are more likely to submit trustworthy information. One way to measure trustworthiness is to rate actors by the total confidence of their contributions, and base confidence can be calculated from that rating. The acquired confidence, in turn, shall reflect the reaction of the network towards the message. Messages form trees, and trees consist of branches, where each branch can be seen as a thread of discussion. This lets actors contribute to the discussion directly or indirectly, by replying to messages at any reasonable depth of the tree. And we make them vote every time they act: an actor’s sentiment towards a certain message shall reflect their overall impression of its trustworthiness. Mathematically speaking, it can be -1.0 (don’t trust at all), 1.0 (trust completely) or anywhere in between. Medium’s “clapping” system is one example of how this differs from binary voting. But how exactly does sentiment affect the confidence of the parent message? There is a lot to take into consideration here, as plenty of variables are available: the confidence and sentiment values of both messages, the trustworthiness of their authors, the tree depth. Messages with both negative sentiment and negative confidence will actually end up improving the confidence of their parent messages. This is probably not fair, but building a fair confidence model is not the purpose of this essay, so let’s assume such a model exists. Whatever algorithm ends up evaluating confidence, messages will still affect one another. Here’s an analogy: a comment section on Reddit, except you can’t vote without commenting, and if you decide to vote, you can elaborate on your sentiment. When you’re done, your vote can and probably will affect all the comments up the tree.
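As a purely hypothetical illustration of how base and acquired confidence could combine, here is a toy model in Python. The depth attenuation and the multiplicative sentiment weighting are my own assumptions, not a specification, and the model even reproduces the unfair quirk described above: a reply with negative sentiment and negative confidence contributes positively to its parent.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    author_trust: float      # drives base confidence, in [0, 1]
    sentiment: float = 0.0   # vote toward the parent, in [-1.0, 1.0]
    confidence: float = 0.0  # filled in by evaluate()
    replies: List["Node"] = field(default_factory=list)

def evaluate(node: Node, depth: int = 0) -> float:
    """Toy bottom-up pass: a base term from the author's trustworthiness
    plus an acquired term from replies, attenuated with tree depth."""
    base = node.author_trust
    acquired = 0.0
    for reply in node.replies:
        reply_conf = evaluate(reply, depth + 1)
        # sentiment * confidence: two negatives multiply into a positive
        # contribution, the quirk acknowledged in the text above.
        acquired += reply.sentiment * reply_conf / (depth + 2)
    node.confidence = base + acquired
    return node.confidence

news = Node(author_trust=0.6, replies=[
    Node(author_trust=0.9, sentiment=1.0),   # trusted actor agrees
    Node(author_trust=0.2, sentiment=-0.5),  # untrusted actor objects
])
evaluate(news)  # every vote propagates up the tree, Reddit-style
```

Here the trusted, agreeing reply outweighs the untrusted objection, lifting the news message above its base confidence of 0.6; a fairer model would be tuned on all the variables listed above.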
It’s a fast, potentially fair, consistent and verifiable way of proving confidence. Hence proof-of-truth.
What’s also important to understand is that in order for Veritas to be truly decentralised and, in that sense, independent, it has to be self-sustainable. Someone has to host the infrastructure; I’ll refer to them as hosters. Storing information isn’t free: it takes storage and availability, and one way or another, someone has to pay for it. Veritas could be powered by its own economy, tas, serving the purpose of handling infrastructure fees. Actors pay hosters for messages in tas/byte fees, and trustworthy actors are rewarded with more tas for their contributions. If supported, the economy could make hosting and efficient contributing profitable. I’m not a qualified economist, but I suspect market capitalisation could be one way to do it. Why invest in fiat or some speculative crypto market if you can invest in truth? Tas is not a currency, but it certainly has value. What value does the speculative cryptocurrency market hold, exactly? Whatever it is, I sincerely believe truth is a much more valuable asset.
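A back-of-the-envelope sketch of that economy, with both rates invented purely for illustration:

```python
def storage_fee(message_size_bytes: int, rate_tas_per_byte: float) -> float:
    """Tas an actor pays hosters to commit a message (hypothetical rate)."""
    return message_size_bytes * rate_tas_per_byte

def contribution_reward(confidence_gained: float, reward_rate: float) -> float:
    """Tas awarded to an actor whose contributions gained confidence;
    a made-up linear rule, with no reward for lost confidence."""
    return max(confidence_gained, 0.0) * reward_rate

fee = storage_fee(2_048, 0.001)          # a 2 KiB message at 0.001 tas/byte
reward = contribution_reward(1.5, 2.0)   # hypothetical reward rate
```

Whether such parameters could balance hosting costs against rewards is exactly the kind of question the discussion group below would need an actual economist for.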
In this essay, I tried to outline both general and specific ideas as-is. For further discussion, I intend to assemble a discussion group (a mailing list?). If you’re interested and consider yourself a journalist, an engineer, a philosopher, or just feel like we’re on the same page, drop me a line at [email protected]. Thank you for your time.
Tuesday, 13 Mar 2018