Special thanks to Tim Swanson for reviewing, and for further discussions on the arguments in his original paper on settlement finality.
Recently, one of the major disputes in the ongoing debate between public blockchain and permissioned blockchain proponents has been the issue of settlement finality. One of the simple properties that a centralized system at least appears to have is a notion of “finality”: once an operation is completed, it is completed for good, and there is no way that the system can ever “go back” and revert it. Decentralized systems, depending on the specifics of their design, may provide that property, or they may provide it probabilistically, within certain economic bounds, or not at all, and public and permissioned blockchains perform very differently in this regard.
This concept of finality is particularly important in the financial industry, where institutions need certainty, as quickly as possible, over whether or not certain assets are, in a legal sense, “theirs”. And once assets are deemed to be theirs, it should not be possible for a random blockchain glitch to suddenly decide that the operation that made those assets theirs is reverted, and that their ownership claim over those assets is therefore lost.
In one of his recent articles, Tim Swanson argues:
Entrepreneurs, investors and enthusiasts claim that public blockchains are an acceptable settlement mechanism and layer for financial instruments. But public blockchains by design cannot definitively guarantee settlement finality, and as a result, they are currently not a reliable option for the clearing and settling of financial instruments.
Is this true? Are public blockchains completely incapable of any notion of settlement finality? Is it the case, as some proof of work maximalists imply, that only proof of work can provide true finality, and it is permissioned chains that are a mirage? Or is the truth more nuanced and complex? In order to fully understand the differences between the finality properties that different blockchain architectures provide, we will have to dig into the depths of mathematics, computer science and game theory – that is to say, cryptoeconomics.
Finality is always probabilistic
First of all, a very important philosophical point to make is that there is no system in the world that offers truly 100% settlement finality in the literal sense of the term. If share ownership is recorded on a paper registry, then it is always possible for the registry to burn down, or for a hooligan to run into the registry, draw a “c” in front of every “1” to make it look like a “9”, and run out. Even without any malicious attackers, it is also possible that one day everyone who knows the registry’s location will be struck by lightning and die simultaneously. Centralized computerized registries have the same problems, and arguably an attack is even easier to pull off, at least if the security of the central bank of Bangladesh is any indication.
In the case of fully on-chain “digital bearer assets”, where there is no record of ownership other than the chain itself, the only recourse is a community-driven hard fork. In the case of using blockchains (permissioned or public) as registries for ownership of legally registered property (land, stocks, fiat currency, etc), however, it is the court system that is the ultimate source of decision-making power regarding ownership. In the case that the registry does fail, there are two possibilities. First, it is possible that the attackers find some way to get their assets out of the system before the courts can respond. In this case, the total quantity of assets on the ledger and the total quantity of assets in the real world no longer match up; hence, it is a mathematical certainty that someone with a finalized balance of x will eventually have to make do with an actual balance of y < x.
But the courts also have another alternative. They are absolutely not required to look at the registry in its standard presentation and take the results literally; it is the job of physical courts to look at intent, and to determine that the correct response to the “c” drawn in front of the “1” is an eraser, not putting up one’s hands and agreeing that Uncle Billy is now rich. Here, once again, finality is not final, although this particular instance of finality reversion is to society’s benefit. These arguments apply to all other tools used to maintain registries, and to attacks against them, including 51% attacks on both public and consortium blockchains.
The practical relevance of this philosophical argument – that all registries are fallible – is strengthened by the empirical evidence of Bitcoin’s own experience. In Bitcoin, there have so far been three instances in which a transaction has been reverted after a long time:
- In 2010, an attacker managed to give themselves 184 billion BTC by exploiting an integer overflow vulnerability. This was fixed, but at the cost of reverting half a day’s worth of transactions.
- In 2013, the blockchain forked because of a bug that existed in one version of the software but not another version, leading to part of the network rejecting a chain that was accepted as dominant by the other part. The split was resolved after 6 hours.
- In 2015, roughly six blocks were reverted because a Bitcoin mining pool was mining invalid blocks without verifying them.
Out of these three incidents, only in the third was the underlying cause unique to public chain consensus: the mining pool acted incorrectly precisely because of a failure of the economic incentive structure (essentially, a version of the verifier’s dilemma problem). In the other two, the failure was the result of a software glitch – a situation which could have happened in a consortium chain as well. One could argue that a consistency-favoring consensus algorithm like PBFT would have prevented the second incident, but even that would have failed in the face of the first, where all nodes were running code containing the overflow vulnerability.
Hence, one can make a reasonably strong case that if one is actually interested in minimizing failure rates, there is a piece of advice which may be even more valuable than “switch from a public chain to a consortium chain”: run multiple implementations of the consensus code, and only accept a transaction as finalized if all of the implementations accept it (note that this is already standard advice that we give to exchanges and other high-value users building on the Ethereum platform). However, this is a false dichotomy: if one wants to truly be robust, and one agrees with the arguments put forward by consortium chain proponents that the consortium trust model is more secure, then one should certainly do both.
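As a minimal sketch of what this advice might look like in practice, the snippet below polls several independently implemented clients and treats a transaction as finalized only if all of them agree it is included at sufficient depth. The endpoint URLs are hypothetical placeholders, and the use of web3.py is an illustrative assumption, not a prescribed setup:

```python
# A minimal sketch of the multi-client finality check described above,
# assuming web3.py and two hypothetical node endpoints, each running a
# different implementation of the consensus code.
from web3 import Web3
from web3.exceptions import TransactionNotFound

CLIENT_ENDPOINTS = [
    "http://client-a.example:8545",  # hypothetical endpoint, implementation A
    "http://client-b.example:8545",  # hypothetical endpoint, implementation B
]
CONFIRMATIONS = 6  # required depth before treating a transaction as final

def finalized_everywhere(tx_hash: str) -> bool:
    """Treat a transaction as final only if every client independently
    reports it as included at sufficient depth on its canonical chain."""
    for url in CLIENT_ENDPOINTS:
        w3 = Web3(Web3.HTTPProvider(url))
        try:
            receipt = w3.eth.get_transaction_receipt(tx_hash)
        except TransactionNotFound:
            return False  # this client has not seen the transaction at all
        confirmations = w3.eth.block_number - receipt["blockNumber"] + 1
        if confirmations < CONFIRMATIONS:
            return False  # included, but not yet deep enough
    return True
```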
Finality in Proof of Work
Technically, a proof of work blockchain never allows a transaction to truly be “finalized”: for any given block, there is always the possibility that someone will create a longer chain that starts from a block before that block and does not include it. In practice, however, financial intermediaries on top of public blockchains have evolved a very practical means of determining when a transaction is sufficiently close to final for them to make decisions based on it: waiting for six confirmations.
The probabilistic logic here is simple: if an attacker has less than 25% of network hashpower, then we can model an attempted double spend as a random walk that starts at -6 (meaning “the attacker’s double-spend chain is six blocks shorter than the original chain”), and at each step has a 25% chance of adding 1 (ie. the attacker makes a block and inches a step closer) and a 75% chance of subtracting 1 (ie. the original chain makes a block). We can determine the probability that this process will ever reach zero (ie. the attacker’s chain overtaking the original) mathematically, via the formula (0.25 / 0.75)^6 ~= 0.00137 – smaller than the percentage fee that nearly all exchanges charge. If you want even greater certainty, you can wait 13 confirmations for a roughly one-in-a-million chance of the attacker succeeding, and 162 confirmations for a chance so small that the attacker is literally more likely to guess your private key in a single attempt. Hence, some notion of de-facto finality even on proof-of-work blockchains does in fact exist.
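For the curious, here is a small script that checks this arithmetic; the formula (q / (1 - q))^z is the standard gambler’s-ruin result for an attacker with hashpower share q starting z blocks behind:

```python
# A quick check of the numbers above. For an attacker with hashpower
# share q < 0.5 starting z blocks behind, the chance of ever catching
# up is (q / (1 - q))**z.
def catch_up_probability(q: float, z: int) -> float:
    """Probability that a double-spend chain z blocks behind ever
    overtakes the original chain."""
    return (q / (1 - q)) ** z

for z in (6, 13, 162):
    print(z, catch_up_probability(0.25, z))
# 6   -> ~1.37e-3 (about 1 in 729)
# 13  -> ~6.27e-7 (roughly one in a million)
# 162 -> ~5e-78   (comparable to guessing a 256-bit private key in one try)
```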
However, this probabilistic logic assumes that 75% of hashpower behaves honestly (at lower percentages, like 60%, a similar argument can be made, but more confirmations are required). There is also an economic debate to be had: is that assumption likely to be true? There are arguments that miners can be bribed, eg. through a P + epsilon attack, to all follow an attacking chain (a practical way of executing such a bribe may be to run a negative-fee mining pool, possibly advertising a zero fee and quietly providing even higher revenues to avoid arousing suspicion). Attackers may also try to hack into or disrupt the infrastructure of mining pools, an attack which can potentially be carried out very cheaply, as the incentive to secure proof of work infrastructure is limited (if a miner gets hacked, they lose only their rewards for a few hours; their principal is safe). And, last but not least, there is what Swanson has elsewhere called the “Maginot Line” attack: throw a very large amount of money at the problem and simply bring in more miners than the rest of the network combined.
Finality in Casper
The Casper protocol is intended to offer stronger finality guarantees than proof of work. First, there is a standard definition of “total economic finality”: it takes place when 2/3 of all validators make maximum-odds bets that a given block or state will be finalized. This condition offers very strong incentives for validators to never try to collude to revert the block: once validators make such maximum-odds bets, in any blockchain where that block or state is not present, the validators lose their entire deposits. As Vlad Zamfir put it, imagine a version of proof of work where if you participate in a 51% attack your mining hardware burns down.
Second, the fact that validators are pre-registered means that there is no possibility that, somewhere else out there, some other validators are making the equivalent of a longer chain. If you see 2/3 of validators placing their entire stakes behind one claim, and somewhere else 2/3 of validators placing their entire stakes behind a contradictory claim, then since 2/3 + 2/3 > 1, the two groups necessarily overlap, and the intersection (ie. at least 1/3 of validators) will now lose their entire deposits no matter what happens. This is what we mean by “economic finality”: we can’t guarantee that “X will never be reverted”, but we can guarantee the slightly weaker claim that “either X will never be reverted or a large group of validators will voluntarily destroy millions of dollars of their own capital”.
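To make the slashing mechanism concrete, here is a toy model of the logic in Python. This is a deliberate simplification for illustration only (the actual Casper betting mechanics and slashing conditions are more involved), and the validator names, deposits and claims are made up:

```python
# Toy model: a validator who places maximum-odds finality claims on two
# different blocks at the same height loses their entire deposit.
deposits = {"v1": 1000, "v2": 1000, "v3": 1000}

# (validator, height, block_hash) finality claims observed on the network
claims = [
    ("v1", 100, "0xaaa"),
    ("v1", 100, "0xbbb"),  # v1 signed a contradictory claim: slashable
    ("v2", 100, "0xaaa"),
]

def slash_equivocators(claims, deposits):
    """Destroy the deposit of any validator who signed two different
    finality claims at the same height."""
    seen = {}
    for validator, height, block_hash in claims:
        prev = seen.setdefault((validator, height), block_hash)
        if prev != block_hash:
            deposits[validator] = 0  # entire deposit destroyed
    return deposits

print(slash_equivocators(claims, deposits))
# {'v1': 0, 'v2': 1000, 'v3': 1000}
```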
Finally, even if a double-finality event does take place, users are not forced to accept the claim that has more stake behind it; instead, users will be able to manually choose which fork to follow, and are certainly able to simply choose “the one that came first”. A successful attack in Casper looks more like a hard fork than a reversion, and the user community around an on-chain asset is quite free to simply apply common sense to determine which fork was not an attack and actually represents the result of the transactions that were originally agreed upon as finalized.
Law and Economics
However, these stronger protections are nevertheless economic. And this is where we get to the next part of Swanson’s argument:
Thus, if the market value of a native token (such as a bitcoin or ether) increases or decreases, so too does the amount of work generated by miners who compete to receive the network’s seigniorage and expend or contract capital outlays in proportion to the token’s marginal value. This then leaves open the distinct possibility that, under certain economic conditions, Byzantine actors can and will successfully create block reorgs without legal recourse.
There are two versions of this argument. The first is a kind of “law maximalist” viewpoint: that “mere economic guarantees” are worthless, and that in some philosophical sense legal guarantees are the only kind of guarantees that count. This stronger version is obviously false: in many cases, the primary or only kind of punishment that the law metes out for malfeasance is fines, and fines are themselves nothing more than a “mere economic incentive”. If mere economic incentives are good enough for the law, at least in some cases, then they ought to be good enough for settlement architectures, at least in some cases.
The second version of the argument is much simpler and more pragmatic. Suppose that, in the current situation where the total value of all existing ether is $700 million, you calculate that you need $30 million of mining power to successfully conduct a 51% attack, and that once Casper launches you predict a staking participation rate of 30%, so that finality reversion will carry a minimum cost of $700 million * 30% * 1/3 = $70 million (if you are willing to reduce your tolerance to validators dropping offline to 1/4, then you can increase the finality threshold to 3/4, increasing the size of the intersection to 1/2 and the security margin to $105 million). If you are trading $10 million worth of equities, and you intend to do this for only two months, then this is almost certainly fine; the public blockchain’s economic incentives will do quite a fine job of disincentivizing malfeasance, and any attack will not be nearly worth the trouble.
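The arithmetic generalizes: with two conflicting claims finalized at threshold t, at least a 2t - 1 fraction of the staked deposits must be slashed. Here is a quick sketch of the calculation, using the illustrative figures from the text rather than live market data:

```python
# Worked version of the security-margin arithmetic above.
total_ether_value = 700e6   # total value of all ether, USD (illustrative)
participation = 0.30        # assumed fraction of ether staked under Casper

def min_reversion_cost(threshold: float) -> float:
    """Minimum value destroyed by reverting finality: two conflicting
    claims finalized at threshold t force an overlap of at least
    2t - 1 of the staked deposits to be slashed."""
    overlap = 2 * threshold - 1
    return total_ether_value * participation * overlap

print(min_reversion_cost(2 / 3))  # ~70,000,000  (2/3 threshold -> 1/3 slashed)
print(min_reversion_cost(3 / 4))  # ~105,000,000 (3/4 threshold -> 1/2 slashed)
```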
Now, suppose that you intend to trade $10 million worth of equities, but you are going to commit to using the Ethereum public blockchain as the base infrastructure layer for five years. Now, you have much less certainty. The value of ether could be the same or higher, or it could be near zero. The participation rate in Casper could go up to 50%, or it could drop to 10%. Hence, it is entirely possible that the cost of a 51% attack will drop, perhaps even below $1 million. At that point, conducting a 51% attack in order to profit from some market manipulation scheme becomes entirely feasible.
A third case is an even more obvious one: what if you want to trade $100 billion worth of equities? Now, the cost of attacking the public blockchain is peanuts compared to the potential profits from a market manipulation attack; hence, the public blockchain is completely unsuitable for the task.
It is worth noting that the cost of an attack is not quite as simple to estimate as was shown above. If you bribe existing validators to carry out an attack, then the above math applies. A more realistic scenario, however, would involve buying coins and using those deposits to attack; this would have a cost of either $105 million or $210 million depending on the finality threshold. The act of buying coins may also affect the price. The actual attack, if imperfectly planned, will almost certainly result in even greater losses than the theoretical minimum of 1/3 or 1/2 of deposits, and the amount of revenue that can be earned from an attack will likely be much less than the total value of the assets. However, the general principle remains the same.
Some proponents of some cryptocurrencies argue that these concerns are temporary, and that in five years the market cap of their cryptocurrency of choice will obviously be around $1 trillion, within an order of magnitude of gold, and so these arguments will be moot. This position is, at the present moment, arguably indefensible: if a bank seriously believes such a story to be the case, then it should give up on its blockchain-based securitization initiatives and instead simply buy and hold as many units of that cryptocurrency as it can. If, in the future, some cryptocurrency does manage to become established to such a degree, then it would certainly be worth rethinking the security arguments.
Hence, all in all, the weaker argument – that for high-value assets the economic security margin of public blockchains is too low – is entirely correct and, depending on the use case, is a completely valid reason for financial institutions to explore private and consortium chains.
Censorship Resistance, and Other Practical Concerns
Another concern that is raised is that public blockchains are censorship resistant, allowing anyone to send transactions, whereas financial institutions have the requirement to be able to limit which actors participate in which systems, and sometimes what form that participation takes. This is entirely correct. One counter-point that can be raised is that public blockchains, and particularly highly generalizable ones such as Ethereum, can serve as base layers for systems that do carry these restrictions: for example, one can create a token contract that only allows transactions which transfer to and from accounts that are in a specific list, or that are approved by an entity represented by a specific address on the chain. The rebuttal made to this counter-point elsewhere is that such a construction is unnecessarily Rube-Goldbergian, and one may as well just create the mechanism on a permissioned chain in the first place – otherwise one is paying the costs of the censorship resistance and independence from the traditional legal system that public chains provide, without the benefits. This argument is reasonable, although it is important to point out that it is an argument about efficiency, and not fundamental possibility; so if benefits of public chains not connected to censorship resistance (eg. lower coordination costs, network effects) prove to dominate, then it is not an absolute knockdown.
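To illustrate the kind of construction meant here, the following is a minimal off-chain model of such a restricted token’s logic, written as plain Python rather than actual contract code; all names are illustrative:

```python
# Toy model of a token whose transfers are restricted to whitelisted
# accounts, mirroring the on-chain construction described above.
class PermissionedToken:
    def __init__(self, approver):
        self.approver = approver  # account allowed to edit the whitelist
        self.whitelist = set()    # accounts permitted to hold the token
        self.balances = {}

    def approve_account(self, caller, account):
        """Only the designated approver may whitelist new accounts."""
        if caller != self.approver:
            raise PermissionError("only the approver can whitelist accounts")
        self.whitelist.add(account)

    def transfer(self, sender, recipient, amount):
        """Reject any transfer where either party is not on the list."""
        if sender not in self.whitelist or recipient not in self.whitelist:
            raise PermissionError("both parties must be whitelisted")
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[recipient] = self.balances.get(recipient, 0) + amount
```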
There are other efficiency concerns. Because public blockchains must maintain a high degree of decentralization, the node software must be able to run on standard consumer laptops; this puts strains on transaction throughput that do not exist to the same extent on a permissioned network, where one can simply require all nodes to run on 64-core servers with very high-speed internet connections. In the future, the intention is certainly for innovations in sharding to alleviate these concerns on the public chain, and if implementation goes as planned then in half a decade’s time there will be no limit to the throughput of public chains as long as you parallelize enough and add enough nodes to the network; even so, there will inevitably remain at least some efficiency, and thus cost, differential between public and permissioned chains.
The final technical concern is latency. Public chains run between thousands of consumer laptops on the public internet, whereas permissioned chains run between a much smaller number of nodes with fast internet connections, which may even be located physically close to each other. Hence the latency, and thus the time to finality, of permissioned chains will inevitably be lower than that of public chains. Unlike the efficiency concerns, this is a problem that technological improvements can never make negligible: as much as we might wish it to, Moore’s law does not make the speed of light twice as fast every two years, and no matter how many optimizations get made, there will always be a differential between networks made out of many arbitrarily located nodes and networks made out of a few possibly colocated nodes, and the difference between the two will always be quite visible to the human eye.
At the same time, public blockchains of course have many advantages in their own right. There are likely many use cases for which the legal, business development and trust costs of setting up a consortium chain are so high that it is much simpler to just throw the application on the public chain, and a large part of what makes the public chain valuable is in fact its ability to let users build applications regardless of how socially well-connected they are: even a 14-year-old can code up a decentralized exchange, publish it to the blockchain, and others can evaluate and use the application on its own merits. Some developers simply do not have the connections to put together a consortium, and public chains play a crucial role in serving them. The cross-application synergies that so easily emerge organically on public chains are another important benefit. Ultimately, we may see the two ecosystems evolving to serve different constituencies over time, although even then they will share many challenges in scalability, security and privacy, and can benefit greatly by working together.