It’s no secret that cryptocurrency has a scaling issue, so we look at how proof-of-work and proof-of-stake systems each try to address it.
Most people in the cryptocurrency world are aware that network validation usually comes in one of two forms: proof-of-work or proof-of-stake. There are others, but these two systems power many of the most popular blockchains. They take the same basic problem, verifying transactions, and solve it in distinct ways, and each offers its own answer to the ongoing debate over scaling. Does one have a true advantage over the other, or are they simply different philosophies? We’ll take a look at both.
Proof-of-work, explained
Most people have heard of Bitcoin (BTC) “miners,” but just what do they do? In essence, miners compete to solve complex math problems in order to secure transactions on the network. One of the biggest risks to a blockchain is something called a “double-spend” attack, in which someone spends the same money twice. This is rarely a problem with traditional currencies, but with digital currencies, a system is needed to ensure that someone can’t send the same Bitcoin to multiple parties.
This is where miners come in. As mentioned, they use powerful processors to secure each block on the chain by solving elaborate cryptographic puzzles, and invalid transactions, such as double-spends, are rejected along the way. Through distributed consensus, all the other miners and nodes on the network then “agree” that these transactions are valid. This process is known as proof-of-work, or PoW.
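For the curious, the puzzle at the heart of PoW can be sketched in a few lines of Python. This toy miner hashes a block header with an incrementing nonce until the hash starts with a set number of zeros; the header string and difficulty are made-up values for illustration, not Bitcoin’s real parameters:

```python
import hashlib

def mine_block(header: str, difficulty: int) -> tuple[int, str]:
    """Try nonces until the block hash meets the difficulty target."""
    target = "0" * difficulty  # more required leading zeros = harder puzzle
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{header}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

# Toy header; real Bitcoin headers encode the previous block's hash,
# a Merkle root of the transactions, a timestamp and more.
nonce, digest = mine_block("alice pays bob 1 coin", difficulty=4)
print(f"nonce={nonce} hash={digest}")
```

Because the hash function can’t be reversed, the only way to find a winning nonce is brute force, which is exactly where the enormous energy bill comes from.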
The main threat to this system is what is known as a 51% attack, in which a single attacker gains over half of the total computing power on the network, at which point the “consensus” is whatever that attacker says it is. This has happened to smaller chains before and remains a concern for many blockchains to this day.
With PoW, security comes not only from the complexity of the cryptographic functions being processed but also from the sheer energy cost of processing them, which makes attacking the network expensive. The upside is that taking over the network would require controlling 51% of all its processing power, which is unfeasible for larger chains such as Bitcoin. The downside is that protecting the network takes massive amounts of energy, making the whole thing far less efficient than a centralized alternative. This will only become a bigger issue as cryptocurrency brings in more users.
For years now, developers have been looking for ways to make blockchain technology faster, more efficient and more scalable. If Bitcoin, or any project, is ever going to see global adoption, solutions to these problems must be found. Ideas have included making blocks bigger or splitting them up into “shards,” as well as various multiple-layer solutions such as sidechains. We’ll look at all of these in a moment, but first let’s look at proof-of-stake, which is itself one possible answer to the scaling problem.
How proof-of-stake is different
Proof-of-stake, or PoS, gets rid of miners altogether and instead has “validators.” Validators don’t use processing power to secure blocks; instead, they literally “stake” their funds on the blocks they believe are valid. A validator can generally be anyone willing to stake coins on the network, and an algorithm determines which validators are chosen for each block. Whereas miners improve their chances of solving the hash puzzle by throwing more processing power at it, validators improve their chances of being selected to validate a block by staking more money. Miners are incentivized with the reward of new coins, but validators often only receive a cut of the fees included in the block, proportional to the amount they have staked.
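Here is a minimal sketch of stake-weighted selection, assuming made-up validators and stake amounts. Real chains derive the seed from on-chain randomness, and the details vary by protocol:

```python
import random

# Hypothetical validators and stakes, purely for illustration.
stakes = {"alice": 50, "bob": 30, "carol": 20}

def select_validator(stakes: dict[str, int], seed: int) -> str:
    """Pick a validator with probability proportional to stake."""
    rng = random.Random(seed)  # stand-in for on-chain randomness
    validators = list(stakes)
    weights = [stakes[v] for v in validators]
    return rng.choices(validators, weights=weights, k=1)[0]

# Alice, holding half the stake, wins roughly half of the slots.
for height in range(5):
    print(height, select_validator(stakes, seed=height))
```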
Should an attacker try to validate a bad block, it will lose its stake and be barred from further validation privileges. As for the 51% problem, a malevolent party seeking to hijack the network would no longer need over half of the processing power; it would need over half of all the coins being staked. Acquiring that much of the supply would be enormously expensive, and the community would have little faith in any coin where it was even remotely possible. Lastly, PoS addresses the energy consumption issue present in PoW, as there is no longer any need for large numbers of powerful computers running 24/7.
One of the criticisms of PoS is that it still allows for a form of centralization: having more of an asset gives you more weight as a validator, which earns you more staking rewards, which gives you even more weight, and so on. Others have pointed out the “nothing-at-stake” problem, in which validators have little to lose by staking on multiple competing blockchain histories at once. Lastly, having too many validators still slows down the network, as consensus takes longer to reach as the validator count grows. Fortunately, ways to address all of these problems are being explored.
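The compounding concern can be made concrete with a toy simulation under loudly stated assumptions: one large holder stakes while a small holder sits out (say, it cannot meet a staking minimum), so the large holder’s share of supply grows every round:

```python
# Hypothetical numbers: one large holder stakes, a small holder does not,
# and staking rewards compound round after round.
balances = {"whale": 60.0, "small": 40.0}
REWARD_RATE = 0.05  # assumed staking yield per round
stakers = {"whale"}

for _ in range(10):
    for who in stakers:
        balances[who] *= 1 + REWARD_RATE

total = sum(balances.values())
for who, bal in balances.items():
    # The whale's share climbs from 60% toward roughly 71% in ten rounds.
    print(f"{who}: {100 * bal / total:.1f}% of supply")
```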
Enter delegated proof-of-stake
A potential solution to the shortcomings of the original PoS design is called delegated proof-of-stake, or DPoS. The DPoS model is different because instead of every user staking resources in order to be a validator, users vote on which parties should be the validators of the next block. Staking more resources gives more weight to your vote, but only a limited number of validators are actually used, and they can be voted out or back in with each block.
As all users are able to stake and vote, the community should retain control if it feels a validator is not acting in its best interest. Validators have an obvious incentive to work with the community, since being elected to the position is what earns them block rewards. Lastly, limiting the number of parties involved means consensus can be reached much more quickly, potentially delivering a notable boost to network speed. Some of the biggest projects implementing this system include EOS and Tron.
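A rough sketch of the stake-weighted election behind DPoS, with hypothetical voters, stakes and delegate names (real implementations differ in how often votes are retallied and how many seats exist):

```python
from collections import defaultdict

# Hypothetical ballots: voter -> (stake, chosen delegate).
votes = {
    "alice": (50, "node_a"),
    "bob":   (30, "node_b"),
    "carol": (20, "node_a"),
}

def elect_delegates(votes: dict, num_slots: int) -> list[str]:
    """Tally stake-weighted votes and seat the top candidates."""
    tally = defaultdict(int)
    for stake, delegate in votes.values():
        tally[delegate] += stake
    return sorted(tally, key=tally.get, reverse=True)[:num_slots]

print(elect_delegates(votes, num_slots=2))  # ['node_a', 'node_b']
```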
Of course, centralization is a concern here, as there is still a chance for those with massive resources to manipulate the vote. This is a fair concern, but in general, the larger community should still retain greater voting power than any single entity could have, and an elected validator is still only one of many, thus limiting its real power.
Other ways to scale proof-of-work
Not everyone is convinced that PoS is the future, so there are still a few viable avenues being explored for scaling PoW. As mentioned, one option on the table is simply to make the blocks themselves hold more transactions. In the short term, this sounds pretty reasonable: larger blocks are a quick way to increase network throughput. But they come with caveats, because you can’t just keep making blocks bigger indefinitely. Going from 1-megabyte blocks to 2-MB or 4-MB blocks isn’t a big deal, but where does it end? 1 gigabyte? 10 GB? At least for blockchains designed like Bitcoin, blocks of that size would make storing the whole chain exceedingly burdensome. Of course, if storing data on the blockchain is a higher priority than transaction speed, then large blocks become useful again, and keeping them synchronized across the network becomes the most important challenge.
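To make the throughput math concrete, here is a back-of-the-envelope estimate for a Bitcoin-like chain; the 250-byte average transaction size is an assumption, and real figures vary:

```python
# Rough throughput estimate under assumed figures.
block_size_bytes = 1_000_000   # 1 MB block
avg_tx_bytes = 250             # assumed average transaction size
block_interval_s = 600         # one block every ten minutes

tx_per_block = block_size_bytes // avg_tx_bytes
tps = tx_per_block / block_interval_s
print(f"{tx_per_block} tx/block, about {tps:.1f} tx/s")  # ~6.7 tx/s
```

Note that throughput scales linearly here: doubling the block size roughly doubles transactions per second, which is exactly why bigger blocks help quickly but can’t help forever.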
A different philosophy that some projects are exploring is a technique called “sharding.” Sharding works by dividing blocks up into “shards,” which are then processed in parallel, so that not every miner has to process every shard. Each miner only handles part of each block, meaning less power is needed and the block can be validated faster. The same logic can also be applied to a PoS system, only with validators instead of miners. Either way, the plan is to increase overall throughput by not making every participant on the network process the full extent of every block.
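One simple way a network might assign transactions to shards is by hashing their IDs, as in this sketch; the shard count and transaction IDs are placeholders, and real sharding schemes are considerably more involved:

```python
import hashlib

NUM_SHARDS = 4  # placeholder shard count

def shard_for(tx_id: str) -> int:
    """Deterministically map a transaction to a shard by hashing its ID."""
    digest = hashlib.sha256(tx_id.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_SHARDS

for tx in ["tx-001", "tx-002", "tx-003", "tx-004"]:
    print(tx, "-> shard", shard_for(tx))
```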
Sharding does come with some drawbacks that have yet to be sufficiently addressed, however. For one, once the blockchain is broken up into shards, those shards cannot easily communicate with each other, which could be problematic for applications that rely on multiple shards. While a system for cross-shard communication could be developed, it would be exceedingly complex and at risk of a plethora of potentially devastating data errors.
In a similar vein, sharding also opens up a new security risk. In theory, attackers could now focus on a single shard, which would take far fewer resources than trying to take over the entire network. They could then craft seemingly valid transactions within that shard and submit them back to the main chain. An attack like this has no equivalent when blocks are kept whole, so it remains a real risk to user funds.
One more important area researchers are looking into is something known as “sidechains” or “second-layer solutions.” In a nutshell, this is generally a separate network that sits on top of a blockchain and handles transactions “off-chain.” Users can open “channels” between each other and transact however they see fit, and only when they close a channel does the data get batched and written onto the main chain to create the immutable record. Multiple channels can be linked together to form a global payment network that is backed by the blockchain but moves much faster in real time. This is especially well suited to frequent, smaller transactions and could pave the way for cryptocurrency to be used as cash.
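The channel idea can be illustrated with a toy two-party channel: many payments update a local balance sheet, and only the final state is settled on-chain. Everything here, class name included, is a simplified stand-in for real protocols such as Lightning:

```python
class PaymentChannel:
    """Toy two-party channel: payments update local state; only the
    final balances are settled on the main chain."""

    def __init__(self, alice_deposit: int, bob_deposit: int):
        # Channels are collateralized: funds are locked in up front.
        self.balances = {"alice": alice_deposit, "bob": bob_deposit}
        self.updates = 0  # off-chain updates; nothing touches the chain

    def pay(self, sender: str, receiver: str, amount: int):
        if self.balances[sender] < amount:
            raise ValueError("insufficient channel balance")
        self.balances[sender] -= amount
        self.balances[receiver] += amount
        self.updates += 1

    def close(self) -> dict:
        """Return the final state to be written on-chain."""
        return dict(self.balances)

channel = PaymentChannel(alice_deposit=100, bob_deposit=100)
for _ in range(50):              # fifty payments, all off-chain
    channel.pay("alice", "bob", 1)
print(channel.close())           # one settlement: {'alice': 50, 'bob': 150}
```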
There are some downsides: in their current form, channels generally need to be “collateralized,” meaning money has to be locked into the channel before it can be used. Combined with the fact that not all of the bugs have been worked out, this can mean serious risk to funds should something go wrong before a channel’s state is recorded on the blockchain. These protocols require very precise engineering to keep the sidechains and the main chain in perfect sync, but so far, results are promising.
Some of the most popular versions of this technology include the Lightning Network for Bitcoin and the Raiden Network for Ethereum. These projects are still early in development, and there are in fact multiple implementations of the Lightning Network being built; it is as yet unclear which, if any, will become the standard. Another second-layer project for Ethereum, called Plasma, would use smart contracts to build sidechains of transaction data that, again, only occasionally write to the main layer. Similarly, Charles Hoskinson, the creator of Cardano, has discussed the project’s upcoming Hydra technology, which combines elements of a second layer with sharding in the hopes of reaching upward of “1 million transactions per second.”
One other project, ILCoin, is bringing elements of many of these different solutions together. ILCoin uses something called the RIFT protocol, which approaches the blockchain in a slightly different way to create a “Decentralized Hybrid Blockchain System,” or DHCB. This is a multilayered system still based on the PoW SHA-256 algorithm that Bitcoin uses, but here the chain is composed of blocks that are filled with “mini-blocks.” Mini-blocks are fixed at 25 MB; however, there is theoretically no limit to how many of them can fit inside a regular block. The team says it has successfully created blocks of up to 5 GB, and according to its documentation:
“Assuming each transaction is occupying the minimum number of bytes possible, each block may contain up to a maximum of 21551724 transactions. With an average block mining time of 3 – 5 minutes, that equates to between 71839 and 119731 transactions per second using a 5 GB block.”
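For readers who want to sanity-check the quoted figures, the arithmetic reproduces as follows; all inputs come from the quote itself, and the roughly 232-byte transaction size is implied by them:

```python
# All inputs below come from the quoted documentation.
block_bytes = 5 * 10**9            # 5 GB block
tx_per_block = 21_551_724          # quoted maximum transactions per block

print(block_bytes / tx_per_block)  # ~232 bytes per transaction
print(tx_per_block / (5 * 60))     # ~71,839 tx/s with 5-minute blocks
print(tx_per_block / (3 * 60))     # ~119,731 tx/s with 3-minute blocks
```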
Thanks to the RIFT protocol, 5 GB blocks and the mini-block architecture, ILCoin has scheduled the launch of its Decentralized Cloud Blockchain, or DCB, for this year. The team says that DCB will allow for on-chain storage of a wide array of digital content, including images, videos and more, something that has generally been impractical until now because of blockchain bloat.
Still a lot of work to do
The reality may be that there is no single correct solution for scaling. Each project may need to look at how it is being used and ask which path, or paths, suits it best. New strategies and technologies are constantly emerging that could shake up the whole game at any time. While all of the ideas here show immense promise, the book on scaling blockchains has not yet been written. Most likely, a combination of many of these ideas and more will ultimately shape how cryptocurrency reaches a mass audience, but the problem needs to be solved before that can happen. Otherwise, a centralized, permissioned chain may end up being the only kind accessible to a global population.
Disclaimer. Cointelegraph does not endorse any content or product on this page. While we aim to provide you with all the important information we could obtain, readers should do their own research before taking any action related to the company and should carry full responsibility for their decisions. This article cannot be considered investment advice.