Last updated: 22 February, 2018
A few years after Satoshi Nakamoto unleashed his bitcoin paper on the world, the cryptocurrency’s users began to notice a potential problem: bitcoin wasn’t very scalable – it could only handle a trickle of transactions.
For a system that many claimed could replace fiat payments, this was a big barrier. While Visa handles around 24,000 transactions a second, bitcoin could process up to 7. Unless something could be done about this, bitcoin’s utility was limited.
Thus began the “scaling debate,” which polarized the community and unleashed a wave of technological innovation in the search of workarounds. Yet while significant progress has been made, a sustainable solution is still far from clear.
The problem arises from bitcoin’s design: Satoshi programmed the blocks to have a size limit of approximately 1MB each, in order to prevent network spam.
Since each block takes an average of 10 minutes to process, this works out to a relatively small number of transactions overall. An increase in demand would inevitably lead to an increase in fees, and bitcoin’s utility would diminish even further.
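The back-of-the-envelope arithmetic behind that figure can be sketched as follows. The ~250-byte average transaction size is an illustrative assumption (a typical simple one-input, two-output transaction), not a protocol constant:

```python
# Rough throughput estimate for pre-SegWit bitcoin.
BLOCK_SIZE_LIMIT = 1_000_000   # bytes (~1MB block size limit)
AVG_TX_SIZE = 250              # bytes; illustrative assumption
AVG_BLOCK_INTERVAL = 600       # seconds (10-minute average)

txs_per_block = BLOCK_SIZE_LIMIT // AVG_TX_SIZE        # 4000
txs_per_second = txs_per_block / AVG_BLOCK_INTERVAL

print(f"{txs_per_block} transactions per block")
print(f"~{txs_per_second:.1f} transactions per second")  # ~6.7
```

With larger, more complex transactions the real-world figure drops even lower, which is why estimates of bitcoin's capacity range from about 3 to 7 transactions a second.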
Don’t think so
A simple solution initially appeared to be an increase in the block size. Yet that idea turned out to be not simple at all.
First, there was no clear agreement on how much it should be increased by. Some proposals advocated 2MB, others 8MB, and one wanted to go as high as 32MB.
The core development team argued that increasing the block size at all would weaken the protocol’s decentralization by concentrating mining power – with bigger blocks, only the more powerful miners would be successful, and the race for faster machines could eventually make bitcoin mining unprofitable. Also, the number of nodes able to run a much heavier blockchain could decrease, further centralizing a network that depends on decentralization.
Second, the method of the change was contentious. How do you execute a system-wide upgrade when participation is decentralized? Should everyone have to update their bitcoin software? What if some miners, nodes and merchants don’t?
And finally, an existential argument emerged. Bitcoin is bitcoin, why mess with it? If someone didn’t like it, they were welcome to modify the open-source code and launch their own coin (indeed, some have done just that).
What’s more, Satoshi is no longer around to tell us what he originally intended. And even if he were, would he care? Did he not design bitcoin to run itself?
I have an idea
In 2015, developer Pieter Wuille revealed a solution that, at first glance, looked like it could appease all groups. Segregated Witness, or SegWit, increased the capacity of the bitcoin blocks without changing their size limit, by altering how the transaction data was stored. (For a more detailed account, see our explainer.)
SegWit was deployed on the bitcoin network in August 2017, via a soft fork (to make it compatible with nodes that did not upgrade). In spite of initial excitement about the benefits, however, uptake has been slow. While many wallets and other bitcoin services are gradually adjusting their software, others are reluctant to do so because of the perceived risk and cost.
Take two
Several industry players argued that SegWit didn’t go far enough – it might help in the short term, but sooner or later bitcoin would again be up against a limit to its growth.
In 2017, coinciding with CoinDesk’s Consensus conference in New York, a new approach was revealed: Segwit2x. This idea – backed by several of the sector’s largest exchanges – combined SegWit with an increase in the block size to 2MB, raising the theoretical maximum capacity to as much as eight times the pre-SegWit limit.
Far from solving the problem, the proposal unleashed a further wave of discord. The manner of its unveiling (through a public announcement rather than an upgrade proposal) and its lack of replay protection (transactions could happen on both versions, potentially leading to double spending) rankled many. And the perceived redistribution of power away from developers towards miners and businesses threatened to cause a fundamental split in the community.
In the end, the idea was dropped a few months later, just weeks from its target implementation date.
Meanwhile…
Other technological approaches are being developed as a potential way to increase capacity.
Schnorr signatures offer a way to consolidate signature data, reducing the space it takes up within a bitcoin block (and enhancing privacy). Combined with SegWit, this could allow a much greater number of transactions, without changing the block size limit.
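The space saving can be illustrated with simple arithmetic. The sketch below assumes 64-byte Schnorr signatures and a hypothetical scheme in which a multi-input transaction's signatures are aggregated into one (the sizes are illustrative figures, not protocol rules):

```python
# Illustrative comparison: one signature per input (ECDSA today)
# vs. a single aggregated Schnorr signature for the whole transaction.
ECDSA_SIG_SIZE = 72    # bytes; typical DER-encoded ECDSA signature
SCHNORR_SIG_SIZE = 64  # bytes; fixed-size Schnorr signature

def signature_bytes(n_inputs: int, aggregated: bool) -> int:
    if aggregated:
        return SCHNORR_SIG_SIZE          # one signature covers all inputs
    return n_inputs * ECDSA_SIG_SIZE     # one signature per input

saved = signature_bytes(10, aggregated=False) - signature_bytes(10, aggregated=True)
print(f"{saved} bytes saved on a 10-input transaction")  # 656 bytes
```

The privacy benefit follows from the same property: an aggregated signature doesn't reveal how many signers (or which multisig arrangement) stood behind it.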
And work is proceeding on the lightning network, a second layer protocol that runs on top of bitcoin, opening up channels of fast microtransactions that only settle on the bitcoin network when the channel participants are ready.
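The channel idea can be sketched with a toy model: two parties fund a channel with one on-chain transaction, exchange any number of off-chain balance updates, and settle the final balances with one more on-chain transaction. This is a conceptual sketch only; the real Lightning protocol adds cryptographic machinery (HTLCs, penalty transactions) to make the updates trustless:

```python
# Toy payment channel: only opening and closing would touch the
# blockchain; every pay() call is an off-chain balance update.
class ToyChannel:
    def __init__(self, alice_sats: int, bob_sats: int):
        # Opening the channel = one on-chain funding transaction.
        self.balances = {"alice": alice_sats, "bob": bob_sats}
        self.updates = 0

    def pay(self, sender: str, receiver: str, amount: int) -> None:
        # Off-chain: both parties sign a new balance state.
        if self.balances[sender] < amount:
            raise ValueError("insufficient channel balance")
        self.balances[sender] -= amount
        self.balances[receiver] += amount
        self.updates += 1

    def close(self) -> dict:
        # Closing = one on-chain settlement transaction.
        return dict(self.balances)

channel = ToyChannel(alice_sats=50_000, bob_sats=50_000)
for _ in range(1000):               # 1,000 micropayments...
    channel.pay("alice", "bob", 10)
print(channel.close())              # ...settled in a single transaction
```

A thousand micropayments thus consume the blockchain capacity of just two transactions, which is the core of Lightning's scaling argument.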
Getting closer
So where are we now? Adoption of the SegWit upgrade is slowly spreading throughout the network, increasing transaction capacity and lowering fees.
Most blocks are just over the 1MB mark, and transaction fees – which shot up to over $50 in December 2017 – have fallen back down to around $4 at time of writing.
Progress is accelerating on more advanced solutions such as lightning, with transactions being sent on testnets (as well as some using real bitcoin). And the potential of Schnorr signatures is attracting increasing attention, with several proposals detailing their functionality and integration.
While bitcoin’s use as a payment mechanism seems to have taken a back seat to its value as an investment asset, the need for greater transaction capacity is still pressing, as the fees miners charge for processing now often exceed those of fiat alternatives. What’s more, given that we are still at the beginning of cryptocurrency’s evolution, the development of new features that enhance functionality is crucial if the potential of the underlying blockchain technology is to be realised.
Authored by Noelle Acheson. Steps image via Shutterstock.