The world is eagerly waiting for a next-generation, high-performance, permissionless blockchain capable of scaling all decentralized applications to an industrial level. So far, the crypto community has witnessed:
- Peer-to-peer blockchain networks that use all the peers to validate transactions and provide computation and storage — or traditional blockchains — such as Bitcoin and Ethereum.
- P2P blockchain networks that shard transactions, computation and storage — or sharding blockchains — such as Ethereum 2.0 and Zilliqa.
Sharding mechanisms give hope for unlimited, sustainable scalability of blockchains, and many people in the blockchain space strongly believe that the scalability of sharding has reached a tipping point. But that is not quite true. Let’s dive into it.
In the blockchain world, why do we need sharding?
Currently, the internet is used in payments, Internet of Things, smart cities, robotics, web searches, streaming videos, e-commerce, autonomous vehicles, etc. Hence, the internet generates:
- Over 1 billion transactions per second (transactions).
- Over 1 sextillion calculations per second (computations).
- Over 2.5 quintillion bytes of data per second (storage).
This work needs to be harmoniously split among all the peers in a P2P network. This splitting of work is called sharding, and it can be applied to transactions, computations and storage.
Problems that plague sharding mechanisms
A permissionless P2P network is unpredictable. To compensate for this unpredictability, various blockchain protocols fix the number of validations and the number of storage copies to a constant derived from a mathematical computation based on certain assumptions. This limits the scalability of blockchains: the system will either overcompensate and limit scale, or undercompensate and risk security and integrity.
What if the P2P network can be predicted? Can the number of validation and storage peers be flexible depending on the chaoticity of the P2P network? That is to say, if the P2P network behaves ideally, then only one validation and storage copy is needed, and if the peers in the P2P network behave maliciously or deviate from the ideal nature, then the number of validation and storage copies will increase proportionally.
Problems faced by peers/shards in a P2P network include:
- Internet connection problems, electricity cuts, data loss and much more.
- Peers joining and leaving the network constantly, all over the globe.
- Data availability and data consistency problems.
- If a peer/shard goes offline, the data belonging to that shard is lost forever.
- Peers/shards can turn malicious anytime.
The culprit here is the unpredictability of P2P networks! This decreases the performance of validation, computation and storage.
Due to the uncertainty in P2P networks, a self-healing mechanism is introduced.
Case one: Traditional blockchains. All the N nodes in the network validate/compute/store all the transactions in the network. (N)
Case two: Ideal P2P. Consider an ideal P2P blockchain network where all the peers in the network are available 24/7 with good internet, bandwidth, electricity supply, etc., and are good peers that are not malicious. Then any transaction/computation/storage that arrives at the network can be validated/computed/stored by one peer. (1)
Case three: Sharded blockchains. A real P2P blockchain network is not so ideal, and hence a mathematical formula is derived based on the maximum possible deviation from the ideal P2P blockchain network and certain assumptions to set a fixed number, such as 22–600 peers, to validate/compute/store, depending on the blockchain protocol. (N/x)
Case four: Self-healing blockchains. Cases one, two and three are extreme scenarios, as shown in the graph below. The number of validation, computation and storage copies should depend on the level of deviation from the ideal state, with an adequate safety margin: (N/x(c)), where (c) stands for the chaoticity of the network. Chaoticity (c) is a function of internet bandwidth, electricity, data availability, data consistency and the number of nodes joining or leaving. If this function deviates from its ideal value, whether positively or negatively, the P2P network deploys countermeasures accordingly. Hence, the network automatically heals itself whenever it comes under stress.
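As a thought experiment, the adaptive copy count N/x(c) described above can be sketched in code. Everything here is an illustrative assumption, not the actual Uniris protocol: the way the chaoticity score is weighted, the safety margin and the function names are all made up to show the idea that the number of copies scales between 1 (ideal network) and N (fully chaotic network).

```python
# Hypothetical sketch of an adaptive replication factor N/x(c).
# Chaoticity c is in [0, 1]: 0 = ideal network, 1 = fully chaotic.

def chaoticity(bandwidth_ok: float, power_ok: float,
               data_available: float, data_consistent: float,
               churn: float) -> float:
    """Combine network-health ratios (each in [0, 1], 1 = ideal) and churn
    (fraction of nodes joining/leaving per period, 0 = ideal) into one score.
    The equal weighting here is purely illustrative."""
    health = (bandwidth_ok + power_ok + data_available + data_consistent) / 4
    c = (1 - health + churn) / 2          # 0 = ideal, 1 = fully chaotic
    return min(max(c, 0.0), 1.0)

def replication_factor(n_nodes: int, c: float, safety_margin: float = 0.1) -> int:
    """Scale the number of validation/storage copies between 1 (ideal)
    and n_nodes (traditional blockchain), with a safety margin."""
    c = min(c + safety_margin, 1.0)
    return max(1, round(1 + c * (n_nodes - 1)))

# Ideal network of 1,000 nodes: only the safety margin forces extra copies.
print(replication_factor(1000, chaoticity(1, 1, 1, 1, 0)))   # -> 101
# Fully chaotic network: fall back to all N nodes, as in case one.
print(replication_factor(1000, chaoticity(0, 0, 0, 0, 1)))   # -> 1000
```

Between these two extremes, the copy count moves continuously with the measured chaoticity, which is the essence of the self-healing behavior.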
The analogy to self-healing blockchains
Let us use the Paris subway as an example, where, depending on passenger traffic, the metro trains change their timing, frequency, number of compartments and speed.
- Traditional: There will be a maximum number of metro trains with maximum frequency, a maximum number of compartments and maximum speed all the time. (A lot of energy is wasted.)
- Ideal: There will be a minimum number of metro trains with minimum frequency, a minimum number of compartments and a minimum speed all the time. (It takes a lot of time for people to commute.)
- Sharded: The number of metro trains and their frequency, number of compartments and speeds will be less than the maximum, but the numbers are fixed no matter the number of people who want to use the metro.
- Self-healing: Depending on the number of people, whether it’s during peak hours from 7 am to 9 am and 4 pm to 7 pm, and the number of trains available, etc., the number of metro trains and their frequency, number of compartments and speeds change accordingly and are flexible for a harmonious output.
Self-healing blockchains are designed in such a way that they can survive for decades, if not centuries. The scalability achieved by these blockchains is close to that of centralized systems, yet they maintain true decentralization. Because of this high scalability, any centralized application can be built on a self-healing blockchain.
Applying artificial intelligence to the time series — internet bandwidth, electricity, data availability, data consistency, data loss, number of nodes joining/leaving, etc. — could further improve self-healing blockchains, making them faster and able to predict an event before it happens and, hence, able to deploy countermeasures before it occurs.
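A minimal sketch of that predictive idea, assuming a chaoticity time series is available: forecast the next reading with an exponentially weighted moving average (a simple stand-in for a real AI forecasting model such as ARIMA or an LSTM) and trigger countermeasures before the stress actually arrives. The threshold, smoothing factor and sample data are illustrative assumptions.

```python
# Hypothetical sketch: predict the next chaoticity reading and deploy
# countermeasures pre-emptively when the forecast crosses a threshold.

def ewma_forecast(series, alpha=0.5):
    """One-step-ahead forecast via an exponentially weighted moving average:
    recent readings count more than old ones."""
    forecast = series[0]
    for x in series[1:]:
        forecast = alpha * x + (1 - alpha) * forecast
    return forecast

readings = [0.10, 0.12, 0.20, 0.35, 0.50]    # chaoticity rising over time
predicted = ewma_forecast(readings)           # -> 0.37625

if predicted > 0.3:                           # illustrative threshold
    print("raise replication factor before the stress materializes")
```

The point is not the specific model, but that acting on a forecast lets the network add copies before nodes drop out, rather than after data is already at risk.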
The views, thoughts and opinions expressed here are the authors’ alone and do not necessarily reflect or represent the views and opinions of Cointelegraph.
This article was co-authored by Akshay Kumar Kandhi, Nilesh Patankar, Sebastien Dupont and Samiran Ghosh.
Akshay Kumar Kandhi is the head of innovation, research and development at Uniris, where he is at the forefront of research in blockchain and biometrics. He has a degree from Ecole Polytechnique in France.
Nilesh Patankar is the co-founder and chief operating officer of Uniris. Nilesh is a seasoned technologist with over 25 years of experience in the payments domain. He has managed global programs for the card network Mastercard and the bank Barclays. He was also the chief technology officer of Payback, the largest coalition loyalty program in India with over 100 million users.
Sebastien Dupont is the co-founder and CEO of Uniris. Sebastien is a security and identity expert. He was responsible for two of the largest projects at telecommunications company Orange: Identity, which had 100 million users, and Mobile Banking in Africa, growing the turnover from 10 million euros to 4 billion euros. He was also a cybersecurity expert at Thales. He has been a prominent blockchain evangelist since 2013.
Samiran Ghosh is the senior global ambassador of Uniris. He is also a member of the prestigious Forbes Technology Council, MIT Technology Review and is a TEDx speaker on technology.