Categories
News

Bitcoin ‘could break $200,000 for a moment’ says an optimistic Brock Pierce (www.blockcast.cc)

Visit us at https://is.gd/OMRCsA


Analyst: ‘Deflationary forces’ present an optimistic outlook for BTC, ETH in 2022 (www.blockcast.cc)

Visit us at https://is.gd/zTcIU5


Analysis of the two major Rollup camps, Optimistic Rollups and ZK Rollups: Can DeFi trigger another period of high growth? (www.blockcast.cc)

Visit us at https://is.gd/mZ7EhX


Investors “still optimistic about cryptocurrencies” despite market downturn (www.blockcast.cc)

Visit us at https://is.gd/Y96Nwx


Short-term sideways trading will not dampen the optimistic mood | CFTC COT Bitcoin Holding Weekly Report (www.blockcast.cc)


On February 20, the CFTC released the latest CME Bitcoin futures weekly report (covering February 10 to February 16). During this statistical period, BTC's rise stalled slightly: the price gained only about $1,000, compared with increases of thousands or even tens of thousands of dollars in previous periods, so the market was essentially resting sideways at a high level. The last time a similar sideways phase occurred, accounts of all types generally began to deploy risk control. After the sharp rise of the previous two weeks, whether the newly re-established momentum-chasing mood will change is the key question for this weekly report.

Total open interest rose further in the latest data, from 11,055 to 11,426 contracts. The figure has risen for a second consecutive week and hit a near four-week high, so the sideways market has not significantly dampened optimism. It is worth noting, however, that the increase in positions was smaller than in the previous period, suggesting that the high-level consolidation has inevitably had some negative effect on the enthusiasm of new funds.

| CFTC COT Bitcoin Holding Weekly Report

In the sub-data, dealers' long positions dropped from 387 to 368 contracts, short positions remained unchanged at 564, and hedged (two-way) positions dropped from 89 to 61. Dealer accounts thus remain net short, as they have been for a long time. What is really worth noting is that, in terms of position adjustment, this group made a clearly bearish move in the latest period: reducing long and two-way positions while leaving shorts unchanged. These accounts evidently remain cautious about the current high-level consolidation. After all, with the Bitcoin price near its historical high, the potential risk of a large correction can never be ignored by relatively conservative institutional investors. It can therefore be said that dealer accounts sent a clearer risk-off signal in the latest statistical period.

| CFTC COT Bitcoin Holding Weekly Report

Asset managers' long positions dropped from 348 to 338 contracts, short positions rose from 265 to 339, and two-way positions dropped from 26 to 9. The return to a net long position seen in the previous period did not continue; with the latest period complete, short positions once again exceed longs. For a category of institution that has long held a positive attitude, another long-short reversal expresses strong risk-control expectations. Although these accounts decisively chased the rally during the previous surge, as the pace of the rise slows, large institutions still cannot abandon their risk-control discipline.

Considering that the Bitcoin price currently sits at a special historical level, institutional investors with strict risk-control standards can be expected to maintain this relatively bearish, cautious attitude for some time to come. The position adjustments of these institutional accounts therefore need to be read with this special period in mind: a seemingly "over-conservative" contrarian adjustment is not necessarily a real misjudgment, and the risk-control thinking behind it is worth learning from.

At this stage, what deserves more attention is whether institutional investors make clear one-sided adjustments when the price moves sharply in one direction during a statistical period. The relatively stagnant position adjustments seen in a sideways phase need not be over-interpreted.

| CFTC COT Bitcoin Holding Weekly Report

In the latest statistical period, leveraged funds' long positions increased further from 3,149 to 3,301 contracts, short positions rose in parallel from 8,481 to 8,750, and two-way positions decreased slightly from 665 to 660. Leveraged funds thus continued the two-way accumulation begun in the previous period. As noted in earlier weekly reports, this category's directional view is generally expressed through simple increases or decreases on both sides, and a synchronized two-way increase remains a relatively clear positive signal. Leveraged funds can be considered still optimistic about the market outlook.

| CFTC COT Bitcoin Holding Weekly Report

Among large traders, long positions rose from 2,822 to 3,030 contracts, short positions rose from 64 to 132, and two-way positions rose from 146 to 151. Large accounts did not continue the net long adjustment of the previous period, but they still added a considerable number of long positions. The long share of these accounts' directional positions remains above 95%, making them the most determined bulls of all account types. In the short term, their overwhelmingly bullish stance shows no sign of faltering.
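A quick check of the "95% or more" figure, using the large-trader numbers above (a minimal sketch; the ratio here counts only directional long and short contracts and ignores two-way positions):

```python
# Sanity check on the long share claimed for large-trader accounts,
# using the figures from the report (3,030 longs vs 132 shorts).
def long_ratio(longs: int, shorts: int) -> float:
    """Share of directional open interest held long."""
    return longs / (longs + shorts)

ratio = long_ratio(3030, 132)
print(f"Large traders long ratio: {ratio:.1%}")  # ~95.8%, consistent with ">95%"
```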

| CFTC COT Bitcoin Holding Weekly Report

Among retail holdings, long positions rose from 3,423 to 3,508 contracts and short positions rose in parallel from 755 to 760. Retail accounts thus made only a very limited two-way increase in the latest period. The stalled pace of the rally makes it difficult for this group to form a consistent judgment, so they chose the relatively safe option of holding their ground and waiting for the next trend to appear.

| CFTC COT Bitcoin Holding Weekly Report

Extended reading: What is the CFTC position report? What is the value? How to interpret it?

Disclaimer: As a blockchain information platform, the articles published on this site only represent the author’s personal views, and have nothing to do with ChainNews’ position. The information, opinions, etc. in the article are for reference only and are not intended as or regarded as actual investment advice.


Hard fork Optimistic Rollup, analysis of Layer 2 DAO basic protocol Metis (www.blockcast.cc)

What kind of chemical reaction will occur when combining DAO and Layer 2?

Written by: Kevin, co-founder of MetisDAO

The farce of Discord temporarily banning the WallStreetBets discussion group and Robinhood halting GameStop trading tells us clearly that the "rights" of centralized platforms may be far greater than you think. The entire Internet and financial worlds may urgently need a "decentralized" reform; at the very least, centralized Internet companies and financial institutions should not have their hands around everyone's throats.

The emergence of smart contracts and DAOs provides a good direction for exploring decentralization. Relying on smart contracts, DeFi has surged, re-architecting finance in a decentralized way, while DAOs go a step further, making direct democracy possible and making governance truly decentralized for the first time.

At present, however, most people's understanding of DAOs remains shallow. Many think a DAO is simply an autonomous community in which everyone achieves governance through on-chain voting. In fact, the possibilities of DAOs go far beyond voting.

To some extent, Bitcoin was actually the first DAO in human history. Miners formed a DAO in a decentralized mode, not to vote, but to issue bitcoin through self-enforced rules such as block rewards and to ensure that the Bitcoin ledger is true and valid. In other words, the DAO exists to carry out value-generating activity, that is, collaboration.

However, existing DAOs, public chains, and smart contracts are unable to support Internet applications such as wikis, chat rooms, and short videos because of those applications' complex business logic, which indicates that the blockchain industry lacks the necessary technical infrastructure.

Vitalik made similar remarks

Various Layer 2 solutions (Lightning Network, Rollups, etc.) have been discussed in the industry for years. Starting in 2020, rollup technology began to see real development and deployment, but rollups are currently used mainly to solve payment problems, and their usage scenarios cannot be effectively expanded.

What kind of chemical reaction will happen when DAO and Layer2, the two hot spots that the industry has given high hopes, are put together? Maybe Metis is revealing the answer to this question.

Comprehensive understanding of Metis

Metis has built a new second-layer protocol on top of a public main chain, allowing Web2 projects and communities to easily build their DAC (decentralized company, a kind of DAO) on the blockchain, quickly launch DApps (decentralized applications), and use blockchain-based tools to manage collaboration between the community and the data-computing layer.

The core of the Metis Layer 2 protocol is Optimistic Governance: a game mechanism based on the Optimistic Rollup (OR) idea and staked withdrawals on the OR side chain. To support this mechanism, Metis hard-forked Optimistic Rollup and built the Metis Virtual Machine (MVM), which manages the business and data layers and their complex operations through the ComCo management framework deployed on the side chain.

Optimistic Governance addresses how, in the decentralized setting of DApps, parties that lack a foundation of trust can establish trust and confirm a collaborative relationship, and how the execution and results of a DApp's business logic can be kept true and credible. To build that foundation, before the relationship is confirmed, each party stakes a deposit into the Metis protocol as a promise of performance; once staking is complete, the collaboration is confirmed. During execution, Metis assumes all parties are honest and keep their promises (true in most cases). If no party raises an objection, the deposits are returned automatically after a challenge period that begins when the collaborative transaction completes. In this case no governance intervention is needed, and the whole process runs at maximum efficiency. If a party acts maliciously, however (for example, failing to submit computation results or deliverables on time and to standard), its staked deposit is frozen by the protocol and the arbitration and penalty system is activated to protect the interests of the compliant parties.
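The stake-then-challenge flow described above can be sketched roughly as follows (a minimal illustration, not Metis's actual contract code; the `Collaboration` class and the 7-day window are invented for the example):

```python
class Collaboration:
    """Toy model of optimistic governance: stake, optimistic execution,
    a challenge window, then automatic release or arbitration."""
    CHALLENGE_PERIOD = 7  # days; illustrative only

    def __init__(self):
        self.stakes = {}          # party -> staked deposit
        self.challenged = False
        self.finished_at = None

    def stake(self, party, amount):
        self.stakes[party] = amount      # promise of performance

    def finish(self, day):
        self.finished_at = day           # challenge window starts here

    def challenge(self, day):
        # an objection raised within the window freezes deposits for arbitration
        if self.finished_at is not None and day <= self.finished_at + self.CHALLENGE_PERIOD:
            self.challenged = True

    def settle(self, day):
        if self.challenged:
            return "arbitration"          # penalty system decides who is slashed
        if day > self.finished_at + self.CHALLENGE_PERIOD:
            return "deposits released"    # nobody objected: maximum efficiency
        return "in challenge period"

c = Collaboration()
c.stake("alice", 100)
c.stake("bob", 100)
c.finish(day=0)
print(c.settle(day=8))  # deposits released
```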

Similarly, at the data-operation layer of DApps, microservices and computation happen on the side chain. To prevent the Aggregator from packaging wrong or substituted data when it batches data and syncs it to the main chain, the packaging operation also requires a staked deposit, and the data is not immediately confirmed on the main chain. Validators on the main chain repeat the computation; if there is a problem with the packaged data, a validator stakes a deposit and initiates a challenge on the main chain within the allowed window, triggering the main-chain smart contract to start an arbitration game. The arbitration contract re-executes the computation, compares the answers of the validator and the packager, awards the commission to the party with the correct answer, and confiscates the deposit of the dishonest party.
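A toy version of that arbitration game (purely illustrative; the function names and the re-executed "computation" are invented for the example):

```python
def arbitrate(inputs, claimed_output, challenger_output, recompute):
    """Re-execute the disputed computation and settle the deposits.

    The arbitration contract reruns the calculation, compares the result
    with the packager's claim and the challenger's answer, pays the honest
    party, and confiscates the dishonest party's deposit."""
    truth = recompute(inputs)
    if truth == claimed_output:
        return "challenger slashed, packager paid"
    if truth == challenger_output:
        return "packager slashed, challenger paid"
    return "both wrong, both slashed"

# Example: the disputed computation is just summing a batch of transfers.
result = arbitrate([1, 2, 3], claimed_output=7, challenger_output=6, recompute=sum)
print(result)  # packager slashed, challenger paid
```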

Because the business and data governance, management, and complex logical operations involved in DApps are difficult to achieve with only the Layer 1 main chain and smart contracts (owing to the gas costs, efficiency limits, and missing functionality mentioned above), Metis developed the ComCo management framework, which uses microservice tools deployed on the side chain to carry out the complex computation and management of the business and data layers in DApps. This design breaks through the performance, functional, and cost limitations of Layer 1 and smart contracts: it reduces gas costs by submitting transaction operations in batches, and achieves more functionality by adding microservices on the side chain.

Metis therefore provides a new Layer 2 protocol with high scalability, high performance, and low cost. Developers can use the Metis open-source microservice framework to implement decentralized applications quickly: they can assemble existing services, modules, and UI components to achieve "one-click DApps", or develop against the various Metis interfaces and protocols to build DApps such as decentralized wikis, chat rooms, open-source communities, event organizations, task-distribution platforms, community games, and DeFi.

Since these DApps are built on the DAC organizational framework, community members can participate in the project's value-generating activities under a unified set of management rules, contributing and earning token incentives.

Exploring the "new generation" DAO base protocol

Metis formed the earliest framework of its second-layer protocol in 2019 and proposed the concept of the DAC, the decentralized company. Each DApp is a DAC composed of community members and stakeholders who collaborate across domains in a distributed way to complete a specific mission. A DAC is a subset of a DAO, but a DAC cares not only about governance; it cares even more about management, which is what other DAO projects currently lack. In fact, most DAOs we see are responsible only for proposals and voting, functions that could be realized by a few modules within a project; there is really no need to set up a separate DAO for that.

Metis CEO Elena Sinelnikova has strong feelings about this issue. She is also co-founder of CryptoChicks (the project's other co-founder is Vitalik's mother, Natalia Ameline), leading the world's largest women's blockchain community, headquartered in Canada with members in 56 countries. Elena has long been engaged in community building and holds several global hackathons and related training events every year, so she has been looking for a mechanism that can solve the problem of effective collaboration in a decentralized environment that lacks trust. She believes DAOs provide good ideas, but there is still a big gap in the management of decentralized collaboration.

The three Metis partners therefore focused on how to construct a protocol, built on a new organizational structure, management mechanism, and software framework, that allows DApps to establish trust, confirm collaboration relationships, and verify computation results.

At the end of 2020, the Metis MVP testnet was launched. Medha Parlikar, CTO and co-founder of CasperLabs, in particular recognized the importance of Metis to Layer 1 main chains such as Ethereum, Casper, and Polkadot. She believes Metis is an enabler that lets Web 2.0 developers quickly build decentralized DApps and implement decentralized business governance and management under the DAC structure. The highly scalable Metis Layer 2 protocol can therefore complement high-performance Layer 1 main chains, including the Casper network, to jointly support more use cases on chain. Medha quickly agreed to serve as chief consultant of Metis. At the same time, Casper and Metis launched a joint laboratory, TranspilerLab.DAC (a transpiler lab, itself a DAC built on Metis), to jointly carry out R&D on the new architecture, developer-community building, and project incubation.

How to use Metis

The MVP version currently online is a community-oriented demonstration that allows a community to create its decentralized company (DAC) on the blockchain in three simple steps.

Create a DAC in 3 steps

At present, these communities can operate and maintain wikis, hackathons, and other businesses and activities through task management and knowledge management.

Task management

Knowledge management

Metis has been using this set of protocols and rules to manage the Metis project itself (eating its own dog food). We can trace the development trajectory of the Metis project since 2018, along with the tasks, contributions, and deliverables of the various participants in the community.

Traceable trajectory

Example of a task and deliverable

Product roadmap

The Metis team named the project's main phases after well-known stages of human civilization. At present, Metis has completed its Stone Age preparations and launched the MVP version on the testnet. Since the mainnet has not yet launched, however, only simple Internet applications are currently supported; further improvements in performance and functionality await the gradual rollout of the Bronze Age mainnet and the Age of Exploration phase, which will support more commercial applications.



The choice of MCDEX V3 expansion plan: Optimistic Rollup or ZKRollup? (www.blockcast.cc)

Although ZKRollup is the more ideal technical solution in terms of functionality, why did this perpetual-contract exchange still choose Optimistic Rollup?

Written by: MCDEX

Why MCDEX needs an L2 solution

Ethereum is currently the most important public-chain ecosystem. Although MCDEX V3 adopts a multi-chain deployment plan, MCDEX will continue to develop together with the Ethereum ecosystem to serve Ethereum users. As everyone knows, however, Ethereum L1 gas fees are expensive, network throughput is low, and block times are long. MCDEX needs an Ethereum L2 solution for the following two reasons:

  • Reduce gas fees for transactions. Since the trading logic of perpetual contracts is more complicated than that of spot trading, MCDEX's contract gas cost is about 5 times that of a spot trade (Uniswap). When L1 is congested and the gas price soars, users' transaction fees increase sharply. We need to reduce gas costs through an L2 scheme.

  • Enhance liquidation capabilities. On the one hand, MCDEX's liquidation system needs to be able to close positions with insufficient maintenance margin in a timely manner; on the other, users want higher leverage, that is, less maintenance margin. This requires the liquidation system to complete liquidations in the shortest possible time, so we need infrastructure with larger throughput and lower latency than L1. To improve liquidation capacity, even a slight increase in the system's centralization is worthwhile.

Requirements for the L2 solution

No solution is perfect. We need to make compromises based on business needs and choose the solution that suits us best. When choosing an L2 solution, we set out the following requirements:

Technology maturity: the L2 solution's technology must be mature and reliable, preferably already released on mainnet with users and applications. If it is still under development, it must at least match our V3 release schedule.

Degree of decentralization: to obtain the broadest consensus and support asset scales of one billion or even tens of billions of dollars, the more decentralized the L2 solution, the better. In terms of security, it should come as close as possible to the security of L1.

Developer friendliness: L2 needs to be as compatible with the EVM as possible and provide a mature, complete development toolchain (compiler, debugger, sandbox environment), nodes (compatible with the L1 API), and infrastructure (The Graph, etc.).

Cost and performance: L2 needs to greatly reduce gas costs, maximize TPS, and minimize confirmation time, thereby increasing MCDEX's liquidation capacity.

Candidate L2 solutions

When we examined the Ethereum L2 landscape, the candidate solutions were:

  • State channel
  • Side chain (e.g. xDai/Matic)
  • Plasma
  • ZKRollups (Matter Labs / Starkware)
  • Optimistic Rollups (Optimism / Arbitrum)

Starting from the requirement of EVM compatibility, state channels, Plasma, and Starkware's solution are basically excluded. Although Matter Labs' ZKRollup will support general EVM smart contracts in the future, the technology is still under development, and it may take several months to see the final product; its maturity and progress cannot meet our V3 requirements. A side chain is in fact a fairly good "transition plan", but considering that Optimistic Rollup projects will go live on mainnet within 3 months, and that Optimistic Rollup offers better decentralization than side chains, we focused on Optimistic Rollup solutions.

Optimism's OVM and Offchain Labs' Arbitrum are both excellent Optimistic-Rollup-style solutions (the former is sometimes called Optimistic Rollup and the latter Arbitrum Rollup). The main technical difference is that OVM uses a single-round interactive dispute process while Arbitrum uses a multi-round interactive one; the two designs otherwise differ little, so they can be considered equivalent in terms of decentralization.

The main reasons why we currently choose Arbitrum are the following:

  • On-chain cost: Multi-round interaction has lower on-chain cost than single round interaction.

  • Technology maturity: neither solution has been released on mainnet yet, but Arbitrum provides detailed documentation, code, and a testnet that can be evaluated permissionlessly, and its code is under audit. Its mainnet release plan also meets the schedule requirements of our V3. We judge Arbitrum to be the more technically mature and credible of the two.

  • Developer friendliness: Arbitrum provides a development environment and node API fully compatible with the EVM. We deployed MCDEX V2 to the Arbitrum testnet without modifying a single line of code, and infrastructure including The Graph can also be used smoothly. By contrast, OVM requires developers to slightly modify code that deals with time-related operations. The bigger problem is that OVM projects must be approved in the early stage, meaning developers not approved by the OVM team cannot use OVM for now; this greatly limits developers' freedom.

  • Sequencer model: The Sequencer model is a new feature to be released in Arbitrum. This feature allows users to quickly confirm the status of the transaction at L2 without waiting for the transaction to be submitted to L1. This feature slightly sacrifices the degree of decentralization, but greatly speeds up the transaction confirmation time. Using this function, MCDEX V3 will have extremely fast transaction speed and clearing capabilities.

I must admit that, functionally, ZKRollup is the more ideal technical solution compared to Optimistic Rollup. However, ZKRollup still carries greater technical risk, and its technology will take a long time to mature. Starting from a relatively mature technology like Arbitrum is therefore the more secure plan for advancing our business. Finally, there is a possibility for the future: as ZKRollup technology matures, the Offchain Labs team could add ZK proofs to Arbitrum, upgrading it from an Optimistic Rollup to a ZKRollup.



Everything you need to know about Optimistic Rollup (www.blockcast.cc)

Georgios Konstantopoulos, a research partner at Paradigm, analyzed the incentive structure of Optimistic Rollup and responded to common criticisms.

Written by: Georgios Konstantopoulos, research partner of Paradigm, a crypto venture capital firm.
Translated by: Zhan Juan
Paradigm authorized ChainNews to translate and publish the Chinese version of this article.

In the Ethereum ecosystem, one of the biggest challenges is how to achieve low latency and high throughput under severe resource constraints (such as CPU, bandwidth, memory, and disk space).

The degree of decentralization of a system is determined by the ability of the weakest node in the network to verify the system's rules. High-performance protocols that can run on low-resource hardware can be called "scalable".

In this article, we will delve into the principles of current “two-layer solutions”, their corresponding security models, and how they solve the scalability problem of Ethereum.

If you are interested in learning more about Ethereum's cutting-edge scaling technology, and want to know how to build and structure such a system, this article may be helpful to you.

Throughout the article, important keywords and concepts are highlighted in bold; these are terms you will encounter while learning about cryptocurrency. The topic is complicated, and you may feel a little confused while reading, but keep going and I believe you will gain something.

Blockchain resource requirements

There are three factors that affect the resource requirements of running nodes in decentralized networks (such as Bitcoin and Ethereum):

  • Bandwidth: The cost of downloading and broadcasting any blockchain-related data.
  • Calculation: The cost of running calculations in scripts or smart contracts.
  • Storage: The cost of storing transaction data for indexing, and the cost of storing “state” to continue processing new transaction blocks. It is worth noting that storing “state” (account balance, contract bytecode, nonce value) is more expensive than storing raw transaction data.

There are 2 ways to measure performance :

  • Throughput: The number of transactions that the system can process per second.
  • Latency: The time required for transaction processing.

The ideal attribute of emerging encryption networks such as Bitcoin and Ethereum is decentralization, but what are the elements that make the network decentralized?

  • Low trust: This attribute allows any individual to verify that the number of bitcoins will not exceed 21 million, or that their bitcoins will not be forged. The person who runs the node software independently calculates the latest status and verifies that all the rules are followed in the process.
  • Low cost: If the operating cost of the node software is high, individuals will rely on a trusted third party to verify the status. High cost means high trust demand, which is what we want to avoid in the first place.

Another required attribute is scalability: the ability to increase throughput and reduce latency super-linearly relative to the cost of running the system. This definition is good, but it does not mention "trust". We therefore refine it to "decentralized scalability": achieving scalability without significantly increasing the system's trust assumptions.

Zooming in: Ethereum's runtime environment is the Ethereum Virtual Machine (EVM). Transactions running through the EVM perform various operations at different costs; for example, a storage operation costs more than an addition. The unit of computation in a transaction is called "gas", and the system parameters allow at most 12.5M gas per block, with a block produced every 12.5 seconds on average. Ethereum therefore has a latency of 12.5 seconds and a throughput of 1M gas per second.

You may ask: What benefits can 1 million gas per second bring?

  • ~47 “simple transfer” transactions per second. These transactions consume 21,000 gas and are responsible for transferring ETH from A to B. They are the simplest type of transaction.
  • ~16 ERC20 token transfers per second. Compared with ETH transfers, these methods involve more storage operations, so each time the cost is ~60k gas.
  • ~10 Uniswap asset transactions per second. The average cost of a token-to-token transaction is about 102k gas.
  • …Pick the gas cost of any transaction you like and divide 1M gas/s (12.5M gas / 12.5 s) by it.

Note that as the execution complexity of the transaction increases, the throughput of the system will decrease to a very low value. There is room for improvement!
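The arithmetic behind these throughput figures can be checked directly (a minimal sketch; the gas costs are the approximate values quoted above):

```python
# Back-of-the-envelope throughput: gas available per second divided by
# gas consumed per transaction.
BLOCK_GAS_LIMIT = 12_500_000   # gas per block
BLOCK_TIME = 12.5              # seconds per block

gas_per_second = BLOCK_GAS_LIMIT / BLOCK_TIME  # 1,000,000 gas/s

for name, gas_cost in [("ETH transfer", 21_000),
                       ("ERC20 transfer", 60_000),
                       ("Uniswap swap", 102_000)]:
    tps = gas_per_second / gas_cost
    print(f"{name}: ~{tps:.1f} tx/s")
# prints ~47.6, ~16.7 and ~9.8 tx/s, matching the estimates above
```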

Solution 1: Use an intermediary

We can use a trusted third party to facilitate all transactions. This gives us very high throughput and possibly sub-second latency. Great! It changes no system-wide parameters, but we would be opting into a trust model set unilaterally by a third party, who may choose to censor us or even confiscate our assets. This is not desirable.

Solution 2: Make the block bigger and more frequent

We can reduce the delay by reducing the time between two blocks, or we can increase the throughput by increasing the block gas limit. This change will make the cost of operating nodes higher and make it difficult for individuals to run nodes (this has already appeared on platforms such as EOS, Solana, and Ripple).

In Solution 1, the trust requirement increases. In Solution 2, the cost increases. That rules both out as scalability options.

Rediscovering Optimistic Rollup from first principles

In the next section, we assume the reader is already familiar with hashes and Merkle trees.

Based on the knowledge we have learned so far, let us simulate a Socratic conversation. The goal is to find a protocol that can increase the effective throughput of Ethereum without increasing the burden on users and node operators.

Q: We want to scale Ethereum without significantly changing its trust and cost assumptions. How should we do it?

Answer: We want to reduce the system cost of existing operations (see the three resource types above). To understand why this is not easy, we first need to look at Ethereum’s architecture:

Every Ethereum node currently stores and executes every transaction submitted by users. During execution, a transaction runs through the EVM and interacts with the EVM’s state (storage, balances, etc.), which is very expensive. Common smart-contract optimization techniques focus on minimizing the number of state interactions, but they yield only minor constant-factor improvements.

Q: Are you saying there is a way to transact without touching state, keeping resource costs low?

Answer: Taking this to the extreme, can we move all execution off-chain while keeping some data on-chain? We can, by introducing a third party called a “sequencer”, responsible for storing and executing user-submitted transactions locally. To keep the system live, the sequencer periodically submits to Ethereum the Merkle root of the transactions it has received, along with the resulting state root. This is a step in the right direction: we store O(1) data in Ethereum’s state for O(N) off-chain transactions.

Q: So we can scale by letting the sequencer compute everything off-chain and only publish Merkle roots?

Answer: Yes.

Q: Okay, so once you have joined, the sequencer can guarantee that transfers are very cheap. But how do deposits and withdrawals work?

Answer: A user enters the system by depositing on Ethereum, after which the sequencer credits the corresponding amount to the user’s layer-2 account. To withdraw, the user submits a claim on Ethereum along the lines of “I want to withdraw 3 ETH; my account currently holds >3 ETH, and here is the proof.” Even though user state does not live on layer 1, a user can present a Merkle proof against the state root published by the sequencer, showing that they have sufficient funds in the current state.
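To make the Merkle-proof idea concrete, here is a minimal Python sketch: a simplified binary hash tree over account strings. Real rollups use a Merkle tree over full account state, and the helper names here are invented for illustration.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Root of a binary Merkle tree (duplicating the last node on
    odd-sized levels, a common simplifying convention)."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes from leaf to root for the leaf at `index`."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append((level[index ^ 1], index % 2 == 0))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(root, leaf, proof):
    node = h(leaf)
    for sibling, leaf_is_left in proof:
        node = h(node + sibling) if leaf_is_left else h(sibling + node)
    return node == root

# The sequencer publishes only the root on layer 1; a user holding the
# proof for their account can demonstrate e.g. "alice has 5 ETH".
accounts = [b"alice:5", b"bob:3", b"carol:7", b"dan:1"]
root = merkle_root(accounts)
proof = merkle_proof(accounts, 0)
assert verify(root, b"alice:5", proof)       # valid claim passes
assert not verify(root, b"alice:999", proof) # forged balance fails
```

The point: the on-chain contract only needs the 32-byte root to check an arbitrarily large off-chain state claim.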

Q: So users need Merkle proofs to withdraw their funds. Where do they get the data to construct those proofs?

Answer: They can ask the sequencer to provide it!

Q: What if the sequencer is temporarily or permanently unavailable?

Answer: The sequencer may be malicious, or simply offline for technical reasons; either causes degraded service (or worse, stolen assets!). We therefore also require the sequencer to submit the full transaction data on-chain for storage, without execution. The goal here is data availability: as long as all data is permanently stored on Ethereum, even if the sequencer disappears, a new sequencer can retrieve all layer-2 data from Ethereum, reconstruct the latest layer-2 state, and pick up where its predecessor left off.

Q: If the sequencer is online but refuses to give me the data for a Merkle proof, can I download it from Ethereum?

Answer: Yes. You can sync an Ethereum node yourself, or connect to one of the many hosted node services.

Q: One thing I still don’t understand: how can you store something on Ethereum without executing it? Doesn’t every transaction go through the EVM?

Answer: Suppose you submit 10 transactions transferring ETH from A to B. Executing each one would increment A’s nonce, decrease A’s balance, and increase B’s balance, requiring quite a few reads and writes against Ethereum’s world state. Instead, you can send all the transaction data to a smart contract’s public function publish(bytes _transactions). Note that the body of this function is empty! That means the published transaction data is never interpreted or executed, and is never accessed anywhere; it is only stored in the blockchain’s historical log (where writes are very cheap).
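To see why publishing without executing is cheap, here is a rough calldata-cost estimate. It assumes post-Istanbul (EIP-2028) calldata pricing of 16 gas per non-zero byte and 4 gas per zero byte, plus the 21,000-gas base transaction cost; the 12-byte compressed transfer size is an illustrative assumption, not a figure from this article.

```python
# Rough cost of publishing a batch as calldata, assuming EIP-2028
# pricing: 16 gas per non-zero byte, 4 gas per zero byte, plus the
# 21,000-gas base transaction cost.
def calldata_gas(data: bytes) -> int:
    zero = data.count(0)
    nonzero = len(data) - zero
    return 21_000 + 4 * zero + 16 * nonzero

# Illustrative assumption: an ETH transfer compresses to ~12 bytes on
# a rollup. Compare publishing 100 such transfers in one batch against
# executing 100 transfers on layer 1 at 21,000 gas each.
batch = b"\x01" * (100 * 12)  # 1,200 non-zero placeholder bytes
print(calldata_gas(batch))    # 21_000 + 16 * 1_200 = 40_200 gas total
print(100 * 21_000)           # 2_100_000 gas if executed on layer 1
```

Even in this worst case (all non-zero bytes), publishing is roughly 50x cheaper than executing.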

Q: Can we trust the sequencer? What if it publishes an invalid state transition?

Answer: Whenever the sequencer publishes a batch of state transitions, a “dispute period” begins, during which any party can submit a “fraud proof” showing that one of the state transitions is invalid. The proof works by replaying the offending transaction on-chain and comparing the resulting state root with the state root the sequencer published. If the roots do not match, the fraud proof succeeds and the state transition is reverted, along with any state transitions that came after it. Once a batch’s dispute period has passed, it can no longer be challenged and its transactions are considered final.
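A minimal sketch of this dispute check, using a hypothetical toy state model (account → balance) with a plain hash standing in for a real Merkle state root:

```python
import hashlib, json

def state_root(state: dict) -> str:
    # Simplification: a hash of the sorted JSON encoding stands in
    # for a real Merkle state root.
    return hashlib.sha256(json.dumps(state, sort_keys=True).encode()).hexdigest()

def apply_batch(state: dict, txs) -> dict:
    """Replay a batch of (sender, recipient, amount) transfers."""
    state = dict(state)
    for sender, recipient, amount in txs:
        assert state.get(sender, 0) >= amount, "invalid transfer"
        state[sender] -= amount
        state[recipient] = state.get(recipient, 0) + amount
    return state

def fraud_proof_succeeds(pre_state, txs, claimed_root) -> bool:
    """True if the sequencer's claimed post-state root is wrong."""
    try:
        post = apply_batch(pre_state, txs)
    except AssertionError:
        return True  # the batch itself contains an invalid transaction
    return state_root(post) != claimed_root

pre = {"alice": 10, "bob": 0}
txs = [("alice", "bob", 3)]
honest_root = state_root(apply_batch(pre, txs))
assert not fraud_proof_succeeds(pre, txs, honest_root)   # honest batch stands
assert fraud_proof_succeeds(pre, txs, state_root({"alice": 0, "bob": 10}))
```

The on-chain version does exactly this comparison, but against Merkle roots and with the replay executed inside the EVM.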

Q: Wait a minute! You said earlier that anything that a) increases costs or b) introduces new trust assumptions doesn’t count as scalability. In the scenario you describe, aren’t we also assuming that someone will always report fraud?

Answer: Yes. We assume an entity called a “validator” monitors for fraud and publishes a fraud proof whenever the layer-1 and layer-2 states diverge. We also assume the validator can reliably get its fraud proof included on Ethereum within the dispute period. We consider the existence of a validator a “weak” assumption: if an application has thousands of users, only one of them needs to run a validator. That doesn’t sound too far-fetched! By contrast, changing Ethereum’s trust model or raising the cost of running Ethereum nodes would be a “strong” change of assumptions that we do not want to make; this is what we meant by “assumptions that significantly change the underlying system” when defining decentralized scalability.

Q: I agree that someone will run a validator, since all parties benefit from the success of this new solution, though of course it also depends on the actual operating cost. So what are the resource requirements for running a validator and a sequencer?

Answer: Both the sequencer and the validator must run an Ethereum full node (not an archive node) and a full layer-2 node to generate the layer-2 state. On top of that, the validator runs software that creates fraud proofs, and the sequencer runs software that bundles and publishes user transactions.

Q: Is that all?

Answer: Yes! Congratulations, you have rediscovered Optimistic Rollup: a combination of “optimistic” contract execution and on-chain data availability (aka “data aggregation”), and the most anticipated scaling solution of 2019-2021. The reason is easy to see: it is the end product of many years of research in the Ethereum community, as you have just experienced in the short dialogue above.

Optimistic incentives

Layer-2 scaling rests on minimizing the number of transactions executed on-chain. We use fraud proofs to revert any invalid state transitions that occur. Since fraud proofs are themselves on-chain transactions, we also want to minimize the number of fraud proofs published on Ethereum. In the ideal case, fraud never occurs, and so fraud proofs are never published.

We deter fraud by introducing fidelity bonds. To become a sequencer, a user must first post a bond on Ethereum, which they lose if they are proven fraudulent. To incentivize individuals to detect fraud, the sequencer’s slashed bond is distributed to the validator.

Fidelity bonds and the dispute period

When designing the incentive mechanism around fraud proofs, two parameters need tuning:

  • Fidelity bond size: the amount the sequencer must post, which is distributed to the validator on a successful dispute. The larger it is, the greater the incentive to become a validator, and the smaller the incentive for a sequencer to commit fraud.
  • Dispute period: the time window during which fraud proofs can be submitted. After this window, layer-2 transactions are considered final on layer 1. A longer dispute period gives stronger protection against censorship attacks; a shorter one gives a better experience to users withdrawing from layer 2 to layer 1, since they need not wait long to reuse their funds on layer 1.

In our view, no single static value is right for these two parameters. Perhaps a 10 ETH bond and a 1-day dispute period are sufficient; or maybe 1 ETH and 7 days. The real answer depends on the incentive to become a validator (which depends on operating costs) and on how easy it is to get a fraud proof included (which depends on layer-1 congestion). Both should be adjustable, either manually or automatically.

It is worth mentioning that EIP-1559 introduces a new BASEFEE opcode to Ethereum, which can be used to estimate on-chain congestion and thus programmatically adjust the duration of the dispute period.
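As a sketch of what such automatic adjustment might look like (the formula and all parameter values here are illustrative assumptions, not from the article):

```python
# Illustrative sketch: scale the dispute period with BASEFEE, since on
# a congested chain honest validators need more time to land a fraud
# proof. All parameter values are assumptions for demonstration.
def dispute_period_hours(basefee_gwei: float,
                         base_hours: float = 24.0,
                         reference_basefee_gwei: float = 20.0,
                         max_hours: float = 7 * 24.0) -> float:
    """Lengthen the dispute window proportionally to congestion,
    capped at `max_hours`."""
    scale = max(1.0, basefee_gwei / reference_basefee_gwei)
    return min(base_hours * scale, max_hours)

print(dispute_period_hours(10))    # quiet chain        -> 24.0
print(dispute_period_hours(1000))  # heavily congested  -> 168.0 (capped)
```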

Implementing this punishment mechanism correctly matters, as a naive design can be abused in practice. Here is an example of a naive, impractical implementation:

  1. Alice posts a 1 ETH bond, allowing her to act as a sequencer in the system
  2. Alice publishes a fraudulent state update
  3. Bob notices and opens a dispute. If it succeeds, Alice’s 1 ETH bond is granted to Bob and the fraudulent state update is reverted
  4. Alice notices the dispute and opens one herself (challenging herself!)
  5. Alice gets her 1 ETH back: despite attempting fraud, she is effectively unpunished

Alice can mount this attack by front-running: broadcasting the same transaction as Bob but with a higher gas price, so that her transaction executes before Bob’s. This means Alice can keep attempting fraud at minimal cost (just Ethereum transaction fees).

The solution is simple: instead of granting the entire bond to the disputer, burn X% of it. In the example above, with a 50% burn Alice recovers only 0.5 ETH, which is enough to deter the fraud attempt in step 2. Of course, burning part of the bond reduces the incentive to run a validator (the payout shrinks), so the remainder after the burn must still be large enough to motivate users to become validators.
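The burn rule and its effect on Alice’s self-dispute attack, using the numbers from the example above:

```python
# On a successful dispute, burn a fraction of the sequencer's bond and
# pay the rest to the disputer.
def settle_dispute(bond: float, burn_fraction: float):
    burned = bond * burn_fraction
    to_disputer = bond - burned
    return burned, to_disputer

burned, paid = settle_dispute(1.0, 0.5)
print(burned, paid)  # 0.5 0.5

# Self-dispute attack: Alice front-runs Bob's challenge against her own
# fraudulent update. She "wins" the dispute but recovers only `paid`,
# having posted a full 1 ETH bond.
alice_net = paid - 1.0
print(alice_net)  # -0.5 : attempting fraud is now net-negative
```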

Common criticisms of Optimistic Rollup and our response

Now that we have assembled the building blocks of Optimistic Rollup, let us examine and address the most common criticisms of the mechanism.

Long withdrawal/dispute periods are fatal to adoption and composability

As mentioned above, a long dispute period is good for security, and there seems to be an inherent trade-off: a long dispute period is bad for users, who must wait a long time to withdraw funds, while a short dispute period gives a smooth user experience but raises the risk of fraud going unchallenged.

We think this is not a problem. Given the possibility of long withdrawal delays, we expect market makers to quickly step in and offer faster withdrawals. This works because anyone verifying the layer-2 state can correctly determine whether a withdrawal is fraudulent, and users will “buy” this service at a small discount. For example:

Participants:

  • Alice: holds 5 ETH on layer 2.
  • Bob: holds 4.95 ETH on layer 1 in a “market maker” smart contract, and runs a layer-2 validator

Steps:

  1. Alice tells Bob she wants a “fast” withdrawal and pays him a 0.05 ETH fee
  2. Alice sends a withdrawal request to Bob’s “market maker” smart contract
  3. Two things can happen at this point:
    1. Bob checks on his layer-2 validator that the withdrawal is valid and approves the fast withdrawal, immediately transferring 4.95 ETH to Alice’s layer-1 address. After the withdrawal period ends, Bob receives the full 5 ETH and nets a profit.
    2. Bob’s validator warns him that the transaction is invalid. Bob disputes the state transition caused by the transaction, cancels it, and earns the sequencer’s bond for letting a malicious transaction through.

Either Alice is honest and gets her money out immediately, or she is dishonest and is punished. We expect the fees paid to market makers to compress over time as long as demand for the service exists, until the process becomes invisible to users.
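The fast-withdrawal flow above can be sketched as follows (hypothetical contract logic; the class and method names are invented for illustration):

```python
# Sketch of a fast-withdrawal market maker: Bob fronts layer-1 funds at
# a small discount and later claims the full layer-2 withdrawal once
# the dispute period ends.
class MarketMaker:
    def __init__(self, l1_funds: float):
        self.l1_funds = l1_funds
        self.pending = []  # full withdrawal amounts maturing after the dispute period

    def fast_withdraw(self, amount: float, fee: float, valid: bool) -> float:
        """Bob's validator checks the layer-2 withdrawal; if valid, he
        pays out immediately, keeping `fee` as his margin."""
        if not valid:
            return 0.0  # dispute instead and earn the sequencer's bond
        payout = amount - fee
        self.l1_funds -= payout
        self.pending.append(amount)
        return payout

    def settle(self):
        """After the dispute period, collect the full amounts."""
        self.l1_funds += sum(self.pending)
        self.pending.clear()

bob = MarketMaker(l1_funds=4.95)
alice_received = bob.fast_withdraw(amount=5.0, fee=0.05, valid=True)
print(alice_received)          # 4.95, paid to Alice immediately
bob.settle()
print(round(bob.l1_funds, 2))  # 5.0 : Bob nets the 0.05 ETH fee
```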

The most important implication of this feature is that it enables composability with layer-1 contracts without waiting out the full dispute period.

Note that this technique was first described in the article “Simple Fast Withdrawals”.

Miners can be bribed to censor withdrawals and undermine the security of Optimistic Rollup

In the post “Near-Zero Cost Attack Scenario on Optimistic Rollup”, some argue that miner incentives are such that sequencers could collude with Ethereum miners, who would then refuse to include any disputing transactions. Given how much the system’s security depends on dispute resolution, this would of course be fatal to any optimistic system.

We disagree with the article’s argument. We assume the honest side is always willing to counter-bribe miners and can muster more funds than the malicious side. Moreover, every time miners deviate from “honest” behavior to help a malicious party win, they incur an additional cost: such behavior erodes the value of Ethereum, which in turn raises the cost to miners of engaging in it.

In fact, this situation has been studied in the academic literature, which shows that “the threat of counterattack induces a subgame-perfect equilibrium in which no attack occurs.”

We thank Hasu for bringing this paper’s argument to our attention.

The verifier’s dilemma discourages running validators and breaks Optimistic Rollup

In response to the verifier’s dilemma, Ed Felten wrote an excellent analysis and solution, which we summarize as follows:

  1. If the system’s incentives work as intended, no one cheats
  2. If no one cheats, there is no point in running a validator, because you cannot earn anything from it
  3. Because no one runs a validator, the sequencer eventually has an opportunity to cheat
  4. The sequencer cheats, and the system no longer works as intended

This sounds serious, almost paradoxical. Assuming a fixed reward, more validators means a lower expected reward per validator. Moreover, with more validators there is less fraud and thus a smaller pie to share, which exacerbates the problem. In his follow-up analysis, Felten also offers a way to resolve the verifier’s dilemma.

I want to take the opposite position here: I think the verifier’s dilemma matters less than critics claim. In practice, there are non-monetary incentives to run a validator. For example, you may have built a large application on a rollup, or you may hold its tokens; if the system fails, your application stops working or your tokens lose value. Beyond that, the demand for fast withdrawals creates an incentive for market-maker validators to exist (as we saw in the previous section), one that does not depend on fraud occurring. More concretely, Bitcoin offers no incentive to store the entire blockchain history or serve local data to peers, yet people still do it selflessly.

Even if running a validator in a vacuum is not incentive-compatible, it secures the system, which is what matters most to entities invested in the system’s success. We therefore believe that optimistic layer-2 systems need no special mechanism to sidestep the verifier’s dilemma.

Conclusion

We analyzed one of the technologies that will be crucial to Ethereum in 2021: Optimistic Rollup.

To summarize its benefits: Optimistic Rollup is an extension of Ethereum that inherits Ethereum’s security, composability, and developer moat while improving performance, without materially affecting the cost or trust requirements of Ethereum users. We explored the incentive structures that make Optimistic Rollup work, and responded to common criticisms.

We want to emphasize that the performance ceiling of Optimistic Rollup is set by the data published on layer 1. It therefore pays to: 1) compress the published data as much as possible (for example, via BLS signature aggregation), and 2) have a large, cheap data layer (for example, ETH2).

As further reading, we recommend Buterin’s incomplete guide to Rollups and trust models. We also recommend studying ZK Rollup, Optimistic Rollup’s close relative. Finally, there are other routes to decentralized scalability, namely sharding and state channels, each with its own advantages and disadvantages.

Source link: research.paradigm.xyz


Deputy Director of the Institute of Finance of the Academy of Social Sciences: It is not appropriate to be too optimistic about the role of the central bank’s digital currency in the short term (www.blockcast.cc)


Note: This article is a transcript of the author’s speech at the China Macroeconomic Forum (CMF) Hot Macroeconomic Issues Seminar (No. 20), which has been reviewed by the author. Please indicate the source when reprinting.

Zhang Ming, deputy director of the Institute of Finance, Chinese Academy of Social Sciences, deputy director of the National Finance and Development Laboratory, director of the China Chief Economist Forum

What is the definition of digital currency?

I will start from the three most representative digital currencies, namely Bitcoin, Libra, and CBDC, and discuss their innovativeness and applicability.

The first is cryptocurrency, represented by Bitcoin. It was created by the private sector, is not linked to any existing currency, and its total supply is roughly fixed. In terms of innovation, Bitcoin is a groundbreaking creation. But the key problem is that Bitcoin has no government credit behind it, and with a fixed supply, current holders hold it not to transact but to wait for appreciation. From this perspective, Bitcoin is not a real currency but a financial asset.

The second type is stablecoins, represented by Libra, whose value is linked to a basket of currencies. Libra is also highly innovative; once it comes into wide use, it may pose the biggest challenge to the existing international monetary system and structure. But precisely because it is so innovative and initiated by the private sector, governments, including that of the United States, are very wary of this potential new challenger. So far, Libra remains an initiative and has not been implemented.

The third type is central bank digital currency (CBDC), typified by DCEP, launched by the People’s Bank of China. It is relatively the least innovative: a sovereign currency pegged one-to-one to the RMB that can partially substitute for M0. Being less innovative makes it easier to launch and promote, and for the People’s Bank of China it carries few potential risks or worries. But because it only replaces M0, it differs little from existing money. Current market and industry expectations for DCEP therefore seem too high: that it can solve problems the current financial system cannot, and significantly enhance the RMB’s international status. In its current form, the marginal changes DCEP can bring to China’s financial system and the existing international monetary system may not be large.

How does digital currency affect existing monetary policy, financial stability, and macroeconomics?

Regarding the impact of central bank digital currency on the macroeconomy and financial stability, I will discuss four small questions. My core point is that the market may currently be too optimistic about what DCEP can do in the short term.

The first question: can the introduction of digital currency improve the transmission efficiency of monetary policy? It has long been said that the transmission efficiency of China’s monetary and credit system is low. China’s central bank digital currency still adopts a two-tier system: from the central bank to commercial banks, then from commercial banks to the real economy. Because commercial banks have their own preferences and objective functions, they may not pass liquidity on to those who need it as the central bank wishes. The introduction of DCEP may therefore not significantly improve the transmission efficiency of the money market.

The second question: can central bank digital currency strengthen the structural function of China’s monetary policy? The market widely believes digital currency can be used to advance inclusive finance and support small and medium-sized enterprises. This goal can indeed be achieved, but only in close combination with fiscal policy. In other words, for digital currency to better perform monetary policy’s structural function would mean handing digital currency directly to those who need money, such as low- and middle-income groups and SMEs; but that is in fact the transfer-payment function of fiscal policy. So unless digital currency serves as the ultimate carrier of fiscal transfer payments, it is hard for it to strengthen monetary policy’s structural function through conventional channels.

The third question: can the introduction of central bank digital currency significantly advance the internationalization of the RMB? This, too, is difficult. International experience suggests three main determinants of whether a country’s currency can grow into an international currency, in order of importance: first, the competitiveness of its financial markets, including their depth, breadth, and liquidity; second, historically formed network externalities and path dependence; third, the country’s economic scale and the scale of its international trade. Digital currency can indeed improve the convenience of cross-border payments, but such convenience is by no means the most important determinant of a currency’s degree of internationalization. Conversely, if the Federal Reserve launches a digital dollar, it will further strengthen the dollar’s status as an international reserve currency. The reason is simple: of the three factors above, the United States already has the financial markets, the positive externalities, and the economic and trade scale, so further improving the convenience of cross-border dollar payments would indeed help consolidate the dollar’s global reserve status.

The fourth question concerns Sino-US economic, trade, and financial frictions: can the introduction of central bank digital currency reduce China’s dependence on the SWIFT system? Some say that, given the United States may threaten to cut China off from SWIFT, launching the Chinese central bank’s digital currency now and building a new cross-border payment and clearing system would reduce China’s dependence on SWIFT. This view is probably too optimistic. The key to establishing an international payment and settlement system is not whether the country’s own residents are willing to accept it, but the degree of acceptance by non-residents. If non-residents do not widely accept the traditional renminbi today, why would they accept a new payment and settlement system based on the digital renminbi? The logic is the same: whether a cross-border payment and settlement network can be established depends on the competitiveness of China’s real economy and financial markets. It is unrealistic to expect a digital currency alone to change all of this.

In addition, discussions of the impact of central bank digital currency on the macroeconomy and financial stability carry a hidden premise: that the scale of the digital currency is large enough. If it remains limited to the current small-scale pilots and digital currency’s share of total M0 stays small, it is too early to discuss these questions.

What are the future development prospects of digital currency?

I will mainly talk about two prospects here.

First, on a global scale, different forms of digital currency may come to compete. There are not only digital currencies of different natures, such as Bitcoin, Libra, and CBDCs, but also, within CBDCs, central bank digital currencies of different types and monetary scopes, which may form a pattern of “a hundred flowers blooming”. Note that “a hundred flowers blooming” does not apply to international reserve currencies: the international reserve system is destined to be the territory of a few currencies. In the future, two types of currency could serve as reserves: sovereign currencies, such as the US dollar, the euro, and the renminbi; and stablecoins, whether SDR, eSDR, or Libra, which have the potential to become international reserve currencies because they ultimately rely on the credit of major governments. By contrast, cryptocurrencies such as Bitcoin are financial assets in nature and can only be regarded as “digital currency” in a broad sense. Competition for international reserve currency status will be very fierce, so it will not involve many currencies.

Second, although my speech today argues that the market may be too optimistic about the short-term role of central bank digital currencies, that does not mean I am pessimistic about them: I am broadly optimistic about their long-term development prospects. Regarding the evolution of the Chinese central bank digital currency DCEP (apparently renamed eCNY), here are four conjectures:

Will the coverage of central bank digital currency exceed M0 in the future?

Will central bank digital currency become an interest-bearing asset in the future, instead of the current zero-interest asset?

Will there be a new third-party payment platform led or managed by the People’s Bank of China? This matters because it determines whether the relationship between eCNY and platforms such as Tencent’s and Alipay’s is mainly cooperation or competition.

Might China’s central bank apply different policies to digital RMB and traditional RMB in cross-border flows and offshore markets? In theory, the two renminbi should not be subject to different policies. On the one hand, it may be relatively easier for foreign residents to obtain digital RMB than traditional RMB. On the other hand, the Chinese digital currency is traceable, so the central bank has the ability to control large cross-border flows of digital renminbi. The key is whether the central bank is willing to make the digital RMB easier to use in cross-border flows and offshore markets.


Concise understanding of the characteristics and operation of the Optimistic Rollup (www.blockcast.cc)

Optimistic Rollup has been widely recognized; Fuel Labs, Offchain Labs, Optimism, and Hubble are all building Optimistic Rollup infrastructure.

Recommended reading: “How Does the Ethereum Layer 2 Ecosystem Stand?”

Original title: “Introduction | Optimistic Rollup Minimalist Interpretation”
Written by: John Adler
Translation & Proofreading: Min Min & A Jian

Today, for Ethereum and all blockchains, Optimistic Rollup is the most promising scalability solution. But what is Optimistic Rollup (ORU)? Why does it make Ethereum developers and scalability researchers so excited?

In this article, we introduce the basics of ORU in an easy-to-understand way. If you want the in-depth technical details, please see this article (Chinese translation).

Features

ORU has many desirable properties; among all blockchain scalability technologies, this combination is unparalleled. A well-designed ORU system has the following characteristics:

  1. Trustless. Unlike traditional sidechains, ORU is trustless (or, more precisely, trust-minimized). You can withdraw your funds from the rollup at any time, without trusting a majority of ORU block producers to be honest.
  2. Permissionless. Unlike Plasma, ORU is permissionless. Anyone can become an ORU block producer, because all rollup block data is published on Ethereum and can be retrieved from it. How the next leader is chosen is an implementation detail, not a fundamental limitation.
  3. Non-custodial. As above, because ORU is both trustless and permissionless, you can withdraw your funds at any time and no one can stop you. ORU is therefore non-custodial.
  4. Expressive. Unlike ZK Rollup, ORU is (in theory and in practice) highly expressive. Whether it is Bitcoin-style UTXO payments or full EVM-compatible execution, ORU can handle it.
  5. Open to participation. Unlike payment channels, ORU supports smart contracts that, like Uniswap, are open to everyone.
  6. Capital-efficient. Unlike payment channels, ORU does not require users to lock up funds in advance.
  7. Resistant to chain congestion. Unlike payment channels and Plasma, ORU can withstand on-chain congestion, because ORU’s fraud proofs operate at the block level, rather than through a closing mechanism as in payment channels or an exit mechanism as in Plasma.
  8. No new cryptography. Unlike ZK Rollup, ORU requires no new cryptography.
  9. Fast (but not instant) finality. Unlike ZK Rollup, ORU does not need to generate a proof, so ORU blocks can be published to Ethereum immediately. Since valid ORU blocks cannot be rolled back, once published on Ethereum they enjoy the same finality as Ethereum.

A brief history of Optimistic Rollup

One of the earliest measures to enhance Bitcoin’s scalability was the side chain. The side chain is a blockchain that runs together with the parent chain, but has different characteristics: shorter block time, larger block size, and more expressive smart contracts. However, ordinary side chains have a fatal disadvantage: if most miners/validators on a side chain are dishonest, user funds will be stolen.

Over the years, many technologies have tried to enhance the security of sidechains so that user funds cannot be stolen even if the vast majority of participants are dishonest (this is called a trust-minimized two-way peg). Early examples include merged mining and shadow chains, followed by Plasma and ZK Rollup. Interestingly, before ORU appeared, a similar solution was delayed state execution under sharding (more on this shortly!).

The culmination of this research is what we now know as Optimistic Rollup. The technique was first described in June 2019 in "Minimal Viable Merged Consensus". Since then, the Ethereum community has strongly embraced ORU as a scalability solution for Ethereum-style smart contract execution, one that does not have to wait for Serenity Phase 2.

How Optimistic Rollup works

[Figure: Optimistic Rollup visualization]

As a trust-minimized two-way pegged sidechain (in other words, even if every validator on the sidechain is dishonest, no funds can be stolen), ORU is operationally very simple (the mechanics are explained in more detail below).

  1. An aggregator collects transactions on the Rollup, packs them into a Rollup block, and sends the Rollup block, together with a deposit (we will explain shortly why the deposit is needed), to Ethereum (or another Ethereum-like blockchain running a stateful smart contract). The Rollup block is not interpreted or executed; the smart contract only records the block hash and tracks the hashes of all Rollup blocks. The Rollup blocks themselves are not stored in the smart contract, but anyone can find them in Ethereum's transaction history.

  2. The Rollup block contains a state root, i.e. the root of the Rollup block's state tree. If the state root is invalid, anyone can prove so with a fraud proof during the challenge period, either because a transaction in the Rollup block is invalid or because the state root does not match the result of executing the block. If a Rollup block is proven invalid, the contract rolls back the Rollup chain, and all Rollup blocks after the invalid one become orphans. Once a fraud proof succeeds, part of the deposit is paid to the prover and the rest is burned.

  3. If no one submits a fraud proof before the challenge period ends, the contract finalizes the Rollup block and allows the aggregator to withdraw the deposit. To move funds from the Rollup chain back to the main chain, a user initiates a withdrawal request on the Rollup chain; the funds can be retrieved only after the contract has finalized the corresponding Rollup block.
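The three steps above can be sketched as a toy state machine in Python. This is only an illustration of the logic, not any real contract: the class, method names, flat deposit split, and challenge-period length are all made up for the example.

```python
import hashlib
import time

CHALLENGE_PERIOD = 7 * 24 * 3600  # seconds; real systems use days to weeks

class RollupContract:
    """Toy model of the on-chain contract: it stores only block hashes and
    state roots, and never executes Rollup transactions itself."""

    def __init__(self):
        self.blocks = []  # each entry: {hash, state_root, deposit, submitted_at, finalized}

    def submit_block(self, block_data: bytes, state_root: str, deposit: int) -> int:
        # Step 1: the aggregator publishes a block hash plus a deposit.
        block_hash = hashlib.sha256(block_data).hexdigest()
        self.blocks.append({
            "hash": block_hash,
            "state_root": state_root,
            "deposit": deposit,
            "submitted_at": time.time(),
            "finalized": False,
        })
        return len(self.blocks) - 1  # index of the new block

    def submit_fraud_proof(self, index: int, proven_invalid: bool) -> int:
        # Step 2: during the challenge period, anyone can prove a state
        # root invalid (proven_invalid stands in for verifying a real proof).
        blk = self.blocks[index]
        if blk["finalized"] or not proven_invalid:
            return 0
        # Roll back: the invalid block and everything after it are orphaned.
        self.blocks = self.blocks[:index]
        reward = blk["deposit"] // 2  # part of the deposit pays the prover;
        # the remainder (blk["deposit"] - reward) is burned.
        return reward

    def finalize(self, index: int, now: float) -> int:
        # Step 3: once the challenge period has passed, the block becomes
        # final and the aggregator can reclaim its deposit.
        blk = self.blocks[index]
        if now - blk["submitted_at"] >= CHALLENGE_PERIOD:
            blk["finalized"] = True
            return blk["deposit"]  # deposit returned to the aggregator
        return 0
```

Note that `finalize` returns 0 while the challenge period is still open, and a successful fraud proof orphans every block from the invalid one onward, exactly as described in steps 2 and 3.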

That’s it! ORU seems very simple, so why did it take so long to arrive at a concrete design and implementation? Because the design space for these technologies is effectively unlimited, and “to find the answer, you must first find the right direction.”

Note that the above describes how ORU operates with on-chain execution on blockchains such as Ethereum. ORU can also be implemented as an application with client-side execution, on projects such as LazyLedger. In that case, fraud proofs are disseminated over a peer-to-peer network rather than being published to a smart contract.

Trade-offs

Although many of ORU’s features are essential to decentralized blockchains and to unstoppable financial platforms and applications, these features come at a price.

1. By default, because of the latency of interacting with the smart contract on Ethereum, the challenge period for fraud proofs is long (up to several weeks), which delays withdrawals. Client-side execution can shorten the challenge period significantly, but the delay can also be solved simply by letting liquidity providers offer instant withdrawals via atomic swaps in exchange for a small fee. This is, in effect, a new DeFi primitive: liquidity providers earn income by putting their liquidity to work.
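The liquidity-provider workaround above amounts to a simple trade, which can be illustrated with a toy calculation (the function, fee rate, and amounts are all invented for illustration):

```python
def fast_withdrawal(amount: int, fee_rate: float):
    """Toy model of an instant withdrawal: a liquidity provider pays the
    user on the main chain immediately, minus a fee, and in exchange
    collects the user's full pending Rollup withdrawal once the
    challenge period ends."""
    fee = int(amount * fee_rate)
    user_receives_now = amount - fee   # paid instantly by the LP
    lp_receives_later = amount         # the full pending withdrawal claim
    lp_profit = lp_receives_later - user_receives_now
    return user_receives_now, lp_profit

# A user withdrawing 10,000 units at an illustrative 0.3% fee gets paid
# immediately instead of waiting out the challenge period; the LP's
# profit is the fee, earned for fronting liquidity during the wait.
paid_now, lp_profit = fast_withdrawal(10_000, 0.003)
```

The LP takes on no fraud risk for finalized-in-waiting blocks it has verified itself; its fee compensates only for the capital locked up until the challenge period ends.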

2. ORU throughput is capped by Ethereum’s data availability throughput. In this sense, each ORU can be treated as a pseudo-shard: multiple ORUs can run in parallel on top of the same data availability layer. Fortunately, data availability is easier to scale than execution. Projects such as LazyLedger are optimized specifically to provide a highly scalable, general-purpose data availability layer that lets all Rollup projects reach their full potential.

Conclusion

All in all, ORU has been widely recognized. It lets Ethereum (and indeed any blockchain) deliver on the promise of sharding before Serenity Phase 2 goes live: scalable execution for decentralized applications without compromising the properties that matter. Projects building ORU infrastructure include Fuel Labs, Offchain Labs, Optimism, and Hubble, among others (this list is incomplete and implies no endorsement). The projects planning to build on top of ORU are even harder to count!

Source link: coinmarketcap.com