BTC Co-Processing Security Protocol

Turing completeness has long been the ultimate goal of BTC scalability: expanding BTC from a transaction network into a general-purpose Turing-complete computing network would greatly enrich the possible scenarios for decentralized networks. Over the past decade, numerous Turing-complete proposals and evolutionary directions have emerged. BCH/BSV chose the path of large-block forks, while ETH added an EVM virtual machine directly to its technology stack. The former is maximally compatible with BTC but supports only limited computational scenarios. The latter appeared to be the most successful fork path, but it carries two potential problems: fragmentation of BTC assets, and reduced security from the shift to PoS consensus.

This article proposes a BTC co-processing architecture with three design goals: native BTC assets, Turing-complete computation, and a secure, lightweight verification protocol.

Native BTC Asset Mapping

Currently, BTC's off-chain scaling solutions are mostly based on sidechains or Layer 2. A major factor restricting this design approach is the security of off-chain assets: it requires transferring ownership of BTC off-chain, which largely separates BTC from its native security.

The concept of native asset mapping proposed in this article rests on an important premise: asset ownership is never transferred. Ownership confirmation happens on the BTC chain, not off-chain. Under this requirement, one possible design is transaction coloring or adding an OP State opcode. A special transaction (H_TX) can be constructed within the structure of a normal transaction (N_TX) through a client plugin; such a transaction can support larger blocks or faster finality. Because it remains compatible with the standard transaction structure, an H_TX is accepted into the mempool alongside N_TX. Users therefore never need to bridge BTC assets cross-chain or interact directly with a sidechain.
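The coloring idea above can be sketched in a few lines. This is a toy model, not a real BTC wire format: the dict fields, the H_TX_MARKER tag, and the function names are all illustrative assumptions. The point is that an H_TX differs from an N_TX only in an optional data field, so it remains a structurally valid normal transaction.

```python
import hashlib
import json

H_TX_MARKER = b"H_TX"  # assumed marker distinguishing H_TX from N_TX


def make_n_tx(inputs, outputs):
    """Build a plain transaction dict (stand-in for a normal BTC tx)."""
    return {"inputs": inputs, "outputs": outputs, "extra": b""}


def color_as_h_tx(tx, payload: bytes):
    """Turn an N_TX into an H_TX by attaching a marker plus payload.

    Because only the optional 'extra' field changes, the result is still
    structurally a valid N_TX and can enter the mempool alongside normal txs.
    """
    tx = dict(tx)  # leave the original untouched
    tx["extra"] = H_TX_MARKER + payload
    return tx


def is_h_tx(tx) -> bool:
    """Nodes or plugins can cheaply recognize colored transactions."""
    return tx.get("extra", b"").startswith(H_TX_MARKER)


def txid(tx) -> str:
    """Deterministic id over the serialized fields (toy serialization)."""
    blob = json.dumps(
        {k: v.hex() if isinstance(v, bytes) else v for k, v in tx.items()},
        sort_keys=True,
    ).encode()
    return hashlib.sha256(blob).hexdigest()
```

A client plugin would call `color_as_h_tx` before broadcasting; ordinary nodes treat the result as just another transaction, while the off-chain co-processor filters with `is_h_tx`.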

Turing-Complete Computation

To preserve BTC's lightweight characteristics, designing native Turing-complete computation on-chain is very difficult: it would require large-scale refactoring of hardware adaptation layers and the kernel engine, and for a decentralized open-source project that cycle would be very long. Off-chain computation extension is therefore the workable design approach in the short term.

The off-chain computation extension proposed in this article mainly borrows the idea of a CPU co-processor. BTC acts as the core CPU, handling important state and the instruction-set registers, while the off-chain component acts as a co-processor responsible for accelerating and extending the instruction set, much as a GPU extends a CPU. As shown in the figure above, the transaction agent (Tx Agent) subscribes to special transactions (H_TX) from the on-chain transaction pool and feeds them into the UTXO VM for instruction-set updates. Intermediate changes are recorded in the State module, enabling mutual invocation between programs and state logging (State Binlog). The final on-chain state update is the most important security guarantee: since the off-chain co-processor does not hold real asset accounts, the VM generates a batch transaction (Batch Tx) from the State updates and re-broadcasts it to the chain for the final account-state update.
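The pipeline above can be condensed into a minimal sketch. The class names (`State`, `UTXOVM`), the `(account, delta)` operation format, and the shape of the batch transaction are assumptions for illustration only; they mirror the Tx Agent → UTXO VM → State/Binlog → Batch Tx flow rather than any real implementation.

```python
from dataclasses import dataclass, field


@dataclass
class State:
    """Off-chain state module: current values plus an ordered change log."""
    balances: dict = field(default_factory=dict)
    binlog: list = field(default_factory=list)  # (key, old, new) entries

    def apply(self, key, delta):
        old = self.balances.get(key, 0)
        self.balances[key] = old + delta
        self.binlog.append((key, old, old + delta))  # enough to Redo later


class UTXOVM:
    """Toy VM: executes H_TX ops against State, then emits a Batch Tx."""

    def __init__(self):
        self.state = State()

    def execute(self, h_tx):
        # In this toy model each H_TX carries a list of (account, delta) ops.
        for key, delta in h_tx["ops"]:
            self.state.apply(key, delta)

    def batch_tx(self):
        # Collapse the accumulated state into one batch update for the chain.
        return {"type": "Batch_Tx", "state": dict(self.state.balances)}


def tx_agent(subscription, vm: UTXOVM):
    """Feed the subscribed H_TX stream into the VM, then emit the batch."""
    for h_tx in subscription:
        vm.execute(h_tx)
    return vm.batch_tx()
```

Note that the binlog is written as a side effect of every state change; this is what later makes Redo-based verification possible without trusting the co-processor.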

Secure and Lightweight Verification Protocol

Since the off-chain component only expands computational capability and adds no asset mapping module, core asset ownership is always anchored to BTC itself. For the whole system, the crucial task is verifying the integrity of the outsourced computation of BTC's extended instructions. This splits broadly into two stages: original-instruction integrity and computation correctness. Instruction integrity can be verified by writing to an on-chain block data area, such as the Block Witness area, so that external parties can challenge and confirm it. Computation correctness can be verified quickly through ZKP protocols, or by performing Redo operations based on state logs; the latter is similar to the Binlog mechanism in databases.
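The Redo-based check can be sketched as follows, assuming binlog entries of the form `(key, old, new)` and a simple state hash. This is purely illustrative: a real protocol would commit such hashes into on-chain witness data for challenge, or replace replay entirely with a ZKP.

```python
import hashlib


def state_hash(state: dict) -> str:
    """Deterministic digest of a state mapping (toy commitment)."""
    blob = ",".join(f"{k}:{v}" for k, v in sorted(state.items()))
    return hashlib.sha256(blob.encode()).hexdigest()


def redo_verify(initial: dict, binlog, claimed_hash: str) -> bool:
    """Replay (key, old, new) binlog entries over the initial state.

    Like a database Binlog redo: each entry's 'old' value must match the
    current state (catching tampered or reordered logs), and the replayed
    final state must hash to the value the co-processor claimed.
    """
    state = dict(initial)
    for key, old, new in binlog:
        if state.get(key, 0) != old:  # log inconsistent with state
            return False
        state[key] = new
    return state_hash(state) == claimed_hash
```

Any verifier holding the binlog and the claimed final hash can run this independently; a mismatch is the trigger for an on-chain challenge.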
