以太坊共识算法(ethash 与 clique)

文章前言

共识算法是区块链项目的核心之一,每一个运行中的区块链都需要共识算法来保证出块的有效性和有序性。以太坊的官方源码中实现了两种共识算法:clique和ethash,它们位于以太坊项目的consensus目录下。其中clique是PoA(权威证明,Proof of Authority)共识,主要用于测试网络;ethash是以太坊主网当前使用的PoW(工作量证明,Proof of Work)共识算法,用于正式网络。

共识引擎

Engine接口定义了共识引擎需要实现的所有函数,实际上按功能可以划分为两类(接口方法的简化摘要见下方示意):

  • 区块验证类:以Verify开头,当收到新区块时,需要先验证区块的有效性 

  • 区块盖章类:包括Prepare/Finalize/Seal等,用于最终生成有效区块(比如:添加工作量证明)
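把本文后面会逐一分析的方法集中起来看,Engine接口大致如下。这是按go-ethereum 1.10.x的consensus/consensus.go整理的简化摘要,省略了注释;APIs/Close等辅助方法本文不展开,具体签名以源码为准:

type Engine interface {
    // 区块验证类
    Author(header *types.Header) (common.Address, error)
    VerifyHeader(chain ChainHeaderReader, header *types.Header, seal bool) error
    VerifyHeaders(chain ChainHeaderReader, headers []*types.Header, seals []bool) (chan<- struct{}, <-chan error)
    VerifyUncles(chain ChainReader, block *types.Block) error

    // 区块盖章类
    Prepare(chain ChainHeaderReader, header *types.Header) error
    Finalize(chain ChainHeaderReader, header *types.Header, state *state.StateDB, txs []*types.Transaction, uncles []*types.Header)
    FinalizeAndAssemble(chain ChainHeaderReader, header *types.Header, state *state.StateDB, txs []*types.Transaction, uncles []*types.Header, receipts []*types.Receipt) (*types.Block, error)
    Seal(chain ChainHeaderReader, block *types.Block, results chan<- *types.Block, stop <-chan struct{}) error
    SealHash(header *types.Header) common.Hash
    CalcDifficulty(chain ChainHeaderReader, time uint64, parent *types.Header) *big.Int

    // 其他辅助方法
    APIs(chain ChainHeaderReader) []rpc.API
    Close() error
}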

下图是以太坊共识引擎组件关系图:

这里引申出了与区块验证相关联的两个外部接口:Processor(执行交易)和Validator(验证区块内容和状态)。同时,由于需要访问之前区块链上的数据,又抽象出了一个ChainReader接口,从上图中可以看到BlockChain和HeaderChain都实现了该接口,因此可以访问链上数据。

区块验证

区块验证过程如下图所示,可以看到当downloader收到新的区块时会直接调用BlockChain.insertChain()函数将新的区块插入区块链,不过在插入之前会先对区块的有效性和合法性进行验证,主要涉及以下四个步骤:

  • 验证区块头:Ethash.VerifyHeaders() 

  • 验证区块内容:BlockValidator.ValidateBody()(内部还会调用Ethash.VerifyUncles())

  • 执行区块交易:StateProcessor.Process()(基于其父块的世界状态)

  • 验证状态转换:BlockValidator.ValidateState()


区块盖章

新产生的区块必须经过"盖章(seal)"才能成为有效区块,具体到ethash来说,就是要通过PoW计算找到一个使区块哈希低于目标难度的nonce值,整个过程主要分为3个步骤,调用顺序可参考列表后的示意代码:

  • 准备工作:调用Ethash.Prepare()计算难度值 

  • 生成区块:调用Ethash.Finalize()打包新区块 

  • 区块盖章:调用Ethash.Seal()进行POW计算,填充nonce值
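把这三步串起来,矿工一侧使用共识引擎的调用顺序大致如下。这是概念性示意,并非miner包的实际实现;函数名sealBlockSketch和参数组织均为本文虚构,假设chain、statedb、txs等对象已经就绪,仅用来说明Prepare/FinalizeAndAssemble/Seal的先后关系:

// 概念性示意:共识引擎三个"盖章"相关接口的调用顺序(假设已引入 consensus/state/types/big/time 等包)
func sealBlockSketch(engine consensus.Engine, chain consensus.ChainHeaderReader, statedb *state.StateDB,
    parent *types.Header, txs []*types.Transaction, receipts []*types.Receipt) (*types.Block, error) {
    header := &types.Header{
        ParentHash: parent.Hash(),
        Number:     new(big.Int).Add(parent.Number, common.Big1),
        GasLimit:   parent.GasLimit,
        Time:       uint64(time.Now().Unix()),
    }
    // 1. 准备工作:计算并填充难度等字段
    if err := engine.Prepare(chain, header); err != nil {
        return nil, err
    }
    // 2. 生成区块:计算奖励、状态根并组装区块(这里不打包叔块)
    block, err := engine.FinalizeAndAssemble(chain, header, statedb, txs, nil, receipts)
    if err != nil {
        return nil, err
    }
    // 3. 区块盖章:PoW 搜索满足难度的 nonce,结果通过 results 信道返回
    results := make(chan *types.Block, 1)
    stop := make(chan struct{})
    if err := engine.Seal(chain, block, results, stop); err != nil {
        return nil, err
    }
    return <-results, nil
}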

源码分析

ethash

ethash目录结构如下所示:

├─ethash
│      algorithm.go    // Dagger-Hashimoto算法实现
│      api.go          // RPC方法
│      consensus.go    // 共识设计
│      difficulty.go   // 难度设计
│      ethash.go       // cache结构体和dataset结构体实现
│      sealer.go       // 共识接口Seal实现
基本常量

文件go-ethereum-1.10.2\consensus\ethash\consensus.go的开头处定义了PoW协议的常量(区块奖励、区块难度、错误信息等):

// filedir: go-ethereum-1.10.2\consensus\ethash\consensus.go L40
// Ethash proof-of-work protocol constants.
var (
    FrontierBlockReward           = big.NewInt(5e+18) // Block reward in wei for successfully mining a block
    ByzantiumBlockReward          = big.NewInt(3e+18) // Block reward in wei for successfully mining a block upward from Byzantium
    ConstantinopleBlockReward     = big.NewInt(2e+18) // Block reward in wei for successfully mining a block upward from Constantinople
    maxUncles                     = 2                 // Maximum number of uncles allowed in a single block
    allowedFutureBlockTimeSeconds = int64(15)         // Max seconds from current time allowed for blocks, before they're considered future blocks

    // calcDifficultyEip2384 is the difficulty adjustment algorithm as specified by EIP 2384.
    // It offsets the bomb 4M blocks from Constantinople, so in total 9M blocks.
    // Specification EIP-2384: https://eips.ethereum.org/EIPS/eip-2384
    calcDifficultyEip2384 = makeDifficultyCalculator(big.NewInt(9000000))

    // calcDifficultyConstantinople is the difficulty adjustment algorithm for Constantinople.
    // It returns the difficulty that a new block should have when created at time given the
    // parent block's time and difficulty. The calculation uses the Byzantium rules, but with
    // bomb offset 5M.
    // Specification EIP-1234: https://eips.ethereum.org/EIPS/eip-1234
    calcDifficultyConstantinople = makeDifficultyCalculator(big.NewInt(5000000))

    // calcDifficultyByzantium is the difficulty adjustment algorithm. It returns
    // the difficulty that a new block should have when created at time given the
    // parent block's time and difficulty. The calculation uses the Byzantium rules.
    // Specification EIP-649: https://eips.ethereum.org/EIPS/eip-649
    calcDifficultyByzantium = makeDifficultyCalculator(big.NewInt(3000000))
)

// Various error messages to mark blocks invalid. These should be private to
// prevent engine specific errors from being referenced in the remainder of the
// codebase, inherently breaking if the engine is swapped out. Please put common
// error types into the consensus package.
var (
    errOlderBlockTime    = errors.New("timestamp older than parent")
    errTooManyUncles     = errors.New("too many uncles")
    errDuplicateUncle    = errors.New("duplicate uncle")
    errUncleIsAncestor   = errors.New("uncle is ancestor")
    errDanglingUncle     = errors.New("uncle's parent is not ancestor")
    errInvalidDifficulty = errors.New("non-positive difficulty")
    errInvalidMixDigest  = errors.New("invalid mix digest")
    errInvalidPoW        = errors.New("invalid proof-of-work")
)
矿工地址

Author用于返回区块头中的Coinbase地址,即获得挖矿奖励的矿工地址:

// filedir: go-ethereum-1.10.2\consensus\ethash\consensus.go L82
// Author implements consensus.Engine, returning the header's coinbase as the
// proof-of-work verified author of the block.
func (ethash *Ethash) Author(header *types.Header) (common.Address, error) {
    return header.Coinbase, nil
}

验区块头

VerifyHeader函数用于校验区块头,这里首先检查当前的共识模式是否是ModeFullFake,如果是则直接返回nil;否则检查该区块头是否已为本地所知(已知则无需重复验证)以及其父区块是否存在,如果这些快速检查都通过,则调用verifyHeader函数做完整验证:

// filedir: go-ethereum-1.10.2\consensus\ethash\consensus.go L88
// VerifyHeader checks whether a header conforms to the consensus rules of the
// stock Ethereum ethash engine.
func (ethash *Ethash) VerifyHeader(chain consensus.ChainHeaderReader, header *types.Header, seal bool) error {
    // If we're running a full engine faking, accept any input as valid
    if ethash.config.PowMode == ModeFullFake {
        return nil
    }
    // Short circuit if the header is known, or its parent not
    number := header.Number.Uint64()
    if chain.GetHeader(header.Hash(), number) != nil {
        return nil
    }
    parent := chain.GetHeader(header.ParentHash, number-1)
    if parent == nil {
        return consensus.ErrUnknownAncestor
    }
    // Sanity checks passed, do a proper verification
    return ethash.verifyHeader(chain, header, parent, false, seal, time.Now().Unix())
}

VerifyHeaders方法中同样会检查运行模式是否是ModeFullFake,如果是则认为所有输入皆为有效;否则根据允许的线程数(GOMAXPROCS与区块头数量取较小值)生成多个工作协程,之后通过一个for循环批量验证,验证过程中调用verifyHeaderWorker方法验证单个区块头,验证完后向done信道发送区块索引号:

// filedir: go-ethereum-1.10.2\consensus\ethash\consensus.go L108
// VerifyHeaders is similar to VerifyHeader, but verifies a batch of headers
// concurrently. The method returns a quit channel to abort the operations and
// a results channel to retrieve the async verifications.
func (ethash *Ethash) VerifyHeaders(chain consensus.ChainHeaderReader, headers []*types.Header, seals []bool) (chan<- struct{}, <-chan error) {
    // If we're running a full engine faking, accept any input as valid
    if ethash.config.PowMode == ModeFullFake || len(headers) == 0 {
        abort, results := make(chan struct{}), make(chan error, len(headers))
        for i := 0; i < len(headers); i++ {
            results <- nil
        }
        return abort, results
    }

    // Spawn as many workers as allowed threads
    workers := runtime.GOMAXPROCS(0)
    if len(headers) < workers {
        workers = len(headers)
    }

    // Create a task channel and spawn the verifiers
    var (
        inputs  = make(chan int)
        done    = make(chan int, workers)
        errors  = make([]error, len(headers))
        abort   = make(chan struct{})
        unixNow = time.Now().Unix()
    )
    for i := 0; i < workers; i++ {
        go func() {
            for index := range inputs {
                errors[index] = ethash.verifyHeaderWorker(chain, headers, seals, index, unixNow)
                done <- index
            }
        }()
    }

    errorsOut := make(chan error, len(headers))
    go func() {
        defer close(inputs)
        var (
            in, out = 0, 0
            checked = make([]bool, len(headers))
            inputs  = inputs
        )
        for {
            select {
            case inputs <- in:
                if in++; in == len(headers) {
                    // Reached end of headers. Stop sending to workers.
                    inputs = nil
                }
            case index := <-done:
                for checked[index] = true; checked[out]; out++ {
                    errorsOut <- errors[out]
                    if out == len(headers)-1 {
                        return
                    }
                }
            case <-abort:
                return
            }
        }
    }()
    return abort, errorsOut
}
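调用方(例如批量校验区块头时)的典型用法是:按输入顺序逐个读取结果信道,出错时关闭abort信道让所有worker提前退出。下面是一个示意片段,省略了外层函数和变量准备:

// 示意:消费 VerifyHeaders 返回的两个信道
abort, results := ethash.VerifyHeaders(chain, headers, seals)
defer close(abort) // 提前返回时通知所有 worker 退出
for i := 0; i < len(headers); i++ {
    if err := <-results; err != nil {
        return err // 第 i 个区块头未通过验证
    }
}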

verifyHeaderWorker方法如下所示,在这里首先获取父区块的header,之后调用ethash.verifyHeader对单个区块头进行验证:

func (ethash *Ethash) verifyHeaderWorker(chain consensus.ChainHeaderReader, headers []*types.Header, seals []bool, index int, unixNow int64) error {
    var parent *types.Header
    if index == 0 {
        parent = chain.GetHeader(headers[0].ParentHash, headers[0].Number.Uint64()-1)
    } else if headers[index-1].Hash() == headers[index].ParentHash {
        parent = headers[index-1]
    }
    if parent == nil {
        return consensus.ErrUnknownAncestor
    }
    return ethash.verifyHeader(chain, headers[index], parent, false, seals[index], unixNow)
}

ethash.verifyHeader如下所示,主要做了以下几件事情:

  • 检查header.Extra是否超过32字节(params.MaximumExtraDataSize)

  • 检查时间戳是否比当前时间超前15秒以上,超过则被认为是未来区块

  • 检查当前header的时间戳是否不大于父区块的时间戳(必须严格递增)

  • 检查区块难度,在检查前会根据时间戳和父块难度计算出期望的区块难度

  • 检查gas limit是否不超过2^63-1

  • 检查gasUsed是否<= gasLimit

  • 检查gas limit相对父块的波动是否在允许范围内(相差须小于parent.GasLimit/1024,且不低于最小值)

  • 检查当前区块号是否等于父块区块号加1

  • 检查给定的块是否满足PoW难度要求(仅当seal为true时)

  • 检查硬分叉相关的特殊字段

// filedir: go-ethereum-1.10.2\consensus\ethash\consensus.go L242
// verifyHeader checks whether a header conforms to the consensus rules of the
// stock Ethereum ethash engine.
// See YP section 4.3.4. "Block Header Validity"
func (ethash *Ethash) verifyHeader(chain consensus.ChainHeaderReader, header, parent *types.Header, uncle bool, seal bool, unixNow int64) error {
    // Ensure that the header's extra-data section is of a reasonable size
    if uint64(len(header.Extra)) > params.MaximumExtraDataSize {
        return fmt.Errorf("extra-data too long: %d > %d", len(header.Extra), params.MaximumExtraDataSize)
    }
    // Verify the header's timestamp
    if !uncle {
        if header.Time > uint64(unixNow+allowedFutureBlockTimeSeconds) {
            return consensus.ErrFutureBlock
        }
    }
    if header.Time <= parent.Time {
        return errOlderBlockTime
    }
    // Verify the block's difficulty based on its timestamp and parent's difficulty
    expected := ethash.CalcDifficulty(chain, header.Time, parent)
    if expected.Cmp(header.Difficulty) != 0 {
        return fmt.Errorf("invalid difficulty: have %v, want %v", header.Difficulty, expected)
    }
    // Verify that the gas limit is <= 2^63-1
    cap := uint64(0x7fffffffffffffff)
    if header.GasLimit > cap {
        return fmt.Errorf("invalid gasLimit: have %v, max %v", header.GasLimit, cap)
    }
    // Verify that the gasUsed is <= gasLimit
    if header.GasUsed > header.GasLimit {
        return fmt.Errorf("invalid gasUsed: have %d, gasLimit %d", header.GasUsed, header.GasLimit)
    }
    // Verify that the gas limit remains within allowed bounds
    diff := int64(parent.GasLimit) - int64(header.GasLimit)
    if diff < 0 {
        diff *= -1
    }
    limit := parent.GasLimit / params.GasLimitBoundDivisor
    if uint64(diff) >= limit || header.GasLimit < params.MinGasLimit {
        return fmt.Errorf("invalid gas limit: have %d, want %d += %d", header.GasLimit, parent.GasLimit, limit)
    }
    // Verify that the block number is parent's +1
    if diff := new(big.Int).Sub(header.Number, parent.Number); diff.Cmp(big.NewInt(1)) != 0 {
        return consensus.ErrInvalidNumber
    }
    // Verify the engine specific seal securing the block
    if seal {
        if err := ethash.verifySeal(chain, header, false); err != nil {
            return err
        }
    }
    // If all checks passed, validate any special fields for hard forks
    if err := misc.VerifyDAOHeaderExtraData(chain.Config(), header); err != nil {
        return err
    }
    if err := misc.VerifyForkHashes(chain.Config(), header, uncle); err != nil {
        return err
    }
    return nil
}
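其中gas limit的波动约束可以单独拿出来看:新旧GasLimit之差必须严格小于parent.GasLimit/1024(params.GasLimitBoundDivisor),且新值不得低于params.MinGasLimit(5000)。下面是一个独立的小例子,函数名validGasLimit为本文虚构,常量取自params包中的对应值,数值仅作示例:

package main

import "fmt"

const (
    gasLimitBoundDivisor = 1024 // 对应 params.GasLimitBoundDivisor
    minGasLimit          = 5000 // 对应 params.MinGasLimit
)

// validGasLimit 按 verifyHeader 中的规则判断子区块的 gasLimit 是否合法
func validGasLimit(parentGasLimit, headerGasLimit uint64) bool {
    diff := int64(parentGasLimit) - int64(headerGasLimit)
    if diff < 0 {
        diff = -diff
    }
    limit := parentGasLimit / gasLimitBoundDivisor
    return uint64(diff) < limit && headerGasLimit >= minGasLimit
}

func main() {
    parent := uint64(12_000_000) // 12000000/1024 = 11718,为允许的最大波动
    fmt.Println(validGasLimit(parent, 12_011_000)) // true:偏移 11000 < 11718
    fmt.Println(validGasLimit(parent, 12_012_000)) // false:偏移 12000 >= 11718
    fmt.Println(validGasLimit(parent, 4_000))      // false:低于 MinGasLimit(且偏移过大)
}
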
验叔区块

VerifyUncles函数用于验证区块的叔区块是否符合以太坊ethash引擎的共识规则,主要检查以下几个内容:

  • 检查当前引擎的运行模式是否是ModeFullFake,如果是则直接返回nil,否则对叔区块进行进一步校验

  • 检查叔区块的数量是否超过最大叔区块数量限制(2个),如果叔区块数量为0则直接返回nil

  • 收集叔区块与祖先区块

  • 确认每个叔块只被奖励一次,且叔块有一个有效的祖先

// filedir: go-ethereum-1.10.2\consensus\ethash\consensus.go L186
// VerifyUncles verifies that the given block's uncles conform to the consensus
// rules of the stock Ethereum ethash engine.
func (ethash *Ethash) VerifyUncles(chain consensus.ChainReader, block *types.Block) error {
    // If we're running a full engine faking, accept any input as valid
    if ethash.config.PowMode == ModeFullFake {
        return nil
    }
    // Verify that there are at most 2 uncles included in this block
    if len(block.Uncles()) > maxUncles {
        return errTooManyUncles
    }
    if len(block.Uncles()) == 0 {
        return nil
    }
    // Gather the set of past uncles and ancestors
    uncles, ancestors := mapset.NewSet(), make(map[common.Hash]*types.Header)

    number, parent := block.NumberU64()-1, block.ParentHash()
    for i := 0; i < 7; i++ {
        ancestor := chain.GetBlock(parent, number)
        if ancestor == nil {
            break
        }
        ancestors[ancestor.Hash()] = ancestor.Header()
        for _, uncle := range ancestor.Uncles() {
            uncles.Add(uncle.Hash())
        }
        parent, number = ancestor.ParentHash(), number-1
    }
    ancestors[block.Hash()] = block.Header()
    uncles.Add(block.Hash())

    // Verify each of the uncles that it's recent, but not an ancestor
    for _, uncle := range block.Uncles() {
        // Make sure every uncle is rewarded only once
        hash := uncle.Hash()
        if uncles.Contains(hash) {
            return errDuplicateUncle
        }
        uncles.Add(hash)

        // Make sure the uncle has a valid ancestry
        if ancestors[hash] != nil {
            return errUncleIsAncestor
        }
        if ancestors[uncle.ParentHash] == nil || uncle.ParentHash == block.ParentHash() {
            return errDanglingUncle
        }
        if err := ethash.verifyHeader(chain, uncle, ancestors[uncle.ParentHash], true, true, time.Now().Unix()); err != nil {
            return err
        }
    }
    return nil
}

验区块体

ValidateBody函数用于验证区块体,在这里首先检查当前数据库中是否已经包含了该区块,如果有的话则直接返回ErrKnownBlock;之后验证叔区块的有效性及其哈希值,并校验区块中交易的哈希是否与区块头中的TxHash一致;最后检查当前数据库中是否包含该区块的父区块及其状态,如果没有则返回相应错误:

// filedir: go-ethereum-1.10.2\core\block_validator.go L48
// ValidateBody validates the given block's uncles and verifies the block
// header's transaction and uncle roots. The headers are assumed to be already
// validated at this point.
func (v *BlockValidator) ValidateBody(block *types.Block) error {
    // Check whether the block's known, and if not, that it's linkable
    if v.bc.HasBlockAndState(block.Hash(), block.NumberU64()) {
        return ErrKnownBlock
    }
    // Header validity is known at this point, check the uncles and transactions
    header := block.Header()
    if err := v.engine.VerifyUncles(v.bc, block); err != nil {
        return err
    }
    if hash := types.CalcUncleHash(block.Uncles()); hash != header.UncleHash {
        return fmt.Errorf("uncle root hash mismatch: have %x, want %x", hash, header.UncleHash)
    }
    if hash := types.DeriveSha(block.Transactions(), trie.NewStackTrie(nil)); hash != header.TxHash {
        return fmt.Errorf("transaction root hash mismatch: have %x, want %x", hash, header.TxHash)
    }
    if !v.bc.HasBlockAndState(block.ParentHash(), block.NumberU64()-1) {
        if !v.bc.HasBlock(block.ParentHash(), block.NumberU64()-1) {
            return consensus.ErrUnknownAncestor
        }
        return consensus.ErrPrunedAncestor
    }
    return nil
}

难度调整

Prepare函数是共识引擎的实现,它初始化了区块头部的难度字段:

// Prepare implements consensus.Engine, initializing the difficulty field of a
// header to conform to the ethash protocol. The changes are done inline.
func (ethash *Ethash) Prepare(chain consensus.ChainHeaderReader, header *types.Header) error {
    parent := chain.GetHeader(header.ParentHash, header.Number.Uint64()-1)
    if parent == nil {
        return consensus.ErrUnknownAncestor
    }
    header.Difficulty = ethash.CalcDifficulty(chain, header.Time, parent)
    return nil
}

CalcDifficulty方法用于实现区块难度调整,它进而调用了包级别的同名CalcDifficulty函数:

// filedir: go-ethereum-1.10.2\consensus\ethash\consensus.go L304
// CalcDifficulty is the difficulty adjustment algorithm. It returns
// the difficulty that a new block should have when created at time
// given the parent block's time and difficulty.
func (ethash *Ethash) CalcDifficulty(chain consensus.ChainHeaderReader, time uint64, parent *types.Header) *big.Int {
    return CalcDifficulty(chain.Config(), time, parent)
}

CalcDifficulty函数会根据链配置和区块高度选择对应版本的难度算法,这里以Homestead规则为例,进入calcDifficultyHomestead函数:

// filedir: go-ethereum-1.10.2\consensus\ethash\consensus.go L312
// CalcDifficulty is the difficulty adjustment algorithm. It returns
// the difficulty that a new block should have when created at time
// given the parent block's time and difficulty.
func CalcDifficulty(config *params.ChainConfig, time uint64, parent *types.Header) *big.Int {
    next := new(big.Int).Add(parent.Number, big1)
    switch {
    case config.IsMuirGlacier(next):
        return calcDifficultyEip2384(time, parent)
    case config.IsConstantinople(next):
        return calcDifficultyConstantinople(time, parent)
    case config.IsByzantium(next):
        return calcDifficultyByzantium(time, parent)
    case config.IsHomestead(next):
        return calcDifficultyHomestead(time, parent)
    default:
        return calcDifficultyFrontier(time, parent)
    }
}

calcDifficultyHomestead实现代码如下所示,这里的算式为:diff = (parent_diff + parent_diff / 2048 * max(1 - (block_timestamp - parent_timestamp) // 10, -99)) + 2^(periodCount - 2):

// filedir: go-ethereum-1.10.2\consensus\ethash\consensus.go L404
// calcDifficultyHomestead is the difficulty adjustment algorithm. It returns
// the difficulty that a new block should have when created at time given the
// parent block's time and difficulty. The calculation uses the Homestead rules.
func calcDifficultyHomestead(time uint64, parent *types.Header) *big.Int {
    // https://github.com/ethereum/EIPs/blob/master/EIPS/eip-2.md
    // algorithm:
    // diff = (parent_diff +
    //         (parent_diff / 2048 * max(1 - (block_timestamp - parent_timestamp) // 10, -99))
    //        ) + 2^(periodCount - 2)

    bigTime := new(big.Int).SetUint64(time)
    bigParentTime := new(big.Int).SetUint64(parent.Time)

    // holds intermediate values to make the algo easier to read & audit
    x := new(big.Int)
    y := new(big.Int)

    // 1 - (block_timestamp - parent_timestamp) // 10
    x.Sub(bigTime, bigParentTime)
    x.Div(x, big10)
    x.Sub(big1, x)

    // max(1 - (block_timestamp - parent_timestamp) // 10, -99)
    if x.Cmp(bigMinus99) < 0 {
        x.Set(bigMinus99)
    }
    // (parent_diff + parent_diff // 2048 * max(1 - (block_timestamp - parent_timestamp) // 10, -99))
    y.Div(parent.Difficulty, params.DifficultyBoundDivisor)
    x.Mul(y, x)
    x.Add(parent.Difficulty, x)

    // minimum difficulty can ever be (before exponential factor)
    if x.Cmp(params.MinimumDifficulty) < 0 {
        x.Set(params.MinimumDifficulty)
    }
    // for the exponential factor
    periodCount := new(big.Int).Add(parent.Number, big1)
    periodCount.Div(periodCount, expDiffPeriod)

    // the exponential factor, commonly referred to as "the bomb"
    // diff = diff + 2^(periodCount - 2)
    if periodCount.Cmp(big1) > 0 {
        y.Sub(periodCount, big2)
        y.Exp(big2, y, nil)
        x.Add(x, y)
    }
    return x
}
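可以用一个独立的小程序直观感受一下这个调整过程。下面的homesteadDiff是本文写的简化版,只保留主调整项,忽略2^(periodCount-2)难度炸弹项,数值均为示例:

package main

import (
    "fmt"
    "math/big"
)

// 简化版 calcDifficultyHomestead:仅示意主调整项,忽略难度炸弹
func homesteadDiff(parentDiff *big.Int, parentTime, blockTime uint64) *big.Int {
    adjust := new(big.Int).Div(parentDiff, big.NewInt(2048)) // parent_diff / 2048
    x := big.NewInt(1 - int64((blockTime-parentTime)/10))    // 1 - (block_timestamp - parent_timestamp) // 10
    if x.Cmp(big.NewInt(-99)) < 0 {
        x.SetInt64(-99) // 下限 -99
    }
    diff := new(big.Int).Add(parentDiff, new(big.Int).Mul(adjust, x))
    min := big.NewInt(131072) // 对应 params.MinimumDifficulty
    if diff.Cmp(min) < 0 {
        diff.Set(min)
    }
    return diff
}

func main() {
    parent := big.NewInt(2_000_000)
    fmt.Println(homesteadDiff(parent, 100, 105)) // 出块快(5s):难度上调为 2000000 + 976*1 = 2000976
    fmt.Println(homesteadDiff(parent, 100, 125)) // 出块慢(25s):难度下调为 2000000 + 976*(1-2) = 1999024
}
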
难度检查

verifySeal函数用于检查一个区块是否满足PoW难度要求。下述代码中首先检查当前引擎的运行模式,如果是Fake模式则直接返回nil;接着检查是否运行的是共享PoW,如果是则将验证委托给它(例如矿池场景);然后确认区块头中的难度值是否为正;最后根据fulldag参数决定是使用普通的ethash缓存(轻量验证)还是使用完整的DAG(用于快速的远程挖矿验证)重新计算digest和PoW结果,并与区块头中提供的MixDigest和难度目标进行比对:

// filedir: go-ethereum-1.10.2\consensus\ethash\consensus.go L490
// verifySeal checks whether a block satisfies the PoW difficulty requirements,
// either using the usual ethash cache for it, or alternatively using a full DAG
// to make remote mining fast.
func (ethash *Ethash) verifySeal(chain consensus.ChainHeaderReader, header *types.Header, fulldag bool) error {
    // If we're running a fake PoW, accept any seal as valid
    if ethash.config.PowMode == ModeFake || ethash.config.PowMode == ModeFullFake {
        time.Sleep(ethash.fakeDelay)
        if ethash.fakeFail == header.Number.Uint64() {
            return errInvalidPoW
        }
        return nil
    }
    // If we're running a shared PoW, delegate verification to it
    if ethash.shared != nil {
        return ethash.shared.verifySeal(chain, header, fulldag)
    }
    // Ensure that we have a valid difficulty for the block
    if header.Difficulty.Sign() <= 0 {
        return errInvalidDifficulty
    }
    // Recompute the digest and PoW values
    number := header.Number.Uint64()

    var (
        digest []byte
        result []byte
    )
    // If fast-but-heavy PoW verification was requested, use an ethash dataset
    if fulldag {
        dataset := ethash.dataset(number, true)
        if dataset.generated() {
            digest, result = hashimotoFull(dataset.dataset, ethash.SealHash(header).Bytes(), header.Nonce.Uint64())

            // Datasets are unmapped in a finalizer. Ensure that the dataset stays alive
            // until after the call to hashimotoFull so it's not unmapped while being used.
            runtime.KeepAlive(dataset)
        } else {
            // Dataset not yet generated, don't hang, use a cache instead
            fulldag = false
        }
    }
    // If slow-but-light PoW verification was requested (or DAG not yet ready), use an ethash cache
    if !fulldag {
        cache := ethash.cache(number)

        size := datasetSize(number)
        if ethash.config.PowMode == ModeTest {
            size = 32 * 1024
        }
        digest, result = hashimotoLight(size, cache.cache, ethash.SealHash(header).Bytes(), header.Nonce.Uint64())

        // Caches are unmapped in a finalizer. Ensure that the cache stays alive
        // until after the call to hashimotoLight so it's not unmapped while being used.
        runtime.KeepAlive(cache)
    }
    // Verify the calculated values against the ones provided in the header
    if !bytes.Equal(header.MixDigest[:], digest) {
        return errInvalidMixDigest
    }
    target := new(big.Int).Div(two256, header.Difficulty)
    if new(big.Int).SetBytes(result).Cmp(target) > 0 {
        return errInvalidPoW
    }
    return nil
}
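最后两行比较是整个PoW验证的核心:目标值target = 2^256/难度,hashimoto计算出的result只要不大于target即满足难度要求。下面用一个独立的小例子演示这个比较,函数名meetsTarget为本文虚构,哈希值取两个极端情形仅作说明:

package main

import (
    "bytes"
    "fmt"
    "math/big"
)

// meetsTarget 示意 verifySeal 中最核心的判定:result <= 2^256/difficulty 即满足难度
func meetsTarget(result []byte, difficulty *big.Int) bool {
    two256 := new(big.Int).Exp(big.NewInt(2), big.NewInt(256), nil)
    target := new(big.Int).Div(two256, difficulty)
    return new(big.Int).SetBytes(result).Cmp(target) <= 0
}

func main() {
    difficulty := big.NewInt(1 << 20)          // 难度越高,target 越小
    easy := make([]byte, 32)                   // 全 0 的结果,必然不超过 target
    hard := bytes.Repeat([]byte{0xff}, 32)     // 全 0xff 的结果,必然超过 target
    fmt.Println(meetsTarget(easy, difficulty)) // true
    fmt.Println(meetsTarget(hard, difficulty)) // false
}
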
奖励计算

FinalizeAndAssemble函数是consensus.Engine的实现,它会先计算收益,然后生成MPT的Merkle Root,最后组装出一个新的区块:

// FinalizeAndAssemble implements consensus.Engine, accumulating the block and
// uncle rewards, setting the final state and assembling the block.
func (ethash *Ethash) FinalizeAndAssemble(chain consensus.ChainHeaderReader, header *types.Header, state *state.StateDB, txs []*types.Transaction, uncles []*types.Header, receipts []*types.Receipt) (*types.Block, error) {
    // Finalize block
    ethash.Finalize(chain, header, state, txs, uncles)

    // Header seems complete, assemble into a block and return
    return types.NewBlock(header, txs, uncles, receipts, trie.NewStackTrie(nil)), nil
}

在这里调用了Finalize函数,该函数用于计算收益以及Merkle Root:

// filedir: go-ethereum-1.10.2\consensus\ethash\consensus.go L568
// Finalize implements consensus.Engine, accumulating the block and uncle rewards,
// setting the final state on the header
func (ethash *Ethash) Finalize(chain consensus.ChainHeaderReader, header *types.Header, state *state.StateDB, txs []*types.Transaction, uncles []*types.Header) {
    // Accumulate any block and uncle rewards and commit the final state root
    accumulateRewards(chain.Config(), state, header, uncles)
    header.Root = state.IntermediateRoot(chain.Config().IsEIP158(header.Number))
}

accumulateRewards实现如下所示,该函数会计算挖矿奖励,这里的总奖励包括静态区块奖励和叔区块奖励,每个叔区块的coinbase也会得到奖励:

// filedir: go-ethereum-1.10.2\consensus\ethash\consensus.go L614
// AccumulateRewards credits the coinbase of the given block with the mining
// reward. The total reward consists of the static block reward and rewards for
// included uncles. The coinbase of each uncle block is also rewarded.
func accumulateRewards(config *params.ChainConfig, state *state.StateDB, header *types.Header, uncles []*types.Header) {
    // Select the correct block reward based on chain progression
    blockReward := FrontierBlockReward
    if config.IsByzantium(header.Number) {
        blockReward = ByzantiumBlockReward
    }
    if config.IsConstantinople(header.Number) {
        blockReward = ConstantinopleBlockReward
    }
    // Accumulate the rewards for the miner and any included uncles
    reward := new(big.Int).Set(blockReward)
    r := new(big.Int)
    for _, uncle := range uncles {
        r.Add(uncle.Number, big8)
        r.Sub(r, header.Number)
        r.Mul(r, blockReward)
        r.Div(r, big8)
        state.AddBalance(uncle.Coinbase, r)

        r.Div(blockReward, big32)
        reward.Add(reward, r)
    }
    state.AddBalance(header.Coinbase, reward)
}
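以拜占庭阶段(区块奖励3 ETH)为例,假设区块高度为N并引用了一个高度为N-1的叔块,按上述公式可以手工算一遍。下面是一个独立的示例程序,高度数值仅作说明:

package main

import (
    "fmt"
    "math/big"
)

func main() {
    blockReward := big.NewInt(3e18) // ByzantiumBlockReward,单位 wei
    headerNumber := big.NewInt(1_000_000)
    uncleNumber := big.NewInt(999_999) // 叔块高度 N-1

    // 叔块矿工奖励:(uncle.Number + 8 - header.Number) * blockReward / 8
    r := new(big.Int).Add(uncleNumber, big.NewInt(8))
    r.Sub(r, headerNumber)
    r.Mul(r, blockReward)
    r.Div(r, big.NewInt(8))
    fmt.Println("uncle reward (wei):", r) // 2.625e18,即 7/8 * 3 ETH = 2.625 ETH

    // 引用叔块的矿工额外奖励:blockReward / 32
    extra := new(big.Int).Div(blockReward, big.NewInt(32))
    reward := new(big.Int).Add(blockReward, extra)
    fmt.Println("miner reward (wei):", reward) // 3.09375e18,即 3 + 3/32 ETH
}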

之后通过IntermediateRoot来计算当前MPT树的Merkle Root:

      // filedir:go-ethereum-1.10.2\\\\core\\\\state\\\\statedb.go  L834// IntermediateRoot computes the current root hash of the state trie.// It is called in between transactions to get the root hash that// goes into transaction receipts.func (s *StateDB) IntermediateRoot(deleteEmptyObjects bool) common.Hash {  // Finalise all the dirty storage states and write them into the tries  s.Finalise(deleteEmptyObjects)
      // If there was a trie prefetcher operating, it gets aborted and irrevocably // modified after we start retrieving tries. Remove it from the statedb after // this round of use. // // This is weird pre-byzantium since the first tx runs with a prefetcher and // the remainder without, but pre-byzantium even the initial prefetcher is // useless, so no sleep lost. prefetcher := s.prefetcher if s.prefetcher != nil { defer func() { s.prefetcher.close() s.prefetcher = nil }() } // Although naively it makes sense to retrieve the account trie and then do // the contract storage and account updates sequentially, that short circuits // the account prefetcher. Instead, let\\\'s process all the storage updates // first, giving the account prefeches just a few more milliseconds of time // to pull useful data from disk. for addr := range s.stateObjectsPending { if obj := s.stateObjects[addr]; !obj.deleted { obj.updateRoot(s.db) } } // Now we\\\'re about to start to write changes to the trie. The trie is so far // _untouched_. We can check with the prefetcher, if it can give us a trie // which has the same root, but also has some content loaded into it. if prefetcher != nil { if trie := prefetcher.trie(s.originalRoot); trie != nil { s.trie = trie } } usedAddrs := make([][]byte, 0, len(s.stateObjectsPending)) for addr := range s.stateObjectsPending { if obj := s.stateObjects[addr]; obj.deleted { s.deleteStateObject(obj) } else { s.updateStateObject(obj) } usedAddrs = append(usedAddrs, common.CopyBytes(addr[:])) // Copy needed for closure } if prefetcher != nil { prefetcher.used(s.originalRoot, usedAddrs) } if len(s.stateObjectsPending) > 0 { s.stateObjectsPending = make(map[common.Address]struct{}) } // Track the amount of time wasted on hashing the account trie if metrics.EnabledExpensive { defer func(start time.Time) { s.AccountHashes += time.Since(start) }(time.Now()) } return s.trie.Hash()}

之后创建区块:

// filedir: go-ethereum-1.10.2\core\types\block.go L197
// NewBlock creates a new block. The input data is copied,
// changes to header and to the field values will not affect the
// block.
//
// The values of TxHash, UncleHash, ReceiptHash and Bloom in header
// are ignored and set to values derived from the given txs, uncles
// and receipts.
func NewBlock(header *Header, txs []*Transaction, uncles []*Header, receipts []*Receipt, hasher TrieHasher) *Block {
    b := &Block{header: CopyHeader(header), td: new(big.Int)}

    // TODO: panic if len(txs) != len(receipts)
    if len(txs) == 0 {
        b.header.TxHash = EmptyRootHash
    } else {
        b.header.TxHash = DeriveSha(Transactions(txs), hasher)
        b.transactions = make(Transactions, len(txs))
        copy(b.transactions, txs)
    }

    if len(receipts) == 0 {
        b.header.ReceiptHash = EmptyRootHash
    } else {
        b.header.ReceiptHash = DeriveSha(Receipts(receipts), hasher)
        b.header.Bloom = CreateBloom(receipts)
    }

    if len(uncles) == 0 {
        b.header.UncleHash = EmptyUncleHash
    } else {
        b.header.UncleHash = CalcUncleHash(uncles)
        b.uncles = make([]*Header, len(uncles))
        for i := range uncles {
            b.uncles[i] = CopyHeader(uncles[i])
        }
    }

    return b
}

Nonce值

Seal函数尝试找到一个能够满足区块难度要求的nonce值。这里首先检查是否是fake模式,如果是则直接返回0 nonce;如果是共享PoW则转交给共享对象执行Seal操作;之后创建一个runner以及它所管理的多个搜索线程:先加锁保护内部状态,如果ethash的rand字段为空则用加密随机数生成种子并为其赋值,然后解锁;如果挖矿线程数设置为0,则取当前可用的CPU核数,如果threads小于0则置为0(表示禁用本地挖矿);随后用sync.WaitGroup跟踪各个线程,为每个线程调用mine函数进行挖矿,并启动一个goroutine一直等待,直到操作被终止或者找到一个nonce值:

          // filedir:go-ethereum-1.10.2\\\\consensus\\\\ethash\\\\sealer.go L48// Seal implements consensus.Engine, attempting to find a nonce that satisfies// the block\\\'s difficulty requirements.func (ethash *Ethash) Seal(chain consensus.ChainHeaderReader, block *types.Block, results chan<- *types.Block, stop <-chan struct{}) error {  // If we\\\'re running a fake PoW, simply return a 0 nonce immediately  if ethash.config.PowMode == ModeFake || ethash.config.PowMode == ModeFullFake {    header := block.Header()    header.Nonce, header.MixDigest = types.BlockNonce{}, common.Hash{}    select {    case results <- block.WithSeal(header):    default:      ethash.config.Log.Warn(\\\"Sealing result is not read by miner\\\", \\\"mode\\\", \\\"fake\\\", \\\"sealhash\\\", ethash.SealHash(block.Header()))    }    return nil  }  // If we\\\'re running a shared PoW, delegate sealing to it  if ethash.shared != nil {    return ethash.shared.Seal(chain, block, results, stop)  }  // Create a runner and the multiple search threads it directs  abort := make(chan struct{})
          ethash.lock.Lock() threads := ethash.threads if ethash.rand == nil { seed, err := crand.Int(crand.Reader, big.NewInt(math.MaxInt64)) if err != nil { ethash.lock.Unlock() return err } ethash.rand = rand.New(rand.NewSource(seed.Int64())) } ethash.lock.Unlock() if threads == 0 { threads = runtime.NumCPU() } if threads < 0 { threads = 0 // Allows disabling local mining without extra logic around local/remote } // Push new work to remote sealer if ethash.remote != nil { ethash.remote.workCh <- &sealTask{block: block, results: results} } var ( pend sync.WaitGroup locals = make(chan *types.Block) ) for i := 0; i < threads; i++ { pend.Add(1) go func(id int, nonce uint64) { defer pend.Done() ethash.mine(block, id, nonce, abort, locals) // 调用mine函数 }(i, uint64(ethash.rand.Int63())) } // Wait until sealing is terminated or a nonce is found go func() { var result *types.Block select { case <-stop: // Outside abort, stop all miner threads close(abort) case result = <-locals: // One of the threads found a block, abort all others select { case results <- result: default: ethash.config.Log.Warn(\\\"Sealing result is not read by miner\\\", \\\"mode\\\", \\\"local\\\", \\\"sealhash\\\", ethash.SealHash(block.Header())) } close(abort) case <-ethash.update: // Thread count was changed on user request, restart close(abort) if err := ethash.Seal(chain, block, results, stop); err != nil { ethash.config.Log.Error(\\\"Failed to restart sealing after update\\\", \\\"err\\\", err) } } // Wait for all miners to terminate and return the block pend.Wait() }() return nil}
找Nonce

mine函数是真正的PoW矿工:它从seed值开始不断尝试nonce,直到找到一个能使最终结果满足区块难度目标的nonce为止。mine方法主要就是对nonce的递增尝试,以及找到解之后对区块头的重建操作:

// filedir: go-ethereum-1.10.2\consensus\ethash\sealer.go L30
// mine is the actual proof-of-work miner that searches for a nonce starting from
// seed that results in correct final block difficulty.
func (ethash *Ethash) mine(block *types.Block, id int, seed uint64, abort chan struct{}, found chan *types.Block) {
    // Extract some data from the header 从区块头中提取出一些数据
    var (
        header  = block.Header()
        hash    = ethash.SealHash(header).Bytes()
        target  = new(big.Int).Div(two256, header.Difficulty)
        number  = header.Number.Uint64()
        dataset = ethash.dataset(number, false)
    )
    // Start generating random nonces until we abort or find a good one
    var (
        attempts = int64(0)
        nonce    = seed
    )
    logger := ethash.config.Log.New("miner", id)
    logger.Trace("Started ethash search for new nonces", "seed", seed)
search:
    for {
        select {
        case <-abort: // 挖矿中止,更新状态,中止当前操作
            // Mining terminated, update stats and abort
            logger.Trace("Ethash nonce search aborted", "attempts", nonce-seed)
            ethash.hashrate.Mark(attempts)
            break search

        default: // 默认执行逻辑
            // We don't have to update hash rate on every nonce, so update after after 2^X nonces
            attempts++
            if (attempts % (1 << 15)) == 0 {
                ethash.hashrate.Mark(attempts)
                attempts = 0
            }
            // Compute the PoW value of this nonce 计算该nonce的pow值
            digest, result := hashimotoFull(dataset.dataset, hash, nonce)
            if new(big.Int).SetBytes(result).Cmp(target) <= 0 {
                // Correct nonce found, create a new header with it 找到正确nonce值,基于它创建新的区块头
                header = types.CopyHeader(header)
                header.Nonce = types.EncodeNonce(nonce)
                header.MixDigest = common.BytesToHash(digest)

                // Seal and return a block (if still needed) 封装并返回一个区块
                select {
                case found <- block.WithSeal(header):
                    logger.Trace("Ethash nonce found and reported", "attempts", nonce-seed, "nonce", nonce)
                case <-abort:
                    logger.Trace("Ethash nonce found but discarded", "attempts", nonce-seed, "nonce", nonce)
                }
                break search
            }
            nonce++
        }
    }
    // Datasets are unmapped in a finalizer. Ensure that the dataset stays live
    // during sealing so it's not unmapped while being read.
    runtime.KeepAlive(dataset)
}

远程验证

startRemoteSealer函数用于开启远程验证,在这里首先初始化了一个remoteSealer对象,之后调用loop开启主循环:

func startRemoteSealer(ethash *Ethash, urls []string, noverify bool) *remoteSealer {
    ctx, cancel := context.WithCancel(context.Background())
    s := &remoteSealer{
        ethash:       ethash,
        noverify:     noverify,
        notifyURLs:   urls,
        notifyCtx:    ctx,
        cancelNotify: cancel,
        works:        make(map[common.Hash]*types.Block),
        rates:        make(map[common.Hash]hashrate),
        workCh:       make(chan *sealTask),
        fetchWorkCh:  make(chan *sealWork),
        submitWorkCh: make(chan *mineResult),
        fetchRateCh:  make(chan chan uint64),
        submitRateCh: make(chan *hashrate),
        requestExit:  make(chan struct{}),
        exitCh:       make(chan struct{}),
    }
    go s.loop()
    return s
}

loop主循环函数如下所示:

            func (s *remoteSealer) loop() {  defer func() {    s.ethash.config.Log.Trace(\\\"Ethash remote sealer is exiting\\\")    s.cancelNotify()    s.reqWG.Wait()    close(s.exitCh)  }()
            ticker := time.NewTicker(5 * time.Second) defer ticker.Stop()
            for { select { case work := <-s.workCh: // Update current work with new received block. // Note same work can be past twice, happens when changing CPU threads. s.results = work.results s.makeWork(work.block) s.notifyWork()
            case work := <-s.fetchWorkCh: // Return current mining work to remote miner. if s.currentBlock == nil { work.errc <- errNoMiningWork } else { work.res <- s.currentWork }
            case result := <-s.submitWorkCh: // Verify submitted PoW solution based on maintained mining blocks. if s.submitWork(result.nonce, result.mixDigest, result.hash) { result.errc <- nil } else { result.errc <- errInvalidSealResult }
            case result := <-s.submitRateCh: // Trace remote sealer\\\'s hash rate by submitted value. s.rates[result.id] = hashrate{rate: result.rate, ping: time.Now()} close(result.done)
            case req := <-s.fetchRateCh: // Gather all hash rate submitted by remote sealer. var total uint64 for _, rate := range s.rates { // this could overflow total += rate.rate } req <- total
            case <-ticker.C: // Clear stale submitted hash rate. for id, rate := range s.rates { if time.Since(rate.ping) > 10*time.Second { delete(s.rates, id) } } // Clear stale pending blocks if s.currentBlock != nil { for hash, block := range s.works { if block.NumberU64()+staleThreshold <= s.currentBlock.NumberU64() { delete(s.works, hash) } } }
            case <-s.requestExit: return } }}

当收到新推送的work通知时,首先暂存当前结果s.results = work.results,之后调用makeWork函数给外部矿工创建一个work package。work package包含以下四个方面的信息(result[2]与难度的换算示例见列表后):

  • result[0]:32 bytes十六进制编码的当前区块头部的pow-hash值

  • result[1]:32 bytes十六进制编码的提供给DAG的seed hash值

  • result[2]:32 bytes十六进制编码的边界条件("target",即2^256/难度)

  • result[3]:十六进制编码的区块编号
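对外部矿工来说result[2]最为关键:它等于2^256/难度,矿工只需让PoW结果不超过这个边界即可。下面的小例子演示难度与边界之间的换算,难度数值为虚构示例:

package main

import (
    "fmt"
    "math/big"
)

func main() {
    two256 := new(big.Int).Exp(big.NewInt(2), big.NewInt(256), nil)

    difficulty := new(big.Int)
    difficulty.SetString("7000000000000000", 10) // 示例难度,约 7P

    // result[2]:边界条件 target = 2^256 / difficulty,外部矿工要求 PoW 结果 <= target
    target := new(big.Int).Div(two256, difficulty)
    fmt.Printf("result[2] = 0x%064x\n", target)

    // 反向换算:由 target 也可以近似还原出难度
    fmt.Println("difficulty ≈", new(big.Int).Div(two256, target))
}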

// filedir: go-ethereum-1.10.2\consensus\ethash\sealer.go L338
// makeWork creates a work package for external miner.
//
// The work package consists of 3 strings:
//   result[0], 32 bytes hex encoded current block header pow-hash
//   result[1], 32 bytes hex encoded seed hash used for DAG
//   result[2], 32 bytes hex encoded boundary condition ("target"), 2^256/difficulty
//   result[3], hex encoded block number
func (s *remoteSealer) makeWork(block *types.Block) {
    hash := s.ethash.SealHash(block.Header())
    s.currentWork[0] = hash.Hex()
    s.currentWork[1] = common.BytesToHash(SeedHash(block.NumberU64())).Hex()
    s.currentWork[2] = common.BytesToHash(new(big.Int).Div(two256, block.Difficulty()).Bytes()).Hex()
    s.currentWork[3] = hexutil.EncodeBig(block.Number())

    // Trace the seal work fetched by remote sealer.
    s.currentBlock = block
    s.works[hash] = block
}

之后通过notifyWork函数将新的待处理work通知给所有指定的挖矿节点:

// filedir: go-ethereum-1.10.2\consensus\ethash\sealer.go L356
// notifyWork notifies all the specified mining endpoints of the availability of
// new work to be processed.
func (s *remoteSealer) notifyWork() {
    work := s.currentWork

    // Encode the JSON payload of the notification. When NotifyFull is set,
    // this is the complete block header, otherwise it is a JSON array.
    var blob []byte
    if s.ethash.config.NotifyFull {
        blob, _ = json.Marshal(s.currentBlock.Header())
    } else {
        blob, _ = json.Marshal(work)
    }

    s.reqWG.Add(len(s.notifyURLs))
    for _, url := range s.notifyURLs {
        go s.sendNotification(s.notifyCtx, url, blob, work)
    }
}

func (s *remoteSealer) sendNotification(ctx context.Context, url string, json []byte, work [4]string) {
    defer s.reqWG.Done()

    req, err := http.NewRequest("POST", url, bytes.NewReader(json))
    if err != nil {
        s.ethash.config.Log.Warn("Can't create remote miner notification", "err", err)
        return
    }
    ctx, cancel := context.WithTimeout(ctx, remoteSealerTimeout)
    defer cancel()
    req = req.WithContext(ctx)
    req.Header.Set("Content-Type", "application/json")

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        s.ethash.config.Log.Warn("Failed to notify remote miner", "err", err)
    } else {
        s.ethash.config.Log.Trace("Notified remote miner", "miner", url, "hash", work[0], "target", work[2])
        resp.Body.Close()
    }
}

当收到获取mining work指令时,则将当前mining work返回给远程矿工:

case work := <-s.fetchWorkCh:
    // Return current mining work to remote miner.
    if s.currentBlock == nil {
        work.errc <- errNoMiningWork
    } else {
        work.res <- s.currentWork
    }

当收到远程矿工提交的工作量证明时,则调用submitWork来验证提交的PoW解决方案是否可行:

case result := <-s.submitWorkCh:
    // Verify submitted PoW solution based on maintained mining blocks.
    if s.submitWork(result.nonce, result.mixDigest, result.hash) {
        result.errc <- nil
    } else {
        result.errc <- errInvalidSealResult
    }

submitWork函数如下所示,在这里首先检查当前currentBlock是否为nil,之后确认矿工提交的work对应的区块仍处于pending状态,最后通过verifySeal验证提交的结果,验证通过后将区块发送到结果信道:

// filedir: go-ethereum-1.10.2\consensus\ethash\sealer.go L398
// submitWork verifies the submitted pow solution, returning
// whether the solution was accepted or not (not can be both a bad pow as well as
// any other error, like no pending work or stale mining result).
func (s *remoteSealer) submitWork(nonce types.BlockNonce, mixDigest common.Hash, sealhash common.Hash) bool {
    if s.currentBlock == nil {
        s.ethash.config.Log.Error("Pending work without block", "sealhash", sealhash)
        return false
    }
    // Make sure the work submitted is present
    block := s.works[sealhash]
    if block == nil {
        s.ethash.config.Log.Warn("Work submitted but none pending", "sealhash", sealhash, "curnumber", s.currentBlock.NumberU64())
        return false
    }
    // Verify the correctness of submitted result.
    header := block.Header()
    header.Nonce = nonce
    header.MixDigest = mixDigest

    start := time.Now()
    if !s.noverify {
        if err := s.ethash.verifySeal(nil, header, true); err != nil {
            s.ethash.config.Log.Warn("Invalid proof-of-work submitted", "sealhash", sealhash, "elapsed", common.PrettyDuration(time.Since(start)), "err", err)
            return false
        }
    }
    // Make sure the result channel is assigned.
    if s.results == nil {
        s.ethash.config.Log.Warn("Ethash result channel is empty, submitted mining result is rejected")
        return false
    }
    s.ethash.config.Log.Trace("Verified correct proof-of-work", "sealhash", sealhash, "elapsed", common.PrettyDuration(time.Since(start)))

    // Solutions seems to be valid, return to the miner and notify acceptance.
    solution := block.WithSeal(header)

    // The submitted solution is within the scope of acceptance.
    if solution.NumberU64()+staleThreshold > s.currentBlock.NumberU64() {
        select {
        case s.results <- solution:
            s.ethash.config.Log.Debug("Work submitted is acceptable", "number", solution.NumberU64(), "sealhash", sealhash, "hash", solution.Hash())
            return true
        default:
            s.ethash.config.Log.Warn("Sealing result is not read by miner", "mode", "remote", "sealhash", sealhash)
            return false
        }
    }
    // The submitted block is too old to accept, drop it.
    s.ethash.config.Log.Warn("Work submitted is too old", "number", solution.NumberU64(), "sealhash", sealhash, "hash", solution.Hash())
    return false
}

当收到submitRateCh请求时,则通过提交的value来跟踪远程验证者的哈希速率:

case result := <-s.submitRateCh:
    // Trace remote sealer's hash rate by submitted value.
    s.rates[result.id] = hashrate{rate: result.rate, ping: time.Now()}
    close(result.done)

当收到fetchRateCh请求时,则计算远程验证者提交的所有哈希速率之和:

case req := <-s.fetchRateCh:
    // Gather all hash rate submitted by remote sealer.
    var total uint64
    for _, rate := range s.rates {
        // this could overflow
        total += rate.rate
    }
    req <- total

当ticker.C定时触发时,则清理过期的hash rate记录和过期的pending区块:

case <-ticker.C:
    // Clear stale submitted hash rate.
    for id, rate := range s.rates {
        if time.Since(rate.ping) > 10*time.Second {
            delete(s.rates, id)
        }
    }
    // Clear stale pending blocks
    if s.currentBlock != nil {
        for hash, block := range s.works {
            if block.NumberU64()+staleThreshold <= s.currentBlock.NumberU64() {
                delete(s.works, hash)
            }
        }
    }

当收到requestExit请求时则直接退出:

case <-s.requestExit:
    return

clique

clique目录结构如下所示:

├─clique
│      api.go        // RPC方法
│      clique.go     // 共识设计
│      snapshot.go   // 快照处理

基本常量

const (
    checkpointInterval = 1024 // Number of blocks after which to save the vote snapshot to the database
    inmemorySnapshots  = 128  // Number of recent vote snapshots to keep in memory
    inmemorySignatures = 4096 // Number of recent block signatures to keep in memory

    wiggleTime = 500 * time.Millisecond // Random delay (per signer) to allow concurrent signers
)

// Clique proof-of-authority protocol constants.
var (
    epochLength = uint64(30000) // Default number of blocks after which to checkpoint and reset the pending votes

    extraVanity = 32                     // Fixed number of extra-data prefix bytes reserved for signer vanity
    extraSeal   = crypto.SignatureLength // Fixed number of extra-data suffix bytes reserved for signer seal

    nonceAuthVote = hexutil.MustDecode("0xffffffffffffffff") // Magic nonce number to vote on adding a new signer
    nonceDropVote = hexutil.MustDecode("0x0000000000000000") // Magic nonce number to vote on removing a signer.

    uncleHash = types.CalcUncleHash(nil) // Always Keccak256(RLP([])) as uncles are meaningless outside of PoW.

    diffInTurn = big.NewInt(2) // Block difficulty for in-turn signatures
    diffNoTurn = big.NewInt(1) // Block difficulty for out-of-turn signatures
)
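结合这些常量可以看出clique区块头Extra字段的布局:前extraVanity(32)字节是签名者自定义数据,中间部分只有checkpoint块才有、存放签名者地址列表(每个20字节),最后extraSeal(65)字节是签名。下面用一个独立的小例子拆解一个构造出来的checkpoint块Extra,数据为全零占位,仅示意切片方式:

package main

import "fmt"

const (
    extraVanity   = 32 // 前缀:签名者自定义数据
    extraSeal     = 65 // 后缀:secp256k1 签名(crypto.SignatureLength)
    addressLength = 20 // 对应 common.AddressLength
)

func main() {
    // 构造一个含 2 个签名者的 checkpoint 块 Extra:32 + 2*20 + 65 = 137 字节
    extra := make([]byte, extraVanity+2*addressLength+extraSeal)

    signersBytes := len(extra) - extraVanity - extraSeal
    fmt.Println("signers bytes:", signersBytes, "是否合法:", signersBytes%addressLength == 0)

    // 逐个取出签名者地址(这里都是零值,仅示意切片方式)
    for i := 0; i < signersBytes/addressLength; i++ {
        signer := extra[extraVanity+i*addressLength : extraVanity+(i+1)*addressLength]
        fmt.Printf("signer[%d] = %x\n", i, signer)
    }

    // 签名位于末尾 65 字节,后文的 ecrecover 正是从这里取出签名
    signature := extra[len(extra)-extraSeal:]
    fmt.Println("signature bytes:", len(signature))
}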

错误类型:

                  // Various error messages to mark blocks invalid. These should be private to// prevent engine specific errors from being referenced in the remainder of the// codebase, inherently breaking if the engine is swapped out. Please put common// error types into the consensus package.var (  // errUnknownBlock is returned when the list of signers is requested for a block  // that is not part of the local blockchain.  errUnknownBlock = errors.New(\\\"unknown block\\\")
                  // errInvalidCheckpointBeneficiary is returned if a checkpoint/epoch transition // block has a beneficiary set to non-zeroes. errInvalidCheckpointBeneficiary = errors.New(\\\"beneficiary in checkpoint block non-zero\\\")
                  // errInvalidVote is returned if a nonce value is something else that the two // allowed constants of 0x00..0 or 0xff..f. errInvalidVote = errors.New(\\\"vote nonce not 0x00..0 or 0xff..f\\\")
                  // errInvalidCheckpointVote is returned if a checkpoint/epoch transition block // has a vote nonce set to non-zeroes. errInvalidCheckpointVote = errors.New(\\\"vote nonce in checkpoint block non-zero\\\")
                  // errMissingVanity is returned if a block\\\'s extra-data section is shorter than // 32 bytes, which is required to store the signer vanity. errMissingVanity = errors.New(\\\"extra-data 32 byte vanity prefix missing\\\")
                  // errMissingSignature is returned if a block\\\'s extra-data section doesn\\\'t seem // to contain a 65 byte secp256k1 signature. errMissingSignature = errors.New(\\\"extra-data 65 byte signature suffix missing\\\")
                  // errExtraSigners is returned if non-checkpoint block contain signer data in // their extra-data fields. errExtraSigners = errors.New(\\\"non-checkpoint block contains extra signer list\\\")
                  // errInvalidCheckpointSigners is returned if a checkpoint block contains an // invalid list of signers (i.e. non divisible by 20 bytes). errInvalidCheckpointSigners = errors.New(\\\"invalid signer list on checkpoint block\\\")
                  // errMismatchingCheckpointSigners is returned if a checkpoint block contains a // list of signers different than the one the local node calculated. errMismatchingCheckpointSigners = errors.New(\\\"mismatching signer list on checkpoint block\\\")
                  // errInvalidMixDigest is returned if a block\\\'s mix digest is non-zero. errInvalidMixDigest = errors.New(\\\"non-zero mix digest\\\")
                  // errInvalidUncleHash is returned if a block contains an non-empty uncle list. errInvalidUncleHash = errors.New(\\\"non empty uncle hash\\\")
                  // errInvalidDifficulty is returned if the difficulty of a block neither 1 or 2. errInvalidDifficulty = errors.New(\\\"invalid difficulty\\\")
                  // errWrongDifficulty is returned if the difficulty of a block doesn\\\'t match the // turn of the signer. errWrongDifficulty = errors.New(\\\"wrong difficulty\\\")
                  // errInvalidTimestamp is returned if the timestamp of a block is lower than // the previous block\\\'s timestamp + the minimum block period. errInvalidTimestamp = errors.New(\\\"invalid timestamp\\\")
                  // errInvalidVotingChain is returned if an authorization list is attempted to // be modified via out-of-range or non-contiguous headers. errInvalidVotingChain = errors.New(\\\"invalid voting chain\\\")
                  // errUnauthorizedSigner is returned if a header is signed by a non-authorized entity. errUnauthorizedSigner = errors.New(\\\"unauthorized signer\\\")
                  // errRecentlySigned is returned if a header is signed by an authorized entity // that already signed a header recently, thus is temporarily not allowed to. errRecentlySigned = errors.New(\\\"recently signed\\\"))

地址提取

ecrecover函数用于从签名头中提取以太坊账户地址信息:

// ecrecover extracts the Ethereum account address from a signed header.
func ecrecover(header *types.Header, sigcache *lru.ARCCache) (common.Address, error) {
    // If the signature's already cached, return that
    hash := header.Hash()
    if address, known := sigcache.Get(hash); known {
        return address.(common.Address), nil
    }
    // Retrieve the signature from the header extra-data
    if len(header.Extra) < extraSeal {
        return common.Address{}, errMissingSignature
    }
    signature := header.Extra[len(header.Extra)-extraSeal:]

    // Recover the public key and the Ethereum address
    pubkey, err := crypto.Ecrecover(SealHash(header).Bytes(), signature)
    if err != nil {
        return common.Address{}, err
    }
    var signer common.Address
    copy(signer[:], crypto.Keccak256(pubkey[1:])[12:])

    sigcache.Add(hash, signer)
    return signer, nil
}
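签名与恢复是一对互逆操作:出块者用私钥对SealHash签名并放在Extra末尾,验证方用crypto.Ecrecover还原公钥再推导地址。下面是一个脱离区块头的最小示例,假设使用go-ethereum的crypto与common包,摘要内容为任意构造:

package main

import (
    "fmt"

    "github.com/ethereum/go-ethereum/common"
    "github.com/ethereum/go-ethereum/crypto"
)

func main() {
    // 生成一把示例私钥,代替真实的签名者私钥
    key, _ := crypto.GenerateKey()
    signer := crypto.PubkeyToAddress(key.PublicKey)

    // 对一个 32 字节摘要签名(真实场景中是 SealHash(header).Bytes())
    digest := crypto.Keccak256([]byte("example seal hash"))
    sig, _ := crypto.Sign(digest, key)

    // 按 ecrecover 的方式还原公钥并推导地址
    pubkey, _ := crypto.Ecrecover(digest, sig)
    recovered := common.BytesToAddress(crypto.Keccak256(pubkey[1:])[12:])

    fmt.Println("signer:   ", signer.Hex())
    fmt.Println("recovered:", recovered.Hex()) // 与 signer 一致
}
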
构造引擎

New函数用于初始化一个共识引擎对象:

// New creates a Clique proof-of-authority consensus engine with the initial
// signers set to the ones provided by the user.
func New(config *params.CliqueConfig, db ethdb.Database) *Clique {
    // Set any missing consensus parameters to their defaults
    conf := *config
    if conf.Epoch == 0 {
        conf.Epoch = epochLength
    }
    // Allocate the snapshot caches and create the engine
    recents, _ := lru.NewARC(inmemorySnapshots)
    signatures, _ := lru.NewARC(inmemorySignatures)

    return &Clique{
        config:     &conf,
        db:         db,
        recents:    recents,
        signatures: signatures,
        proposals:  make(map[common.Address]bool),
    }
}

矿工地址

Author函数通过调用ecrecover从区块头extra-data中的签名恢复出签名者地址,即该区块的出块者:

// Author implements consensus.Engine, returning the Ethereum address recovered
// from the signature in the header's extra-data section.
func (c *Clique) Author(header *types.Header) (common.Address, error) {
    return ecrecover(header, c.signatures)
}

验区块头

VerifyHeader函数用于验证区块头:

// VerifyHeader checks whether a header conforms to the consensus rules.
func (c *Clique) VerifyHeader(chain consensus.ChainHeaderReader, header *types.Header, seal bool) error {
    return c.verifyHeader(chain, header, nil)
}

从上述代码中可以看到,VerifyHeader内部调用的是私有的verifyHeader方法;与ethash类似,clique也提供了VerifyHeaders函数用于批量验证区块头信息:

// VerifyHeaders is similar to VerifyHeader, but verifies a batch of headers. The
// method returns a quit channel to abort the operations and a results channel to
// retrieve the async verifications (the order is that of the input slice).
func (c *Clique) VerifyHeaders(chain consensus.ChainHeaderReader, headers []*types.Header, seals []bool) (chan<- struct{}, <-chan error) {
    abort := make(chan struct{})
    results := make(chan error, len(headers))

    go func() {
        for i, header := range headers {
            err := c.verifyHeader(chain, header, headers[:i])

            select {
            case <-abort:
                return
            case results <- err:
            }
        }
    }()
    return abort, results
}

c.verifyHeader的具体实现如下所示,该方法主要验证区块头是否遵循共识规则,在这里主要做了以下检测:

  • 检查区块编号是否为nil

  • 检查区块头部时间戳是否大于当前时间戳(未来区块)

  • 检查是否为checkpoint(检查点)区块,若是则Coinbase必须等于零地址common.Address{}

  • 检测Nonce值是否合法(只能是0x00..0或0xff..f,且checkpoint块必须为0x00..0)

  • 检测extra-data是否包含vanity和signature两部分数据

  • 检测区块头部的MixDigest是否为零值

  • 检测区块的UncleHash是否为空叔块列表的哈希

  • 检查区块的难度是否为1或2

  • 检查硬分叉的部分字段信息

  • 检查关联(级联)字段

                  // verifyHeader checks whether a header conforms to the consensus rules.The// caller may optionally pass in a batch of parents (ascending order) to avoid// looking those up from the database. This is useful for concurrently verifying// a batch of new headers.func (c *Clique) verifyHeader(chain consensus.ChainHeaderReader, header *types.Header, parents []*types.Header) error {  if header.Number == nil {    return errUnknownBlock  }  number := header.Number.Uint64()
                  // Don\\\'t waste time checking blocks from the future if header.Time > uint64(time.Now().Unix()) { return consensus.ErrFutureBlock } // Checkpoint blocks need to enforce zero beneficiary checkpoint := (number % c.config.Epoch) == 0 if checkpoint && header.Coinbase != (common.Address{}) { return errInvalidCheckpointBeneficiary } // Nonces must be 0x00..0 or 0xff..f, zeroes enforced on checkpoints if !bytes.Equal(header.Nonce[:], nonceAuthVote) && !bytes.Equal(header.Nonce[:], nonceDropVote) { return errInvalidVote } if checkpoint && !bytes.Equal(header.Nonce[:], nonceDropVote) { return errInvalidCheckpointVote } // Check that the extra-data contains both the vanity and signature if len(header.Extra) < extraVanity { return errMissingVanity } if len(header.Extra) < extraVanity+extraSeal { return errMissingSignature } // Ensure that the extra-data contains a signer list on checkpoint, but none otherwise signersBytes := len(header.Extra) - extraVanity - extraSeal if !checkpoint && signersBytes != 0 { return errExtraSigners } if checkpoint && signersBytes%common.AddressLength != 0 { return errInvalidCheckpointSigners } // Ensure that the mix digest is zero as we don\\\'t have fork protection currently if header.MixDigest != (common.Hash{}) { return errInvalidMixDigest } // Ensure that the block doesn\\\'t contain any uncles which are meaningless in PoA if header.UncleHash != uncleHash { return errInvalidUncleHash } // Ensure that the block\\\'s difficulty is meaningful (may not be correct at this point) if number > 0 { if header.Difficulty == nil || (header.Difficulty.Cmp(diffInTurn) != 0 && header.Difficulty.Cmp(diffNoTurn) != 0) { return errInvalidDifficulty } } // If all checks passed, validate any special fields for hard forks if err := misc.VerifyForkHashes(chain.Config(), header, false); err != nil { return err } // All basic checks passed, verify cascading fields return c.verifyCascadingFields(chain, header, parents)}

当以上检查通过后,继续检查所有依赖于父区块的头部字段:

                  // filedir:go-ethereum-1.10.2\\\\consensus\\\\clique\\\\clique.go  L304// verifyCascadingFields verifies all the header fields that are not standalone,// rather depend on a batch of previous headers. The caller may optionally pass// in a batch of parents (ascending order) to avoid looking those up from the// database. This is useful for concurrently verifying a batch of new headers.func (c *Clique) verifyCascadingFields(chain consensus.ChainHeaderReader, header *types.Header, parents []*types.Header) error {  // The genesis block is the always valid dead-end  number := header.Number.Uint64()  if number == 0 {    return nil  }  // Ensure that the block\\\'s timestamp isn\\\'t too close to its parent  var parent *types.Header  if len(parents) > 0 {    parent = parents[len(parents)-1]  } else {    parent = chain.GetHeader(header.ParentHash, number-1)  }  if parent == nil || parent.Number.Uint64() != number-1 || parent.Hash() != header.ParentHash {    return consensus.ErrUnknownAncestor  }  if parent.Time+c.config.Period > header.Time {    return errInvalidTimestamp  }  // Retrieve the snapshot needed to verify this header and cache it  snap, err := c.snapshot(chain, number-1, header.ParentHash, parents)  if err != nil {    return err  }  // If the block is a checkpoint block, verify the signer list  if number%c.config.Epoch == 0 {    signers := make([]byte, len(snap.Signers)*common.AddressLength)    for i, signer := range snap.signers() {      copy(signers[i*common.AddressLength:], signer[:])    }    extraSuffix := len(header.Extra) - extraSeal    if !bytes.Equal(header.Extra[extraVanity:extraSuffix], signers) {      return errMismatchingCheckpointSigners    }  }  // All basic checks passed, verify the seal and return  return c.verifySeal(chain, header, parents)}
快照检索

snapshot函数的主要作用是统计并保存链上某段高度区间的投票信息和签名者列表,统计区间从某个checkpoint(包括genesis block)开始,到某个更高高度的block结束。在snapshot中有两个特殊的结构体:

  • Vote——代表一次投票的详细信息,包括谁给谁投的票、投的是加入票还是踢出票等

// Vote represents a single vote that an authorized signer made to modify the
// list of authorizations.
type Vote struct {
    Signer    common.Address `json:"signer"`    // Authorized signer that cast this vote
    Block     uint64         `json:"block"`     // Block number the vote was cast in (expire old votes)
    Address   common.Address `json:"address"`   // Account being voted on to change its authorization
    Authorize bool           `json:"authorize"` // Whether to authorize or deauthorize the voted account
}

  • Tally——投票结果统计

// Tally is a simple vote tally to keep the current score of votes. Votes that
// go against the proposal aren't counted since it's equivalent to not voting.
type Tally struct {
    Authorize bool `json:"authorize"` // Whether the vote is about authorizing or kicking someone
    Votes     int  `json:"votes"`     // Number of votes until now wanting to pass the proposal
}

snapshot的数据结构如下:

// Snapshot is the state of the authorization voting at a given point in time.
type Snapshot struct {
    config   *params.CliqueConfig // Consensus engine parameters to fine tune behavior
    sigcache *lru.ARCCache        // Cache of recent block signatures to speed up ecrecover

    Number  uint64                      `json:"number"`  // Block number where the snapshot was created
    Hash    common.Hash                 `json:"hash"`    // Block hash where the snapshot was created
    Signers map[common.Address]struct{} `json:"signers"` // Set of authorized signers at this moment
    Recents map[uint64]common.Address   `json:"recents"` // Set of recent signers for spam protections
    Votes   []*Vote                     `json:"votes"`   // List of votes cast in chronological order
    Tally   map[common.Address]Tally    `json:"tally"`   // Current vote tally to avoid recalculating
}
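在快照之上,投票生效的判定可以概括为"超过半数签名者同意":当Tally中某地址的票数超过len(Signers)/2时,对应的加入/踢出提案生效。下面是一个剥离了快照细节的计票示意,并非snapshot.apply的实际实现,函数名proposalPassed为本文虚构:

// 计票示意:当目标地址获得超过半数签名者的同意票时,提案通过(简化版,未处理重复投票、反向投票等细节)
func proposalPassed(signers map[common.Address]struct{}, tally map[common.Address]Tally, target common.Address) bool {
    t, ok := tally[target]
    if !ok {
        return false
    }
    return t.Votes > len(signers)/2
}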

The snapshot function is implemented as follows:

// snapshot retrieves the authorization snapshot at a given point in time.
func (c *Clique) snapshot(chain consensus.ChainHeaderReader, number uint64, hash common.Hash, parents []*types.Header) (*Snapshot, error) {
  // Search for a snapshot in memory or on disk for checkpoints
  var (
    headers []*types.Header
    snap    *Snapshot
  )
  for snap == nil {
    // If an in-memory snapshot was found, use that
    if s, ok := c.recents.Get(hash); ok {
      snap = s.(*Snapshot)
      break
    }
    // If an on-disk checkpoint snapshot can be found, use that
    if number%checkpointInterval == 0 {
      if s, err := loadSnapshot(c.config, c.signatures, c.db, hash); err == nil {
        log.Trace("Loaded voting snapshot from disk", "number", number, "hash", hash)
        snap = s
        break
      }
    }
    // If we're at the genesis, snapshot the initial state. Alternatively if we're
    // at a checkpoint block without a parent (light client CHT), or we have piled
    // up more headers than allowed to be reorged (chain reinit from a freezer),
    // consider the checkpoint trusted and snapshot it.
    if number == 0 || (number%c.config.Epoch == 0 && (len(headers) > params.FullImmutabilityThreshold || chain.GetHeaderByNumber(number-1) == nil)) {
      checkpoint := chain.GetHeaderByNumber(number)
      if checkpoint != nil {
        hash := checkpoint.Hash()

        signers := make([]common.Address, (len(checkpoint.Extra)-extraVanity-extraSeal)/common.AddressLength)
        for i := 0; i < len(signers); i++ {
          copy(signers[i][:], checkpoint.Extra[extraVanity+i*common.AddressLength:])
        }
        snap = newSnapshot(c.config, c.signatures, number, hash, signers) // Build a fresh snapshot
        if err := snap.store(c.db); err != nil {                          // Persist the snapshot
          return nil, err
        }
        log.Info("Stored checkpoint snapshot to disk", "number", number, "hash", hash)
        break
      }
    }
    // No snapshot for this header, gather the header and move backward
    var header *types.Header
    if len(parents) > 0 {
      // If we have explicit parents, pick from there (enforced)
      header = parents[len(parents)-1]
      if header.Hash() != hash || header.Number.Uint64() != number {
        return nil, consensus.ErrUnknownAncestor
      }
      parents = parents[:len(parents)-1]
    } else {
      // No explicit parents (or no more left), reach out to the database
      header = chain.GetHeader(hash, number)
      if header == nil {
        return nil, consensus.ErrUnknownAncestor
      }
    }
    headers = append(headers, header)
    number, hash = number-1, header.ParentHash
  }
  // Previous snapshot found, apply any pending headers on top of it
  for i := 0; i < len(headers)/2; i++ {
    headers[i], headers[len(headers)-1-i] = headers[len(headers)-1-i], headers[i]
  }
  snap, err := snap.apply(headers) // apply also prunes old votes so the voting window does not grow without bound
  if err != nil {
    return nil, err
  }
  c.recents.Add(snap.Hash, snap)

  // If we've generated a new checkpoint snapshot, save to disk
  if snap.Number%checkpointInterval == 0 && len(headers) > 0 {
    if err = snap.store(c.db); err != nil {
      return nil, err
    }
    log.Trace("Stored voting snapshot to disk", "number", snap.Number, "hash", snap.Hash)
  }
  return snap, err
}
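The control flow is easy to lose in the details: walk backwards from the requested block until some snapshot is found (memory, disk, or a trusted checkpoint), collect the headers passed on the way, reverse them into ascending order, and replay them on top of that snapshot. The standalone sketch below illustrates only that walk-back-then-apply shape, with toy types and a plain map standing in for all the caches; it is not the real implementation:

package main

import "fmt"

// Simplified stand-ins: a header only knows its own number, a "snapshot" only
// remembers the height it was built at.
type header struct{ number uint64 }
type snapshot struct{ number uint64 }

func main() {
  // Pretend a snapshot at height 4 is already cached somewhere.
  cached := map[uint64]*snapshot{4: {number: 4}}

  // Fake chain index: height -> header.
  chain := map[uint64]header{}
  for n := uint64(1); n <= 8; n++ {
    chain[n] = header{number: n}
  }

  // Walk backwards from height 8 until a snapshot is found, collecting headers.
  var headers []header
  var snap *snapshot
  number := uint64(8)
  for snap == nil {
    if s, ok := cached[number]; ok {
      snap = s
      break
    }
    headers = append(headers, chain[number])
    number--
  }

  // Reverse into ascending order, then "apply" the headers on top of the
  // snapshot (the real apply replays votes; here we only advance the height).
  for i, j := 0, len(headers)-1; i < j; i, j = i+1, j-1 {
    headers[i], headers[j] = headers[j], headers[i]
  }
  for _, h := range headers {
    snap.number = h.number
  }
  fmt.Println("snapshot now at height", snap.number) // 8
}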
Constructing a Snapshot

The newSnapshot function initializes a snapshot; its implementation is shown below:

// newSnapshot creates a new snapshot with the specified startup parameters. This
// method does not initialize the set of recent signers, so only ever use it for
// the genesis block.
func newSnapshot(config *params.CliqueConfig, sigcache *lru.ARCCache, number uint64, hash common.Hash, signers []common.Address) *Snapshot {
  snap := &Snapshot{
    config:   config,
    sigcache: sigcache,
    Number:   number,
    Hash:     hash,
    Signers:  make(map[common.Address]struct{}),
    Recents:  make(map[uint64]common.Address),
    Tally:    make(map[common.Address]Tally),
  }
  for _, signer := range signers {
    snap.Signers[signer] = struct{}{}
  }
  return snap
}
Loading a Snapshot

The loadSnapshot function loads an existing snapshot from the database; its implementation is shown below:

// loadSnapshot loads an existing snapshot from the database.
func loadSnapshot(config *params.CliqueConfig, sigcache *lru.ARCCache, db ethdb.Database, hash common.Hash) (*Snapshot, error) {
  blob, err := db.Get(append([]byte("clique-"), hash[:]...))
  if err != nil {
    return nil, err
  }
  snap := new(Snapshot)
  if err := json.Unmarshal(blob, snap); err != nil {
    return nil, err
  }
  snap.config = config
  snap.sigcache = sigcache

  return snap, nil
}
Storing a Snapshot

The store function persists a snapshot to the database:

// store inserts the snapshot into the database.
func (s *Snapshot) store(db ethdb.Database) error {
  blob, err := json.Marshal(s)
  if err != nil {
    return err
  }
  return db.Put(append([]byte("clique-"), s.Hash[:]...), blob)
}
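As the two functions above show, the on-disk layout is simply the JSON-encoded snapshot stored under the key "clique-" + block hash. The fragment below mimics that round trip with an in-memory map standing in for the database and a reduced snapshot struct defined only for this illustration:

package main

import (
  "encoding/json"
  "fmt"
)

// A reduced snapshot carrying only exported (serialized) fields.
type Snapshot struct {
  Number  uint64              `json:"number"`
  Hash    [32]byte            `json:"hash"`
  Signers map[string]struct{} `json:"signers"`
}

func main() {
  db := map[string][]byte{} // stand-in for the key-value database

  snap := Snapshot{
    Number:  1024,
    Signers: map[string]struct{}{"0x0000000000000000000000000000000000000001": {}},
  }

  // store: key = "clique-" + block hash, value = JSON blob
  key := append([]byte("clique-"), snap.Hash[:]...)
  blob, _ := json.Marshal(snap)
  db[string(key)] = blob

  // load: fetch the blob back and unmarshal it
  var loaded Snapshot
  _ = json.Unmarshal(db[string(key)], &loaded)
  fmt.Println("restored snapshot at height", loaded.Number) // 1024
}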
Copying a Snapshot

A snapshot is duplicated through the copy function:

// copy creates a deep copy of the snapshot, though not the individual votes.
func (s *Snapshot) copy() *Snapshot {
  cpy := &Snapshot{
    config:   s.config,
    sigcache: s.sigcache,
    Number:   s.Number,
    Hash:     s.Hash,
    Signers:  make(map[common.Address]struct{}),
    Recents:  make(map[uint64]common.Address),
    Votes:    make([]*Vote, len(s.Votes)),
    Tally:    make(map[common.Address]Tally),
  }
  for signer := range s.Signers {
    cpy.Signers[signer] = struct{}{}
  }
  for block, signer := range s.Recents {
    cpy.Recents[block] = signer
  }
  for address, tally := range s.Tally {
    cpy.Tally[address] = tally
  }
  copy(cpy.Votes, s.Votes)

  return cpy
}

Validating a Vote

// validVote returns whether it makes sense to cast the specified vote in the
// given snapshot context (e.g. don't try to add an already authorized signer).
func (s *Snapshot) validVote(address common.Address, authorize bool) bool {
  _, signer := s.Signers[address]
  return (signer && !authorize) || (!signer && authorize)
}
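validVote simply rejects redundant votes: it only makes sense to vote to add an address that is not yet a signer, or to drop one that is. A minimal standalone check of the four combinations, using a toy signer set in place of the real snapshot:

package main

import "fmt"

func validVote(signers map[string]bool, address string, authorize bool) bool {
  signer := signers[address]
  return (signer && !authorize) || (!signer && authorize)
}

func main() {
  signers := map[string]bool{"A": true} // A is currently authorized, B is not

  fmt.Println(validVote(signers, "A", true))  // false: A is already a signer
  fmt.Println(validVote(signers, "A", false)) // true:  voting to drop A makes sense
  fmt.Println(validVote(signers, "B", true))  // true:  voting to add B makes sense
  fmt.Println(validVote(signers, "B", false)) // false: B is not a signer to begin with
}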

Casting a Vote

// cast adds a new vote into the tally.
func (s *Snapshot) cast(address common.Address, authorize bool) bool {
  // Ensure the vote is meaningful
  if !s.validVote(address, authorize) {
    return false
  }
  // Cast the vote into an existing or new tally
  if old, ok := s.Tally[address]; ok {
    old.Votes++
    s.Tally[address] = old
  } else {
    s.Tally[address] = Tally{Authorize: authorize, Votes: 1}
  }
  return true
}

Removing a Vote

// uncast removes a previously cast vote from the tally.
func (s *Snapshot) uncast(address common.Address, authorize bool) bool {
  // If there's no tally, it's a dangling vote, just drop
  tally, ok := s.Tally[address]
  if !ok {
    return false
  }
  // Ensure we only revert counted votes
  if tally.Authorize != authorize {
    return false
  }
  // Otherwise revert the vote
  if tally.Votes > 1 {
    tally.Votes--
    s.Tally[address] = tally
  } else {
    delete(s.Tally, address)
  }
  return true
}
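Together, cast and uncast keep the tally map consistent with the chronological vote list. The toy example below reproduces that bookkeeping with plain string keys (the shape of Tally is the same, but this is a standalone sketch, not the snapshot methods themselves): it casts two votes for the same proposal and then retracts one.

package main

import "fmt"

type Tally struct {
  Authorize bool
  Votes     int
}

func cast(tally map[string]Tally, address string, authorize bool) {
  if old, ok := tally[address]; ok {
    old.Votes++
    tally[address] = old
    return
  }
  tally[address] = Tally{Authorize: authorize, Votes: 1}
}

func uncast(tally map[string]Tally, address string, authorize bool) {
  t, ok := tally[address]
  if !ok || t.Authorize != authorize {
    return // dangling or mismatched vote: nothing to revert
  }
  if t.Votes > 1 {
    t.Votes--
    tally[address] = t
  } else {
    delete(tally, address)
  }
}

func main() {
  tally := map[string]Tally{}

  cast(tally, "C", true)   // first vote to authorize C
  cast(tally, "C", true)   // second vote
  uncast(tally, "C", true) // one signer changes its mind

  fmt.Printf("%+v\n", tally["C"]) // {Authorize:true Votes:1}
}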
Creating an Authorization Snapshot

apply creates a new authorization snapshot by applying the given headers on top of the original one:

// filedir:go-ethereum-1.10.2\consensus\clique\snapshot.go  L182
// apply creates a new authorization snapshot by applying the given headers to
// the original one.
func (s *Snapshot) apply(headers []*types.Header) (*Snapshot, error) {
  // Allow passing in no headers for cleaner code
  if len(headers) == 0 {
    return s, nil
  }
  // Sanity check that the headers can be applied
  for i := 0; i < len(headers)-1; i++ {
    if headers[i+1].Number.Uint64() != headers[i].Number.Uint64()+1 {
      return nil, errInvalidVotingChain
    }
  }
  if headers[0].Number.Uint64() != s.Number+1 {
    return nil, errInvalidVotingChain
  }
  // Iterate through the headers and create a new snapshot
  snap := s.copy()

  var (
    start  = time.Now()
    logged = time.Now()
  )
  for i, header := range headers {
    // Remove any votes on checkpoint blocks
    number := header.Number.Uint64()
    if number%s.config.Epoch == 0 { // On an epoch boundary, reset the votes and the tally
      snap.Votes = nil
      snap.Tally = make(map[common.Address]Tally)
    }
    // Delete the oldest signer from the recent list to allow it signing again
    if limit := uint64(len(snap.Signers)/2 + 1); number >= limit {
      delete(snap.Recents, number-limit)
    }
    // Resolve the authorization key and check against signers
    signer, err := ecrecover(header, s.sigcache) // Recover the signer address from the header signature
    if err != nil {
      return nil, err
    }
    if _, ok := snap.Signers[signer]; !ok {
      return nil, errUnauthorizedSigner
    }
    for _, recent := range snap.Recents {
      if recent == signer {
        return nil, errRecentlySigned
      }
    }
    snap.Recents[number] = signer

    // Header authorized, discard any previous votes from the signer
    for i, vote := range snap.Votes {
      if vote.Signer == signer && vote.Address == header.Coinbase {
        // Uncast the vote from the cached tally
        snap.uncast(vote.Address, vote.Authorize)

        // Uncast the vote from the chronological list
        snap.Votes = append(snap.Votes[:i], snap.Votes[i+1:]...)
        break // only one vote allowed
      }
    }
    // Tally up the new vote from the signer
    var authorize bool
    switch {
    case bytes.Equal(header.Nonce[:], nonceAuthVote):
      authorize = true
    case bytes.Equal(header.Nonce[:], nonceDropVote):
      authorize = false
    default:
      return nil, errInvalidVote
    }
    if snap.cast(header.Coinbase, authorize) {
      snap.Votes = append(snap.Votes, &Vote{
        Signer:    signer,
        Block:     number,
        Address:   header.Coinbase,
        Authorize: authorize,
      })
    }
    // If the vote passed (more than half of the current signers agree), update the list of signers
    if tally := snap.Tally[header.Coinbase]; tally.Votes > len(snap.Signers)/2 {
      if tally.Authorize {
        snap.Signers[header.Coinbase] = struct{}{}
      } else {
        delete(snap.Signers, header.Coinbase)

        // Signer list shrunk, delete any leftover recent caches
        if limit := uint64(len(snap.Signers)/2 + 1); number >= limit {
          delete(snap.Recents, number-limit)
        }
        // Discard any previous votes the deauthorized signer cast
        for i := 0; i < len(snap.Votes); i++ {
          if snap.Votes[i].Signer == header.Coinbase {
            // Uncast the vote from the cached tally
            snap.uncast(snap.Votes[i].Address, snap.Votes[i].Authorize)

            // Uncast the vote from the chronological list
            snap.Votes = append(snap.Votes[:i], snap.Votes[i+1:]...)

            i--
          }
        }
      }
      // Discard any previous votes around the just changed account
      for i := 0; i < len(snap.Votes); i++ {
        if snap.Votes[i].Address == header.Coinbase {
          snap.Votes = append(snap.Votes[:i], snap.Votes[i+1:]...)
          i--
        }
      }
      delete(snap.Tally, header.Coinbase)
    }
    // If we're taking too much time (ecrecover), notify the user once a while
    if time.Since(logged) > 8*time.Second {
      log.Info("Reconstructing voting history", "processed", i, "total", len(headers), "elapsed", common.PrettyDuration(time.Since(start)))
      logged = time.Now()
    }
  }
  if time.Since(start) > 8*time.Second {
    log.Info("Reconstructed voting history", "processed", len(headers), "elapsed", common.PrettyDuration(time.Since(start)))
  }
  snap.Number += uint64(len(headers))
  snap.Hash = headers[len(headers)-1].Hash()

  return snap, nil
}
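The key rule inside apply is the majority check tally.Votes > len(snap.Signers)/2: with N current signers a proposal passes once it has collected at least N/2+1 votes. A quick standalone illustration of that threshold:

package main

import "fmt"

// passes reports whether a proposal with the given number of votes clears the
// Clique majority threshold for a signer set of the given size.
func passes(votes, signerCount int) bool {
  return votes > signerCount/2
}

func main() {
  for _, signers := range []int{3, 4, 5} {
    needed := signers/2 + 1
    fmt.Printf("%d signers: need %d votes (do 2 votes pass? %v)\n",
      signers, needed, passes(2, signers))
  }
  // 3 signers: need 2 votes (do 2 votes pass? true)
  // 4 signers: need 3 votes (do 2 votes pass? false)
  // 5 signers: need 3 votes (do 2 votes pass? false)
}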
In-turn Block Production

The inturn function decides whether a signer is the in-turn producer for a given block: it checks whether the block height maps onto that signer's position in the sorted signer list:

// inturn returns if a signer at a given block height is in-turn or not.
func (s *Snapshot) inturn(number uint64, signer common.Address) bool {
  signers, offset := s.signers(), 0
  for offset < len(signers) && signers[offset] != signer {
    offset++
  }
  return (number % uint64(len(signers))) == uint64(offset)
}
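Because the signer list is sorted, the in-turn signer rotates deterministically with the block number: the signer at index number % len(signers) is in turn. A standalone sketch with three made-up signer identifiers (strings stand in for addresses here):

package main

import (
  "fmt"
  "sort"
)

func inturn(number uint64, signer string, signerSet map[string]struct{}) bool {
  // Reproduce signers(): sort the authorized signers in ascending order.
  signers := make([]string, 0, len(signerSet))
  for s := range signerSet {
    signers = append(signers, s)
  }
  sort.Strings(signers)

  offset := 0
  for offset < len(signers) && signers[offset] != signer {
    offset++
  }
  return number%uint64(len(signers)) == uint64(offset)
}

func main() {
  set := map[string]struct{}{"0xaa": {}, "0xbb": {}, "0xcc": {}}
  for number := uint64(10); number <= 12; number++ {
    for _, s := range []string{"0xaa", "0xbb", "0xcc"} {
      if inturn(number, s, set) {
        fmt.Printf("block %d: %s is in turn\n", number, s)
      }
    }
  }
  // block 10: 0xbb is in turn (10 % 3 == 1)
  // block 11: 0xcc is in turn
  // block 12: 0xaa is in turn
}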
Signer List

signers retrieves the list of authorized signers in ascending order:

// signers retrieves the list of authorized signers in ascending order.
func (s *Snapshot) signers() []common.Address {
  sigs := make([]common.Address, 0, len(s.Signers))
  for sig := range s.Signers {
    sigs = append(sigs, sig)
  }
  sort.Sort(signersAscending(sigs))
  return sigs
}

Uncle Verification

// VerifyUncles implements consensus.Engine, always returning an error for any
// uncles as this consensus mechanism doesn't permit uncles.
func (c *Clique) VerifyUncles(chain consensus.ChainReader, block *types.Block) error {
  if len(block.Uncles()) > 0 {
    return errors.New("uncles not allowed")
  }
  return nil
}
Signature Verification

The verifySeal function checks whether the signature in a block header satisfies the consensus protocol requirements:

// filedir: go-ethereum-1.10.2\consensus\clique\clique.go L436
// verifySeal checks whether the signature contained in the header satisfies the
// consensus protocol requirements. The method accepts an optional list of parent
// headers that aren't yet part of the local blockchain to generate the snapshots
// from.
func (c *Clique) verifySeal(chain consensus.ChainHeaderReader, header *types.Header, parents []*types.Header) error {
  // Verifying the genesis block is not supported
  number := header.Number.Uint64()
  if number == 0 {
    return errUnknownBlock
  }
  // Retrieve the snapshot needed to verify this header and cache it
  snap, err := c.snapshot(chain, number-1, header.ParentHash, parents)
  if err != nil {
    return err
  }

  // Resolve the authorization key and check against signers
  signer, err := ecrecover(header, c.signatures)
  if err != nil {
    return err
  }
  if _, ok := snap.Signers[signer]; !ok {
    return errUnauthorizedSigner
  }
  for seen, recent := range snap.Recents {
    if recent == signer {
      // Signer is among recents, only fail if the current block doesn't shift it out
      if limit := uint64(len(snap.Signers)/2 + 1); seen > number-limit {
        return errRecentlySigned
      }
    }
  }
  // Ensure that the difficulty corresponds to the turn-ness of the signer
  if !c.fakeDiff {
    inturn := snap.inturn(header.Number.Uint64(), signer)
    if inturn && header.Difficulty.Cmp(diffInTurn) != 0 {
      return errWrongDifficulty
    }
    if !inturn && header.Difficulty.Cmp(diffNoTurn) != 0 {
      return errWrongDifficulty
    }
  }
  return nil
}
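The difficulty check at the end ties a header's Difficulty field to its turn-ness: Clique uses the constants diffInTurn = 2 and diffNoTurn = 1. Below is a minimal standalone version of just that rule (the helper name checkDifficulty is ours, not go-ethereum's):

package main

import (
  "errors"
  "fmt"
  "math/big"
)

var (
  diffInTurn = big.NewInt(2) // block difficulty for in-turn signatures
  diffNoTurn = big.NewInt(1) // block difficulty for out-of-turn signatures

  errWrongDifficulty = errors.New("wrong difficulty")
)

// checkDifficulty mirrors the turn-ness check: an in-turn block must carry
// difficulty 2, an out-of-turn block difficulty 1.
func checkDifficulty(inturn bool, difficulty *big.Int) error {
  if inturn && difficulty.Cmp(diffInTurn) != 0 {
    return errWrongDifficulty
  }
  if !inturn && difficulty.Cmp(diffNoTurn) != 0 {
    return errWrongDifficulty
  }
  return nil
}

func main() {
  fmt.Println(checkDifficulty(true, big.NewInt(2)))  // <nil>
  fmt.Println(checkDifficulty(false, big.NewInt(2))) // wrong difficulty
}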

Preparation

Prepare implements the consensus engine interface: it fills in all the consensus fields of a header so that transactions can be run on top of it:

// Prepare implements consensus.Engine, preparing all the consensus fields of the
// header for running the transactions on top.
func (c *Clique) Prepare(chain consensus.ChainHeaderReader, header *types.Header) error {
  // If the block isn't a checkpoint, cast a random vote (good enough for now)
  header.Coinbase = common.Address{}
  header.Nonce = types.BlockNonce{}

  number := header.Number.Uint64()
  // Assemble the voting snapshot to check which votes make sense
  snap, err := c.snapshot(chain, number-1, header.ParentHash, nil)
  if err != nil {
    return err
  }
  if number%c.config.Epoch != 0 { // Not an epoch boundary (not a checkpoint), so fill in the voting fields
    c.lock.RLock()

    // Gather all the proposals that make sense voting on
    addresses := make([]common.Address, 0, len(c.proposals))
    for address, authorize := range c.proposals {
      if snap.validVote(address, authorize) {
        addresses = append(addresses, address)
      }
    }
    // If there's pending proposals, cast a vote on them
    // (the vote is encoded in the Coinbase and Nonce fields)
    if len(addresses) > 0 {
      header.Coinbase = addresses[rand.Intn(len(addresses))]
      if c.proposals[header.Coinbase] {
        copy(header.Nonce[:], nonceAuthVote)
      } else {
        copy(header.Nonce[:], nonceDropVote)
      }
    }
    c.lock.RUnlock()
  }
  // Set the correct difficulty
  header.Difficulty = calcDifficulty(snap, c.signer)

  // Ensure the extra data has all its components
  if len(header.Extra) < extraVanity {
    header.Extra = append(header.Extra, bytes.Repeat([]byte{0x00}, extraVanity-len(header.Extra))...)
  }
  header.Extra = header.Extra[:extraVanity]

  if number%c.config.Epoch == 0 { // Epoch boundary (checkpoint block), so embed the signer list
    for _, signer := range snap.signers() {
      header.Extra = append(header.Extra, signer[:]...)
    }
  }
  header.Extra = append(header.Extra, make([]byte, extraSeal)...)

  // Mix digest is reserved for now, set to empty
  header.MixDigest = common.Hash{}

  // Ensure the timestamp has the correct delay
  parent := chain.GetHeader(header.ParentHash, number-1)
  if parent == nil {
    return consensus.ErrUnknownAncestor
  }
  header.Time = parent.Time + c.config.Period
  if header.Time < uint64(time.Now().Unix()) {
    header.Time = uint64(time.Now().Unix())
  }
  return nil
}
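Prepare lays out the Extra field as 32 bytes of vanity, followed by the 20-byte signer addresses (only on checkpoint blocks), followed by 65 zero bytes reserved for the seal signature. The sketch below rebuilds that layout with plain byte slices; the sizes mirror the extraVanity and extraSeal constants in clique, and buildExtra is a hypothetical helper written just for this illustration:

package main

import "fmt"

const (
  extraVanity   = 32 // bytes reserved for signer vanity
  extraSeal     = 65 // bytes reserved for the seal signature
  addressLength = 20
)

// buildExtra assembles a Clique-style extra-data field: vanity | [signers] | seal.
// The signer section is only present on checkpoint (epoch boundary) blocks.
func buildExtra(vanity []byte, signers [][addressLength]byte, checkpoint bool) []byte {
  extra := make([]byte, 0, extraVanity+len(signers)*addressLength+extraSeal)

  // Pad or truncate the vanity to exactly extraVanity bytes.
  v := make([]byte, extraVanity)
  copy(v, vanity)
  extra = append(extra, v...)

  if checkpoint {
    for _, s := range signers {
      extra = append(extra, s[:]...)
    }
  }
  // Reserve room for the 65-byte signature that Seal fills in later.
  extra = append(extra, make([]byte, extraSeal)...)
  return extra
}

func main() {
  signers := [][addressLength]byte{{0x01}, {0x02}}
  fmt.Println(len(buildExtra([]byte("geth"), signers, false))) // 97  = 32 + 65
  fmt.Println(len(buildExtra([]byte("geth"), signers, true)))  // 137 = 32 + 2*20 + 65
}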
Finalization and Rewards

The FinalizeAndAssemble function computes the state trie (MPT) root and the uncle hash (there are no uncles in PoA), then assembles the final block:

// FinalizeAndAssemble implements consensus.Engine, ensuring no uncles are set,
// nor block rewards given, and returns the final block.
func (c *Clique) FinalizeAndAssemble(chain consensus.ChainHeaderReader, header *types.Header, state *state.StateDB, txs []*types.Transaction, uncles []*types.Header, receipts []*types.Receipt) (*types.Block, error) {
  // Finalize block
  c.Finalize(chain, header, state, txs, uncles)

  // Assemble and return the final block for sealing
  return types.NewBlock(header, txs, nil, receipts, trie.NewStackTrie(nil)), nil
}

The Finalize function is shown below:

// Finalize implements consensus.Engine, ensuring no uncles are set, nor block
// rewards given.
func (c *Clique) Finalize(chain consensus.ChainHeaderReader, header *types.Header, state *state.StateDB, txs []*types.Transaction, uncles []*types.Header) {
  // No block rewards in PoA, so the state remains as is and uncles are dropped
  header.Root = state.IntermediateRoot(chain.Config().IsEIP158(header.Number))
  header.UncleHash = types.CalcUncleHash(nil)
}
Injecting the Signing Key

Authorize injects the signing credentials (the signer address and a signing callback) into the consensus engine so that it can mint new blocks:

// Authorize injects a private key into the consensus engine to mint new blocks
// with.
func (c *Clique) Authorize(signer common.Address, signFn SignerFn) {
  c.lock.Lock()
  defer c.lock.Unlock()

  c.signer = signer
  c.signFn = signFn
}
Block Sealing

The Seal function attempts to create a sealed block using the local signing credentials:

// filedir:go-ethereum-1.10.2\consensus\clique\clique.go  L574
// Seal implements consensus.Engine, attempting to create a sealed block using
// the local signing credentials.
func (c *Clique) Seal(chain consensus.ChainHeaderReader, block *types.Block, results chan<- *types.Block, stop <-chan struct{}) error {
  header := block.Header()

  // Sealing the genesis block is not supported
  number := header.Number.Uint64()
  if number == 0 {
    return errUnknownBlock
  }
  // For 0-period chains, refuse to seal empty blocks (no reward but would spin sealing)
  if c.config.Period == 0 && len(block.Transactions()) == 0 {
    log.Info("Sealing paused, waiting for transactions")
    return nil
  }
  // Don't hold the signer fields for the entire sealing procedure
  c.lock.RLock()
  signer, signFn := c.signer, c.signFn
  c.lock.RUnlock()

  // Bail out if we're unauthorized to sign a block
  snap, err := c.snapshot(chain, number-1, header.ParentHash, nil)
  if err != nil {
    return err
  }
  if _, authorized := snap.Signers[signer]; !authorized {
    return errUnauthorizedSigner
  }
  // If we're amongst the recent signers, wait for the next block
  for seen, recent := range snap.Recents {
    if recent == signer {
      // Signer is among recents, only wait if the current block doesn't shift it out
      if limit := uint64(len(snap.Signers)/2 + 1); number < limit || seen > number-limit {
        log.Info("Signed recently, must wait for others")
        return nil
      }
    }
  }
  // Sweet, the protocol permits us to sign the block, wait for our time
  delay := time.Unix(int64(header.Time), 0).Sub(time.Now()) // nolint: gosimple
  if header.Difficulty.Cmp(diffNoTurn) == 0 {
    // It's not our turn explicitly to sign, delay it a bit
    wiggle := time.Duration(len(snap.Signers)/2+1) * wiggleTime
    delay += time.Duration(rand.Int63n(int64(wiggle)))

    log.Trace("Out-of-turn signing requested", "wiggle", common.PrettyDuration(wiggle))
  }
  // Sign all the things!
  sighash, err := signFn(accounts.Account{Address: signer}, accounts.MimetypeClique, CliqueRLP(header))
  if err != nil {
    return err
  }
  copy(header.Extra[len(header.Extra)-extraSeal:], sighash)
  // Wait until sealing is terminated or delay timeout.
  log.Trace("Waiting for slot to sign and propagate", "delay", common.PrettyDuration(delay))
  go func() {
    select {
    case <-stop:
      return
    case <-time.After(delay):
    }

    select {
    case results <- block.WithSeal(header):
    default:
      log.Warn("Sealing result is not read by miner", "sealhash", SealHash(header))
    }
  }()

  return nil
}
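For an out-of-turn signer, Seal adds a random wiggle of up to (len(signers)/2+1) * wiggleTime on top of the base delay, so that the in-turn block normally wins the propagation race. Below is a standalone sketch of just that delay computation; wiggleTime is 500ms as in clique, the header time is made up, and sealDelay is a hypothetical helper, not the real method:

package main

import (
  "fmt"
  "math/rand"
  "time"
)

const wiggleTime = 500 * time.Millisecond // delay unit per out-of-turn signer

// sealDelay computes how long a signer should wait before publishing a block:
// until the header timestamp, plus a random wiggle when signing out of turn.
func sealDelay(headerTime time.Time, signerCount int, inTurn bool) time.Duration {
  delay := time.Until(headerTime)
  if !inTurn {
    wiggle := time.Duration(signerCount/2+1) * wiggleTime
    delay += time.Duration(rand.Int63n(int64(wiggle)))
  }
  return delay
}

func main() {
  headerTime := time.Now().Add(2 * time.Second) // hypothetical slot 2s from now
  fmt.Println("in-turn delay:     ", sealDelay(headerTime, 5, true))
  fmt.Println("out-of-turn delay: ", sealDelay(headerTime, 5, false)) // up to 1.5s extra
}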

