Video-based online lessons for this book are available on NetEase Cloud Classroom: https://study.163.com/course/introduction/1209401942.htm

8.1 Introduction to the End-2-End Example

Fabric provides an official example that stands up an end-to-end Fabric network. The network has two organizations (organizations), each organization runs two peer nodes (Peer), and the ordering (Orderer) service is implemented on top of Kafka.
Running the End-2-End example requires two tools, cryptogen and configtxgen, which provide the certificate-based identity verification and access-control material the Fabric network needs.

  • cryptogen: generates the x509 certificates used to identify and authenticate the various components in the network.
  • configtxgen: generates the configuration artifacts required for channels and blocks.

The two tools can be built with the commands below. In this End-2-End example they are already integrated into generateArtifacts.sh and are invoked automatically when it runs, so no manual commands are needed; the binaries are placed in $GOPATH/src/github.com/hyperledger/fabric/release/linux-amd64/bin.

# cd $GOPATH/src/github.com/hyperledger/fabric
# make release
# cd $GOPATH/src/github.com/hyperledger/fabric/release/linux-amd64/bin

Alternatively, prebuilt binaries can be downloaded from https://nexus.hyperledger.org/content/repositories/releases/org/hyperledger/fabric/hyperledger-fabric/. This book uses v1.4, for which the file is hyperledger-fabric-linux-amd64-1.4.0.tar.gz; unpack it and take the bin directory.
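For reference, generateArtifacts.sh drives the two binaries roughly as follows. This is a hedged sketch of what the script does rather than a copy of it (paths, profile names and the channel name mychannel follow the e2e_cli defaults):

# export FABRIC_CFG_PATH=$PWD
# cryptogen generate --config=./crypto-config.yaml
# configtxgen -profile TwoOrgsOrdererGenesis -outputBlock ./channel-artifacts/genesis.block
# configtxgen -profile TwoOrgsChannel -outputCreateChannelTx ./channel-artifacts/channel.tx -channelID mychannel
# configtxgen -profile TwoOrgsChannel -outputAnchorPeersUpdate ./channel-artifacts/Org1MSPanchors.tx -channelID mychannel -asOrg Org1MSP
# configtxgen -profile TwoOrgsChannel -outputAnchorPeersUpdate ./channel-artifacts/Org2MSPanchors.tx -channelID mychannel -asOrg Org2MSP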

8.2 Running the End-2-End Example
1. Copy the e2e_cli source files

Because the End-2-End example was removed from the Fabric repository starting with Fabric 1.4, the e2e_cli directory has to be copied manually into $GOPATH/src/github.com/hyperledger/fabric/examples.
The End-2-End source files can be obtained from the "书籍对应源码/第八章 官方End-2-End运行" directory at https://github.com/dragon-lin/bookfile.
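If you fetch the sources with git, the copy step looks roughly like this (the sub-directory layout inside the repository may differ, so treat the path as a placeholder and adjust it after cloning):

# git clone https://github.com/dragon-lin/bookfile.git
# cp -r "bookfile/书籍对应源码/第八章 官方End-2-End运行/e2e_cli" $GOPATH/src/github.com/hyperledger/fabric/examples/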

2. Make the files executable

# chmod -R +x $GOPATH/src/github.com/hyperledger/fabric/examples/e2e_cli

3. Run e2e_cli

# docker stop $(docker ps -a -q)
# docker rm $(docker ps -a -q)
# cd $GOPATH/src/github.com/hyperledger/fabric/examples/e2e_cli
# ./network_setup.sh up

When start-e2e is printed, the run has started, as shown below:

Figure: the End-2-End run starting

When END-E2E is printed, the run has completed successfully, as shown below:

Figure: the End-2-End run completed successfully
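Before a re-run, or when you are finished, tear the network down. The first command below is part of the example itself; the other two are optional, generic Docker cleanup for leftover containers and the dev-* chaincode images:

# ./network_setup.sh down
# docker rm -f $(docker ps -aq)
# docker images | grep "dev-" | awk '{print $3}' | xargs docker rmi -f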

8.3 End-2-End Example Analysis
8.3.1 Example Architecture
       The End-2-End example uses 3 ZooKeeper nodes, 4 Kafka brokers and 1 orderer for the ordering service, and contains two organizations, Org1 and Org2, each with two peers, peer0 and peer1. The structure is shown below:


Figure: End-2-End architecture

All of this is configured in docker-compose-cli.yaml, in the order zookeeper, kafka, orderer, peer and cli: the ZooKeeper cluster is started first, then the Kafka cluster, and finally the orderer and peers; this startup order must be respected. The test logic itself is collected in script.sh, which runs every step automatically until success is reported. The steps are listed below (the sketch after this list shows how these helper functions target a specific peer):
   1. Verify that the ordering (orderer) service is available, function: checkOSNAvailability
   2. Create the channel, function: createChannel
   3. Join the channel, function: joinChannel
   4. Update the anchor peer of Org1, function: updateAnchorPeers
   5. Update the anchor peer of Org2, function: updateAnchorPeers
   6. Install the chaincode on peer0 of Org1, function: installChaincode
   7. Install the chaincode on peer0 of Org2, function: installChaincode
   8. Instantiate the chaincode on peer0 of Org2, function: instantiateChaincode
   9. Query the chaincode on peer0 of Org1, function: chaincodeQuery
   10. Invoke a transaction that transfers 10 from a to b, endorsed by peer0 of Org1 and peer0 of Org2, function: chaincodeInvoke
   11. Install the chaincode on peer1 of Org2, function: installChaincode
   12. Query the chaincode on peer1 of Org2, function: chaincodeQuery
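Each helper takes a peer index and an org index and points the peer CLI at the corresponding node by exporting environment variables before running a peer command. The snippet below is a simplified sketch in the spirit of script.sh's setGlobals helper, not a copy of it; the port and the channel name mychannel are assumptions based on the compose files shown later in this chapter:

CRYPTO=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto   # path where the cli container mounts crypto-config

setGlobals () {
  local PEER=$1 ORG=$2
  export CORE_PEER_LOCALMSPID="Org${ORG}MSP"
  export CORE_PEER_TLS_ROOTCERT_FILE=$CRYPTO/peerOrganizations/org${ORG}.example.com/peers/peer0.org${ORG}.example.com/tls/ca.crt
  export CORE_PEER_MSPCONFIGPATH=$CRYPTO/peerOrganizations/org${ORG}.example.com/users/Admin@org${ORG}.example.com/msp
  export CORE_PEER_ADDRESS=peer${PEER}.org${ORG}.example.com:7051   # port assumed; the real script hardcodes the compose-file values
}

# Example: query peer0 of Org1
setGlobals 0 1
peer chaincode query -C mychannel -n mycc -c '{"Args":["query","a"]}'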
8.3.2 File Structure
      All files of the End-2-End example live under fabric/examples/e2e_cli; the layout is shown below:


Figure: file structure

File (or directory) descriptions:

  • base: the common parts factored out of the configuration; it contains two files, docker-compose-base.yaml and peer-base.yaml
  • channel-artifacts: the generated channel and genesis-block artifacts, namely channel.tx, genesis.block, Org1MSPanchors.tx and Org2MSPanchors.tx
  • crypto-config: the generated keys and certificates
  • scripts: contains a single file, script.sh, which bundles all of the example's steps and runs them automatically to completion
  • configtx.yaml: channel configuration file
  • crypto-config.yaml: configuration file for generating the keys and certificates
  • docker-compose-cli.yaml: Docker configuration for running the Fabric network
  • download-dockerimages.sh: script that downloads the Fabric Docker images
  • generateArtifacts.sh: script that generates the keys and certificates
  • network_setup.sh: entry point for running the example

8.3.3 Execution Flow
        Once the basic Fabric environment is in place, the End-2-End example starts from network_setup.sh. It calls generateArtifacts.sh to generate the keys, certificates and related artifacts, then uses Docker to start the zookeeper, kafka, orderer, peer and cli containers according to docker-compose-cli.yaml, and finally runs script.sh inside the cli container, which performs channel creation, channel join, chaincode installation, chaincode instantiation, an invoke and queries in a batch. If no error occurs, the whole sequence runs automatically, step by step, until END-E2E is printed to indicate success.
The detailed flow is as follows:


Figure: detailed flow chart

Flow description:
1. In the e2e_cli directory, network_setup.sh up starts the run and network_setup.sh down tears it down;
2. network_setup.sh up first checks whether the crypto-config directory exists; if it does not, generateArtifacts.sh is called to generate the keys and certificates, and the Fabric network is then started with docker-compose -f $COMPOSE_FILE up -d;
3. Once the Fabric network is up, script.sh runs automatically and executes the following steps in order:
    1) print start-e2e: indicates the example is about to run;
    2) checkOSNAvailability: runs peer channel fetch 0 0_block.pb -o orderer.example.com:7050 -c "$ORDERER_SYSCHAN_ID" --tls --cafile $ORDERER_CA >&log.txt to verify that the ordering (orderer) service is available;
    3) createChannel: runs peer channel create -o orderer.example.com:7050 -c $CHANNEL_NAME -f ./channel-artifacts/channel.tx --tls --cafile $ORDERER_CA >&log.txt to create the channel;
    4) joinChannel: runs peer channel join -b $CHANNEL_NAME.block >&log.txt to join all four peers to the channel;
    5) updateAnchorPeers 0 1: runs peer channel update -o orderer.example.com:7050 -c $CHANNEL_NAME -f ./channel-artifacts/${CORE_PEER_LOCALMSPID}anchors.tx --tls --cafile $ORDERER_CA >&log.txt to set peer0 as the anchor peer of Org1;
    6) updateAnchorPeers 0 2: runs peer channel update -o orderer.example.com:7050 -c $CHANNEL_NAME -f ./channel-artifacts/${CORE_PEER_LOCALMSPID}anchors.tx --tls --cafile $ORDERER_CA >&log.txt to set peer0 as the anchor peer of Org2;
    7) installChaincode 0 1: runs peer chaincode install -n mycc -v 1.0 -p github.com/hyperledger/fabric/examples/chaincode/go/example02/cmd >&log.txt to install the chaincode on peer0 of Org1;
    8) installChaincode 0 2: runs peer chaincode install -n mycc -v 1.0 -p github.com/hyperledger/fabric/examples/chaincode/go/example02/cmd >&log.txt to install the chaincode on peer0 of Org2;
    9) instantiateChaincode 0 2: runs peer chaincode instantiate -o orderer.example.com:7050 --tls --cafile $ORDERER_CA -C $CHANNEL_NAME -n mycc -v 1.0 -c '{"Args":["init","a","100","b","200"]}' -P "AND ('Org1MSP.peer','Org2MSP.peer')" >&log.txt to instantiate the chaincode on peer0 of Org2, initializing a to 100 and b to 200;
    10) chaincodeQuery 0 1 100: runs peer chaincode query -C $CHANNEL_NAME -n mycc -c '{"Args":["query","a"]}' >&log.txt on peer0 of Org1 to query a and check that the value is 100;
    11) chaincodeInvoke 0 1 0 2: runs peer chaincode invoke -o orderer.example.com:7050 --tls --cafile $ORDERER_CA -C $CHANNEL_NAME -n mycc $PEER_CONN_PARMS -c '{"Args":["invoke","a","b","10"]}' >&log.txt to transfer 10 from a to b;
    12) installChaincode 1 2: runs peer chaincode install -n mycc -v 1.0 -p github.com/hyperledger/fabric/examples/chaincode/go/example02/cmd >&log.txt to install the chaincode on peer1 of Org2;
    13) chaincodeQuery 1 2 90: runs peer chaincode query -C $CHANNEL_NAME -n mycc -c '{"Args":["query","a"]}' >&log.txt on peer1 of Org2 to query a and check that the value is 90;
    14) print end-e2e: if all of the above steps completed without error, end-e2e is printed to indicate a successful run;
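After the run finishes you can verify the ledger state yourself from the cli container; mychannel is the default channel name used by network_setup.sh, so adjust it if you changed CHANNEL_NAME:

# docker exec -it cli bash
# peer chaincode query -C mychannel -n mycc -c '{"Args":["query","a"]}'

The expected output after the scripted invoke is 90.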
8.3.4 The Chaincode
      The chaincode is written in Go. It stores the values a and b and transfers value between them, and consists mainly of four functions: Init (initialization), Invoke (transaction dispatch), delete (deletion) and query (query). The code is as follows:

/*
Copyright IBM Corp. All Rights Reserved.

SPDX-License-Identifier: Apache-2.0
*/

package example02

import (
	"fmt"
	"strconv"

	"github.com/hyperledger/fabric/core/chaincode/shim"
	pb "github.com/hyperledger/fabric/protos/peer"
)

// SimpleChaincode example simple Chaincode implementation
type SimpleChaincode struct {
}

func (t *SimpleChaincode) Init(stub shim.ChaincodeStubInterface) pb.Response {
	fmt.Println("ex02 Init")
	_, args := stub.GetFunctionAndParameters()
	var A, B string    // Entities
	var Aval, Bval int // Asset holdings
	var err error

	if len(args) != 4 {
		return shim.Error("Incorrect number of arguments. Expecting 4")
	}

	// Initialize the chaincode
	A = args[0]
	Aval, err = strconv.Atoi(args[1])
	if err != nil {
		return shim.Error("Expecting integer value for asset holding")
	}
	B = args[2]
	Bval, err = strconv.Atoi(args[3])
	if err != nil {
		return shim.Error("Expecting integer value for asset holding")
	}
	fmt.Printf("Aval = %d, Bval = %d\n", Aval, Bval)

	// Write the state to the ledger
	err = stub.PutState(A, []byte(strconv.Itoa(Aval)))
	if err != nil {
		return shim.Error(err.Error())
	}

	err = stub.PutState(B, []byte(strconv.Itoa(Bval)))
	if err != nil {
		return shim.Error(err.Error())
	}

	return shim.Success(nil)
}

func (t *SimpleChaincode) Invoke(stub shim.ChaincodeStubInterface) pb.Response {
	fmt.Println("ex02 Invoke")
	function, args := stub.GetFunctionAndParameters()
	if function == "invoke" {
		// Make payment of X units from A to B
		return t.invoke(stub, args)
	} else if function == "delete" {
		// Deletes an entity from its state
		return t.delete(stub, args)
	} else if function == "query" {
		// the old "Query" is now implemented in invoke
		return t.query(stub, args)
	}

	return shim.Error("Invalid invoke function name. Expecting \"invoke\" \"delete\" \"query\"")
}

// Transaction makes payment of X units from A to B
func (t *SimpleChaincode) invoke(stub shim.ChaincodeStubInterface, args []string) pb.Response {
	var A, B string    // Entities
	var Aval, Bval int // Asset holdings
	var X int          // Transaction value
	var err error

	if len(args) != 3 {
		return shim.Error("Incorrect number of arguments. Expecting 3")
	}

	A = args[0]
	B = args[1]

	// Get the state from the ledger
	// TODO: will be nice to have a GetAllState call to ledger
	Avalbytes, err := stub.GetState(A)
	if err != nil {
		return shim.Error("Failed to get state")
	}
	if Avalbytes == nil {
		return shim.Error("Entity not found")
	}
	Aval, _ = strconv.Atoi(string(Avalbytes))

	Bvalbytes, err := stub.GetState(B)
	if err != nil {
		return shim.Error("Failed to get state")
	}
	if Bvalbytes == nil {
		return shim.Error("Entity not found")
	}
	Bval, _ = strconv.Atoi(string(Bvalbytes))

	// Perform the execution
	X, err = strconv.Atoi(args[2])
	if err != nil {
		return shim.Error("Invalid transaction amount, expecting a integer value")
	}
	Aval = Aval - X
	Bval = Bval + X
	fmt.Printf("Aval = %d, Bval = %d\n", Aval, Bval)

	// Write the state back to the ledger
	err = stub.PutState(A, []byte(strconv.Itoa(Aval)))
	if err != nil {
		return shim.Error(err.Error())
	}

	err = stub.PutState(B, []byte(strconv.Itoa(Bval)))
	if err != nil {
		return shim.Error(err.Error())
	}

	return shim.Success(nil)
}

// Deletes an entity from state
func (t *SimpleChaincode) delete(stub shim.ChaincodeStubInterface, args []string) pb.Response {
	if len(args) != 1 {
		return shim.Error("Incorrect number of arguments. Expecting 1")
	}

	A := args[0]

	// Delete the key from the state in ledger
	err := stub.DelState(A)
	if err != nil {
		return shim.Error("Failed to delete state")
	}

	return shim.Success(nil)
}

// query callback representing the query of a chaincode
func (t *SimpleChaincode) query(stub shim.ChaincodeStubInterface, args []string) pb.Response {
	var A string // Entities
	var err error

	if len(args) != 1 {
		return shim.Error("Incorrect number of arguments. Expecting name of the person to query")
	}

	A = args[0]

	// Get the state from the ledger
	Avalbytes, err := stub.GetState(A)
	if err != nil {
		jsonResp := "{\"Error\":\"Failed to get state for " + A + "\"}"
		return shim.Error(jsonResp)
	}

	if Avalbytes == nil {
		jsonResp := "{\"Error\":\"Nil amount for " + A + "\"}"
		return shim.Error(jsonResp)
	}

	jsonResp := "{\"Name\":\"" + A + "\",\"Amount\":\"" + string(Avalbytes) + "\"}"
	fmt.Printf("Query Response:%s\n", jsonResp)
	return shim.Success(Avalbytes)
}
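Note that script.sh never exercises the delete function. To try it by hand from the cli container, an invocation along the following lines should work; it mirrors the invoke command used by script.sh, and $PEER_CONN_PARMS must still reference peers from both organizations to satisfy the AND('Org1MSP.peer','Org2MSP.peer') endorsement policy:

# peer chaincode invoke -o orderer.example.com:7050 --tls --cafile $ORDERER_CA -C $CHANNEL_NAME -n mycc $PEER_CONN_PARMS -c '{"Args":["delete","b"]}'

After the transaction commits, querying b returns the "Nil amount" error defined in the query function above.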

8.3.5 Configuration Files
1. Certificate configuration:
crypto-config.yaml:

# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#

# ---------------------------------------------------------------------------
# "OrdererOrgs" - Definition of organizations managing orderer nodes
# ---------------------------------------------------------------------------
OrdererOrgs:
  # ---------------------------------------------------------------------------
  # Orderer
  # ---------------------------------------------------------------------------
  - Name: Orderer
    Domain: example.com
    CA:
      Country: US
      Province: California
      Locality: San Francisco
    # ---------------------------------------------------------------------------
    # "Specs" - See PeerOrgs below for complete description
    # ---------------------------------------------------------------------------
    Specs:
      - Hostname: orderer

# ---------------------------------------------------------------------------
# "PeerOrgs" - Definition of organizations managing peer nodes
# ---------------------------------------------------------------------------
PeerOrgs:
  # ---------------------------------------------------------------------------
  # Org1
  # ---------------------------------------------------------------------------
  - Name: Org1
    Domain: org1.example.com
    EnableNodeOUs: true
    CA:
      Country: US
      Province: California
      Locality: San Francisco
    # ---------------------------------------------------------------------------
    # "Specs"
    # ---------------------------------------------------------------------------
    # Uncomment this section to enable the explicit definition of hosts in your
    # configuration. Most users will want to use Template, below
    #
    # Specs is an array of Spec entries. Each Spec entry consists of two fields:
    #   - Hostname:   (Required) The desired hostname, sans the domain.
    #   - CommonName: (Optional) Specifies the template or explicit override for
    #                 the CN. By default, this is the template:
    #
    #                   "{{.Hostname}}.{{.Domain}}"
    #
    #                 which obtains its values from the Spec.Hostname and
    #                 Org.Domain, respectively.
    # ---------------------------------------------------------------------------
    # Specs:
    #   - Hostname: foo # implicitly "foo.org1.example.com"
    #     CommonName: foo27.org5.example.com # overrides Hostname-based FQDN set above
    #   - Hostname: bar
    #   - Hostname: baz
    # ---------------------------------------------------------------------------
    # "Template"
    # ---------------------------------------------------------------------------
    # Allows for the definition of 1 or more hosts that are created sequentially
    # from a template. By default, this looks like "peer%d" from 0 to Count-1.
    # You may override the number of nodes (Count), the starting index (Start)
    # or the template used to construct the name (Hostname).
    #
    # Note: Template and Specs are not mutually exclusive. You may define both
    # sections and the aggregate nodes will be created for you. Take care with
    # name collisions
    # ---------------------------------------------------------------------------
    Template:
      Count: 2
      # Start: 5
      # Hostname: {{.Prefix}}{{.Index}} # default
    # ---------------------------------------------------------------------------
    # "Users"
    # ---------------------------------------------------------------------------
    # Count: The number of user accounts _in addition_ to Admin
    # ---------------------------------------------------------------------------
    Users:
      Count: 1
  # ---------------------------------------------------------------------------
  # Org2: See "Org1" for full specification
  # ---------------------------------------------------------------------------
  - Name: Org2
    Domain: org2.example.com
    EnableNodeOUs: true
    CA:
      Country: US
      Province: California
      Locality: San Francisco
    Template:
      Count: 2
    Users:
      Count: 1
2. Channel configuration:
configtx.yaml:

# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#

---
################################################################################
#
#   Profile
#
#   - Different configuration profiles may be encoded here to be specified
#   as parameters to the configtxgen tool
#
################################################################################
Profiles:

    TwoOrgsOrdererGenesis:
        Capabilities:
            <<: *ChannelCapabilities
        Orderer:
            <<: *OrdererDefaults
            Organizations:
                - *OrdererOrg
            Capabilities:
                <<: *OrdererCapabilities
        Consortiums:
            SampleConsortium:
                Organizations:
                    - *Org1
                    - *Org2
    TwoOrgsChannel:
        Consortium: SampleConsortium
        Application:
            <<: *ApplicationDefaults
            Organizations:
                - *Org1
                - *Org2
            Capabilities:
                <<: *ApplicationCapabilities

################################################################################
#
#   Section: Organizations
#
#   - This section defines the different organizational identities which will
#   be referenced later in the configuration.
#
################################################################################
Organizations:

    # SampleOrg defines an MSP using the sampleconfig. It should never be used
    # in production but may be used as a template for other definitions
    - &OrdererOrg
        # DefaultOrg defines the organization which is used in the sampleconfig
        # of the fabric.git development environment
        Name: OrdererOrg

        # ID to load the MSP definition as
        ID: OrdererMSP

        # MSPDir is the filesystem path which contains the MSP configuration
        MSPDir: crypto-config/ordererOrganizations/example.com/msp

    - &Org1
        # DefaultOrg defines the organization which is used in the sampleconfig
        # of the fabric.git development environment
        Name: Org1MSP

        # ID to load the MSP definition as
        ID: Org1MSP

        MSPDir: crypto-config/peerOrganizations/org1.example.com/msp

        AnchorPeers:
            # AnchorPeers defines the location of peers which can be used
            # for cross org gossip communication. Note, this value is only
            # encoded in the genesis block in the Application section context
            - Host: peer0.org1.example.com
              Port: 7051

    - &Org2
        # DefaultOrg defines the organization which is used in the sampleconfig
        # of the fabric.git development environment
        Name: Org2MSP

        # ID to load the MSP definition as
        ID: Org2MSP

        MSPDir: crypto-config/peerOrganizations/org2.example.com/msp

        AnchorPeers:
            # AnchorPeers defines the location of peers which can be used
            # for cross org gossip communication. Note, this value is only
            # encoded in the genesis block in the Application section context
            - Host: peer0.org2.example.com
              Port: 7051

################################################################################
#
#   SECTION: Orderer
#
#   - This section defines the values to encode into a config transaction or
#   genesis block for orderer related parameters
#
################################################################################
Orderer: &OrdererDefaults

    # Orderer Type: The orderer implementation to start
    # Available types are "solo" and "kafka"
    OrdererType: kafka

    Addresses:
        - orderer.example.com:7050

    # Batch Timeout: The amount of time to wait before creating a batch
    BatchTimeout: 2s

    # Batch Size: Controls the number of messages batched into a block
    BatchSize:

        # Max Message Count: The maximum number of messages to permit in a batch
        MaxMessageCount: 10

        # Absolute Max Bytes: The absolute maximum number of bytes allowed for
        # the serialized messages in a batch.
        AbsoluteMaxBytes: 98 MB

        # Preferred Max Bytes: The preferred maximum number of bytes allowed for
        # the serialized messages in a batch. A message larger than the preferred
        # max bytes will result in a batch larger than preferred max bytes.
        PreferredMaxBytes: 512 KB

    Kafka:
        # Brokers: A list of Kafka brokers to which the orderer connects. Edit
        # this list to identify the brokers of the ordering service.
        # NOTE: Use IP:port notation.
        Brokers:
            - kafka0:9092
            - kafka1:9092
            - kafka2:9092
            - kafka3:9092

    # Organizations is the list of orgs which are defined as participants on
    # the orderer side of the network
    Organizations:

################################################################################
#
#   SECTION: Application
#
#   - This section defines the values to encode into a config transaction or
#   genesis block for application related parameters
#
################################################################################
Application: &ApplicationDefaults

    # Organizations is the list of orgs which are defined as participants on
    # the application side of the network
    Organizations:

################################################################################
#
#   SECTION: Capabilities
#
#   - This section defines the capabilities of fabric network. This is a new
#   concept as of v1.1.0 and should not be utilized in mixed networks with
#   v1.0.x peers and orderers. Capabilities define features which must be
#   present in a fabric binary for that binary to safely participate in the
#   fabric network. For instance, if a new MSP type is added, newer binaries
#   might recognize and validate the signatures from this type, while older
#   binaries without this support would be unable to validate those
#   transactions. This could lead to different versions of the fabric binaries
#   having different world states. Instead, defining a capability for a channel
#   informs those binaries without this capability that they must cease
#   processing transactions until they have been upgraded. For v1.0.x if any
#   capabilities are defined (including a map with all capabilities turned off)
#   then the v1.0.x peer will deliberately crash.
#
################################################################################
Capabilities:
    # Channel capabilities apply to both the orderers and the peers and must be
    # supported by both. Set the value of the capability to true to require it.
    Global: &ChannelCapabilities
        # V1.1 for Global is a catchall flag for behavior which has been
        # determined to be desired for all orderers and peers running v1.0.x,
        # but the modification of which would cause incompatibilities. Users
        # should leave this flag set to true.
        V1_1: true

    # Orderer capabilities apply only to the orderers, and may be safely
    # manipulated without concern for upgrading peers. Set the value of the
    # capability to true to require it.
    Orderer: &OrdererCapabilities
        # V1.1 for Order is a catchall flag for behavior which has been
        # determined to be desired for all orderers running v1.0.x, but the
        # modification of which would cause incompatibilities. Users should
        # leave this flag set to true.
        V1_1: true

    # Application capabilities apply only to the peer network, and may be safely
    # manipulated without concern for upgrading orderers. Set the value of the
    # capability to true to require it.
    Application: &ApplicationCapabilities
        # V1.1 for Application is a catchall flag for behavior which has been
        # determined to be desired for all peers running v1.0.x, but the
        # modification of which would cause incompatibilities. Users should
        # leave this flag set to true.
        V1_1: true
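Once generateArtifacts.sh has produced the artifacts from these profiles, configtxgen can decode them back into JSON for inspection, which is useful when debugging channel configuration:

# configtxgen -inspectBlock ./channel-artifacts/genesis.block
# configtxgen -inspectChannelCreateTx ./channel-artifacts/channel.tx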

3. Base configuration:
1) docker-compose-base.yaml:

# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#

version: '2'

services:

  zookeeper:
    image: hyperledger/fabric-zookeeper
    restart: always
    ports:
      - '2181'
      - '2888'
      - '3888'

  kafka:
    image: hyperledger/fabric-kafka
    restart: always
    environment:
      - KAFKA_MESSAGE_MAX_BYTES=103809024 # 99 * 1024 * 1024 B
      - KAFKA_REPLICA_FETCH_MAX_BYTES=103809024 # 99 * 1024 * 1024 B
      - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
    ports:
      - '9092'

  orderer.example.com:
    container_name: orderer.example.com
    image: hyperledger/fabric-orderer
    environment:
      - ORDERER_GENERAL_LOGLEVEL=debug
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
      # enabled TLS
      - ORDERER_GENERAL_TLS_ENABLED=true
      - ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
      - ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
      - ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
      - ORDERER_KAFKA_RETRY_LONGINTERVAL=10s
      - ORDERER_KAFKA_RETRY_LONGTOTAL=100s
      - ORDERER_KAFKA_RETRY_SHORTINTERVAL=1s
      - ORDERER_KAFKA_RETRY_SHORTTOTAL=30s
      - ORDERER_KAFKA_VERBOSE=true
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: orderer
    volumes:
      - ../channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
      - ../crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/msp:/var/hyperledger/orderer/msp
      - ../crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls/:/var/hyperledger/orderer/tls
    ports:
      - 7050:7050

  peer0.org1.example.com:
    container_name: peer0.org1.example.com
    extends:
      file: peer-base.yaml
      service: peer-base
    environment:
      - CORE_PEER_ID=peer0.org1.example.com
      - CORE_PEER_ADDRESS=peer0.org1.example.com:7051
      - CORE_PEER_CHAINCODEADDRESS=peer0.org1.example.com:7052
      - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052
      - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.org1.example.com:7051
      - CORE_PEER_LOCALMSPID=Org1MSP
    volumes:
      - /var/run/:/host/var/run/
      - ../crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp:/etc/hyperledger/fabric/msp
      - ../crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls:/etc/hyperledger/fabric/tls
    ports:
      - 7051:7051
      - 7052:7052
      - 7053:7053

  peer1.org1.example.com:
    container_name: peer1.org1.example.com
    extends:
      file: peer-base.yaml
      service: peer-base
    environment:
      - CORE_PEER_ID=peer1.org1.example.com
      - CORE_PEER_ADDRESS=peer1.org1.example.com:7051
      - CORE_PEER_CHAINCODEADDRESS=peer1.org1.example.com:7052
      - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052
      - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer1.org1.example.com:7051
      - CORE_PEER_GOSSIP_BOOTSTRAP=peer0.org1.example.com:7051
      - CORE_PEER_LOCALMSPID=Org1MSP
    volumes:
      - /var/run/:/host/var/run/
      - ../crypto-config/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/msp:/etc/hyperledger/fabric/msp
      - ../crypto-config/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/tls:/etc/hyperledger/fabric/tls
    ports:
      - 8051:7051
      - 8052:7052
      - 8053:7053

  peer0.org2.example.com:
    container_name: peer0.org2.example.com
    extends:
      file: peer-base.yaml
      service: peer-base
    environment:
      - CORE_PEER_ID=peer0.org2.example.com
      - CORE_PEER_ADDRESS=peer0.org2.example.com:7051
      - CORE_PEER_CHAINCODEADDRESS=peer0.org2.example.com:7052
      - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052
      - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.org2.example.com:7051
      - CORE_PEER_LOCALMSPID=Org2MSP
    volumes:
      - /var/run/:/host/var/run/
      - ../crypto-config/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/msp:/etc/hyperledger/fabric/msp
      - ../crypto-config/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls:/etc/hyperledger/fabric/tls
    ports:
      - 9051:7051
      - 9052:7052
      - 9053:7053

  peer1.org2.example.com:
    container_name: peer1.org2.example.com
    extends:
      file: peer-base.yaml
      service: peer-base
    environment:
      - CORE_PEER_ID=peer1.org2.example.com
      - CORE_PEER_ADDRESS=peer1.org2.example.com:7051
      - CORE_PEER_CHAINCODEADDRESS=peer1.org2.example.com:7052
      - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052
      - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer1.org2.example.com:7051
      - CORE_PEER_GOSSIP_BOOTSTRAP=peer0.org2.example.com:7051
      - CORE_PEER_LOCALMSPID=Org2MSP
    volumes:
      - /var/run/:/host/var/run/
      - ../crypto-config/peerOrganizations/org2.example.com/peers/peer1.org2.example.com/msp:/etc/hyperledger/fabric/msp
      - ../crypto-config/peerOrganizations/org2.example.com/peers/peer1.org2.example.com/tls:/etc/hyperledger/fabric/tls
    ports:
      - 10051:7051
      - 10052:7052
      - 10053:7053
2) peer-base.yaml:
# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#

version: '2'

services:

  peer-base:
    image: hyperledger/fabric-peer
    environment:
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      # the following setting starts chaincode containers on the same
      # bridge network as the peers
      # https://docs.docker.com/compose/networking/
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=e2e_cli_default
      #- CORE_LOGGING_LEVEL=ERROR
      - CORE_LOGGING_LEVEL=DEBUG
      - CORE_PEER_TLS_ENABLED=true
      - CORE_PEER_GOSSIP_USELEADERELECTION=true
      - CORE_PEER_GOSSIP_ORGLEADER=false
      - CORE_PEER_PROFILE_ENABLED=true
      - CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/fabric/tls/server.crt
      - CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/fabric/tls/server.key
      - CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/tls/ca.crt
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    command: peer node start

4. ZooKeeper configuration

  zookeeper0:
    container_name: zookeeper0
    extends:
      file: base/docker-compose-base.yaml
      service: zookeeper
    environment:
      - ZOO_MY_ID=1
      - ZOO_SERVERS=server.1=zookeeper0:2888:3888 server.2=zookeeper1:2888:3888 server.3=zookeeper2:2888:3888

  zookeeper1:
    container_name: zookeeper1
    extends:
      file: base/docker-compose-base.yaml
      service: zookeeper
    environment:
      - ZOO_MY_ID=2
      - ZOO_SERVERS=server.1=zookeeper0:2888:3888 server.2=zookeeper1:2888:3888 server.3=zookeeper2:2888:3888

  zookeeper2:
    container_name: zookeeper2
    extends:
      file: base/docker-compose-base.yaml
      service: zookeeper
    environment:
      - ZOO_MY_ID=3
      - ZOO_SERVERS=server.1=zookeeper0:2888:3888 server.2=zookeeper1:2888:3888 server.3=zookeeper2:2888:3888

5. Kafka configuration

  kafka0:
    container_name: kafka0
    extends:
      file: base/docker-compose-base.yaml
      service: kafka
    environment:
      - KAFKA_BROKER_ID=0
      - KAFKA_MIN_INSYNC_REPLICAS=2
      - KAFKA_DEFAULT_REPLICATION_FACTOR=3
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper0:2181,zookeeper1:2181,zookeeper2:2181
    depends_on:
      - zookeeper0
      - zookeeper1
      - zookeeper2

  kafka1:
    container_name: kafka1
    extends:
      file: base/docker-compose-base.yaml
      service: kafka
    environment:
      - KAFKA_BROKER_ID=1
      - KAFKA_MIN_INSYNC_REPLICAS=2
      - KAFKA_DEFAULT_REPLICATION_FACTOR=3
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper0:2181,zookeeper1:2181,zookeeper2:2181
    depends_on:
      - zookeeper0
      - zookeeper1
      - zookeeper2

  kafka2:
    container_name: kafka2
    extends:
      file: base/docker-compose-base.yaml
      service: kafka
    environment:
      - KAFKA_BROKER_ID=2
      - KAFKA_MIN_INSYNC_REPLICAS=2
      - KAFKA_DEFAULT_REPLICATION_FACTOR=3
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper0:2181,zookeeper1:2181,zookeeper2:2181
    depends_on:
      - zookeeper0
      - zookeeper1
      - zookeeper2

  kafka3:
    container_name: kafka3
    extends:
      file: base/docker-compose-base.yaml
      service: kafka
    environment:
      - KAFKA_BROKER_ID=3
      - KAFKA_MIN_INSYNC_REPLICAS=2
      - KAFKA_DEFAULT_REPLICATION_FACTOR=3
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper0:2181,zookeeper1:2181,zookeeper2:2181
    depends_on:
      - zookeeper0
      - zookeeper1
      - zookeeper2
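Before the orderer can create channels, the Kafka brokers must connect to ZooKeeper and elect partition leaders. A quick way to check this from the host, using plain Docker commands, is:

# docker ps --format '{{.Names}}\t{{.Status}}' | grep -E 'zookeeper|kafka'
# docker logs orderer.example.com 2>&1 | grep -i kafka | tail -n 20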
6. Orderer configuration
  orderer.example.com:
    extends:
      file: base/docker-compose-base.yaml
      service: orderer.example.com
    container_name: orderer.example.com
    depends_on:
      - zookeeper0
      - zookeeper1
      - zookeeper2
      - kafka0
      - kafka1
      - kafka2
      - kafka3

7. Peer configuration

  peer0.org1.example.com:
    container_name: peer0.org1.example.com
    extends:
      file: base/docker-compose-base.yaml
      service: peer0.org1.example.com

  peer1.org1.example.com:
    container_name: peer1.org1.example.com
    extends:
      file: base/docker-compose-base.yaml
      service: peer1.org1.example.com

  peer0.org2.example.com:
    container_name: peer0.org2.example.com
    extends:
      file: base/docker-compose-base.yaml
      service: peer0.org2.example.com

  peer1.org2.example.com:
    container_name: peer1.org2.example.com
    extends:
      file: base/docker-compose-base.yaml
      service: peer1.org2.example.com

8. CLI configuration

  cli:
    container_name: cli
    image: hyperledger/fabric-tools
    tty: true
    environment:
      - GOPATH=/opt/gopath
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      - CORE_LOGGING_LEVEL=DEBUG
      - CORE_PEER_ID=cli
      - CORE_PEER_ADDRESS=peer0.org1.example.com:7051
      - CORE_PEER_LOCALMSPID=Org1MSP
      - CORE_PEER_TLS_ENABLED=true
      - CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.crt
      - CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.key
      - CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
      - CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    command: /bin/bash -c './scripts/script.sh ${CHANNEL_NAME}; sleep $TIMEOUT'
    volumes:
      - /var/run/:/host/var/run/
      - ../chaincode/go/:/opt/gopath/src/github.com/hyperledger/fabric/examples/chaincode/go
      - ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
      - ./scripts:/opt/gopath/src/github.com/hyperledger/fabric/peer/scripts/
      - ./channel-artifacts:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts
    depends_on:
      - orderer.example.com
      - peer0.org1.example.com
      - peer1.org1.example.com
      - peer0.org2.example.com
      - peer1.org2.example.com
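The cli container's command consumes two environment variables, CHANNEL_NAME and TIMEOUT, which network_setup.sh exports before calling docker-compose. If you ever start the network without the wrapper script, something along these lines reproduces its behaviour (a hedged sketch; the default values are assumed from the script's conventions):

# CHANNEL_NAME=mychannel TIMEOUT=10000 docker-compose -f docker-compose-cli.yaml up -d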
