With the development of the Internet, its user base, number of applications, and bandwidth are all growing rapidly. In recent years, smart devices have entered the Internet as a new class of user endpoint. They come in various forms, ranging from simple devices such as refrigerators, air conditioners, or microwaves to sophisticated ones such as drones or autonomous vehicles. These smart devices are collectively referred to as Internet of Things (IoT) devices, and they are used to control application functionality and network operations. There is ample evidence that attackers are leveraging IoT devices to launch large-scale network attacks known as Distributed Denial of Service (DDoS) attacks. In this chapter, you will learn about DDoS attacks and how blockchain can help enterprises defend against such large-scale network attacks.
DDoS attacks have malicious intent, disrupting legitimate traffic to servers by sending a large number of requests from geographically dispersed systems, hindering normal user access. Now, let's first understand how Denial of Service (DoS) attacks work. During a DoS attack, the attacker bombards the target computer with a large number of requests, causing server resources to be exhausted, which leads to legitimate user requests failing. In a DoS attack, the attacker uses a single machine to exhaust the target server's resources. In contrast, DDoS attacks are more powerful because they can utilize up to millions of computers to overwhelm the target server.
An increasing number of enterprises are migrating their application services to the cloud, with its substantial infrastructure, to meet the real-time demands of their massive customer bases. Enterprises can build their own large cloud server infrastructure or migrate to servers provided by cloud service providers. Today, attackers prefer DDoS attack methods to disrupt target services because they can generate gigabytes or even terabytes of random data to bombard the target, and the target's security team finds it difficult to identify and intercept each attack source, as the sources may number in the millions.
Moreover, attackers never legitimately control their attacking source machines; instead, they infect millions of computers worldwide with some carefully designed malware, gaining full access and then controlling them to launch large-scale DDoS attacks. This collection of infected computers is called a botnet, while an individual infected computer is referred to as a bot.
While it is difficult to pinpoint the first DDoS attack event in the world, the first notable DDoS attack occurred in 1999, targeting the University of Minnesota. It affected over 220 systems and caused the entire network infrastructure to be down for several days.
On Friday, October 21, 2016, the world witnessed one of the most sophisticated DDoS attacks to date, launched against Dyn (a managed DNS provider). Dyn confirmed that the Mirai botnet was the primary source of the malicious attack traffic. This attack drew new attention to Internet security and the threat landscape.
To launch a DDoS attack, hackers can either build an entire botnet themselves or rent botnet resources from the dark web. Once the attackers are ready with their weapons, they only need to discover vulnerable sites, hosts, or entire networks.
Researchers at Lockheed Martin coined the term "cyber kill chain," which outlines the stages of a cyber attack, from reconnaissance to the final action against the target. These stages include:
- Reconnaissance: The attacker identifies target devices and begins searching for vulnerabilities.
- Weaponization: The attacker builds or acquires remote toolkits and malware (such as viruses or worms) tailored to the discovered vulnerabilities.
- Delivery: The attacker injects attack code into the victim's network through various methods, such as phishing emails, drive-by downloads, USB devices, or insider assistance.
- Exploitation: Malicious code is used to trigger attacks and take measures to exploit vulnerabilities in the target network.
- Installation: Malware is automatically installed on the victim's computer.
- Command and Control: The malware gives remote attackers control access to the victim's machine.
- Actions on Objectives: The attacker carries out the final goal against the target, such as exfiltrating data or launching a DDoS attack.
To understand each stage from a DDoS perspective, it is crucial to understand the botnet infrastructure and how it is built.
- Building a Botnet
The distributed nature of DDoS attacks requires leveraging millions of infected computers worldwide. Today, attackers can rent or purchase readily available botnet resources from the dark web. Toolkits such as Dirt Jumper and Pandora effectively lower, or even eliminate, the technical barriers attackers face in building these botnets.
The following diagram describes the entire lifecycle of a botnet:
- Reconnaissance
The target systems under reconnaissance can be as large as an entire data center or as small as a single computer. In both cases, building a botnet requires identifying hosts with vulnerabilities that malware can exploit. Attackers look for information directly or indirectly related to their targets to illegally access those protected assets. They will attempt all possible methods to bypass existing security systems, including firewalls, intrusion prevention systems (IPS), web application firewalls, and endpoint protection systems.
- Weaponization
By drawing on the large body of open-source software available, attackers can sidestep the technical barriers involved in writing malicious code. A programmer with malicious intent can develop new malware that security systems initially find difficult to detect.
Here are some commonly used tools for conducting DDoS attacks:
- Low Orbit Ion Cannon (LOIC): This is one of the favorite tools of the hacker group Anonymous. It is a simple flooding tool that can generate large amounts of TCP, UDP, or HTTP traffic to congest the target server. It was originally developed to test server throughput; however, the Anonymous group uses this open-source tool to launch complex DDoS attacks. The tool was later enhanced with IRC functionality, allowing users to control connected systems via IRC.
- High Orbit Ion Cannon (HOIC): After effectively using LOIC for several years, the Anonymous group moved on to HOIC, a more powerful HTTP flooding tool. HOIC can flood up to 256 URLs simultaneously and supports add-on "booster" scripts that randomize attack requests, making the traffic harder for defenders to fingerprint and filter.
- Central Source Propagation: In this method, the attacker operates a central repository of malicious code. The compromised system connects to this central repository and downloads the attack toolkit from it. Once the toolkit is downloaded to the newly compromised victim, the script automatically initiates a new attack process on this new bot node. The transfer mechanism typically uses the HTTP, FTP, and Remote Procedure Call (RPC) protocols. As shown in the diagram below:
- Reverse Link Propagation: In this method, the attacker ports the toolkit to the newly infected host. This toolkit is carefully designed to accept file transfer requests from infected systems. The reverse-channel file copy can then be completed by a port listener using the Trivial File Transfer Protocol (TFTP). Unlike the central source propagation method, here the attacker transmits both the exploit and the attack code to the compromised host. As shown in the diagram below:
- Automatic Propagation: In this mechanism, the attack toolkit travels with the exploit itself. When the attacker compromises a system, the toolkit is copied directly from the attacking host to the newly infected host, rather than being fetched from any central repository: the attacker first transmits the exploit, followed by the attack code. As shown in the diagram below:
- Exploitation
Once the malware enters the network, it initiates exploitation of various vulnerabilities, including unpatched software flaws, poor software programming practices, and user negligence. Typically, a network contains numerous vulnerabilities, and the ease with which they can be exploited determines how dangerous they are.
- Installation
In the installation phase, the malware anchors itself in the target system, giving remote attackers access. During installation, the malware can embed itself in either the user space or the kernel space of the system. Malware installed in user space is relatively easy to detect. However, malware in kernel space is rarely detected by security systems, including endpoint protection and endpoint detection and response (EDR) platforms.
- Command and Control (C2)
After the intrusion tools are successfully installed, the target host is completely controlled by the attacker's remote central system. The network of these compromised devices is called a botnet, which the attacker can manipulate at will. However, the botnet nodes remain dormant until activated by the attacker. Bot hosts may even communicate with each other over encrypted channels on public peer-to-peer networks.
- Taking Action Against the Target
Once the C2 channel is established, the attacker can launch a DDoS attack against the target. At this stage, the attacker runs scripts to activate all bot hosts in the entire botnet. The attacker also configures the botnet to determine what type of attack traffic needs to be generated.
Blockchain is a decentralized network that allows independent parties to communicate without any third-party involvement. To protect the network from DDoS attacks, enterprise services can be distributed across multiple server nodes, providing high resilience and eliminating single points of failure. The two main advantages of using blockchain are as follows:
- Blockchain technology can be used to deploy distributed ledgers to store blacklisted IPs.
- Blockchain technology eliminates the risk of single points of failure.
The DDoS defense platform based on blockchain first requires preparing a testing environment using Node.js and Truffle on the Ethereum blockchain. We will use an existing blockchain project to protect the network from DDoS attacks. This project can be found at the link https://github.com/gladiusio/gladius-contracts.
We need to prepare the infrastructure for the Gladius project by following these steps:
- First, install Node.js in the system environment, following the instructions at https://nodejs.org/uk/download/package-manager/#arch-linux.
- Next, install Truffle in the testing environment by executing the following command in the command line:
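The command itself is not reproduced here; a sketch assuming the standard npm-based installation of Truffle:

```shell
# Assumption: Truffle is installed globally through npm,
# the standard installation method for the Truffle suite
npm install -g truffle
```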
- Now execute the following command in the command line to start the test network:
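The exact command is elided in the text; a sketch assuming the ganache-cli test chain that the project's Truffle configuration targets:

```shell
# Assumption: the local test network is a ganache-cli instance
# (installable via: npm install -g ganache-cli); it listens on
# the default port 8545 and prints each transaction it processes
ganache-cli
```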
The following screenshot shows the output of running the above command:
- In this terminal window, we can see all transactions in the test blockchain network. Now, we must open a new terminal window and navigate to the working directory.
To configure the project, please follow the instructions below:
- Find the .zip file on the https://github.com/gladiusio/gladius-contracts webpage and download it, then extract the file to your specified path.
- Replace the code in truffle.js with the following code:
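The replacement code is not reproduced above; a minimal truffle.js that points Truffle at the local test chain would look like this (the host and port are the ganache-cli defaults and are assumptions here):

```javascript
// truffle.js — minimal network configuration for a local test chain
// (assumes ganache-cli listening on its default 127.0.0.1:8545)
module.exports = {
  networks: {
    development: {
      host: "127.0.0.1",
      port: 8545,
      network_id: "*" // match any network id
    }
  }
};
```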
- We will enter the folder named gladius-contracts-master and use the following command to compile the contract:
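Assuming the standard Truffle workflow, compilation is done with:

```shell
# Compile the Solidity contracts under contracts/
# (standard Truffle command; run inside gladius-contracts-master)
truffle compile
```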
The following screenshot shows the output of running the above command:
- Now, we can execute the following command to deploy the contract to the local blockchain of ganache-cli:
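With ganache-cli running, the standard Truffle deployment command would be:

```shell
# Deploy (migrate) the compiled contracts to the local test chain
# (standard Truffle command; assumes ganache-cli is running on 8545)
truffle migrate
```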
The following screenshot shows the output of running the above command:
- Now, we must use the truffle test command to run the unit tests and ensure that the smart contract is functioning correctly:
- Download the .zip file from https://github.com/gladiusio/gladius-control-daemon and extract it to the same folder as gladius-contracts.
- Next, we find the gladius-control-daemon-master folder in the terminal and link the contract application binary interface (ABI). The ABI is the interface between two program modules, where one program module is at the machine code level:
The following screenshot shows the output of running the above command:
- Next, we can execute the npm install command to install the required dependencies:
- Then, we can execute the node index.js command to start the script:
- Open a new terminal window and execute the gladius-networkd command:
- Next, we need to open another new terminal and execute the gladius-controld command:
- To start your node, you need to execute the following command in the new terminal:
The following screenshot shows the output of running the above command:
- We can submit our node's data to a specific pool, which can then accept or reject it as a member of the pool:
- After completing the node creation, we can use the management application to check its status. This displays the node information in the blockchain:
Now just download the Gladius client to your computer and access the system.
Once Gladius is activated, all nodes handle a continuous stream of requests, verifying website connections and blocking malicious activity. Gladius is actively working to resolve several remaining issues and stabilize the system.
Blockchain can be used in the following scenarios:
- Conflict Situations: Blockchain networks connect not only trusted parties but also untrusted ones. Therefore, it is crucial to focus on conflict situations and resolve issues seamlessly. Blockchain uses consensus mechanisms to confirm transactions and build blocks. Different blockchains use different consensus models, such as Proof of Work (PoW) and Proof of Stake (PoS), but the goal is the same: to avoid conflicts and ensure successful transactions.
- Shared Public Database: If an enterprise shares a common database among its employees (administrators or non-IT personnel), contractors, or third parties, a permissioned chain can genuinely meet the requirements. When a centralized database is shared among different parties, it increases the risk of access-control abuse through privilege escalation. A permissioned chain ensures that only authorized peers have the right to change the database, and transaction endorsement can be performed by any pre-selected participant.
- Business Rules for Transactions: If the business model requires a simple or complex logical policy to govern every transaction, blockchain can provide strong guarantees through programmable logic such as Ethereum's smart contracts or Hyperledger's chaincode. Business policies are defined in the node software, which enforces that nodes operate according to the defined rules.
- System Transparency Requirements: If an organization's business model requires transparency to customers or suppliers across the entire supply chain, then distributed ledger technology can provide end-to-end visibility into supply chain operations. In a permissionless blockchain network, every node is allowed to read and write the ledger, making it transparent. In a permissioned environment, however, enterprises tend to prefer that only pre-selected nodes participate in the blockchain computation process and ledger management.
- Data Immutability Requirements: If an enterprise needs a highly secure, append-only database, cryptographic hashes and digital signatures can help build such a ledger. Each block is constructed using the hash of the previous block, so once data is committed, it is computationally infeasible to modify or reorder existing blocks without detection.
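The append-only property described above can be illustrated with a minimal hash-chain sketch. This is a toy illustration of the linking principle, not a blockchain implementation:

```shell
# Each "block" embeds the hash of the previous one, so altering any
# earlier entry changes every later hash down the chain.
prev_hash="0000000000000000"   # placeholder genesis hash
for tx in "tx1" "tx2" "tx3"; do
  block="${prev_hash}:${tx}"
  prev_hash=$(printf '%s' "$block" | sha256sum | cut -d' ' -f1)
  echo "block=${block} hash=${prev_hash}"
done
```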
Blockchain is one of the most powerful technologies seen in the industry, but it is not always suitable for all tasks. This makes the evaluation phase critical in every aspect. After understanding the business scenarios it is best suited for, let's look at some situations where blockchain is not suitable:
- Storing Very Large Data: Due to its distributed and decentralized nature, the entire database is replicated to every node in the blockchain network (in permissioned ledgers, to every pre-selected participant), so replicating a very large database is slow and expensive. There are some solutions built for this purpose, which we can quickly review.
- Frequent Changes to Transaction Rules: If a smart contract strategy is set up and initiated, the execution path will not change. Organizations that frequently change business processes and operations are not recommended to use blockchain-based applications. Each subsystem and subprocess within the blockchain network must be deterministic.
- Retrieving Data from External Data Sources: Blockchain smart contracts are not designed to retrieve information from external data sources. Even if communication between the blockchain and a trusted database is configured, it behaves as a regular database operation: the smart contract will not pull entries from the external database. Instead, the trusted database must push data onto the blockchain.
Blockchain is bringing us great technological and business opportunities, facilitating collaboration between different organizations. Leaders are currently experimenting with and seeking ways to use blockchain technology in their business operations to keep up with changing market demands. Let's focus on some important questions when planning blockchain initiatives:
- Who are the most trusted blockchain technology leaders in my industry?
- What is my competitor's view on blockchain?
- Which business departments are most susceptible to disruption?
- Who will our blockchain deployment impact the most? What might their reactions be?
- What are the possible business cases for blockchain? How can we achieve better and more sustainable business models?
- What are the total cost factors involved in the deployment?
- What is the impact of current regulations on blockchain applications?
- How can we collaborate with regulators to launch blockchain applications in the market for a win-win situation?
- How can we apply security controls to blockchain applications?
Before launching blockchain applications to the market, expect a series of brainstorming sessions; it is advisable to clearly define the project's scope and align with the right stakeholders early.
Laziness and curiosity are the sources of innovation and progress.
Experiments with public blockchain platforms like Bitcoin and Ethereum fully demonstrate the significant advantages of blockchain technology in supporting decentralized transactions.
An increasing number of enterprises are also starting to pay attention to blockchain technology, attempting to introduce it into business scenarios to improve the efficiency of complex business transactions and reduce the costs of multi-party cooperation. Against this backdrop, the Hyperledger Fabric project emerged. As one of the early projects in the Hyperledger community, Fabric integrates the latest achievements from the technology and finance sectors, providing the first distributed ledger platform implementation aimed at consortium chain scenarios.
This chapter will guide readers on how to locally compile and install the Fabric environment from source code, as well as how to deploy a typical Fabric network in a multi-server environment. Additionally, it will introduce how to quickly start a complete Fabric network environment using containerization in a single-machine environment. Next, it will explain the operations related to chaincode and application channels and SDK support. Finally, this chapter will discuss important considerations for deploying the Fabric network in a production environment.
Starting from version 1.0, Fabric has undergone a redesign in its architecture, decoupling the roles of nodes while improving security, performance, scalability, and pluggability. Before sending transactions to the network, it is necessary to collect sufficient endorsement support from endorsement nodes and use dedicated ordering nodes to handle the core ordering process throughout the network.
Currently, there are four different types of service nodes in the network, collaborating to complete the functions of the entire blockchain system. Decoupling the roles of nodes in the network is a significant innovation in Fabric's design, determined by the special requirements and environment of consortium chain scenarios:
- Endorser Nodes: Responsible for checking and endorsing transaction proposals, calculating transaction execution results.
- Committer Nodes: Responsible for rechecking the legality before accepting transaction results, accepting legitimate transactions for ledger modifications, and writing them into the blockchain structure.
- Orderer Nodes: Responsible for sorting all transactions sent to the network, organizing the sorted transactions into blocks according to the agreements in the configuration, and then submitting them to committer nodes for processing.
- Certificate Authority Nodes (CA): Responsible for managing all certificates in the network, providing standard PKI services.
Additionally, the network supports multi-channel features. A separate system channel is used to manage various configuration information in the network and complete the creation of other application channels (application channels used by users to send transactions).
Currently, to start a Fabric network, the following main steps need to be followed:
- Prepare the various configurations for the network: the organizational structure of members in the network and their corresponding certificates (completed using the cryptogen tool); the initial configuration block file for the system channel, the configuration update transaction file for the new application channel, and any necessary anchor node configuration update transaction files (completed using the configtxgen tool).
- Use the initial configuration block file of the system channel to start the ordering node. Once the ordering node starts, it automatically creates the system channel according to the specified configuration.
- Different organizations start Peer nodes according to preset roles. At this point, there are no application channels in the network, and Peer nodes have not joined the network.
- Use the configuration update transaction file for the new application channel to send a transaction to the system channel, creating a new application channel.
- Allow the corresponding Peer nodes to join the created application channel; at this point, Peer nodes join the network and are ready to receive transactions.
- Users install the registered chaincode (related definitions are referenced in section 9.5) through the client, and once the chaincode container starts successfully, users can call the chaincode and send transactions to the network.
Subsequent chapters will detail the operational sequence and methods for each step.
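As a preview, the channel and chaincode steps above can be sketched with the Fabric 1.x peer CLI. The channel name and transaction file come from this chapter; the orderer endpoint and the chaincode name, version, and path are placeholders:

```shell
# Create the application channel from the prepared transaction file
peer channel create -o orderer.example.com:7050 \
  -c businesschannel -f businesschannel.tx

# Join the local peer to the newly created channel
peer channel join -b businesschannel.block

# Install a chaincode on the peer (name/version/path are placeholders)
peer chaincode install -n mycc -v 1.0 \
  -p github.com/hyperledger/fabric/examples/chaincode/go/chaincode_example02

# Instantiate the chaincode on the channel, starting its container
peer chaincode instantiate -o orderer.example.com:7050 \
  -C businesschannel -n mycc -v 1.0 \
  -c '{"Args":["init","a","100","b","200"]}'
```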
For readers with strong hands-on abilities, it is recommended to deploy the Hyperledger Fabric network through local compilation and installation to gain a deeper understanding of the related components.
Hyperledger Fabric is implemented based on the Go language, and it is recommended to configure the Golang environment to version 1.7 or higher for local compilation. The following will explain how to compile and generate binary files for components such as fabric-peer, fabric-orderer, and fabric-ca, as well as how to install some configuration and development-related tools.
Common Linux distributions (including Ubuntu, Redhat, CentOS, etc.) and MacOS can natively support Fabric compilation and operation.
The operating system is recommended to be Linux kernel version 3.10+ with 64-bit support. As a Fabric node, at least 2 GB of physical memory is recommended; more is needed when many chaincodes are deployed, since each chaincode runs in its own container. Reserve enough hard disk space (generally 20 GB or more) to store block files. In production environments with high performance and stability requirements, even more physical resources should be reserved.
The following will take Ubuntu 16.04 as an example for operations.
Tip: The resources required to run Fabric nodes are not demanding; as an experiment, Fabric nodes can even run normally on a Raspberry Pi.
1. Install Go Language Environment
The Go language environment can be downloaded from the golang.org website as a binary compressed package for installation. Note that it is not recommended to install via package managers, as the versions are often outdated.
For example, to download Go version 1.8, you can use the following command:
$ curl -O https://storage.googleapis.com/golang/go1.8.linux-amd64.tar.gz
After downloading, extract the directory and move it to an appropriate location (recommended to /usr/local):
$ tar -xvf go1.8.linux-amd64.tar.gz
$ sudo mv go /usr/local
After installation, remember to configure the GOPATH environment variable:
export GOPATH=YOUR_LOCAL_GO_PATH/Go
export PATH=$PATH:/usr/local/go/bin:$GOPATH/bin
At this point, you can verify whether the installation was successful using the go version command:
$ go version
go version go1.8 linux/amd64
2. Install Dependency Packages
To compile the Fabric-related code, some dependency packages are needed, which can be installed using the following command:
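The package list is elided here; a sketch for Ubuntu 16.04, assuming the commonly required build prerequisites (a C toolchain, git, and libtool):

```shell
# Install common build prerequisites for compiling Fabric on Ubuntu
# (package list is an assumption; adjust for your distribution)
sudo apt-get update
sudo apt-get install -y build-essential git libtool libltdl-dev
```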
You can obtain the code required for compiling the fabric-peer and fabric-orderer components using the following command; both are currently in the same repository:
$ git clone http://gerrit.hyperledger.org/r/fabric
Starting a Fabric network is a relatively complex process, and the main steps include planning the topology, preparing relevant configuration files, starting the Orderer node, starting the Peer nodes, and operating the network. Here, we will explain the relevant operational steps based on the examples included in the Fabric code.
The started Fabric network includes one Orderer node and four Peer nodes, as well as one management node to generate relevant startup files, which will execute commands as an operational client after the network starts.
The four Peer nodes belong to two organizations (Org1 and Org2) under the same management domain (example.com), and both organizations join the same application channel (business-channel). The first node (peer0 node) in each organization acts as an anchor node to communicate with other organizations, and all nodes can access each other through domain names, forming a complete network.
Before starting the Fabric network, it is necessary to generate some configuration files for startup in advance, mainly including MSP-related files (msp/), TLS-related files (tls/), the initial block for the system channel (orderer.genesis.block), the transaction file for the new application channel (businesschannel.tx), and the anchor node configuration update transaction files (Org1MSPanchors.tx and Org2MSPanchors.tx). The functions of each file are as follows:
Note: This section mainly describes how to generate these startup configuration files. More detailed explanations about these configurations can be found in subsequent related chapters.
1. Generate Organizational Relationships and Certificates
The Fabric network provides consortium chain services, consisting of multiple organizations, where members within the organization provide node services to maintain the network and manage permissions through identities.
Therefore, it is first necessary to plan the relationships between various organizations and members, generate corresponding certificate files, and deploy them to their respective nodes.
Users can manually generate certificates and private keys for each entity using PKI services (such as fabric-ca) or OpenSSL tools. However, when the organizational structure is complex, this manual generation method is prone to errors and is inefficient.
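For illustration, manually generating a single key pair and self-signed certificate with OpenSSL might look like this (the subject fields are examples matching the topology used later; real deployments would issue certificates from a CA rather than self-sign):

```shell
# Generate an ECDSA P-256 private key (the curve Fabric uses for identities)
openssl ecparam -name prime256v1 -genkey -noout -out peer0.key

# Issue a self-signed X.509 certificate for the peer's identity
openssl req -new -x509 -key peer0.key -out peer0.crt -days 365 \
  -subj "/O=org1.example.com/CN=peer0.org1.example.com"
```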
The Fabric project provides the cryptogen tool (based on the crypto standard library) to automate this generation. This process first relies on the crypto-config.yaml configuration file.
The structure of the crypto-config.yaml configuration file is very simple and supports defining several organizations of two types (OrdererOrgs and PeerOrgs). Each organization can define multiple nodes (Specs) and users (Users).
An example of the content of a crypto-config.yaml configuration file is as follows, which defines an OrdererOrgs type organization called Orderer (including one node orderer.example.com) and two PeerOrgs type organizations Org1 and Org2 (each including 2 nodes and 1 ordinary user):
OrdererOrgs:
- Name: Orderer
Domain: example.com
Specs:
- Hostname: orderer
CommonName: orderer.example.com
PeerOrgs:
- Name: Org1
Domain: org1.example.com
Template:
Count: 2
Users:
Count: 1
- Name: Org2
Domain: org2.example.com
Template:
Count: 2
Users:
Count: 1
Using this configuration file, the following command can be executed to generate the organization and identity files for the specified topology structure for the Fabric network, stored in the crypto-config directory:
$ cryptogen generate --config=./crypto-config.yaml --output ./crypto-config
View the structure of the crypto-config directory, generated according to the definitions in the example crypto-config.yaml:
$ tree -L 4 crypto-config
crypto-config
|-- ordererOrganizations
| `-- example.com
| |-- ca
| | |-- 293def0fc6d07aab625308a3499cd97f8ffccbf9e9769bf4107d6781f5e8072b_sk
| | `-- ca.example.com-cert.pem
| |-- msp
| | |-- admincerts
| | |-- cacerts
| | `-- tlscacerts
| |-- orderers
| | `-- orderer.example.com
| |-- tlsca
| | |-- 2be5353baec06ca695f7c3b04ca0932912601a4411939bfcfd44af18274d5a65_sk
| | `-- tlsca.example.com-cert.pem
| `-- users
| `-- [email protected]
`-- peerOrganizations
|-- org1.example.com
| |-- ca
| | |-- 501c5f828f58dfa3f7ee844ea4cdd26318256c9b66369727afe8437c08370aee_sk
| | `-- ca.org1.example.com-cert.pem
| |-- msp
| | |-- admincerts
| | |-- cacerts
| | `-- tlscacerts
| |-- peers
| | |-- peer0.org1.example.com
| | `-- peer1.org1.example.com
| |-- tlsca
| | |-- 592a08f84c99d6f083b3c5b9898b2ca4eb5fbb9d1e255f67df1fa14c123e4368_sk
| | `-- tlsca.org1.example.com-cert.pem
| `-- users
| |-- [email protected]
| `-- [email protected]
`-- org2.example.com
|-- ca
| |-- 86d97f9eb601868611eab5dc7df88b1f6e91e129160651e683162b958a728162_sk
| `-- ca.org2.example.com-cert.pem
|-- msp
| |-- admincerts
| |-- cacerts
| `-- tlscacerts
|-- peers
| |-- peer0.org2.example.com
| `-- peer1.org2.example.com
|-- tlsca
| |-- 4b87c416978970948dffadd0639a64a2b03bc89f910cb6d087583f210fb2929d_sk
| `-- tlsca.org2.example.com-cert.pem
`-- users
|-- [email protected]
`-- [email protected]
According to the definitions in crypto-config.yaml, the generated crypto-config directory includes a multi-level directory structure. The ordererOrganizations directory includes identity information for the Orderer organization (1 Orderer node); the peerOrganizations directory contains the relevant identity information for all Peer node organizations (2 organizations, 4 nodes). The most critical is the msp directory, which represents the identity information of the entities.
For the Orderer node, the contents of the directory crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com (including the msp and tls subdirectories) need to be copied to the /etc/hyperledger/fabric path of the Orderer node (consistent with the Orderer's own configuration).
For Peer nodes, the corresponding certificate files from the peerOrganizations directory need to be copied. For example, for org1's peer0, the contents of the directory crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com (including msp and tls) need to be copied to the /etc/hyperledger/fabric path of Peer0 (consistent with the Peer’s own configuration).
For client nodes, to facilitate operations, the complete crypto-config directory can be copied to the /etc/hyperledger/fabric path (consistent with the configuration in configtx.yaml).
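The copy steps above can be sketched as a shell script. The paths mirror those in the text, but the temporary directories below stand in for the nodes' real /etc/hyperledger/fabric paths so the commands can run anywhere; in an actual deployment the material would typically be transferred to remote machines (e.g. with scp or a configuration-management tool):

```shell
set -e

WORK=$(mktemp -d)
CRYPTO="$WORK/crypto-config"
ORDERER_HOME="$WORK/orderer/etc/hyperledger/fabric"   # stand-in for the Orderer node
PEER0_HOME="$WORK/peer0/etc/hyperledger/fabric"       # stand-in for org1's peer0

# Mock the relevant slice of cryptogen's output (on a real network these
# directories come from running cryptogen as shown above)
mkdir -p "$CRYPTO/ordererOrganizations/example.com/orderers/orderer.example.com/msp" \
         "$CRYPTO/ordererOrganizations/example.com/orderers/orderer.example.com/tls" \
         "$CRYPTO/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp" \
         "$CRYPTO/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls"
mkdir -p "$ORDERER_HOME" "$PEER0_HOME"

# Orderer node: copy its msp and tls subdirectories
cp -r "$CRYPTO/ordererOrganizations/example.com/orderers/orderer.example.com/msp" \
      "$CRYPTO/ordererOrganizations/example.com/orderers/orderer.example.com/tls" \
      "$ORDERER_HOME/"

# Peer node (org1's peer0): copy the corresponding msp and tls
cp -r "$CRYPTO/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp" \
      "$CRYPTO/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls" \
      "$PEER0_HOME/"

ls "$ORDERER_HOME" "$PEER0_HOME"
```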
Note: Currently, once the organizational structure has been generated, any later modification requires manually adjusting the certificates, so it is essential to plan the consortium's structure in advance. Dynamic online adjustment of organizational structures and node identities is expected in future releases.
2. Generate Initial Block for Ordering Service Startup
When the Orderer node starts, it can be configured to use a pre-generated initial (genesis) block file as the initial configuration of the system channel. The initial block contains the configuration of the Ordering service and the consortium information, and can be generated with the configtxgen tool. Generation relies on the /etc/hyperledger/fabric/configtx.yaml file, which defines the configuration and topology of the entire network.
When writing the configtx.yaml configuration file, you can refer to the examples in the Fabric code (such as those under the examples/e2e_cli or sampleconfig paths). Here, the following content will be used for generation:
Profiles:
    TwoOrgsOrdererGenesis:
        Orderer:
            <<: *OrdererDefaults
            Organizations:
                - *OrdererOrg
        Consortiums:
            SampleConsortium:
                Organizations:
                    - *Org1
                    - *Org2
    TwoOrgsChannel:
        Consortium: SampleConsortium
        Application:
            <<: *ApplicationDefaults
            Organizations:
                - *Org1
                - *Org2

Organizations:
    - &OrdererOrg
        Name: OrdererOrg
        ID: OrdererMSP
        MSPDir: crypto-config/ordererOrganizations/example.com/msp
        BCCSP:
            Default: SW
            SW:
                Hash: SHA2
                Security: 256
                FileKeyStore:
                    KeyStore:

    - &Org1
        Name: Org1MSP
        ID: Org1MSP
        MSPDir: crypto-config/peerOrganizations/org1.example.com/msp
        BCCSP:
            Default: SW
            SW:
                Hash: SHA2
                Security: 256
                FileKeyStore:
                    KeyStore:
        AnchorPeers:
            - Host: peer0.org1.example.com
              Port: 7051

    - &Org2
        Name: Org2MSP
        ID: Org2MSP
        MSPDir: crypto-config/peerOrganizations/org2.example.com/msp
        BCCSP:
            Default: SW
            SW:
                Hash: SHA2
                Security: 256
                FileKeyStore:
                    KeyStore:
        AnchorPeers:
            - Host: peer0.org2.example.com
              Port: 7051

Orderer: &OrdererDefaults
    OrdererType: solo
    Addresses:
        - orderer.example.com:7050
    BatchTimeout: 2s
    BatchSize:
        MaxMessageCount: 10
        AbsoluteMaxBytes: 99 MB
        PreferredMaxBytes: 512 KB
    Kafka:
        Brokers:
            - 127.0.0.1:9092
    Organizations:

Application: &ApplicationDefaults
    Organizations:
This configuration file defines two profiles: TwoOrgsOrdererGenesis and TwoOrgsChannel. The former can be used to generate the initial block file for the Ordering service. Executing the following command, which specifies the TwoOrgsOrdererGenesis profile defined in configtx.yaml, generates the initial block file for the Ordering service's system channel. Note that the ordering service type here uses the simple solo mode; in production environments, a Kafka cluster service can be used.
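As a sketch, the generation step looks roughly like the following (the output file name is illustrative; configtxgen reads configtx.yaml from the directory given by FABRIC_CFG_PATH, and the check for the binary is only there so the snippet degrades gracefully where Fabric is not installed):

```shell
# Point configtxgen at the directory containing configtx.yaml
export FABRIC_CFG_PATH=/etc/hyperledger/fabric

if command -v configtxgen >/dev/null 2>&1; then
    # Generate the system-channel genesis block from the TwoOrgsOrdererGenesis profile
    configtxgen -profile TwoOrgsOrdererGenesis -outputBlock orderer.genesis.block
else
    echo "configtxgen not found on PATH; install the Fabric binaries first"
fi
```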