diff --git a/docs/en/10-cluster/02-cluster-mgmt.md b/docs/en/10-cluster/02-cluster-mgmt.md
index 674c92e2766a4eb304079140af19c8efea72d55e..bd3386c41161fc55b4bedcecd6ad3ab5c35be8b6 100644
--- a/docs/en/10-cluster/02-cluster-mgmt.md
+++ b/docs/en/10-cluster/02-cluster-mgmt.md
@@ -54,14 +54,14 @@ Database changed.
 
 taos> show vgroups;
  vgId | tables | status | onlines | v1_dnode | v1_status | compacting |
 ==========================================================================================
- 14 | 38000 | ready | 1 | 1 | master | 0 |
- 15 | 38000 | ready | 1 | 1 | master | 0 |
- 16 | 38000 | ready | 1 | 1 | master | 0 |
- 17 | 38000 | ready | 1 | 1 | master | 0 |
- 18 | 37001 | ready | 1 | 1 | master | 0 |
- 19 | 37000 | ready | 1 | 1 | master | 0 |
- 20 | 37000 | ready | 1 | 1 | master | 0 |
- 21 | 37000 | ready | 1 | 1 | master | 0 |
+ 14 | 38000 | ready | 1 | 1 | leader | 0 |
+ 15 | 38000 | ready | 1 | 1 | leader | 0 |
+ 16 | 38000 | ready | 1 | 1 | leader | 0 |
+ 17 | 38000 | ready | 1 | 1 | leader | 0 |
+ 18 | 37001 | ready | 1 | 1 | leader | 0 |
+ 19 | 37000 | ready | 1 | 1 | leader | 0 |
+ 20 | 37000 | ready | 1 | 1 | leader | 0 |
+ 21 | 37000 | ready | 1 | 1 | leader | 0 |
 Query OK, 8 row(s) in set (0.001154s)
 ```
@@ -161,14 +161,14 @@ First `show vgroups` is executed to show the vgroup distribution.
 ```
 taos> show vgroups;
  vgId | tables | status | onlines | v1_dnode | v1_status | compacting |
 ==========================================================================================
- 14 | 38000 | ready | 1 | 3 | master | 0 |
- 15 | 38000 | ready | 1 | 3 | master | 0 |
- 16 | 38000 | ready | 1 | 3 | master | 0 |
- 17 | 38000 | ready | 1 | 3 | master | 0 |
- 18 | 37001 | ready | 1 | 3 | master | 0 |
- 19 | 37000 | ready | 1 | 1 | master | 0 |
- 20 | 37000 | ready | 1 | 1 | master | 0 |
- 21 | 37000 | ready | 1 | 1 | master | 0 |
+ 14 | 38000 | ready | 1 | 3 | leader | 0 |
+ 15 | 38000 | ready | 1 | 3 | leader | 0 |
+ 16 | 38000 | ready | 1 | 3 | leader | 0 |
+ 17 | 38000 | ready | 1 | 3 | leader | 0 |
+ 18 | 37001 | ready | 1 | 3 | leader | 0 |
+ 19 | 37000 | ready | 1 | 1 | leader | 0 |
+ 20 | 37000 | ready | 1 | 1 | leader | 0 |
+ 21 | 37000 | ready | 1 | 1 | leader | 0 |
 Query OK, 8 row(s) in set (0.001314s)
 ```
@@ -191,14 +191,14 @@ Query OK, 0 row(s) in set (0.000575s)
 
 taos> show vgroups;
  vgId | tables | status | onlines | v1_dnode | v1_status | v2_dnode | v2_status | compacting |
 =================================================================================================================
- 14 | 38000 | ready | 1 | 3 | master | 0 | NULL | 0 |
- 15 | 38000 | ready | 1 | 3 | master | 0 | NULL | 0 |
- 16 | 38000 | ready | 1 | 3 | master | 0 | NULL | 0 |
- 17 | 38000 | ready | 1 | 3 | master | 0 | NULL | 0 |
- 18 | 37001 | ready | 2 | 1 | slave | 3 | master | 0 |
- 19 | 37000 | ready | 1 | 1 | master | 0 | NULL | 0 |
- 20 | 37000 | ready | 1 | 1 | master | 0 | NULL | 0 |
- 21 | 37000 | ready | 1 | 1 | master | 0 | NULL | 0 |
+ 14 | 38000 | ready | 1 | 3 | leader | 0 | NULL | 0 |
+ 15 | 38000 | ready | 1 | 3 | leader | 0 | NULL | 0 |
+ 16 | 38000 | ready | 1 | 3 | leader | 0 | NULL | 0 |
+ 17 | 38000 | ready | 1 | 3 | leader | 0 | NULL | 0 |
+ 18 | 37001 | ready | 2 | 1 | follower | 3 | leader | 0 |
+ 19 | 37000 | ready | 1 | 1 | leader | 0 | NULL | 0 |
+ 20 | 37000 | ready | 1 | 1 | leader | 0 | NULL | 0 |
+ 21 | 37000 | ready | 1 | 1 | leader | 0 | NULL | 0 |
 Query OK, 8 row(s) in set (0.001242s)
 ```
@@ -207,7 +207,7 @@ It can be seen from above output that vgId 18 has been moved from dnode 3 to dno
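+
+Assuming the TDengine 2.x syntax for manual vnode migration, a move like the one above can be requested with a statement of the following shape (a sketch; the vgroup and dnode IDs must be taken from your own `show vgroups` output):
+
+```sql
+-- Ask dnode 3 to hand vgroup 18 over to dnode 1.
+ALTER DNODE 3 BALANCE "VNODE:18-DNODE:1";
+```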
 
 :::note
 - Manual load balancing can only be performed when the automatic load balancing is disabled, i.e. `balance` is set to 0.
-- Only a vnode in normal state, i.e. master or slave, can be moved. vnode can't be moved when its in status offline, unsynced or syncing.
+- Only a vnode in normal state, i.e. leader or follower, can be moved. A vnode can't be moved while it is in offline, unsynced or syncing status.
 - Before moving a vnode, it's necessary to make sure the target dnode has enough resources: CPU, memory and disk.
 :::
 
diff --git a/docs/en/10-cluster/03-ha-and-lb.md b/docs/en/10-cluster/03-ha-and-lb.md
index bd718eef9f8dc181628132de831dbca2af59d158..9780e8f6c68904e444d07c6a8c87b095c6b70ead 100644
--- a/docs/en/10-cluster/03-ha-and-lb.md
+++ b/docs/en/10-cluster/03-ha-and-lb.md
@@ -27,7 +27,7 @@ There may be multiple dnodes in a cluster, but only one mnode can be started in
 SHOW MNODES;
 ```
 
-The end point and role/status (master, slave, unsynced, or offline) of all mnodes can be shown by the above command. When the first dnode is started in a cluster, there must be one mnode in this dnode. Without at least one mnode, the cluster cannot work. If `numOfMNodes` is configured to 2, another mnode will be started when the second dnode is launched.
+The end point and role/status (leader, follower, unsynced, or offline) of all mnodes can be shown by the above command. When the first dnode is started in a cluster, there must be one mnode in this dnode. Without at least one mnode, the cluster cannot work. If `numOfMNodes` is configured to 2, another mnode will be started when the second dnode is launched.
 
 For the high availability of mnode, `numOfMnodes` needs to be configured to 2 or a higher value. Because the data consistency between mnodes must be guaranteed, the replica confirmation parameter `quorum` is set to 2 automatically if `numOfMNodes` is set to 2 or higher.
 
@@ -58,13 +58,13 @@ When a dnode is offline, it can be detected by the TDengine cluster. There are t
 - If the dnode has been offline over the threshold configured in `offlineThreshold` in `taos.cfg`, the dnode will be removed from the cluster automatically. A system alert will be generated and automatic load balancing will be triggered if `balance` is set to 1. When the removed dnode is restarted and becomes online, it will not join the cluster automatically. The system administrator has to manually join the dnode to the cluster.
 
 :::note
-If all the vnodes in a vgroup (or mnodes in mnode group) are in offline or unsynced status, the master node can only be voted on, after all the vnodes or mnodes in the group become online and can exchange status. Following this, the vgroup (or mnode group) is able to provide service.
+If all the vnodes in a vgroup (or mnodes in an mnode group) are in offline or unsynced status, a leader can only be elected after all the vnodes or mnodes in the group have come back online and can exchange status information. Following this, the vgroup (or mnode group) is able to provide service.
 :::
 
 ## Arbitrator
 
-The "arbitrator" component is used to address the special case when the number of replicas is set to an even number like 2,4 etc. If half of the vnodes in a vgroup don't work, it is impossible to vote and select a master node. This situation also applies to mnodes if the number of mnodes is set to an even number like 2,4 etc.
+The "arbitrator" component is used to address the special case when the number of replicas is set to an even number like 2, 4, etc. If half of the vnodes in a vgroup don't work, it is impossible to vote and select a leader node. This situation also applies to mnodes if the number of mnodes is set to an even number like 2, 4, etc.
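+
+For example, a database created with an even number of replicas, as in the sketch below (the database name `mydb` is illustrative), is exactly such a case: with two replicas, a single failed vnode leaves only half of the vgroup alive, which is not enough to elect a leader.
+
+```sql
+CREATE DATABASE mydb REPLICA 2;
+```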
 
 To resolve this problem, a new arbitrator component named `tarbitrator`, an abbreviation of TDengine Arbitrator, was introduced. The `tarbitrator` simulates a vnode or mnode but it's only responsible for network communication and doesn't handle any actual data access. As long as more than half of the vnode or mnode, including Arbitrator, are available the vnode group or mnode group can provide data insertion or query services normally.
 
diff --git a/docs/en/14-reference/07-tdinsight/assets/15146-tdengine-monitor-dashboard.json b/docs/en/14-reference/07-tdinsight/assets/15146-tdengine-monitor-dashboard.json
index f651983528ca824b4e6b14586aac5a5bfb4ecab8..54dc1062d6440cc0fc7b8c69d9e4c6b53e4cd01e 100644
--- a/docs/en/14-reference/07-tdinsight/assets/15146-tdengine-monitor-dashboard.json
+++ b/docs/en/14-reference/07-tdinsight/assets/15146-tdengine-monitor-dashboard.json
@@ -211,7 +211,7 @@
       ],
       "timeFrom": null,
       "timeShift": null,
-      "title": "Master MNode",
+      "title": "Leader MNode",
       "transformations": [
         {
           "id": "filterByValue",
@@ -221,7 +221,7 @@
             "config": {
               "id": "regex",
               "options": {
-                "value": "master"
+                "value": "leader"
               }
             },
             "fieldName": "role"
@@ -300,7 +300,7 @@
       ],
       "timeFrom": null,
       "timeShift": null,
-      "title": "Master MNode Create Time",
+      "title": "Leader MNode Create Time",
       "transformations": [
         {
           "id": "filterByValue",
@@ -310,7 +310,7 @@
             "config": {
               "id": "regex",
               "options": {
-                "value": "master"
+                "value": "leader"
               }
             },
             "fieldName": "role"
diff --git a/docs/en/14-reference/07-tdinsight/assets/tdengine-grafana-7.x.json b/docs/en/14-reference/07-tdinsight/assets/tdengine-grafana-7.x.json
index b4254c428b28a0084e54b5e3c509dd2e0ec651b9..1add8522a712aa2cfef6187e577c42d205432b66 100644
--- a/docs/en/14-reference/07-tdinsight/assets/tdengine-grafana-7.x.json
+++ b/docs/en/14-reference/07-tdinsight/assets/tdengine-grafana-7.x.json
@@ -153,7 +153,7 @@
       ],
       "timeFrom": null,
       "timeShift": null,
-      "title": "Master MNode",
+      "title": "Leader MNode",
       "transformations": [
         {
           "id": "filterByValue",
@@ -163,7 +163,7 @@
             "config": {
               "id": "regex",
               "options": {
-                "value": "master"
+                "value": "leader"
               }
             },
             "fieldName": "role"
@@ -246,7 +246,7 @@
       ],
       "timeFrom": null,
       "timeShift": null,
-      "title": "Master MNode Create Time",
+      "title": "Leader MNode Create Time",
       "transformations": [
         {
           "id": "filterByValue",
@@ -256,7 +256,7 @@
             "config": {
               "id": "regex",
               "options": {
-                "value": "master"
+                "value": "leader"
              }
            },
            "fieldName": "role"
diff --git a/docs/en/14-reference/07-tdinsight/index.md b/docs/en/14-reference/07-tdinsight/index.md
index cebfafa225e6e8de75ff84bb51fa664784177910..e74c9de7b2aa71278a99d45f250e0dcaf86d4704 100644
--- a/docs/en/14-reference/07-tdinsight/index.md
+++ b/docs/en/14-reference/07-tdinsight/index.md
@@ -274,8 +274,8 @@ Details of the metrics are as follows.
 This section contains the current information and status of the cluster, the alert information is also here (from left to right, top to bottom).
 
 - **First EP**: the `firstEp` setting in the current TDengine cluster.
-- **Version**: TDengine server version (master mnode).
-- **Master Uptime**: The time elapsed since the current Master MNode was elected as Master.
+- **Version**: TDengine server version (leader mnode).
+- **Leader Uptime**: The time elapsed since the current Leader MNode was elected as Leader.
 - **Expire Time** - Enterprise version expiration time.
 - **Used Measuring Points** - The number of measuring points used by the Enterprise Edition.
 - **Databases** - The number of databases.
@@ -333,7 +333,7 @@ Data node resource usage display with repeated multiple rows for the variable `$
 2. **Has MNodes?**: whether the current dnode is a mnode.
 3. **CPU Cores**: the number of CPU cores.
 4. **VNodes Number**: the number of VNodes in the current dnode.
-5. **VNodes Masters**: the number of vnodes in the master role.
+5. **VNodes Masters**: the number of vnodes in the leader role.
 6. **Current CPU Usage of taosd**: CPU usage rate of taosd processes.
 7. **Current Memory Usage of taosd**: memory usage of taosd processes.
 8. **Disk Used**: The total disk usage percentage of the taosd data directory.
diff --git a/docs/en/21-tdinternal/01-arch.md b/docs/en/21-tdinternal/01-arch.md
index 4d8bed4d2d6b3a0404e10213aeab599767325cc2..d7d472eb98a22325e850f4f040dccaa34d02bbff 100644
--- a/docs/en/21-tdinternal/01-arch.md
+++ b/docs/en/21-tdinternal/01-arch.md
@@ -22,9 +22,9 @@ A complete TDengine system runs on one or more physical nodes. Logically, it inc
 **Virtual node (vnode)**: To better support data sharding, load balancing and prevent data from overheating or skewing, data nodes are virtualized into multiple virtual nodes (vnode, V2, V3, V4, etc. in the figure). Each vnode is a relatively independent work unit, which is the basic unit of time-series data storage and has independent running threads, memory space and persistent storage path. A vnode contains a certain number of tables (data collection points). When a new table is created, the system checks whether a new vnode needs to be created. The number of vnodes that can be created on a data node depends on the capacity of the hardware of the physical node where the data node is located. A vnode belongs to only one DB, but a DB can have multiple vnodes. In addition to the stored time-series data, a vnode also stores the schema and tag values of the included tables. A virtual node is uniquely identified in the system by the EP of the data node and the VGroup ID to which it belongs and is created and managed by the management node.
 
-**Management node (mnode)**: A virtual logical unit responsible for monitoring and maintaining the running status of all data nodes and load balancing among nodes (M in the figure). At the same time, the management node is also responsible for the storage and management of metadata (including users, databases, tables, static tags, etc.), so it is also called Meta Node. Multiple (up to 5) mnodes can be configured in a TDengine cluster, and they are automatically constructed into a virtual management node group (M0, M1, M2 in the figure). The master/slave mechanism is adopted for the mnode group and the data synchronization is carried out in a strongly consistent way. Any data update operation can only be executed on the master. The creation of mnode cluster is completed automatically by the system without manual intervention. There is at most one mnode on each dnode, which is uniquely identified by the EP of the data node to which it belongs. Each dnode automatically obtains the EP of the dnode where all mnodes in the whole cluster are located, through internal messaging interaction.
+**Management node (mnode)**: A virtual logical unit responsible for monitoring and maintaining the running status of all data nodes and load balancing among nodes (M in the figure). At the same time, the management node is also responsible for the storage and management of metadata (including users, databases, tables, static tags, etc.), so it is also called Meta Node. Multiple (up to 5) mnodes can be configured in a TDengine cluster, and they are automatically constructed into a virtual management node group (M0, M1, M2 in the figure). The leader/follower mechanism is adopted for the mnode group and the data synchronization is carried out in a strongly consistent way. Any data update operation can only be executed on the leader. The creation of the mnode cluster is completed automatically by the system without manual intervention. There is at most one mnode on each dnode, and it is uniquely identified by the EP of the data node to which it belongs. Each dnode automatically obtains the EP of the dnode where all mnodes in the whole cluster are located, through internal messaging interaction.
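+
+To check at runtime which dnodes host mnodes and which mnode currently holds the leader role, the `SHOW MNODES` command shown in the cluster operation chapter can be used. A minimal sketch:
+
+```sql
+-- Lists the end point and role (leader or follower) of every mnode.
+SHOW MNODES;
+```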
 
-**Virtual node group (VGroup)**: Vnodes on different data nodes can form a virtual node group to ensure the high availability of the system. The virtual node group is managed in a master/slave mechanism. Write operations can only be performed on the master vnode, and then replicated to slave vnodes, thus ensuring that one single replica of data is copied on multiple physical nodes. The number of virtual nodes in a vgroup equals the number of data replicas. If the number of replicas of a DB is N, the system must have at least N data nodes. The number of replicas can be specified by the parameter `“replica”` when creating a DB, and the default is 1. Using the multi-replication feature of TDengine, the same high data reliability can be achieved without the need for expensive storage devices such as disk arrays. Virtual node groups are created and managed by the management node, and the management node assigns a system unique ID, aka VGroup ID. If two virtual nodes have the same vnode group ID, it means that they belong to the same group and the data is backed up to each other. The number of virtual nodes in a virtual node group can be dynamically changed, allowing only one, that is, no data replication. VGroup ID is never changed. Even if a virtual node group is deleted, its ID will not be reused.
+**Virtual node group (VGroup)**: Vnodes on different data nodes can form a virtual node group to ensure the high availability of the system. The virtual node group is managed in a leader/follower mechanism. Write operations can only be performed on the leader vnode, and are then replicated to follower vnodes, thus ensuring that the same data is copied across multiple physical nodes. The number of virtual nodes in a vgroup equals the number of data replicas. If the number of replicas of a DB is N, the system must have at least N data nodes. The number of replicas can be specified by the parameter `“replica”` when creating a DB, and the default is 1. Using the multi-replication feature of TDengine, the same high data reliability can be achieved without the need for expensive storage devices such as disk arrays. Virtual node groups are created and managed by the management node, and the management node assigns each of them a cluster-unique ID, aka the VGroup ID. If two virtual nodes have the same vnode group ID, it means that they belong to the same group and the data is backed up to each other. The number of virtual nodes in a virtual node group can be changed dynamically, and it can be as low as one, in which case there is no data replication. The VGroup ID never changes. Even if a virtual node group is deleted, its ID will not be reused.
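+
+For example, the `replica` parameter mentioned above is given when the database is created. A sketch (the database name `demo` is illustrative):
+
+```sql
+-- Every vgroup of this database will consist of 3 vnodes placed on 3 different dnodes.
+CREATE DATABASE demo REPLICA 3;
+```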
 
 **TAOSC**: TAOSC is the driver provided by TDengine to applications. It is responsible for dealing with the interaction between application and cluster, and provides the native interface for the C/C++ language. It is also embedded in the JDBC, C #, Python, Go, Node.js language connection libraries. Applications interact with the whole cluster through TAOSC instead of directly connecting to data nodes in the cluster. This module is responsible for obtaining and caching metadata; forwarding requests for insertion, query, etc. to the correct data node; when returning the results to the application, TAOSC also needs to be responsible for the final level of aggregation, sorting, filtering and other operations. For JDBC, C/C++/C#/Python/Go/Node.js interfaces, this module runs on the physical node where the application is located. At the same time, in order to support the fully distributed RESTful interface, TAOSC has a running instance on each dnode of TDengine cluster.
 
@@ -62,13 +62,13 @@ To explain the relationship between vnode, mnode, TAOSC and application and thei
 
 1. Application initiates a request to insert data through JDBC, ODBC, or other APIs.
 2. TAOSC checks the cache to see if meta data exists for the table. If it does, it goes straight to Step 4. If not, TAOSC sends a get meta-data request to mnode.
 3. Mnode returns the meta-data of the table to TAOSC. Meta-data contains the schema of the table, and also the vgroup information to which the table belongs (the vnode ID and the End Point of the dnode where the table belongs. If the number of replicas is N, there will be N groups of End Points). If TAOSC does not receive a response from the mnode for a long time, and there are multiple mnodes, TAOSC will send a request to the next mnode.
-4. TAOSC initiates an insert request to master vnode.
+4. TAOSC initiates an insert request to the leader vnode.
 5. After vnode inserts the data, it gives a reply to TAOSC, indicating that the insertion is successful. If TAOSC doesn't get a response from vnode for a long time, TAOSC will treat this node as offline. In this case, if there are multiple replicas of the inserted database, TAOSC will issue an insert request to the next vnode in vgroup.
 6. TAOSC notifies APP that writing is successful.
 
 For Step 2 and 3, when TAOSC starts, it does not know the End Point of mnode, so it will directly initiate a request to the configured serving End Point of the cluster. If the dnode that receives the request does not have a mnode configured, it will reply with the mnode EP list, so that TAOSC will re-issue a request to obtain meta-data to the EP of another mnode.
 
-For Step 4 and 5, without caching, TAOSC can't recognize the master in the virtual node group, so assumes that the first vnode is the master and sends a request to it. If this vnode is not the master, it will reply to the actual master as a new target to which TAOSC shall send a request. Once a response of successful insertion is obtained, TAOSC will cache the information of master node.
+For Step 4 and 5, without caching, TAOSC can't recognize the leader in the virtual node group, so it assumes that the first vnode is the leader and sends a request to it. If this vnode is not the leader, it will reply with the actual leader as a new target to which TAOSC shall send a request. Once a response of successful insertion is obtained, TAOSC will cache the information of the leader node.
 
 The above describes the process of inserting data. The processes of querying and computing are the same. TAOSC encapsulates and hides all these complicated processes, and it is transparent to applications.
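+
+As an illustration of Step 1, the request handed to TAOSC is an ordinary SQL statement; the table name and values below are made up for the sketch:
+
+```sql
+-- TAOSC resolves which vgroup owns table d1001 (Steps 2 and 3), then
+-- forwards the write to that vgroup's leader vnode (Steps 4 and 5).
+INSERT INTO d1001 VALUES (NOW, 10.3);
+```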
@@ -119,65 +119,65 @@ The load balancing process does not require any manual intervention, and it is t
 
 ## Data Writing and Replication Process
 
-If a database has N replicas, a virtual node group has N virtual nodes. But only one is the Master and all others are slaves. When the application writes a new record to system, only the Master vnode can accept the writing request. If a slave vnode receives a writing request, the system will notifies TAOSC to redirect.
+If a database has N replicas, a virtual node group has N virtual nodes, but only one is the leader and all others are followers. When the application writes a new record to the system, only the leader vnode can accept the writing request. If a follower vnode receives a writing request, the system will notify TAOSC to redirect.
 
-### Master vnode Writing Process
-
-Master Vnode uses a writing process as follows:
-
-![TDengine Database Master Writing Process](write_master.webp)
-
+### Leader vnode Writing Process
+
+A leader vnode uses the following writing process: