@@ -284,7 +284,7 @@ SELECT COUNT(*) FROM d1001 WHERE ts >= '2017-7-14 00:00:00' AND ts < '2017-7-14
TDengine creates a separate table for each data collection point, but in practice it is often necessary to aggregate data across different collection points. To perform such aggregation efficiently, TDengine introduces the concept of the super table (STable). A super table represents one specific type of data collection point. It is a collection of tables in which every table has exactly the same schema, but each table carries its own static tags; there can be multiple tags, and they can be added, deleted or modified at any time. By specifying filter conditions on tags, an application can perform aggregation or statistics over all or a subset of the tables under one STable, which greatly simplifies application development. The process is illustrated in the figure below:
-
+
Figure 5: Diagram of multi-table aggregate query
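As an illustrative sketch of such a tag-based aggregation (the STable, tag and column names are borrowed from the "meters" example used elsewhere in these docs and are hypothetical here):

```sql
-- Aggregate over only the subtables whose "location" tag matches,
-- reporting one result per value of the "groupId" tag
SELECT AVG(current) FROM meters
  WHERE location = 'California.SanFrancisco'
  GROUP BY groupId;
```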
diff --git a/docs-cn/21-tdinternal/02-replica.md b/docs-cn/21-tdinternal/02-replica.md
index a0e3437c164b04b81c6e0009dfc69f5c44a602a9..25d1edab6e9b97be13c8675491cc90ed54520865 100644
--- a/docs-cn/21-tdinternal/02-replica.md
+++ b/docs-cn/21-tdinternal/02-replica.md
@@ -93,7 +93,7 @@ TDengine adopts a Master-Slave mode for synchronization; compared with the popular RAFT consistency
The detailed flow chart is as follows:
-
+
The specific rules for selecting the Master are as follows:
@@ -108,7 +108,7 @@ TDengine adopts a Master-Slave mode for synchronization; compared with the popular RAFT consistency
If vnode A is the master and vnode B is a slave, vnode A can accept write requests from clients while vnode B cannot. When vnode A receives a write request, it follows the process below:
-
+
1. The application performs a basic validity check on the write request; if it passes, the request packet is stamped with a version number (version, monotonically increasing)
2. The application wraps the versioned write request in a WAL head and writes it to the WAL (Write Ahead Log)
@@ -143,7 +143,7 @@ TDengine adopts a Master-Slave mode for synchronization; compared with the popular RAFT consistency
The whole data recovery process consists of two major steps: first recover the archived data (file), then recover the WAL. The detailed process is as follows:
-
+
1. Send a sync req to the master node over the already established TCP connection
2. After receiving the sync req, the master, acting as a client, actively establishes a new TCP connection (syncFd) to vnode B dedicated to synchronization
diff --git a/docs-cn/21-tdinternal/03-taosd.md b/docs-cn/21-tdinternal/03-taosd.md
index 40959406b4e6f7b47808de7acca821278fd6cf37..0cf0a1aaa222e82f7ca6cc4f0314aa5a50442924 100644
--- a/docs-cn/21-tdinternal/03-taosd.md
+++ b/docs-cn/21-tdinternal/03-taosd.md
@@ -9,7 +9,7 @@ title: The Design of taosd
taosd contains modules such as rpc, dnode, vnode, tsdb, query, cq, sync, wal, mnode, http and monitor, as shown in the figure below:
-
+
The entry point of taosd is the dnode module, which then starts the other modules, including the optionally configured http and monitor modules. All messages exchanged between taosc and dnodes, or between dnodes, go through the rpc module. According to the type of a received message, the dnode module dispatches it to the message queue of a vnode or mnode, or consumes it itself. The dnode worker threads consume the messages in the queues and hand them over to the mnode or vnode for processing. Each module is briefly described below.
@@ -44,13 +44,13 @@ The RPC module also provides data compression: if the number of bytes in a packet exceeds the system
Message consumption in taosd is controlled by dnode through read and write thread pools; it is the hub of the system. The diagram of the structures inside this module is as follows:
-
+
## VNODE Module
A vnode is an independent unit of data storage and query logic. Because a vnode can host only one DB, there is no concept of account, DB or user inside a vnode. To achieve better modularity and encapsulation, and to allow for future extension, it contains many submodules: TSDB for storage, query for queries, sync for data replication, WAL for the database log, cq (continuous query) for continuous queries, and event for event-triggered stream computing. These submodules interact only with the vnode module and have no call relationship with any other module. The module diagram is as follows:
-
+
Downward, the vnode module interacts with dnodeVRead and dnodeVWrite; upward, it interacts with its submodules. Its main functions are:
diff --git a/docs-cn/25-application/01-telegraf.md b/docs-cn/25-application/01-telegraf.md
index 5bfc94c53410f6142b3bc24f696334c334cde933..95df8699ef85b02d6e9dba398c787644fc9089b2 100644
--- a/docs-cn/25-application/01-telegraf.md
+++ b/docs-cn/25-application/01-telegraf.md
@@ -16,7 +16,7 @@ IT operations monitoring data is usually time-sensitive, for example
This article describes how to quickly build an IT operations monitoring system based on TDengine + Telegraf + Grafana, without writing a single line of code, by modifying just a few lines in configuration files. The architecture is shown in the figure below:
-
+
## Installation Steps
@@ -75,7 +75,7 @@ sudo systemctl start telegraf
Click the gear icon on the left and select `Plugins`; you should find the TDengine data source plugin icon.
Click the plus icon on the left and select `Import`, download the dashboard JSON file from `https://github.com/taosdata/grafanaplugin/blob/master/examples/telegraf/grafana/dashboards/telegraf-dashboard-v0.1.0.json` and import it. After that you should see a dashboard like the following:
-![IT-DevOps-Solutions-telegraf-dashboard.webp]./IT-DevOps-Solutions-telegraf-dashboard.webp)
+![IT-DevOps-Solutions-telegraf-dashboard.webp](./IT-DevOps-Solutions-telegraf-dashboard.webp)
## Summary
diff --git a/docs-cn/25-application/02-collectd.md b/docs-cn/25-application/02-collectd.md
index 5966f2d6544c78adb806d51e8a4157ba7dc420e9..78c61bb969092d7040ddcb3d02ce7bd29a784858 100644
--- a/docs-cn/25-application/02-collectd.md
+++ b/docs-cn/25-application/02-collectd.md
@@ -16,7 +16,7 @@ IT operations monitoring data is usually time-sensitive, for example
This article describes how to quickly build an IT operations monitoring system based on TDengine + collectd / StatsD + Grafana, without writing a single line of code, by modifying just a few lines in configuration files. The architecture is shown in the figure below:
-
+
## Installation Steps
@@ -81,12 +81,12 @@ Add to the repeater section: { host:'', port:
diff --git a/docs-en/05-get-started/index.md b/docs-en/05-get-started/index.md
index 858dd6ac56e3a523220903fc63335dfdc573b752..56958ef3ec1c206ee0cff45c67fd3c3a6fa6753a 100644
--- a/docs-en/05-get-started/index.md
+++ b/docs-en/05-get-started/index.md
@@ -130,7 +130,7 @@ After TDengine server is running, execute `taosBenchmark` (previously named tao
taosBenchmark
```
-This command will create a super table "meters" under database "test". Under "meters", 10000 tables are created with names from "d0" to "d9999". Each table has 10000 rows and each row has four columns (ts, current, voltage, phase). Time stamp is starting from "2017-07-14 10:40:00 000" to "2017-07-14 10:40:09 999". Each table has tags "location" and "groupId". groupId is set 1 to 10 randomly, and location is set to "California.SanFrancisco" or "California.SanDieo".
+This command will create a super table "meters" under database "test". Under "meters", 10000 tables are created with names from "d0" to "d9999". Each table has 10000 rows and each row has four columns (ts, current, voltage, phase). Timestamps range from "2017-07-14 10:40:00 000" to "2017-07-14 10:40:09 999". Each table has tags "location" and "groupId". groupId is set randomly from 1 to 10, and location is set to "California.SanFrancisco" or "California.SanDiego".
This command will insert 100 million rows into the database quickly. The time needed depends on the hardware configuration; it takes only about a dozen seconds on a regular PC server.
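As a quick sanity check after the insertion finishes, you can aggregate across all subtables or filter by tag. This is a sketch assuming the default schema described above:

```sql
-- Total rows across all 10,000 subtables of the STable "meters"
SELECT COUNT(*) FROM test.meters;
-- Aggregates restricted to the subtables tagged groupId = 10
SELECT AVG(current), MAX(voltage), MIN(phase) FROM test.meters WHERE groupId = 10;
```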
diff --git a/docs-en/07-develop/08-udf.md b/docs-en/07-develop/08-udf.md
index 61639e34404477d3bb5785da129a1d922a4d020e..0ee61740cc8b8aad7dd39707a1153b022822f0a9 100644
--- a/docs-en/07-develop/08-udf.md
+++ b/docs-en/07-develop/08-udf.md
@@ -1,24 +1,31 @@
---
sidebar_label: UDF
title: User Defined Functions
-description: "Scalar functions and aggregate functions developed by users can be utilized by the query framework to expand the query capability"
+description: "Scalar functions and aggregate functions developed by users can be utilized by the query framework to expand query capability"
---
-In some use cases, the query capability required by application programs can't be achieved directly by builtin functions. With UDF, the functions developed by users can be utilized by query framework to meet some special requirements. UDF normally takes one column of data as input, but can also support the result of sub query as input.
+In some use cases, built-in functions are not adequate for the query capability required by application programs. With UDF, the functions developed by users can be utilized by the query framework to meet business and application requirements. UDF normally takes one column of data as input, but can also support the result of a sub-query as input.
-From version 2.2.0.0, UDF programmed in C/C++ language can be supported by TDengine.
+From version 2.2.0.0, UDFs written in C/C++ are supported by TDengine.
-Two kinds of functions can be implemented by UDF: scalar function and aggregate function.
-## Define UDF
+## Types of UDF
+
+Two kinds of functions can be implemented by UDF: scalar functions and aggregate functions.
+
+Scalar functions return one output row for each input row, while aggregate functions return either 0 or 1 row in total.
+
+In the case of a scalar function, you only have to implement the "normal" function template.
+
+In the case of an aggregate function, in addition to the "normal" function, you also need to implement the "merge" and "finalize" function templates even if the implementation is empty. This will become clear in the sections below.
### Scalar Function
-Below function template can be used to define your own scalar function.
+As mentioned earlier, a scalar UDF only has to implement the "normal" function template. The function template below can be used to define your own scalar function.
`void udfNormalFunc(char* data, short itype, short ibytes, int numOfRows, long long* ts, char* dataOutput, char* interBuf, char* tsOutput, int* numOfOutput, short otype, short obytes, SUdfInit* buf)`
-`udfNormalFunc` is the place holder of function name, a function implemented based on the above template can be used to perform scalar computation on data rows. The parameters are fixed to control the data exchange between UDF and TDengine.
+`udfNormalFunc` is the placeholder for a function name. A function implemented based on the above template can be used to perform scalar computation on data rows. The parameters are fixed to control the data exchange between a UDF and TDengine.
- Definitions of the parameters:
@@ -30,20 +37,24 @@ Below function template can be used to define your own scalar function.
- numOfRows:the number of rows in the input data
- ts: the column of timestamp corresponding to the input data
- dataOutput:the buffer for output data, total size is `obytes * numOfRows`
- - interBuf:the buffer for intermediate result, its size is specified by `BUFSIZE` parameter when creating a UDF. It's normally used when the intermediate result is not same as the final result, it's allocated and freed by TDengine.
+ - interBuf:the buffer for an intermediate result. Its size is specified by the `BUFSIZE` parameter when creating a UDF. It's normally used when the intermediate result is not the same as the final result. This buffer is allocated and freed by TDengine.
- tsOutput:the column of timestamps corresponding to the output data; it can be used to output timestamp together with the output data if it's not NULL
- numOfOutput:the number of rows in output data
- buf:for the state exchange between UDF and TDengine
- [add_one.c](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/add_one.c) is one example of the simplest UDF implementations, i.e. one instance of the above `udfNormalFunc` template. It adds one to each value of a column passed in which can be filtered using `where` clause and outputs the result.
+ [add_one.c](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/add_one.c) is one example of a very simple UDF implementation, i.e. one instance of the above `udfNormalFunc` template. It adds one to each value of a passed-in column, which can be filtered using the `where` clause, and outputs the result.
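Once compiled and registered (see "Create and Use UDF" below), a scalar UDF is invoked like any built-in function. A hypothetical call, with placeholder table and column names:

```sql
-- add_one is applied row by row: one output row per input row
SELECT add_one(field1) FROM tb_name WHERE field1 > 0;
```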
### Aggregate Function
-Below function template can be used to define your own aggregate function.
+For an aggregate UDF, as mentioned earlier, you must implement a "normal" function template (described above) and also implement the "merge" and "finalize" templates.
-`void abs_max_merge(char* data, int32_t numOfRows, char* dataOutput, int32_t* numOfOutput, SUdfInit* buf)`
+#### Merge Function Template
-`udfMergeFunc` is the place holder of function name, the function implemented with the above template is used to aggregate the intermediate result, only can be used in the aggregate query for STable.
+The function template below can be used to define your own merge function for an aggregate UDF.
+
+`void udfMergeFunc(char* data, int32_t numOfRows, char* dataOutput, int32_t* numOfOutput, SUdfInit* buf)`
+
+`udfMergeFunc` is the placeholder for a function name. The function implemented with the above template is used to aggregate intermediate results and can only be used in aggregate queries for STable.
Definitions of the parameters:
@@ -53,17 +64,11 @@ Definitions of the parameters:
- numOfOutput:number of rows in the output data
- buf:for the state exchange between UDF and TDengine
-[abs_max.c](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/abs_max.c) is an user defined aggregate function to get the maximum from the absolute value of a column.
-
-The internal processing is that the data affected by the select statement will be divided into multiple row blocks and `udfNormalFunc`, i.e. `abs_max` in this case, is performed on each row block to generate the intermediate of each sub table, then `udfMergeFunc`, i.e. `abs_max_merge` in this case, is performed on the intermediate result of sub tables to aggregate to generate the final or intermediate result of STable. The intermediate result of STable is finally processed by `udfFinalizeFunc` to generate the final result, which contain either 0 or 1 row.
-
-Other typical scenarios, like covariance, can also be achieved by aggregate UDF.
+#### Finalize Function Template
-### Finalize
+The function template below can be used to finalize the result of your own UDF; it is normally needed when interBuf is used.
-Below function template can be used to finalize the result of your own UDF, normally used when interBuf is used.
-
-`void abs_max_finalize(char* dataOutput, char* interBuf, int* numOfOutput, SUdfInit* buf)`
+`void udfFinalizeFunc(char* dataOutput, char* interBuf, int* numOfOutput, SUdfInit* buf)`
`udfFinalizeFunc` is the placeholder for a function name. The definitions of the parameters are as below:
@@ -72,47 +77,64 @@ Below function template can be used to finalize the result of your own UDF, norm
- numOfOutput:the number of output rows, which can only be 0 or 1 for an aggregate function
- buf:for state exchange between UDF and TDengine
-## UDF Conventions
+### Example abs_max.c
+
+[abs_max.c](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/abs_max.c) is an example of a user defined aggregate function to get the maximum from the absolute values of a column.
+
+The internal processing happens as follows. The results of the select statement are divided into multiple row blocks and `udfNormalFunc`, i.e. `abs_max` in this case, is performed on each row block to generate the intermediate results for each sub table. Then `udfMergeFunc`, i.e. `abs_max_merge` in this case, is performed on the intermediate result of sub tables to aggregate and generate the final or intermediate result of STable. The intermediate result of STable is finally processed by `udfFinalizeFunc`, i.e. `abs_max_finalize` in this example, to generate the final result, which contains either 0 or 1 row.
+
+Other typical aggregate functions, such as covariance, can also be implemented using aggregate UDF.
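As a usage sketch, assuming `abs_max` has been registered as an aggregate UDF (see "Create and Use UDF" below), a query over an STable exercises exactly the pipeline just described (the table and column names are placeholders):

```sql
-- abs_max runs on each row block, abs_max_merge combines the
-- per-subtable intermediates, abs_max_finalize emits 0 or 1 row
SELECT abs_max(c1) FROM stb_name;
```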
-The naming of 3 kinds of UDF, i.e. udfNormalFunc, udfMergeFunc, and udfFinalizeFunc is required to have same prefix, i.e. the actual name of udfNormalFunc, which means udfNormalFunc doesn't need a suffix following the function name. While udfMergeFunc should be udfNormalFunc followed by `_merge`, udfFinalizeFunc should be udfNormalFunc followed by `_finalize`. The naming convention is part of UDF framework, TDengine follows this convention to invoke corresponding actual functions.\
+## UDF Naming Conventions
-According to the kind of UDF to implement, the functions that need to be implemented are different.
+The naming convention for the 3 kinds of function templates required by UDF is as follows:
+ - udfNormalFunc, udfMergeFunc, and udfFinalizeFunc are required to have the same prefix, i.e. the actual name of udfNormalFunc. The udfNormalFunc doesn't need a suffix following the function name.
+ - udfMergeFunc should be udfNormalFunc followed by `_merge`
+ - udfFinalizeFunc should be udfNormalFunc followed by `_finalize`.
+
+The naming convention is part of TDengine's UDF framework. TDengine follows this convention to invoke the corresponding actual functions.
-- Scalar function:udfNormalFunc is required
-- Aggregate function:udfNormalFunc, udfMergeFunc (if query on STable) and udfFinalizeFunc are required
+Depending on whether you are creating a scalar UDF or aggregate UDF, the functions that you need to implement are different.
-To be more accurate, assuming we want to implement a UDF named "foo". If the function is a scalar function, what we really need to implement is `foo`; if the function is aggregate function, we need to implement `foo`, `foo_merge`, and `foo_finalize`. For aggregate UDF, even though one of the three functions is not necessary, there must be an empty implementation.
+- Scalar function: udfNormalFunc is required.
+- Aggregate function: udfNormalFunc, udfMergeFunc (if querying an STable) and udfFinalizeFunc are required.
+
+For clarity, assuming we want to implement a UDF named "foo":
+- If the function is a scalar function, we only need to implement the "normal" function template and it should be named simply `foo`.
+- If the function is an aggregate function, we need to implement `foo`, `foo_merge`, and `foo_finalize`. Note that for an aggregate UDF, even if one of the three functions is not actually needed, it must still be implemented, if only as an empty function.
## Compile UDF
-The source code of UDF in C can't be utilized by TDengine directly. UDF can only be loaded into TDengine after compiling to dynamically linked library.
+The source code of a UDF in C can't be utilized by TDengine directly. A UDF can only be loaded into TDengine after being compiled into a dynamically linked library (DLL).
-For example, the example UDF `add_one.c` mentioned in previous sections need to be compiled into DLL using below command on Linux Shell.
+For example, the UDF `add_one.c` mentioned earlier can be compiled into a DLL using the command below in a Linux shell.
```bash
gcc -g -O0 -fPIC -shared add_one.c -o add_one.so
```
-The generated DLL file `dd_one.so` can be used later when creating UDF. It's recommended to use GCC not older than 7.5.
+The generated DLL file `add_one.so` can be used later when creating a UDF. It's recommended to use GCC not older than 7.5.
## Create and Use UDF
+When a UDF is created in a TDengine instance, it is available across the databases in that instance.
+
### Create UDF
-SQL command can be executed on the same hos where the generated UDF DLL resides to load the UDF DLL into TDengine, this operation can't be done through REST interface or web console. Once created, all the clients of the current TDengine can use these UDF functions in their SQL commands. UDF are stored in the management node of TDengine. The UDFs loaded in TDengine would be still available after TDengine is restarted.
+A SQL command can be executed on the host where the generated UDF DLL resides to load the UDF DLL into TDengine. This operation cannot be done through the REST interface or web console. Once created, any client of the current TDengine can use these UDF functions in their SQL commands. UDFs are stored in the management node of TDengine and remain available after TDengine is restarted.
-When creating UDF, it needs to be clarified as either scalar function or aggregate function. If the specified type is wrong, the SQL statements using the function would fail with error. Besides, the input type and output type don't need to be same in UDF, but the input data type and output data type need to be consistent with the UDF definition.
+When creating a UDF, its type, i.e. scalar function or aggregate function, must be specified. If the specified type is wrong, the SQL statements using the function will fail with errors. The input type and output type don't need to be the same in a UDF, but the input data type and output data type must be consistent with the UDF definition.
- Create Scalar Function
```sql
-CREATE FUNCTION ids(X) AS ids(Y) OUTPUTTYPE typename(Z) [ BUFSIZE B ];
+CREATE FUNCTION userDefinedFunctionName AS "/absolute/path/to/userDefinedFunctionName.so" OUTPUTTYPE <supported TDengine data type> [BUFSIZE B];
```
-- ids(X):the function name to be sued in SQL statement, must be consistent with the function name defined by `udfNormalFunc`
-- ids(Y):the absolute path of the DLL file including the implementation of the UDF, the path needs to be quoted by single or double quotes
-- typename(Z):the output data type, the value is the literal string of the type
-- B:the size of intermediate buffer, in bytes; it's an optional parameter and the range is [0,512]
+- userDefinedFunctionName: The function name to be used in SQL statements, which must be consistent with the function name defined by `udfNormalFunc` and is also the name of the compiled DLL (.so file).
+- path: The absolute path of the DLL file including the name of the shared object file (.so). The path must be quoted with single or double quotes.
+- outputtype: The output data type, the value is the literal string of the supported TDengine data type.
+- B: The size of the intermediate buffer, in bytes; it is an optional parameter and the range is [0,512].
For example, the SQL statement below can be used to create a UDF from `add_one.so`.
@@ -123,17 +145,17 @@ CREATE FUNCTION add_one AS "/home/taos/udf_example/add_one.so" OUTPUTTYPE INT;
- Create Aggregate Function
```sql
-CREATE AGGREGATE FUNCTION ids(X) AS ids(Y) OUTPUTTYPE typename(Z) [ BUFSIZE B ];
+CREATE AGGREGATE FUNCTION userDefinedFunctionName AS "/absolute/path/to/userDefinedFunctionName.so" OUTPUTTYPE <supported TDengine data type> [ BUFSIZE B ];
```
-- ids(X):the function name to be sued in SQL statement, must be consistent with the function name defined by `udfNormalFunc`
-- ids(Y):the absolute path of the DLL file including the implementation of the UDF, the path needs to be quoted by single or double quotes
-- typename(Z):the output data type, the value is the literal string of the type
+- userDefinedFunctionName: the function name to be used in SQL statements, which must be consistent with the function name defined by `udfNormalFunc` and is also the name of the compiled DLL (.so file).
+- path: the absolute path of the DLL file including the name of the shared object file (.so). The path needs to be quoted by single or double quotes.
+- OUTPUTTYPE: the output data type, the value is the literal string of the supported TDengine data type.
- B: the size of the intermediate buffer, in bytes; it's an optional parameter and the range is [0,512]
For details about how to use intermediate result, please refer to example program [demo.c](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/demo.c).
-For example, below SQL statement can be used to create a UDF rom `demo.so`.
+For example, the SQL statement below can be used to create a UDF from `demo.so`.
```sql
CREATE AGGREGATE FUNCTION demo AS "/home/taos/udf_example/demo.so" OUTPUTTYPE DOUBLE bufsize 14;
@@ -176,11 +198,11 @@ In the current version there are some restrictions on UDF
1. Only Linux is supported when creating and invoking UDF for both client side and server side
2. UDF can't be mixed with builtin functions
3. Only one UDF can be used in a SQL statement
-4. Single column is supported as input for UDF
+4. Only a single column is supported as input for UDF
5. Once created successfully, a UDF is persisted in the MNode of TDengine
6. UDF can't be created through REST interface
7. The function name used when creating UDF in SQL must be consistent with the function name defined in the DLL, i.e. the name defined by `udfNormalFunc`
-8. The name name of UDF name should not conflict with any of builtin functions
+8. The name of a UDF should not conflict with any of TDengine's built-in functions
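For completeness, a sketch of listing and removing UDFs; these are standard TDengine statements, and the function name is the one used at creation time:

```sql
-- List the UDFs registered in the cluster
SHOW FUNCTIONS;
-- Remove a UDF by name
DROP FUNCTION add_one;
```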
## Examples
diff --git a/docs-en/10-cluster/02-cluster-mgmt.md b/docs-en/10-cluster/02-cluster-mgmt.md
index 9d717be236e3e89114f58fc492223e3ad94fc9ea..674c92e2766a4eb304079140af19c8efea72d55e 100644
--- a/docs-en/10-cluster/02-cluster-mgmt.md
+++ b/docs-en/10-cluster/02-cluster-mgmt.md
@@ -3,16 +3,16 @@ sidebar_label: Operation
title: Manage DNODEs
---
-The previous section [Deployment](/cluster/deploy) introduced how to deploy and start a cluster from scratch. Once a cluster is ready, the dnode status in the cluster can be shown at any time, new dnode can be added to scale out the cluster, an existing dnode can be removed, even load balance can be performed manually.
+The previous section, [Deployment](/cluster/deploy), showed you how to deploy and start a cluster from scratch. Once a cluster is ready, the status of dnode(s) in the cluster can be shown at any time. Dnodes can be managed from the TDengine CLI. New dnode(s) can be added to scale out the cluster, an existing dnode can be removed, and you can even perform load balancing manually, if necessary.
:::note
-All the commands to be introduced in this chapter need to be run through TDengine CLI, sometimes it's necessary to use root privilege.
+All the commands introduced in this chapter must be run in the TDengine CLI - `taos`. Note that sometimes it is necessary to use root privilege.
:::
## Show DNODEs
-The below command can be executed in TDengine CLI `taos` to list all dnodes in the cluster, including ID, end point (fqdn:port), status (ready, offline), number of vnodes, number of free vnodes, etc. It's suggested to execute this command to check after adding or removing a dnode.
+The below command can be executed in TDengine CLI `taos` to list all dnodes in the cluster, including ID, end point (fqdn:port), status (ready, offline), number of vnodes, number of free vnodes and so on. We recommend executing this command after adding or removing a dnode.
```sql
SHOW DNODES;
@@ -30,7 +30,7 @@ Query OK, 1 row(s) in set (0.008298s)
## Show VGROUPs
-To utilize system resources efficiently and provide scalability, data sharding is required. The data of each database is divided into multiple shards and stored in multiple vnodes. These vnodes may be located in different dnodes, scaling out can be achieved by adding more vnodes from more dnodes. Each vnode can only be used for a single DB, but one DB can have multiple vnodes. The allocation of vnode is scheduled automatically by mnode according to system resources of the dnodes.
+To utilize system resources efficiently and provide scalability, data sharding is required. The data of each database is divided into multiple shards and stored in multiple vnodes. These vnodes may be located on different dnodes. One way of scaling out is to add more vnodes on dnodes. Each vnode can only be used for a single DB, but one DB can have multiple vnodes. The allocation of vnode is scheduled automatically by mnode based on system resources of the dnodes.
Launch TDengine CLI `taos` and execute the command below:
@@ -87,7 +87,7 @@ taos> show dnodes;
Query OK, 2 row(s) in set (0.001017s)
```
-It can be seen that the status of the new dnode is "offline", once the dnode is started and connects the firstEp of the cluster, execute the command again and get the example output below, from which it can be seen that two dnodes are both in "ready" status.
+It can be seen that the status of the new dnode is "offline". Once the dnode is started and connects to the firstEp of the cluster, you can execute the command again and get the example output below. As can be seen, both dnodes are in "ready" status.
```
taos> show dnodes;
@@ -132,12 +132,12 @@ taos> show dnodes;
Query OK, 1 row(s) in set (0.001137s)
```
-In the above example, when `show dnodes` is executed the first time, two dnodes are shown. Then `drop dnode 2` is executed, after that from the output of executing `show dnodes` again it can be seen that only the dnode with ID 1 is still in the cluster.
+In the above example, when `show dnodes` is executed the first time, two dnodes are shown. After `drop dnode 2` is executed, you can execute `show dnodes` again and it can be seen that only the dnode with ID 1 is still in the cluster.
:::note
-- Once a dnode is dropped, it can't rejoin the cluster. To rejoin, the dnode needs to deployed again after cleaning up the data directory. Normally, before dropping a dnode, the data belonging to the dnode needs to be migrated to other place.
-- Please be noted that `drop dnode` is different from stopping `taosd` process. `drop dnode` just removes the dnode out of TDengine cluster. Only after a dnode is dropped, can the corresponding `taosd` process be stopped.
+- Once a dnode is dropped, it can't rejoin the cluster. To rejoin, the dnode needs to be deployed again after cleaning up the data directory. Before dropping a dnode, the data belonging to the dnode MUST be migrated/backed up according to your data retention, data security or other SOPs.
+- Please note that `drop dnode` is different from stopping the `taosd` process. `drop dnode` just removes the dnode from the TDengine cluster (see the syntax sketch after these notes). Only after a dnode is dropped, can the corresponding `taosd` process be stopped.
- Once a dnode is dropped, other dnodes in the cluster will be notified of the drop and will not accept the request from the dropped dnode.
- dnodeID is allocated automatically and can't be manually modified. dnodeID is generated in ascending order without duplication.
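For reference, a sketch of the drop syntax itself; a dnode can be identified either by the ID shown in `SHOW DNODES` or by its end point (the values here are placeholders):

```sql
-- Drop by dnode ID
DROP DNODE 2;
-- Or drop by end point
DROP DNODE "fqdn:port";
```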
diff --git a/docs-en/10-cluster/03-ha-and-lb.md b/docs-en/10-cluster/03-ha-and-lb.md
index 6e0c386abe4100ec59f60c1c90b3305e0d187c79..bd718eef9f8dc181628132de831dbca2af59d158 100644
--- a/docs-en/10-cluster/03-ha-and-lb.md
+++ b/docs-en/10-cluster/03-ha-and-lb.md
@@ -7,7 +7,7 @@ title: High Availability and Load Balancing
High availability of vnode and mnode can be achieved through replicas in TDengine.
-The number of vnodes is associated with each DB, there can be multiple DBs in a TDengine cluster. A different number of replicas can be configured for each DB. When creating a database, the parameter `replica` is used to specify the number of replicas, the default value is 1. With single replica, the high availability of the system can't be guaranteed. Whenever one node is down, the data service will be unavailable. The number of dnodes in the cluster must NOT be lower than the number of replicas set for any DB, otherwise the `create table` operation would fail with error "more dnodes are needed". The SQL statement below is used to create a database named "demo" with 3 replicas.
+A TDengine cluster can have multiple databases. Each database has a number of vnodes associated with it. A different number of replicas can be configured for each DB. When creating a database, the parameter `replica` is used to specify the number of replicas. The default value for `replica` is 1. Naturally, a single replica cannot guarantee high availability since if one node is down, the data service is unavailable. Note that the number of dnodes in the cluster must NOT be lower than the number of replicas set for any DB, otherwise the `create table` operation will fail with error "more dnodes are needed". The SQL statement below is used to create a database named "demo" with 3 replicas.
```sql
CREATE DATABASE demo replica 3;
@@ -15,19 +15,19 @@ CREATE DATABASE demo replica 3;
The data in a DB is divided into multiple shards and stored in multiple vgroups. The number of vnodes in each vgroup is determined by the number of replicas set for the DB. The vnodes in each vgroup store exactly the same data. For the purpose of high availability, the vnodes in a vgroup must be located in different dnodes on different hosts. As long as over half of the vnodes in a vgroup are in an online state, the vgroup is able to provide data access. Otherwise the vgroup can't provide data access for reading or inserting data.
-There may be data for multiple DBs in a dnode. Once a dnode is down, multiple DBs may be affected. However, it's hard to say the cluster is guaranteed to work properly as long as over half of dnodes are online because vnodes are introduced and there may be complex mapping between vnodes and dnodes.
+There may be data for multiple DBs in a dnode. When a dnode is down, multiple DBs may be affected. While in theory, the cluster will provide data access for reading or inserting data if over half the vnodes in vgroups are online, because of the possibly complex mapping between vnodes and dnodes, it is difficult to guarantee that the cluster will work properly if over half of the dnodes are online.
## High Availability of Mnode
-Each TDengine cluster is managed by `mnode`, which is a module of `taosd`. For the high availability of mnode, multiple mnodes can be configured using system parameter `numOfMNodes`, the valid time range is [1,3]. To make sure the data consistency between mnodes, the data replication between mnodes is performed in a synchronous way.
+Each TDengine cluster is managed by `mnode`, which is a module of `taosd`. For the high availability of mnode, multiple mnodes can be configured using system parameter `numOfMNodes`. The valid range for `numOfMNodes` is [1,3]. To ensure data consistency between mnodes, data replication between mnodes is performed synchronously.
-There may be multiple dnodes in a cluster, but only one mnode can be started in each dnode. Which one or ones of the dnodes will be designated as mnodes is automatically determined by TDengine according to the cluster configuration and system resources. Command `show mnodes` can be executed in TDengine `taos` to show the mnodes in the cluster.
+There may be multiple dnodes in a cluster, but only one mnode can be started in each dnode. Which one or ones of the dnodes will be designated as mnodes is automatically determined by TDengine according to the cluster configuration and system resources. The command `show mnodes` can be executed in TDengine `taos` to show the mnodes in the cluster.
```sql
SHOW MNODES;
```
-The end point and role/status (master, slave, unsynced, or offline) of all mnodes can be shown by the above command. When the first dnode is started in a cluster, there must be one mnode in this dnode, because there must be at least one mnode otherwise the cluster doesn't work. If `numOfMNodes` is configured to 2, another mnode will be started when the second dnode is launched.
+The end point and role/status (master, slave, unsynced, or offline) of all mnodes can be shown by the above command. When the first dnode is started in a cluster, there must be one mnode in this dnode. Without at least one mnode, the cluster cannot work. If `numOfMNodes` is configured to 2, another mnode will be started when the second dnode is launched.
For the high availability of mnode, `numOfMNodes` needs to be configured to 2 or a higher value. Because the data consistency between mnodes must be guaranteed, the replica confirmation parameter `quorum` is set to 2 automatically if `numOfMNodes` is set to 2 or higher.
@@ -36,15 +36,16 @@ If high availability is important for your system, both vnode and mnode must be
:::
-## Load Balance
+## Load Balancing
-Load balance will be triggered in 3 cases without manual intervention.
+Load balancing will be triggered in 3 cases without manual intervention.
-- When a new dnode is joined in the cluster, automatic load balancing may be triggered, some data from some dnodes may be transferred to the new dnode automatically.
+- When a new dnode joins the cluster, automatic load balancing may be triggered. Some data from other dnodes may be transferred to the new dnode automatically.
- When a dnode is removed from the cluster, the data from this dnode will be transferred to other dnodes automatically.
- When a dnode is too hot, i.e. too much data has been stored in it, automatic load balancing may be triggered to migrate some vnodes from this dnode to other dnodes.
+
:::tip
-Automatic load balancing is controlled by parameter `balance`, 0 means disabled and 1 means enabled.
+Automatic load balancing is controlled by the parameter `balance`: 0 means disabled and 1 means enabled. This is set in the file [taos.cfg](https://docs.tdengine.com/reference/config/#balance).
:::
@@ -52,22 +53,22 @@ Automatic load balancing is controlled by parameter `balance`, 0 means disabled
When a dnode is offline, it can be detected by the TDengine cluster. There are two cases:
-- The dnode becomes online again before the threshold configured in `offlineThreshold` is reached, it is still in the cluster and data replication is started automatically. The dnode can work properly after the data syncup is finished.
+- The dnode comes online before the threshold configured in `offlineThreshold` is reached. The dnode is still in the cluster and data replication is started automatically. The dnode can work properly after the data sync is finished.
-- If the dnode has been offline over the threshold configured in `offlineThreshold` in `taos.cfg`, the dnode will be removed from the cluster automatically. A system alert will be generated and automatic load balancing will be triggered if `balance` is set to 1. When the removed dnode is restarted and becomes online, it will not join in the cluster automatically, it can only be joined manually by the system operator.
+- If the dnode has been offline over the threshold configured in `offlineThreshold` in `taos.cfg`, the dnode will be removed from the cluster automatically. A system alert will be generated and automatic load balancing will be triggered if `balance` is set to 1. When the removed dnode is restarted and becomes online, it will not join the cluster automatically. The system administrator has to manually add it back to the cluster.
:::note
-If all the vnodes in a vgroup (or mnodes in mnode group) are in offline or unsynced status, the master node can only be voted after all the vnodes or mnodes in the group become online and can exchange status, then the vgroup (or mnode group) is able to provide service.
+If all the vnodes in a vgroup (or mnodes in the mnode group) are in offline or unsynced status, the master node can only be elected after all the vnodes or mnodes in the group come back online and can exchange status information. Following this, the vgroup (or mnode group) is able to provide service.
:::
## Arbitrator
-If the number of replicas is set to an even number like 2, when half of the vnodes in a vgroup don't work a master node can't be voted. A similar case is also applicable to mnode if the number of mnodes is set to an even number like 2.
+The "arbitrator" component is used to address the special case when the number of replicas is set to an even number like 2,4 etc. If half of the vnodes in a vgroup don't work, it is impossible to vote and select a master node. This situation also applies to mnodes if the number of mnodes is set to an even number like 2,4 etc.
-To resolve this problem, a new arbitrator component named `tarbitrator`, abbreviated for TDengine Arbitrator, was introduced. Arbitrator simulates a vnode or mnode but it's only responsible for network communication and doesn't handle any actual data access. As long as more than half of the vnode or mnode, including Arbitrator, are available the vnode group or mnode group can provide data insertion or query services normally.
+To resolve this problem, a new arbitrator component named `tarbitrator`, an abbreviation of TDengine Arbitrator, was introduced. The `tarbitrator` simulates a vnode or mnode but it's only responsible for network communication and doesn't handle any actual data access. As long as more than half of the vnode or mnode, including Arbitrator, are available the vnode group or mnode group can provide data insertion or query services normally.
-Normally, it's suggested to configure a replica number of each DB or system parameter `numOfMNodes` to an odd number. However, if a user is very sensitive to storage space, a replica number of 2 plus arbitrator component can be used to achieve both lower cost of storage space and high availability.
+Normally, it's prudent to configure the replica number for each DB or system parameter `numOfMNodes` to be an odd number. However, if a user is very sensitive to storage space, a replica number of 2 plus arbitrator component can be used to achieve both lower cost of storage space and high availability.
Arbitrator component is installed with the server package. For details about how to install, please refer to [Install](/operation/pkg-install). The `-p` parameter of `tarbitrator` can be used to specify the port on which it provides service.
diff --git a/docs-en/12-taos-sql/01-data-type.md b/docs-en/12-taos-sql/01-data-type.md
index 86ec941f955516e99e6bb54730a55083bc26ed09..3f5a49e3135771c6c1e62bcf158a99ee30f1ed9d 100644
--- a/docs-en/12-taos-sql/01-data-type.md
+++ b/docs-en/12-taos-sql/01-data-type.md
@@ -1,17 +1,17 @@
---
title: Data Types
-description: "The data types supported by TDengine include timestamp, float, JSON, etc"
+description: "TDengine supports a variety of data types including timestamp, float, JSON and many others."
---
-When using TDengine to store and query data, the most important part of the data is timestamp. Timestamp must be specified when creating and inserting data rows or querying data, timestamp must follow the rules below:
+When using TDengine to store and query data, the most important part of the data is the timestamp. A timestamp must be specified when creating and inserting data rows. Timestamps must follow the rules below:
-- the format must be `YYYY-MM-DD HH:mm:ss.MS`, the default time precision is millisecond (ms), for example `2017-08-12 18:25:58.128`
-- internal function `now` can be used to get the current timestamp of the client side
-- the current timestamp of the client side is applied when `now` is used to insert data
+- The format must be `YYYY-MM-DD HH:mm:ss.MS`, the default time precision is millisecond (ms), for example `2017-08-12 18:25:58.128`
+- Internal function `now` can be used to get the current timestamp on the client side
+- The current timestamp of the client side is applied when `now` is used to insert data
- Epoch Time:timestamp can also be a long integer number, which means the number of seconds, milliseconds or nanoseconds, depending on the time precision, from 1970-01-01 00:00:00.000 (UTC/GMT)
-- timestamp can be applied with add/subtract operation, for example `now-2h` means 2 hours back from the time at which query is executed,the unit can be b(nanosecond), u(microsecond), a(millisecond), s(second), m(minute), h(hour), d(day), or w(week). So `select * from t1 where ts > now-2w and ts <= now-1w` means the data between two weeks ago and one week ago. The time unit can also be n (calendar month) or y (calendar year) when specifying the time window for down sampling operation.
+- Add/subtract operations can be carried out on timestamps. For example `now-2h` means 2 hours prior to the time at which query is executed. The units of time in operations can be b(nanosecond), u(microsecond), a(millisecond), s(second), m(minute), h(hour), d(day), or w(week). So `select * from t1 where ts > now-2w and ts <= now-1w` means the data between two weeks ago and one week ago. The time unit can also be n (calendar month) or y (calendar year) when specifying the time window for down sampling operations.
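As a sketch of the down sampling case (assuming the "meters" table used in earlier examples), a calendar-month window can be combined with timestamp arithmetic like this:

```sql
-- Average current per calendar month over the past year
SELECT AVG(current) FROM meters WHERE ts >= now-1y INTERVAL(1n);
```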
-Time precision in TDengine can be set by the `PRECISION` parameter when executing `CREATE DATABASE`, like below, the default time precision is millisecond.
+Time precision in TDengine can be set by the `PRECISION` parameter when executing `CREATE DATABASE`. The default time precision is millisecond. In the statement below, the precision is set to nanoseconds.
```sql
CREATE DATABASE db_name PRECISION 'ns';
@@ -30,8 +30,8 @@ In TDengine, the data types below can be used when specifying a column or tag.
| 7 | SMALLINT | 2 | Short integer, the value range is [-32767, 32767], while -32768 is treated as NULL |
| 8 | TINYINT | 1 | Single-byte integer, the value range is [-127, 127], while -128 is treated as NULL |
| 9 | BOOL | 1 | Bool, the value range is {true, false} |
-| 10 | NCHAR | User Defined| Multiple-Byte string that can include like Chinese characters. Each character of NCHAR type consumes 4 bytes storage. The string value should be quoted with single quotes. Literal single quote inside the string must be preceded with backslash, like `\’`. The length must be specified when defining a column or tag of NCHAR type, for example nchar(10) means it can store at most 10 characters of nchar type and will consume fixed storage of 40 bytes. An error will be reported if the string value exceeds the length defined. |
-| 11 | JSON | | json type can only be used on tag, a tag of json type is excluded with any other tags of any other type |
+| 10 | NCHAR | User Defined| Multi-byte string that can include multi-byte characters such as Chinese characters. Each character of NCHAR type consumes 4 bytes of storage. The string value should be quoted with single quotes. A literal single quote inside the string must be preceded with a backslash, like `\’`. The length must be specified when defining a column or tag of NCHAR type, for example nchar(10) means it can store at most 10 characters of nchar type and will consume fixed storage of 40 bytes. An error will be reported if the string value exceeds the length defined. |
+| 11 | JSON | | JSON type can only be used on tags. A tag of JSON type cannot coexist with tags of any other type |
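A minimal sketch of a JSON tag (the names are hypothetical); because a JSON tag cannot coexist with tags of any other type, it must be the only tag of the STable:

```sql
-- "info" is the only tag, holding arbitrary JSON key/value pairs
CREATE STABLE sensors (ts TIMESTAMP, val FLOAT) TAGS (info JSON);
```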
:::tip
TDengine is case insensitive and treats any characters in the sql command as lower case by default; case sensitive strings must be quoted with single quotes.
@@ -39,7 +39,7 @@ TDengine is case insensitive and treats any characters in the sql command as low
:::
:::note
-Only ASCII visible characters are suggested to be used in a column or tag of BINARY type. Multiple-byte characters must be stored in NCHAR type.
+Only ASCII visible characters are suggested to be used in a column or tag of BINARY type. Multi-byte characters must be stored in NCHAR type.
:::
diff --git a/docs-en/12-taos-sql/02-database.md b/docs-en/12-taos-sql/02-database.md
index 98b75b30b3ebebb33ce1afe413554f218092bfeb..80581b2f1bc7ce9cd046c18873d3f22b6804d8cf 100644
--- a/docs-en/12-taos-sql/02-database.md
+++ b/docs-en/12-taos-sql/02-database.md
@@ -4,7 +4,7 @@ title: Database
description: "create and drop database, show or change database parameters"
---
-## Create Datable
+## Create Database
```
CREATE DATABASE [IF NOT EXISTS] db_name [KEEP keep] [DAYS days] [UPDATE 1];
@@ -12,11 +12,11 @@ CREATE DATABASE [IF NOT EXISTS] db_name [KEEP keep] [DAYS days] [UPDATE 1];
:::info
-1. KEEP specifies the number of days for which the data in the database to be created will be kept, the default value is 3650 days, i.e. 10 years. The data will be deleted automatically once its age exceeds this threshold.
+1. KEEP specifies the number of days for which the data in the database will be retained. The default value is 3650 days, i.e. 10 years. The data will be deleted automatically once its age exceeds this threshold.
2. UPDATE specifies whether the data can be updated and how the data can be updated.
- 1. UPDATE set to 0 means update operation is not allowed, the data with an existing timestamp will be dropped silently.
- 2. UPDATE set to 1 means the whole row will be updated, the columns for which no value is specified will be set to NULL
- 3. UPDATE set to 2 means updating a part of columns for a row is allowed, the columns for which no value is specified will be kept as no change
+ 1. UPDATE set to 0 means update operation is not allowed. The update for data with an existing timestamp will be discarded silently and the original record in the database will be preserved as is.
+ 2. UPDATE set to 1 means the whole row will be updated. The columns for which no value is specified will be set to NULL.
+ 3. UPDATE set to 2 means updating a subset of columns for a row is allowed. The columns for which no value is specified will be kept unchanged.
3. The maximum length of database name is 33 bytes.
4. The maximum length of a SQL statement is 65,480 bytes.
5. Below are the parameters that can be used when creating a database
@@ -35,7 +35,7 @@ CREATE DATABASE [IF NOT EXISTS] db_name [KEEP keep] [DAYS days] [UPDATE 1];
- maxVgroupsPerDb: [Description](/reference/config/#maxvgroupsperdb)
- comp: [Description](/reference/config/#comp)
- precision: [Description](/reference/config/#precision)
-6. Please note that all of the parameters mentioned in this section can be configured in configuration file `taosd.cfg` at server side and used by default, the default parameters can be overriden if they are specified in `create database` statement.
+6. Please note that all of the parameters mentioned in this section are configured in the configuration file `taos.cfg` on the TDengine server. If not specified in the `create database` statement, the values from taos.cfg are used by default; to override them, specify the parameters in the `create database` statement. A sketch combining some of these parameters follows this note.
:::
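A minimal sketch combining the KEEP, DAYS and UPDATE parameters described above (the database name and values are arbitrary):

```sql
-- Retain data for one year, store 10 days of data per file,
-- and allow whole-row updates (UPDATE 1)
CREATE DATABASE IF NOT EXISTS power KEEP 365 DAYS 10 UPDATE 1;
```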
@@ -52,7 +52,7 @@ USE db_name;
```
:::note
-This way is not applicable when using a REST connection
+This way is not applicable when using a REST connection. In a REST connection the database name must be specified before a table or stable name. For example, to query the stable "meters" in database "test", the query would be "SELECT count(*) FROM test.meters".
:::
@@ -63,13 +63,13 @@ DROP DATABASE [IF EXISTS] db_name;
```
:::note
-All data in the database will be deleted too. This command must be used with caution.
+All data in the database will be deleted too. This command must be used with extreme caution. Please follow your organization's data integrity, data backup, data security or any other applicable SOPs before using this command.
:::
## Change Database Configuration
-Some examples are shown below to demonstrate how to change the configuration of a database. Please note that some configuration parameters can be changed after the database is created, but some others can't, for details of the configuration parameters of database please refer to [Configuration Parameters](/reference/config/).
+Some examples are shown below to demonstrate how to change the configuration of a database. Please note that some configuration parameters can be changed after the database is created, but some cannot. For details of the configuration parameters of database please refer to [Configuration Parameters](/reference/config/).
```
ALTER DATABASE db_name COMP 2;
@@ -81,7 +81,7 @@ COMP parameter specifies whether the data is compressed and how the data is comp
ALTER DATABASE db_name REPLICA 2;
```
-REPLICA parameter specifies the number of replications of the database.
+REPLICA parameter specifies the number of replicas of the database.
```
ALTER DATABASE db_name KEEP 365;
@@ -124,4 +124,4 @@ SHOW DATABASES;
SHOW CREATE DATABASE db_name;
```
-This command is useful when migrating the data from one TDengine cluster to another one. This command can be used to get the CREATE statement, which can be used in another TDengine to create the exact same database.
+This command is useful when migrating the data from one TDengine cluster to another. This command can be used to get the CREATE statement, which can be used in another TDengine instance to create the exact same database.
diff --git a/docs-en/12-taos-sql/03-table.md b/docs-en/12-taos-sql/03-table.md
index 678965893e8b386d9f2842c6e4e650c2d650e080..0505787ff8cc597eafd8299292ebac3e0fd3d4ad 100644
--- a/docs-en/12-taos-sql/03-table.md
+++ b/docs-en/12-taos-sql/03-table.md
@@ -12,10 +12,10 @@ CREATE TABLE [IF NOT EXISTS] tb_name (timestamp_field_name TIMESTAMP, field1_nam
:::info
-1. The first column of a table must be of TIMESTAMP type, and it will be set as the primary key automatically
+1. The first column of a table MUST be of type TIMESTAMP. It is automatically set as the primary key.
2. The maximum length of the table name is 192 bytes.
3. The maximum length of each row is 16k bytes; please note that the extra 2 bytes used by each BINARY/NCHAR column are also counted.
-4. The name of the subtable can only consist of English characters, digits and underscore, and can't start with a digit. Table names are case insensitive.
+4. The name of the subtable can only consist of characters from the English alphabet, digits and underscore. Table names can't start with a digit. Table names are case insensitive.
5. The maximum length in bytes must be specified when using BINARY or NCHAR types.
6. Escape character "\`" can be used to avoid conflicts between table names and reserved keywords; the above rules are bypassed when using the escape character on table names, but the upper limit for the name length is still valid. Table names specified using the escape character are case sensitive. Only ASCII visible characters can be used with the escape character.
For example \`aBc\` and \`abc\` are different table names but `abc` and `aBc` are same table names because they are both converted to `abc` internally.
@@ -44,7 +44,7 @@ The tags for which no value is specified will be set to NULL.
CREATE TABLE [IF NOT EXISTS] tb_name1 USING stb_name TAGS (tag_value1, ...) [IF NOT EXISTS] tb_name2 USING stb_name TAGS (tag_value2, ...) ...;
```
-This can be used to create a lot of tables in a single SQL statement to accelerate the speed of the creating tables.
+This can be used to create a lot of tables in a single SQL statement while making table creation much faster.
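A sketch using the "meters" STable from earlier examples (the table names and tag values are hypothetical):

```sql
-- Two subtables created in one statement, each with its own tags
CREATE TABLE IF NOT EXISTS d21001 USING meters TAGS ("California.SanFrancisco", 2)
             IF NOT EXISTS d21002 USING meters TAGS ("California.SanDiego", 3);
```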
:::info
@@ -111,7 +111,7 @@ If a table is created using a super table as template, the table definition can
ALTER TABLE tb_name MODIFY COLUMN field_name data_type(length);
```
-The type of a column is variable length, like BINARY or NCHAR, this can be used to change (or increase) the length of the column.
+If the type of a column is variable length, like BINARY or NCHAR, this command can be used to change the length of the column.
:::note
If a table is created using a super table as template, the table definition can only be changed on the corresponding super table, and the change will be automatically applied to all the subtables created using this super table as template. For tables created in the normal way, the table definition can be changed directly on the table.
diff --git a/docs-en/12-taos-sql/04-stable.md b/docs-en/12-taos-sql/04-stable.md
index 7354484f754b513ac2b8828ac1e13bc550a29efd..b8a608792ab327a81129d29ddd0ff44d7af6e6c5 100644
--- a/docs-en/12-taos-sql/04-stable.md
+++ b/docs-en/12-taos-sql/04-stable.md
@@ -9,7 +9,7 @@ Keyword `STable`, abbreviated for super table, is supported since version 2.0.15
:::
-## Crate STable
+## Create STable
```
CREATE STable [IF NOT EXISTS] stb_name (timestamp_field_name TIMESTAMP, field1_name data_type1 [, field2_name data_type2 ...]) TAGS (tag1_name tag_type1, tag2_name tag_type2 [, tag3_name tag_type3]);
@@ -19,7 +19,7 @@ The SQL statement of creating a STable is similar to that of creating a table, b
:::info
-1. The tag types specified in TAGS should NOT be timestamp. Since 2.1.3.0 timestamp type can be used in TAGS column, but its value must be fixed and arithmetic operation can't be applied on it.
+1. Since version 2.1.3.0, a tag can be of type timestamp, but its value must be fixed and arithmetic operations cannot be performed on it. Prior to version 2.1.3.0, tag types specified in TAGS could not be of type timestamp.
2. The tag names specified in TAGS should NOT be the same as other columns.
3. The tag names specified in TAGS should NOT be the same as any reserved keywords. (Please refer to [keywords](/taos-sql/keywords/).)
4. The maximum number of tags specified in TAGS is 128, there must be at least one tag, and the total length of all tag columns should NOT exceed 16KB.
@@ -76,7 +76,7 @@ ALTER STable stb_name DROP COLUMN field_name;
ALTER STable stb_name MODIFY COLUMN field_name data_type(length);
```
-This command can be used to change (or increase, more specifically) the length of a column of variable length types, like BINARY or NCHAR.
+This command can be used to change (or more specifically, increase) the length of a column of variable length types, like BINARY or NCHAR.
## Change Tags of A STable
@@ -94,7 +94,7 @@ This command is used to add a new tag for a STable and specify the tag type.
ALTER STable stb_name DROP TAG tag_name;
```
-The tag will be removed automatically from all the subtables created using the super table as template once a tag is removed from a super table.
+The tag will be removed automatically from all the subtables, created using the super table as template, once a tag is removed from a super table.
### Change A Tag
@@ -102,7 +102,7 @@ The tag will be removed automatically from all the subtables created using the s
ALTER STable stb_name CHANGE TAG old_tag_name new_tag_name;
```
-The tag name will be changed automatically for all the subtables created using the super table as template once a tag name is changed for a super table.
+The tag name will be changed automatically for all the subtables, created using the super table as template, once a tag name is changed for a super table.
### Change Tag Length
@@ -110,7 +110,7 @@ The tag name will be changed automatically for all the subtables created using t
ALTER STable stb_name MODIFY TAG tag_name data_type(length);
```
-This command can be used to change (or increase, more specifically) the length of a tag of variable length types, like BINARY or NCHAR.
+This command can be used to change (or more specifically, increase) the length of a tag of variable length types, like BINARY or NCHAR.
:::note
Changing tag values can be applied only to subtables. All other tag operations, like adding or removing a tag, can be applied only to the STable. If a new tag is added to a STable, the tag is added with a NULL value for all its subtables.
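A sketch of changing a tag value on a subtable (the subtable and tag names are borrowed from the "meters" example and are placeholders here):

```sql
-- SET TAG operates on the subtable, not on the STable
ALTER TABLE d1001 SET TAG location = 'California.SanDiego';
```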
diff --git a/docs-en/12-taos-sql/08-interval.md b/docs-en/12-taos-sql/08-interval.md
index bf0904458ce5601fa0b9f611f3fcba6106dc5084..1b5265b44b6b63f8f5472e1e8760d1f45401fc21 100644
--- a/docs-en/12-taos-sql/08-interval.md
+++ b/docs-en/12-taos-sql/08-interval.md
@@ -10,7 +10,7 @@ Window related clauses are used to divide the data set to be queried into subset
`INTERVAL` clause is used to generate time windows of the same time interval, `SLIDING` is used to specify the time step for which the time window moves forward. The query is performed on one time window each time, and the time window moves forward with time. When defining a continuous query, both the size of the time window and the step of forward sliding time need to be specified. As shown in the figure below, [t0s, t0e], [t1s, t1e], [t2s, t2e] are respectively the time ranges of three time windows on which continuous queries are executed. The time step for which the time window moves forward is marked by `sliding time`. Query, filter and aggregate operations are executed on each time window respectively. When the time step specified by `SLIDING` is the same as the time interval specified by `INTERVAL`, the sliding time window is actually a flip time window.
-
+
`INTERVAL` and `SLIDING` should be used with aggregate functions and select functions. Below SQL statement is illegal because no aggregate or selection function is used with `INTERVAL`.
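A legal windowed query pairs `INTERVAL`/`SLIDING` with an aggregate function; a hedged sketch against the assumed meters data set:

```sql
-- 1-minute windows sliding forward every 30 seconds (illustrative values)
SELECT AVG(current), MAX(voltage) FROM meters
  WHERE ts >= '2018-10-03 14:38:05.000' AND ts < '2018-10-03 15:38:05.000'
  INTERVAL(1m) SLIDING(30s);
```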
@@ -30,7 +30,7 @@ When the time length specified by `SLIDING` is the same as that specified by `IN
In case of using an integer, bool, or string to represent the device status at a moment, the continuous rows with the same status belong to the same status window. Once the status changes, the status window closes. As shown in the following figure, there are two status windows according to status: [2019-04-28 14:22:07, 2019-04-28 14:22:10] and [2019-04-28 14:22:11, 2019-04-28 14:22:12]. Status window is not applicable to STable for now.
-
+
`STATE_WINDOW` is used to specify the column based on which to define status window, for example:
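A minimal sketch, assuming a table `temp_tb_1` with an integer `status` column:

```sql
-- one result row per status window
SELECT COUNT(*), FIRST(ts), status FROM temp_tb_1 STATE_WINDOW(status);
```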
@@ -46,7 +46,7 @@ SELECT COUNT(*), FIRST(ts) FROM temp_tb_1 SESSION(ts, tol_val);
The primary key, i.e. timestamp, is used to determine which session window a row belongs to. If the time interval between two adjacent rows is within the time range specified by `tol_val`, they belong to the same session window; otherwise they belong to two different session windows. As shown in the figure below, if the limit of time interval for the session window is specified as 12 seconds, then the 6 rows in the figure constitute 2 session windows, [2019-04-28 14:22:10, 2019-04-28 14:22:30] and [2019-04-28 14:23:10, 2019-04-28 14:23:30], because the time difference between 2019-04-28 14:22:30 and 2019-04-28 14:23:10 is 40 seconds, which exceeds the time interval limit of 12 seconds.
-
+
If the time interval between two continuous rows is within the time interval specified by `tol_value`, they belong to the same session window; otherwise a new session window is started automatically. Session window is not supported on STable for now.
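Matching the 12-second tolerance in the example above, a sketch of a session window query might be:

```sql
-- rows more than 12 seconds apart start a new session window
SELECT COUNT(*), FIRST(ts) FROM temp_tb_1 SESSION(ts, 12s);
```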
diff --git a/docs-en/12-taos-sql/index.md b/docs-en/12-taos-sql/index.md
index 32850e8c4b0a816cae94563079c79b94c8611bd5..33656338a7bba38dc55cf536bdba8e95309c5acf 100644
--- a/docs-en/12-taos-sql/index.md
+++ b/docs-en/12-taos-sql/index.md
@@ -3,11 +3,9 @@ title: TDengine SQL
description: "The syntax supported by TDengine SQL "
---
-This section explains the syntax to operating databases, tables, STables, inserting data, selecting data, functions and some tips that can be used in TDengine SQL. It would be easier to understand with some fundamental knowledge of SQL.
+This section explains the syntax of SQL to perform operations on databases, tables and STables, insert data, select data and use functions. We also provide some tips that can be used in TDengine SQL. If you have previous experience with SQL, this section will be fairly easy to understand. If you do not have previous experience with SQL, you'll come to appreciate the simplicity and power of SQL.
-TDengine SQL is the major interface for users to write data into or query from TDengine. For users to easily use, syntax similar to standard SQL is provided. However, please note that TDengine SQL is not standard SQL. For instance, TDengine doesn't provide the functionality of deleting time series data, thus corresponding statements are not provided in TDengine SQL.
-
-TDengine SQL doesn't support abbreviation for keywords, for example `DESCRIBE` can't be abbreviated as `DESC`.
+TDengine SQL is the major interface for users to write data into or query from TDengine. For ease of use, the syntax is similar to that of standard SQL. However, please note that TDengine SQL is not standard SQL. For instance, TDengine doesn't provide a delete function for time series data and so corresponding statements are not provided in TDengine SQL.
Syntax Specifications used in this chapter:
@@ -16,7 +14,7 @@ Syntax Specifications used in this chapter:
- | means one of a few options, excluding | itself.
- … means the item prior to it can be repeated multiple times.
-To better demonstrate the syntax, usage and rules of TAOS SQL, hereinafter it's assumed that there is a data set of meters. Assuming each meter collects 3 data measurements: current, voltage, phase. The data model is shown below:
+To better demonstrate the syntax, usage and rules of TAOS SQL, hereinafter it's assumed that there is a data set of electric meter data. Each meter collects 3 data measurements: current, voltage, phase. The data model is shown below:
```sql
taos> DESCRIBE meters;
@@ -30,4 +28,4 @@ taos> DESCRIBE meters;
groupid | INT | 4 | TAG |
```
-The data set includes the data collected by 4 meters, the corresponding table name is d1001, d1002, d1003, d1004 respectively based on the data model of TDengine.
+The data set includes the data collected by 4 meters; based on the data model of TDengine, the corresponding table names are d1001, d1002, d1003 and d1004.
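Two illustrative query sketches against this assumed model:

```sql
-- raw data of a single meter
SELECT * FROM d1001;

-- aggregate across all meters in group 2 via the super table
SELECT AVG(voltage) FROM meters WHERE groupid = 2;
```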
diff --git a/docs-en/14-reference/02-rest-api/02-rest-api.mdx b/docs-en/14-reference/02-rest-api/02-rest-api.mdx
index f405d551e530a37a5221e71a824f605fba0c0db9..0edc901bc373683a49dfde061f796dc0ae79ab4f 100644
--- a/docs-en/14-reference/02-rest-api/02-rest-api.mdx
+++ b/docs-en/14-reference/02-rest-api/02-rest-api.mdx
@@ -10,7 +10,7 @@ One difference from the native connector is that the REST interface is stateless
## Installation
-The REST interface does not rely on any TDengine native library, so the client application does not need to install any TDengine libraries. The client application's development language supports the HTTP protocol is enough.
+The REST interface does not rely on any TDengine native library, so the client application does not need to install any TDengine libraries. The client application's development language only needs to support the HTTP protocol.
## Verification
diff --git a/docs-en/14-reference/03-connector/03-connector.mdx b/docs-en/14-reference/03-connector/03-connector.mdx
index 38eba73d0983951901a26eee3962e89007f6d30a..44685579005c2cebd5e0194a10d457cd1199051e 100644
--- a/docs-en/14-reference/03-connector/03-connector.mdx
+++ b/docs-en/14-reference/03-connector/03-connector.mdx
@@ -4,7 +4,7 @@ title: Connector
TDengine provides a rich set of APIs (application development interface). To facilitate users to develop their applications quickly, TDengine supports connectors for multiple programming languages, including official connectors for C/C++, Java, Python, Go, Node.js, C#, and Rust. These connectors support connecting to TDengine clusters using both native interfaces (taosc) and REST interfaces (not supported in a few languages yet). Community developers have also contributed several unofficial connectors, such as the ADO.NET connector, the Lua connector, and the PHP connector.
-
+
## Supported platforms
diff --git a/docs-en/14-reference/03-connector/java.mdx b/docs-en/14-reference/03-connector/java.mdx
index 530798af1143d2e611369579a945de295d248ab0..1c84c0b1cacb454ca4e35266a1d362a2d2a038fb 100644
--- a/docs-en/14-reference/03-connector/java.mdx
+++ b/docs-en/14-reference/03-connector/java.mdx
@@ -11,7 +11,7 @@ import TabItem from '@theme/TabItem';
'taos-jdbcdriver' is TDengine's official Java language connector, which allows Java developers to develop applications that access the TDengine database. 'taos-jdbcdriver' implements the interface of the JDBC driver standard and provides two forms of connectors. One is to connect to a TDengine instance natively through the TDengine client driver (taosc), which supports functions including data writing, querying, subscription, schemaless writing, and bind interface. The other is to connect to a TDengine instance through the REST interface provided by taosAdapter (2.4.0.0 and later). The set of features implemented by REST connections differs slightly from that of native connections.
-
+
The preceding diagram shows two ways for a Java app to access TDengine via connector:
diff --git a/docs-en/14-reference/03-connector/python.mdx b/docs-en/14-reference/03-connector/python.mdx
index 2b238173e04e3e13de36b5ac4d91d0cda290ca72..c52b4f18825c083e4bdfebe26b2e68ef2025ef8a 100644
--- a/docs-en/14-reference/03-connector/python.mdx
+++ b/docs-en/14-reference/03-connector/python.mdx
@@ -53,7 +53,7 @@ Earlier TDengine client software includes the Python connector. If the Python co
:::
-#### to install `taospy`
+#### To install `taospy`
@@ -320,7 +320,7 @@ All database operations will be thrown directly if an exception occurs. The appl
### About nanoseconds
-Due to the current imperfection of Python's nanosecond support (see link below), the current implementation returns integers at nanosecond precision instead of the `datetime` type produced by `ms and `us`, which application developers will need to handle on their own. And it is recommended to use pandas' to_datetime(). The Python Connector may modify the interface in the future if Python officially supports nanoseconds in full.
+Due to the current imperfection of Python's nanosecond support (see link below), the current implementation returns integers at nanosecond precision instead of the `datetime` type produced by `ms` and `us`, which application developers will need to handle on their own. It is recommended to use pandas' to_datetime(). The Python Connector may modify the interface in the future if Python officially supports nanoseconds in full.
1. https://stackoverflow.com/questions/10611328/parsing-datetime-strings-containing-nanoseconds
2. https://www.python.org/dev/peps/pep-0564/
@@ -328,7 +328,7 @@ Due to the current imperfection of Python's nanosecond support (see link below),
## Frequently Asked Questions
-Welcome to [ask questions or report questions] (https://github.com/taosdata/taos-connector-python/issues).
+Welcome to [ask questions or report questions](https://github.com/taosdata/taos-connector-python/issues).
## Important Update
diff --git a/docs-en/14-reference/04-taosadapter.md b/docs-en/14-reference/04-taosadapter.md
index de42e8a883d8b195b9d342f761e39458e557dfac..55d964c14a091109d82d67f0060e846d7e513c0c 100644
--- a/docs-en/14-reference/04-taosadapter.md
+++ b/docs-en/14-reference/04-taosadapter.md
@@ -24,15 +24,15 @@ taosAdapter provides the following features.
## taosAdapter architecture diagram
-
+
## taosAdapter Deployment Method
### Install taosAdapter
-taosAdapter has been part of TDengine server software since TDengine v2.4.0.0. If you use the TDengine server, you don't need additional steps to install taosAdapter. You can download taosAdapter from [TAOSData official website](https://taosdata.com/en/all-downloads/) to download the TDengine server installation package (taosAdapter is included in v2.4.0.0 and later version). If you need to deploy taosAdapter separately on another server other than the TDengine server, you should install the full TDengine on that server to install taosAdapter. If you need to build taosAdapter from source code, you can refer to the [Building taosAdapter]( https://github.com/taosdata/taosadapter/blob/develop/BUILD.md) documentation.
+taosAdapter has been part of TDengine server software since TDengine v2.4.0.0. If you use the TDengine server, you don't need additional steps to install taosAdapter. You can download the TDengine server installation package from the [TDengine official website](https://tdengine.com/all-downloads/) (taosAdapter is included in v2.4.0.0 and later versions). If you need to deploy taosAdapter separately on a server other than the TDengine server, you should install the full TDengine on that server to install taosAdapter. If you need to build taosAdapter from source code, you can refer to the [Building taosAdapter](https://github.com/taosdata/taosadapter/blob/develop/BUILD.md) documentation.
-### start/stop taosAdapter
+### Start/Stop taosAdapter
On Linux systems, the taosAdapter service is managed by `systemd` by default. You can use the command `systemctl start taosadapter` to start the taosAdapter service and use the command `systemctl stop taosadapter` to stop the taosAdapter service.
@@ -153,8 +153,7 @@ See [example/config/taosadapter.toml](https://github.com/taosdata/taosadapter/bl
## Feature List
-- Compatible with RESTful interfaces
- [https://www.taosdata.com/cn/documentation/connector#restful](https://www.taosdata.com/cn/documentation/connector#restful)
+- Compatible with RESTful interfaces [REST API](/reference/rest-api/)
- Compatible with InfluxDB v1 write interface
[https://docs.influxdata.com/influxdb/v2.0/reference/api/influxdb-1x/write/](https://docs.influxdata.com/influxdb/v2.0/reference/api/influxdb-1x/write/)
- Compatible with OpenTSDB JSON and telnet format writes
@@ -187,7 +186,7 @@ You can use any client that supports the http protocol to write data to or query
### InfluxDB
-You can use any client that supports the http protocol to access the Restful interface address `http://:6041/` to write data in InfluxDB compatible format to TDengine. The EndPoint is as follows:
+You can use any client that supports the http protocol to access the RESTful interface address `http://:6041/` to write data in InfluxDB compatible format to TDengine. The EndPoint is as follows:
```text
/influxdb/v1/write
@@ -204,7 +203,7 @@ Note: InfluxDB token authorization is not supported at present. Only Basic autho
### OpenTSDB
-You can use any client that supports the http protocol to access the Restful interface address `http://:6041/` to write data in OpenTSDB compatible format to TDengine.
+You can use any client that supports the http protocol to access the RESTful interface address `http://:6041/` to write data in OpenTSDB compatible format to TDengine.
```text
/opentsdb/v1/put/json/:db
diff --git a/docs-en/14-reference/06-taosdump.md b/docs-en/14-reference/06-taosdump.md
index 973999704b595ea9b742f1ef759f973aa1f05649..a7e216398a183a096678d8d70c429606d4e5f809 100644
--- a/docs-en/14-reference/06-taosdump.md
+++ b/docs-en/14-reference/06-taosdump.md
@@ -12,14 +12,13 @@ taosdump can back up a database, a super table, or a normal table as a logical d
If the specified location already has data files, taosdump will prompt the user and exit immediately to avoid data overwriting. This means that the same path can only be used for one backup.
Please be careful if you see a prompt for this.
-taosdump is a logical backup tool and should not be used to back up any raw data, environment settings,
Users should not use taosdump to back up raw data, environment settings, hardware information, server configuration, or cluster topology. taosdump uses [Apache AVRO](https://avro.apache.org/) as the data file format to store backup data.
## Installation
There are two ways to install taosdump:
-- Install the taosTools official installer. Please find taosTools from [All download links](https://www.taosdata.com/all-downloads) page and download and install it.
+- Install the taosTools official installer. Please find taosTools on the [All download links](https://www.tdengine.com/all-downloads) page, then download and install it.
- Compile taos-tools separately and install it. Please refer to the [taos-tools](https://github.com/taosdata/taos-tools) repository for details.
@@ -28,14 +27,14 @@ There are two ways to install taosdump:
### taosdump backup data
1. backing up all databases: specify the `-A` or `--all-databases` parameter.
-2. backup multiple specified databases: use `-D db1,db2,... ` parameters; 3.
+2. backup multiple specified databases: use `-D db1,db2,... ` parameters;
3. back up some super or normal tables in the specified database: use `-dbname stbname1 stbname2 tbname1 tbname2 ... ` parameters. Note that the first parameter of this input sequence is the database name, and only one database is supported. The second and subsequent parameters are the names of super or normal tables in that database, separated by spaces.
4. back up the system log database: TDengine clusters usually contain a system database named `log`. The data in this database is the data that TDengine runs itself, and taosdump will not back up the log database by default. If users need to back up the log database, they can use the `-a` or `--allow-sys` command-line parameter.
5. Loose mode backup: taosdump version 1.4.1 onwards provides the `-n` and `-L` parameters for backing up data without using escape characters, i.e. "loose" mode. This can reduce the backup time and the backup data footprint if table names, column names, and tag names do not use `escape character`. If you are unsure about using `-n` and `-L`, please use the default parameters for "strict" mode backup. See the [official documentation](/taos-sql/escape) for a description of escaped characters.
:::tip
- taosdump versions after 1.4.1 provide the `-I` argument for parsing Avro file schema and data. If users specify `-s`, taosdump will only parse the schema.
-- Backups after taosdump 1.4.2 use the batch count specified by the `-B` parameter. The default value is 16384. If, in some environments, low network speed or disk performance causes "Error actual dump ... batch ..." can be tried by challenging the `-B` parameter to a smaller value.
+- Backups after taosdump 1.4.2 use the batch count specified by the `-B` parameter. The default value is 16384. If, in some environments, low network speed or disk performance causes "Error actual dump ... batch ...", then try changing the `-B` parameter to a smaller value.
:::
@@ -44,7 +43,7 @@ There are two ways to install taosdump:
Restore the data file in the specified path: use the `-i` parameter plus the path to the data file. You should not use the same directory to back up different data sets, and you should not back up the same data set multiple times in the same path. Otherwise, backups will be overwritten or duplicated.
:::tip
-taosdump internally uses TDengine stmt binding API for writing recovery data and currently uses 16384 as one write batch for better data recovery performance. If there are more columns in the backup data, it may cause a "WAL size exceeds limit" error. You can try to adjust to a smaller value by using the `-B` parameter.
+taosdump internally uses TDengine stmt binding API for writing recovery data with a default batch size of 16384 for better data recovery performance. If there are more columns in the backup data, it may cause a "WAL size exceeds limit" error. You can try to adjust the batch size to a smaller value by using the `-B` parameter.
:::
diff --git a/docs-en/14-reference/07-tdinsight/index.md b/docs-en/14-reference/07-tdinsight/index.md
index dc337bf9fff2a9b60ea2f1c5110185a8ac683098..16bae615c04ab92e4934418d6c0a3aaf1e1ccde8 100644
--- a/docs-en/14-reference/07-tdinsight/index.md
+++ b/docs-en/14-reference/07-tdinsight/index.md
@@ -61,7 +61,7 @@ sudo yum install \
## Automated deployment of TDinsight
-We provide an installation script [`TDinsight.sh`](https://github.com/taosdata/grafanaplugin/releases/latest/download/TDinsight.sh) script to allow users to configure the installation automatically and quickly.
+We provide an installation script [`TDinsight.sh`](https://github.com/taosdata/grafanaplugin/releases/latest/download/TDinsight.sh) to allow users to configure the installation automatically and quickly.
You can download the script via `wget` or other tools:
@@ -233,33 +233,33 @@ The default username/password is `admin`. Grafana will require a password change
Point to the **Configurations** -> **Data Sources** menu, and click the **Add data source** button.
-
+
Search for and select **TDengine**.
-
+
Configure the TDengine datasource.
-
+
Save and test. It will report 'TDengine Data source is working' under normal circumstances.
-
+
### Importing dashboards
Point to **+** / **Create** - **import** (or `/dashboard/import` url).
-
+
Type the dashboard ID `15167` in the **Import via grafana.com** location and **Load**.
-
+
Once the import is complete, the full page view of TDinsight is shown below.
-
+
## TDinsight dashboard details
@@ -269,7 +269,7 @@ Details of the metrics are as follows.
### Cluster Status
-
+
This section contains the current information and status of the cluster; the alert information is also shown here (from left to right, top to bottom).
@@ -289,7 +289,7 @@ This section contains the current information and status of the cluster, the ale
### DNodes Status
-
+
- **DNodes Status**: simple table view of `show dnodes`.
- **DNodes Lifetime**: the time elapsed since the dnode was created.
@@ -298,14 +298,14 @@ This section contains the current information and status of the cluster, the ale
### MNode Overview
-
+
-1. **MNodes Status**: a simple table view of `show mnodes`. 2.
+1. **MNodes Status**: a simple table view of `show mnodes`.
2. **MNodes Number**: similar to `DNodes Number`, the number of MNodes changes.
### Request
-
+
1. **Requests Rate(Inserts per Second)**: average number of inserts per second.
2. **Requests (Selects)**: number of query requests and change rate (count of second).
@@ -313,46 +313,46 @@ This section contains the current information and status of the cluster, the ale
### Database
-
+
Database usage, repeated for each value of the variable `$database` i.e. multiple rows per database.
-1. **STables**: number of super tables. 2.
-2. **Total Tables**: number of all tables. 3.
-3. **Sub Tables**: the number of all super table sub-tables. 4.
+1. **STables**: number of super tables.
+2. **Total Tables**: number of all tables.
+3. **Sub Tables**: the number of all super table subtables.
4. **Tables**: graph of all normal table numbers over time.
5. **Tables Number Foreach VGroups**: The number of tables contained in each VGroups.
### DNode Resource Usage
-
+
Data node resource usage display, with multiple rows repeated for the variable `$fqdn`, i.e. each data node. It includes:
1. **Uptime**: the time elapsed since the dnode was created.
-2. **Has MNodes?**: whether the current dnode is a mnode. 3.
-3. **CPU Cores**: the number of CPU cores. 4.
-4. **VNodes Number**: the number of VNodes in the current dnode. 5.
-5. **VNodes Masters**: the number of vnodes in the master role. 6.
+2. **Has MNodes?**: whether the current dnode is a mnode.
+3. **CPU Cores**: the number of CPU cores.
+4. **VNodes Number**: the number of VNodes in the current dnode.
+5. **VNodes Masters**: the number of vnodes in the master role.
6. **Current CPU Usage of taosd**: CPU usage rate of taosd processes.
7. **Current Memory Usage of taosd**: memory usage of taosd processes.
8. **Disk Used**: The total disk usage percentage of the taosd data directory.
-9. **CPU Usage**: Process and system CPU usage. 10.
+9. **CPU Usage**: Process and system CPU usage.
10. **RAM Usage**: Time series view of RAM usage metrics.
11. **Disk Used**: Disks used at each level of multi-level storage (default is level0).
12. **Disk Increasing Rate per Minute**: Percentage increase or decrease in disk usage per minute.
-13. **Disk IO**: Disk IO rate. 14.
+13. **Disk IO**: Disk IO rate.
14. **Net IO**: Network IO, the aggregate network IO rate in addition to the local network.
### Login History
-
+
Currently, only the number of logins per minute is reported.
### Monitoring taosAdapter
-
+
Support monitoring taosAdapter request statistics and status details. It includes:
@@ -376,7 +376,7 @@ TDinsight installed via the `TDinsight.sh` script can be cleaned up using the co
To completely uninstall TDinsight during a manual installation, you need to clean up the following.
1. the TDinsight Dashboard in Grafana.
-2. the Data Source in Grafana. 3.
+2. the Data Source in Grafana.
3. remove the `tdengine-datasource` plugin from the plugin installation directory.
## Integrated Docker Example
diff --git a/docs-en/14-reference/08-taos-shell.md b/docs-en/14-reference/08-taos-shell.md
index fe5e5f2bc29509a4b96646253732076c7a6ee7ea..9bb5178300931e4b3808716badf06c85a4bbf396 100644
--- a/docs-en/14-reference/08-taos-shell.md
+++ b/docs-en/14-reference/08-taos-shell.md
@@ -4,11 +4,11 @@ sidebar_label: TDengine CLI
description: Instructions and tips for using the TDengine CLI
---
-The TDengine command-line application (hereafter referred to as `TDengine CLI`) is the most simplest way for users to manipulate and interact with TDengine instances.
+The TDengine command-line application (hereafter referred to as `TDengine CLI`) is the simplest way for users to manipulate and interact with TDengine instances.
## Installation
-If executed on the TDengine server-side, there is no need for additional installation steps to install TDengine CLI as it is already included and installed automatically. To run TDengine CLI on the environment which no TDengine server running, the TDengine client installation package needs to be installed first. For details, please refer to [connector](/reference/connector/).
+If executed on the TDengine server-side, there is no need for additional installation steps to install TDengine CLI as it is already included and installed automatically. To run TDengine CLI in an environment where no TDengine server is running, the TDengine client installation package needs to be installed first. For details, please refer to [connector](/reference/connector/).
## Execution
diff --git a/docs-en/14-reference/11-docker/index.md b/docs-en/14-reference/11-docker/index.md
index 4ca84be369e14b3223e8609e06c9ebc4e35eaa2d..f532a263d88def21bd8b0fe9c59adaf982ee2404 100644
--- a/docs-en/14-reference/11-docker/index.md
+++ b/docs-en/14-reference/11-docker/index.md
@@ -315,13 +315,13 @@ password: taosdata
taoslog-td2:
```
- :::note
+:::note
- The `VERSION` environment variable is used to set the tdengine image tag
- `TAOS_FIRST_EP` must be set on the newly created instance so that it can join the TDengine cluster; if there is a high availability requirement, `TAOS_SECOND_EP` needs to be used at the same time
- `TAOS_REPLICA` is used to set the default number of database replicas. Its value range is [1,3]
- We recommend setting with `TAOS_ARBITRATOR` to use arbitrator in a two-nodes environment.
- :::
-
+ We recommend setting `TAOS_ARBITRATOR` to use an arbitrator in a two-node environment.
+
+ :::
2. Start the cluster
diff --git a/docs-en/14-reference/12-config/index.md b/docs-en/14-reference/12-config/index.md
index 1a84f1539938ed8456d1c21c6def97d89305914d..10e23bbdb85c1aa65ffa021d3d7a7fdaf7b77b09 100644
--- a/docs-en/14-reference/12-config/index.md
+++ b/docs-en/14-reference/12-config/index.md
@@ -65,7 +65,7 @@ taos --dump-config
| ------------- | ------------------------------------------------------------------------ |
| Applicable | Server Only |
| Meaning | The FQDN of the host where `taosd` will be started. It can be an IP address |
-| Default Value | The first hostname configured for the hos |
+| Default Value | The first hostname configured for the host |
| Note | It should be within 96 bytes |
### serverPort
@@ -78,7 +78,7 @@ taos --dump-config
| Note | REST service is provided by `taosd` before 2.4.0.0 but by `taosAdapter` after 2.4.0.0, the default port of REST service is 6041 |
:::note
-TDengine uses continuous 13 ports, both TCP and TCP, from the port specified by `serverPort`. These ports need to be kept as open if firewall is enabled. Below table describes the ports used by TDengine in details.
+TDengine uses 13 continuous ports, both TCP and UDP, starting from the port specified by `serverPort`. These ports need to be kept open if the firewall is enabled. The table below describes the ports used by TDengine in detail.
:::
@@ -182,8 +182,8 @@ TDengine uses continuous 13 ports, both TCP and TCP, from the port specified by
| ------------- | -------------------------------------------- |
| Applicable | Server Only |
| Meaning | The maximum number of distinct rows returned |
-| Value Range | [100,000 - 100, 000, 000] |
-| Default Value | 100, 000 |
+| Value Range | [100,000 - 100,000,000] |
+| Default Value | 100,000 |
| Note | After version 2.3.0.0 |
## Locale Parameters
@@ -240,7 +240,7 @@ To avoid the problems of using time strings, Unix timestamp can be used directly
| Default Value | Locale configured in host |
:::info
-A specific type "nchar" is provided in TDengine to store non-ASCII characters such as Chinese, Japanese, Korean. The characters to be stored in nchar type are firstly encoded in UCS4-LE before sending to server side. To store non-ASCII characters correctly, the encoding format of the client side needs to be set properly.
+A specific type "nchar" is provided in TDengine to store non-ASCII characters such as Chinese, Japanese, and Korean. The characters to be stored in nchar type are firstly encoded in UCS4-LE before sending to server side. To store non-ASCII characters correctly, the encoding format of the client side needs to be set properly.
The characters input on the client side are encoded using the default system encoding, which is UTF-8 on Linux, or GB18030 or GBK on some systems in Chinese, POSIX in docker, CP936 on Windows in Chinese. The encoding of the operating system in use must be set correctly so that the characters in nchar type can be converted to UCS4-LE.
@@ -779,7 +779,7 @@ To prevent system resource from being exhausted by multiple concurrent streams,
:::note
The HTTP server was provided by `taosd` prior to version 2.4.0.0; since version 2.4.0.0 it is provided by `taosAdapter`.
-The parameters described in this section are only application in versions prior to 2.4.0.0. If you are using any version from 2.4.0.0, please refer to [taosAdapter]](/reference/taosadapter/).
+The parameters described in this section are only applicable to versions prior to 2.4.0.0. If you are using version 2.4.0.0 or above, please refer to [taosAdapter](/reference/taosadapter/).
:::
diff --git a/docs-en/14-reference/12-directory.md b/docs-en/14-reference/12-directory.md
index dbdba2b715bb41baf9b70dce91a3065e585d0434..304e3bcb434ee9a6ba338577a4d1ba546b548e3f 100644
--- a/docs-en/14-reference/12-directory.md
+++ b/docs-en/14-reference/12-directory.md
@@ -32,7 +32,7 @@ All executable files of TDengine are in the _/usr/local/taos/bin_ directory by d
- _taosd-dump-cfg.gdb_: script to facilitate debugging of taosd's gdb execution.
:::note
-taosdump after version 2.4.0.0 require taosTools as a standalone installation. A few version taosBenchmark is include in taosTools too.
+taosdump after version 2.4.0.0 requires taosTools as a standalone installation. A new version of taosBenchmark is included in taosTools too.
:::
:::tip
diff --git a/docs-en/14-reference/13-schemaless/13-schemaless.md b/docs-en/14-reference/13-schemaless/13-schemaless.md
index d9ce9b434dd14a89d243b2ed629f3fde64e6aba0..ff0b2c51bd433788c593b6e20d4c341a9af7e921 100644
--- a/docs-en/14-reference/13-schemaless/13-schemaless.md
+++ b/docs-en/14-reference/13-schemaless/13-schemaless.md
@@ -3,17 +3,17 @@ title: Schemaless Writing
description: "The Schemaless write method eliminates the need to create super tables/sub tables in advance and automatically creates the storage structure corresponding to the data as it is written to the interface."
---
-In IoT applications, many data items are often collected for intelligent control, business analysis, device monitoring, etc. Due to the version upgrade of the application logic, or the hardware adjustment of the device itself, the data collection items may change more frequently. To facilitate the data logging work in such cases, TDengine starting from version 2.2.0.0, it provides a series of interfaces to the schemaless writing method, which eliminates the need to create super tables/sub tables in advance and automatically creates the storage structure corresponding to the data as the data is written to the interface. And when necessary, Schemaless writing will automatically add the required columns to ensure that the data written by the user is stored correctly.
+In IoT applications, many data items are often collected for intelligent control, business analysis, device monitoring, etc. Due to version upgrades of the application logic, or hardware adjustments of the devices themselves, the data collection items may change frequently. To facilitate data logging in such cases, starting from version 2.2.0.0 TDengine provides a series of interfaces for the schemaless writing method, which eliminate the need to create super tables and subtables in advance by automatically creating the storage structure corresponding to the data as the data is written to the interface. And when necessary, schemaless writing will automatically add the required columns to ensure that the data written by the user is stored correctly.
-The schemaless writing method creates super tables and their corresponding sub-tables completely indistinguishable from the super tables and sub-tables created directly via SQL. You can write data directly to them via SQL statements. Note that the names of tables created by schemaless writing are based on fixed mapping rules for tag values, so they are not explicitly ideographic and lack readability.
+The schemaless writing method creates super tables and their corresponding subtables that are completely indistinguishable from the super tables and subtables created directly via SQL. You can write data directly to them via SQL statements. Note that the names of tables created by schemaless writing are based on fixed mapping rules for tag values, so they are not explicitly meaningful and lack readability.
## Schemaless Writing Line Protocol
-TDengine's schemaless writing line protocol supports to be compatible with InfluxDB's Line Protocol, OpenTSDB's telnet line protocol, and OpenTSDB's JSON format protocol. However, when using these three protocols, you need to specify in the API the standard of the parsing protocol to be used for the input content.
+TDengine's schemaless writing line protocol supports InfluxDB's Line Protocol, OpenTSDB's telnet line protocol, and OpenTSDB's JSON format protocol. However, when using these three protocols, you need to specify in the API the standard of the parsing protocol to be used for the input content.
For the standard writing protocols of InfluxDB and OpenTSDB, please refer to the documentation of each protocol. The following is a description of TDengine's extended protocol, which is based on InfluxDB's line protocol. It allows users to control the (super table) schema in a more granular way.
-With the following formatting conventions, Schemaless writing uses a single string to express a data row (multiple rows can be passed into the writing API at once to enable bulk writing).
+With the following formatting conventions, schemaless writing uses a single string to express a data row (multiple rows can be passed into the writing API at once to enable bulk writing).
```json
measurement,tag_set field_set timestamp
@@ -23,7 +23,7 @@ where :
- measurement will be used as the data table name. It will be separated from tag_set by a comma.
- tag_set will be used as tag data in the format `<tag_key>=<tag_value>,<tag_key>=<tag_value>`, i.e. multiple tags' data can be separated by a comma. It is separated from field_set by a space.
-- field_set will be used as normal column data in the format of `=,=`, again using a comma to separate multiple normal columns of data. It is separated from the timestamp by space.
+- field_set will be used as normal column data in the format of `<field_key>=<field_value>,<field_key>=<field_value>`, again using a comma to separate multiple normal columns of data. It is separated from the timestamp by a space.
- The timestamp is the primary key corresponding to the data in this row.
All data in tag_set is automatically converted to the NCHAR data type and does not require double quotes (").
@@ -32,7 +32,7 @@ In the schemaless writing data line protocol, each data item in the field_set ne
- If there are English double quotes on both sides, it indicates the BINARY(32) type. For example, `"abc"`.
- If there are double quotes on both sides and an L prefix, it means NCHAR(32) type. For example, `L"error message"`.
-- Spaces, equal signs (=), commas (,), and double quotes (") need to be escaped with a backslash (\) in front. (All refer to the ASCII character)
+- Spaces, equal signs (=), commas (,), and double quotes (") need to be escaped with a backslash (\\) in front. (All refer to the ASCII character)
- Numeric types will be distinguished by the suffix, which indicates the data type.
| **Serial number** | **Postfix** | **Mapping type** | **Size (bytes)** |
@@ -58,21 +58,21 @@ Note that if the wrong case is used when describing the data type suffix, or if
Schemaless writes process row data according to the following principles.
-1. You can use the following rules to generate the sub-table names: first, combine the measurement name and the key and value of the label into the next string:
+1. You can use the following rules to generate the subtable names: first, combine the measurement name and the keys and values of the tags into the following string:
```json
"measurement,tag_key1=tag_value1,tag_key2=tag_value2"
```
Note that tag_key1 and tag_key2 are not in the original order the tags were entered by the user but are sorted in ascending order of the tag name strings. Therefore, tag_key1 is not necessarily the first tag entered in the line protocol.
-The string's MD5 hash value "md5_val" is calculated after the ranking is completed. The calculation result is then combined with the string to generate the table name: "t_md5_val". "t*" is a fixed prefix that every table generated by this mapping relationship has. 2.
+The string's MD5 hash value "md5_val" is calculated after the sorting is completed. The result is then combined with the fixed prefix to generate the table name "t_md5_val", where "t_" is a fixed prefix that every table generated by this mapping relationship has.
2. If the super table obtained by parsing the line protocol does not exist, this super table is created.
-If the sub-table obtained by the parse line protocol does not exist, Schemaless creates the sub-table according to the sub-table name determined in steps 1 or 2. 4.
+3. If the subtable obtained by parsing the line protocol does not exist, schemaless writing creates the subtable according to the subtable name determined in steps 1 or 2.
4. If the specified tag or regular column in the data row does not exist, the corresponding tag or regular column is added to the super table (only incremental).
5. If there are some tag columns or regular columns in the super table that are not specified to take values in a data row, then the values of these columns are set to NULL.
6. For BINARY or NCHAR columns, if the length of the value provided in a data row exceeds the column type limit, the maximum length of characters allowed to be stored in the column is automatically increased (only incremented and not decremented) to ensure complete preservation of the data.
-7. If the specified data sub-table already exists, and the specified tag column takes a value different from the saved value this time, the value in the latest data row overwrites the old tag column take value.
+7. If the specified data subtable already exists, and the specified tag column takes a value different from the saved value this time, the value in the latest data row overwrites the old tag column value.
8. Errors encountered throughout the processing will interrupt the writing process and return an error code.
:::tip
diff --git a/docs-en/20-third-party/01-grafana.mdx b/docs-en/20-third-party/01-grafana.mdx
index 7239710e0aebdd95977d9b73a5a1a9fccd656542..dc2033ae6f789908d4d9f9ecd96c9396748c4400 100644
--- a/docs-en/20-third-party/01-grafana.mdx
+++ b/docs-en/20-third-party/01-grafana.mdx
@@ -23,7 +23,7 @@ You can download The Grafana plugin for TDengine from Data Sources` on the left side, as shown in the following figure.
-
+
Click `Add data source` to enter the Add data source page, and enter TDengine in the query box to add it, as shown in the following figure.
-
+
Enter the datasource configuration page, and follow the default prompts to modify the corresponding configuration.
-
+
- Host: IP address of the server where the components of the TDengine cluster provide REST service (offered by taosd before 2.4 and by taosAdapter since 2.4) and the port number of the TDengine REST service (6041), by default use `http://localhost:6041`.
- User: TDengine user name.
@@ -78,23 +78,23 @@ Enter the datasource configuration page, and follow the default prompts to modif
Click `Save & Test` to test. The following screenshot shows a successful test.
-
+
### Create Dashboard
Go back to the main interface to create the Dashboard, and click Add Query to enter the panel query page:
-
+
As shown above, select the `TDengine` data source in the `Query` dropdown and enter the corresponding SQL in the query box below to perform the query.
-- INPUT SQL: enter the statement to be queried (the result set of the SQL statement should be two columns and multiple rows), for example: `select avg(mem_system) from log.dn where ts >= $from and ts < $to interval($interval)`, where, from, to and interval are built-in variables of the TDengine plugin, indicating the range and time interval of queries fetched from the Grafana plugin panel. In addition to the built-in variables, ` custom template variables are also supported.
+- INPUT SQL: enter the statement to be queried (the result set of the SQL statement should be two columns and multiple rows), for example: `select avg(mem_system) from log.dn where ts >= $from and ts < $to interval($interval)`, where `from`, `to` and `interval` are built-in variables of the TDengine plugin, indicating the range and time interval of queries fetched from the Grafana plugin panel. In addition to the built-in variables, custom template variables are also supported.
- ALIAS BY: This allows you to set the current query alias.
- GENERATE SQL: Clicking this button will automatically replace the corresponding variables and generate the final executed statement.
Follow the default prompt to query the average system memory usage for the specified interval on the server where the current TDengine deployment is located as follows.
-
+
> For more information on how to use Grafana to create the appropriate monitoring interface and for more details on using Grafana, refer to the official Grafana [documentation](https://grafana.com/docs/).
diff --git a/docs-en/20-third-party/09-emq-broker.md b/docs-en/20-third-party/09-emq-broker.md
index 560c6463b59b00a362023d6cfa44cf833419a9ea..738372cabd736c0be47b4080cc2c984e5110236c 100644
--- a/docs-en/20-third-party/09-emq-broker.md
+++ b/docs-en/20-third-party/09-emq-broker.md
@@ -3,7 +3,7 @@ sidebar_label: EMQX Broker
title: EMQX Broker writing
---
-MQTT is a popular IoT data transfer protocol, [EMQX](https://github.com/emqx/emqx) is an open-source MQTT Broker software, without any code, only need to use "rules" in EMQX Dashboard to do simple configuration. You can write MQTT data directly to TDengine. EMQX supports saving data to TDengine by sending it to web services and provides a native TDengine driver for direct saving in the Enterprise Edition. Please refer to the [EMQX official documentation](https://www.emqx.io/docs/en/v4.4/rule/rule-engine.html) for details on how to use it. tdengine).
+MQTT is a popular IoT data transfer protocol. [EMQX](https://github.com/emqx/emqx) is open-source MQTT Broker software; you can write MQTT data directly to TDengine without any code by simply configuring "rules" in the EMQX Dashboard. EMQX supports saving data to TDengine by sending it to a web service and, in the Enterprise Edition, provides a native TDengine driver for direct saving. Please refer to the [EMQX official documentation](https://www.emqx.io/docs/en/v4.4/rule/rule-engine.html) for details on how to use it.
## Prerequisites
@@ -44,25 +44,25 @@ Since the configuration interface of EMQX differs from version to version, here
Use your browser to open the URL `http://IP:18083` and log in to the EMQX Dashboard. The initial installation username is `admin` and the password is `public`.
-
+
### Creating Rule
Select "Rule" in the "Rule Engine" on the left and click the "Create" button: !
-
+
### Edit SQL fields
-
+
### Add "action handler"
-
+
### Add "Resource"
-
+
Select "Data to Web Service" and click the "New Resource" button.
@@ -70,13 +70,13 @@ Select "Data to Web Service" and click the "New Resource" button.
Select "Data to Web Service" and fill in the request URL as the address and port of the server running taosAdapter (default is 6041). Leave the other properties at their default values.
-
+
### Edit "action"
Edit the resource configuration to add the key/value pairing for Authorization. Please refer to the [TDengine REST API documentation](https://docs.taosdata.com/reference/rest-api/) for details on authorization. Enter the rule engine replacement template in the message body.
-
+
## Compose program to mock data
@@ -163,7 +163,7 @@ Edit the resource configuration to add the key/value pairing for Authorization.
Note: `CLIENT_NUM` in the code can be set to a smaller value at the beginning of the test to avoid hardware performance being insufficient to handle a large number of concurrent clients.
-
+
## Execute tests to simulate sending MQTT data
@@ -172,19 +172,19 @@ npm install mqtt mockjs --save ---registry=https://registry.npm.taobao.org
node mock.js
```
-
+
## Verify that EMQX is receiving data
Refresh the EMQX Dashboard rules engine interface to see how many records were received correctly:
-
+
## Verify that data is written to TDengine
Use the TDengine CLI program to log in and query the appropriate databases and tables to verify that the data is being written to TDengine correctly:
-
+
Please refer to the [TDengine official documentation](https://docs.taosdata.com/) for more details on how to use TDengine.
Please refer to the [EMQX official documentation](https://www.emqx.io/docs/en/v4.4/rule/rule-engine.html) for details on how to use EMQX.
diff --git a/docs-en/20-third-party/11-kafka.md b/docs-en/20-third-party/11-kafka.md
index 2da9a86b7d3def338497c9c0f3481918b566aaed..9c78a6645a0578d3b8d494d1fa60831eb88b3c81 100644
--- a/docs-en/20-third-party/11-kafka.md
+++ b/docs-en/20-third-party/11-kafka.md
@@ -9,11 +9,11 @@ TDengine Kafka Connector contains two plugins: TDengine Source Connector and TDe
Kafka Connect is a component of Apache Kafka that enables other systems, such as databases, cloud services, file systems, etc., to connect to Kafka easily. Data can flow from other software to Kafka via Kafka Connect and from Kafka to other systems via Kafka Connect. Plugins that read data from other software are called Source Connectors, and plugins that write data to other software are called Sink Connectors. Neither Source Connectors nor Sink Connectors connect directly to the Kafka Broker: a Source Connector transfers data to Kafka Connect, and a Sink Connector receives data from Kafka Connect.
-
+
TDengine Source Connector is used to read data from TDengine in real-time and send it to Kafka Connect. Users can use the TDengine Sink Connector to receive data from Kafka Connect and write it to TDengine.
-
+
## What is Confluent?
@@ -26,7 +26,7 @@ Confluent adds many extensions to Kafka. include:
5. GUI for managing and monitoring Kafka - Confluent Control Center
Some of these extensions are available in the community version of Confluent. Some are only available in the enterprise version.
-
+
Confluent Enterprise Edition provides the `confluent` command-line tool to manage various components.
@@ -228,7 +228,7 @@ taos> select * from meters;
Query OK, 4 row(s) in set (0.004208s)
```
-If you see the above data, the synchronization is successful. If not, check the logs of Kafka Connect. For detailed description of configuration parameters, see [Configuration Reference](#Configuration Reference).
+If you see the above data, the synchronization is successful. If not, check the logs of Kafka Connect. For detailed description of configuration parameters, see [Configuration Reference](#configuration-reference).
## The use of TDengine Source Connector
diff --git a/docs-en/21-tdinternal/01-arch.md b/docs-en/21-tdinternal/01-arch.md
index 2c430908e410c7ae8e6f09a3f7e2d059f906fda5..16d4b7afe26107e251a542ee24b644c1d372def0 100644
--- a/docs-en/21-tdinternal/01-arch.md
+++ b/docs-en/21-tdinternal/01-arch.md
@@ -11,7 +11,7 @@ The design of TDengine is based on the assumption that any hardware or software
The logical structure diagram of the TDengine distributed architecture is as follows:
-
+
Figure 1: TDengine architecture diagram
A complete TDengine system runs on one or more physical nodes. Logically, it includes data node (dnode), TDengine client driver (TAOSC) and application (app). There are one or more data nodes in the system, which form a cluster. The application interacts with the TDengine cluster through TAOSC's API. The following is a brief introduction to each logical unit.
@@ -54,7 +54,7 @@ A complete TDengine system runs on one or more physical nodes. Logically, it inc
To explain the relationship between vnode, mnode, TAOSC and application and their respective roles, the following is an analysis of a typical data writing process.
-
+
Figure 2: Typical process of TDengine
1. Application initiates a request to insert data through JDBC, ODBC, or other APIs.
@@ -123,7 +123,7 @@ If a database has N replicas, thus a virtual node group has N virtual nodes, but
Master Vnode uses a writing process as follows:
-
+
Figure 3: TDengine Master writing process
1. Master vnode receives the application data insertion request, verifies, and moves to next step;
@@ -137,7 +137,7 @@ Master Vnode uses a writing process as follows:
For a slave vnode, the write process is as follows:
-
+
Figure 4: TDengine Slave Writing Process
1. Slave vnode receives a data insertion request forwarded by Master vnode;
@@ -267,7 +267,7 @@ For the data collected by device D1001, the number of records per hour is counte
TDengine creates a separate table for each data collection point, but in practical applications, it is often necessary to aggregate data from different data collection points. In order to perform aggregation operations efficiently, TDengine introduces the concept of STable. STable is used to represent a specific type of data collection point. It is a table set containing multiple tables. The schema of each table in the set is the same, but each table has its own static tags. The tags can be multiple and can be added, deleted and modified at any time. Applications can aggregate or perform statistical operations on all or a subset of tables under a STable by specifying tag filters, thus greatly simplifying the development of applications. The process is shown in the following figure:
-
+
Figure 5: Diagram of multi-table aggregation query
1. Application sends a query condition to system;
diff --git a/docs-en/25-application/01-telegraf.md b/docs-en/25-application/01-telegraf.md
index 07ab289ac2bbf44c219535fe128db69b34465c01..6a57145cd3d82ca5ec1ab828bfc7b6270bbe9d47 100644
--- a/docs-en/25-application/01-telegraf.md
+++ b/docs-en/25-application/01-telegraf.md
@@ -16,7 +16,7 @@ Current mainstream IT DevOps system usually include a data collection module, a
This article introduces how to quickly build a TDengine + Telegraf + Grafana based IT DevOps visualization system without writing even a single line of code and by simply modifying a few lines of configuration files. The architecture is as follows.
-
+
## Installation steps
@@ -73,9 +73,9 @@ sudo systemctl start telegraf
Log in to the Grafana interface using a web browser at `IP:3000`, with the system's initial username and password being `admin/admin`.
Click on the gear icon on the left and select `Plugins`, you should find the TDengine data source plugin icon.
-Click on the plus icon on the left and select `Import` to get the data from `https://github.com/taosdata/grafanaplugin/blob/master/examples/telegraf/grafana/dashboards/telegraf-dashboard- v0.1.0.json`, download the dashboard JSON file and import it. You will then see the dashboard in the following screen.
+Click on the plus icon on the left and select `Import` to get the data from `https://github.com/taosdata/grafanaplugin/blob/master/examples/telegraf/grafana/dashboards/telegraf-dashboard-v0.1.0.json`, download the dashboard JSON file and import it. You will then see the dashboard in the following screen.
-
+
## Wrap-up
diff --git a/docs-en/25-application/02-collectd.md b/docs-en/25-application/02-collectd.md
index 0ddea2855497f1dfdfce7a2aa6749e0c5ba1b9ff..963881eafa6e5085eab951c1b1ab54faeba1fa7b 100644
--- a/docs-en/25-application/02-collectd.md
+++ b/docs-en/25-application/02-collectd.md
@@ -17,7 +17,7 @@ The new version of TDengine supports multiple data protocols and can accept data
This article introduces how to quickly build an IT DevOps visualization system based on TDengine + collectd / StatsD + Grafana without writing even a single line of code but by simply modifying a few lines of configuration files. The architecture is shown in the following figure.
-
+
## Installation Steps
@@ -83,19 +83,19 @@ Click on the gear icon on the left and select `Plugins`, you should find the TDe
Download the dashboard JSON file from `https://github.com/taosdata/grafanaplugin/blob/master/examples/collectd/grafana/dashboards/collect-metrics-with-tdengine-v0.1.0.json`, click the plus icon on the left, select Import, and follow the instructions to import the JSON file. After that, the dashboard can be seen in the following screen.
-
+
#### Importing the collectd dashboard
Download the dashboard JSON file from `https://github.com/taosdata/grafanaplugin/blob/master/examples/collectd/grafana/dashboards/collect-metrics-with-tdengine-v0.1.0.json`, click the plus icon on the left side, select `Import`, and follow the interface prompts to import the JSON file. After that, you can see the dashboard in the following screen.
-
+
#### Importing the StatsD dashboard
Download the dashboard json from `https://github.com/taosdata/grafanaplugin/blob/master/examples/statsd/dashboards/statsd-with-tdengine-v0.1.0.json`. Click on the plus icon on the left and select `Import`, and follow the interface prompts to import the JSON file. You will then see the dashboard in the following screen.
-
+
## Wrap-up
diff --git a/docs-en/25-application/03-immigrate.md b/docs-en/25-application/03-immigrate.md
index 68d8a2b8cc25c80b8a647332df66874bee344715..69166bf78b66a23af35af726f2e5c477195a3595 100644
--- a/docs-en/25-application/03-immigrate.md
+++ b/docs-en/25-application/03-immigrate.md
@@ -32,7 +32,7 @@ We will explain how to migrate OpenTSDB applications to TDengine quickly, secure
The following figure (Figure 1) shows the system's overall architecture for a typical DevOps application scenario.
**Figure 1. Typical architecture in a DevOps scenario**
-
+
In this application scenario, there are Agent tools deployed in the application environment to collect machine metrics, network metrics, and application metrics; data collectors to aggregate the information collected by agents; systems for persistent data storage and management; and tools for monitoring data visualization (e.g., Grafana).
@@ -75,7 +75,7 @@ After writing the data to TDengine properly, you can adapt Grafana to visualize
TDengine provides two sets of Dashboard templates by default, and users only need to import the templates from the Grafana directory into Grafana to activate their use.
**Figure 2. Importing Grafana Templates**
-
+
After the above steps, you completed the migration to replace OpenTSDB with TDengine. You can see that the whole process is straightforward, there is no need to write any code, and only some configuration files need to be adjusted to meet the migration work.
@@ -88,7 +88,7 @@ In most DevOps scenarios, if you have a small OpenTSDB cluster (3 or fewer nodes
If your application is particularly complex, or the application domain is not a DevOps scenario, you can continue reading subsequent chapters for a more comprehensive and in-depth look at the advanced topics of migrating an OpenTSDB application to TDengine.
**Figure 3. System architecture after migration**
-
+
## Migration evaluation and strategy for other scenarios
diff --git a/docs-en/27-train-faq/03-docker.md b/docs-en/27-train-faq/03-docker.md
index 3f560bcfef6119480b5499649cee1602656dbd6f..8f27c35d7945043d39ad83626ceccee941ad135e 100644
--- a/docs-en/27-train-faq/03-docker.md
+++ b/docs-en/27-train-faq/03-docker.md
@@ -118,7 +118,7 @@ Output is like below:
{"status":"succ","head":["name","created_time","ntables","vgroups","replica","quorum","days","keep0,keep1,keep(D)","cache(MB)","blocks","minrows","maxrows","wallevel","fsync","comp","cachelast","precision","update","status"],"column_meta":[["name",8,32],["created_time",9,8],["ntables",4,4],["vgroups",4,4],["replica",3,2],["quorum",3,2],["days",3,2],["keep0,keep1,keep(D)",8,24],["cache(MB)",4,4],["blocks",4,4],["minrows",4,4],["maxrows",4,4],["wallevel",2,1],["fsync",4,4],["comp",2,1],["cachelast",2,1],["precision",8,3],["update",2,1],["status",8,10]],"data":[["test","2021-08-18 06:01:11.021",10000,4,1,1,10,"3650,3650,3650",16,6,100,4096,1,3000,2,0,"ms",0,"ready"],["log","2021-08-18 05:51:51.065",4,1,1,1,10,"30,30,30",1,3,100,4096,1,3000,2,0,"us",0,"ready"]],"rows":2}
```
-For details of REST API please refer to [REST API]](/reference/rest-api/).
+For details of the REST API, please refer to [REST API](/reference/rest-api/).
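As a quick illustration of that REST API, the sketch below issues the same `show databases;` statement over HTTP from Python. It assumes taosAdapter is listening on `localhost:6041` with the default `root`/`taosdata` credentials, as in the container example above.

```python
# Minimal sketch: run a SQL statement through the TDengine REST endpoint.
# Assumes taosAdapter on localhost:6041 with default root/taosdata credentials.
import requests

resp = requests.post(
    "http://localhost:6041/rest/sql",
    data="show databases;",
    auth=("root", "taosdata"),  # HTTP Basic authentication
)
resp.raise_for_status()
result = resp.json()
print(result["status"], result.get("rows"))  # e.g. "succ" 2, as in the output above
```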
### Run TDengine server and taosAdapter inside container
@@ -265,7 +265,7 @@ Below is an example output:
$ taos> select groupid, location from test.d0;
groupid | location |
=================================
- 0 | California.SanDieo |
+ 0 | California.SanDiego |
Query OK, 1 row(s) in set (0.003490s)
```
diff --git a/docs-examples/c/async_query_example.c b/docs-examples/c/async_query_example.c
index 262757f02b5c52f2d4402d363663db80bb38a54d..b370420b124a21b05f8e0b4041fb1461b1e2478a 100644
--- a/docs-examples/c/async_query_example.c
+++ b/docs-examples/c/async_query_example.c
@@ -182,14 +182,14 @@ int main() {
// query callback ...
// ts current voltage phase location groupid
// numOfRow = 8
-// 1538548685000 10.300000 219 0.310000 beijing.chaoyang 2
-// 1538548695000 12.600000 218 0.330000 beijing.chaoyang 2
-// 1538548696800 12.300000 221 0.310000 beijing.chaoyang 2
-// 1538548696650 10.300000 218 0.250000 beijing.chaoyang 3
-// 1538548685500 11.800000 221 0.280000 beijing.haidian 2
-// 1538548696600 13.400000 223 0.290000 beijing.haidian 2
-// 1538548685000 10.800000 223 0.290000 beijing.haidian 3
-// 1538548686500 11.500000 221 0.350000 beijing.haidian 3
+// 1538548685500 11.800000 221 0.280000 california.losangeles 2
+// 1538548696600 13.400000 223 0.290000 california.losangeles 2
+// 1538548685000 10.800000 223 0.290000 california.losangeles 3
+// 1538548686500 11.500000 221 0.350000 california.losangeles 3
+// 1538548685000 10.300000 219 0.310000 california.sanfrancisco 2
+// 1538548695000 12.600000 218 0.330000 california.sanfrancisco 2
+// 1538548696800 12.300000 221 0.310000 california.sanfrancisco 2
+// 1538548696650 10.300000 218 0.250000 california.sanfrancisco 3
// numOfRow = 0
// no more data, close the connection.
// ANCHOR_END: demo
\ No newline at end of file
diff --git a/docs-examples/c/insert_example.c b/docs-examples/c/insert_example.c
index ca12be9314efbda707dbd05449c746794c209743..ce8fdc5b9372aec7b02d3c9254ec25c4c4f62adc 100644
--- a/docs-examples/c/insert_example.c
+++ b/docs-examples/c/insert_example.c
@@ -36,10 +36,10 @@ int main() {
executeSQL(taos, "CREATE DATABASE power");
executeSQL(taos, "USE power");
executeSQL(taos, "CREATE STABLE meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT)");
- executeSQL(taos, "INSERT INTO d1001 USING meters TAGS(Beijing.Chaoyang, 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000)"
- "d1002 USING meters TAGS(Beijing.Chaoyang, 3) VALUES ('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000)"
- "d1003 USING meters TAGS(Beijing.Haidian, 2) VALUES ('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000) ('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000)"
- "d1004 USING meters TAGS(Beijing.Haidian, 3) VALUES ('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000) ('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)");
+ executeSQL(taos, "INSERT INTO d1001 USING meters TAGS(California.SanFrancisco, 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000)"
+ "d1002 USING meters TAGS(California.SanFrancisco, 3) VALUES ('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000)"
+ "d1003 USING meters TAGS(California.LosAngeles, 2) VALUES ('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000) ('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000)"
+ "d1004 USING meters TAGS(California.LosAngeles, 3) VALUES ('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000) ('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)");
taos_close(taos);
taos_cleanup();
}
diff --git a/docs-examples/c/json_protocol_example.c b/docs-examples/c/json_protocol_example.c
index 182fd201308facc80c76f36cfa57580784d70413..9d276127a64c3d74322e30587ab2e319c29cbf65 100644
--- a/docs-examples/c/json_protocol_example.c
+++ b/docs-examples/c/json_protocol_example.c
@@ -29,11 +29,11 @@ int main() {
executeSQL(taos, "USE test");
char *line =
"[{\"metric\": \"meters.current\", \"timestamp\": 1648432611249, \"value\": 10.3, \"tags\": {\"location\": "
- "\"Beijing.Chaoyang\", \"groupid\": 2}},{\"metric\": \"meters.voltage\", \"timestamp\": 1648432611249, "
- "\"value\": 219, \"tags\": {\"location\": \"Beijing.Haidian\", \"groupid\": 1}},{\"metric\": \"meters.current\", "
- "\"timestamp\": 1648432611250, \"value\": 12.6, \"tags\": {\"location\": \"Beijing.Chaoyang\", \"groupid\": "
+ "\"California.SanFrancisco\", \"groupid\": 2}},{\"metric\": \"meters.voltage\", \"timestamp\": 1648432611249, "
+ "\"value\": 219, \"tags\": {\"location\": \"California.LosAngeles\", \"groupid\": 1}},{\"metric\": \"meters.current\", "
+ "\"timestamp\": 1648432611250, \"value\": 12.6, \"tags\": {\"location\": \"California.SanFrancisco\", \"groupid\": "
"2}},{\"metric\": \"meters.voltage\", \"timestamp\": 1648432611250, \"value\": 221, \"tags\": {\"location\": "
- "\"Beijing.Haidian\", \"groupid\": 1}}]";
+ "\"California.LosAngeles\", \"groupid\": 1}}]";
char *lines[] = {line};
TAOS_RES *res = taos_schemaless_insert(taos, lines, 1, TSDB_SML_JSON_PROTOCOL, TSDB_SML_TIMESTAMP_NOT_CONFIGURED);
diff --git a/docs-examples/c/line_example.c b/docs-examples/c/line_example.c
index 8dd4b1a5075369625645959da0476b76b9fbf290..ce39f8d9df744082a450ce246529bf56adebd1e0 100644
--- a/docs-examples/c/line_example.c
+++ b/docs-examples/c/line_example.c
@@ -27,10 +27,10 @@ int main() {
executeSQL(taos, "DROP DATABASE IF EXISTS test");
executeSQL(taos, "CREATE DATABASE test");
executeSQL(taos, "USE test");
- char *lines[] = {"meters,location=Beijing.Haidian,groupid=2 current=11.8,voltage=221,phase=0.28 1648432611249",
- "meters,location=Beijing.Haidian,groupid=2 current=13.4,voltage=223,phase=0.29 1648432611250",
- "meters,location=Beijing.Haidian,groupid=3 current=10.8,voltage=223,phase=0.29 1648432611249",
- "meters,location=Beijing.Haidian,groupid=3 current=11.3,voltage=221,phase=0.35 1648432611250"};
+ char *lines[] = {"meters,location=California.LosAngeles,groupid=2 current=11.8,voltage=221,phase=0.28 1648432611249",
+ "meters,location=California.LosAngeles,groupid=2 current=13.4,voltage=223,phase=0.29 1648432611250",
+ "meters,location=California.LosAngeles,groupid=3 current=10.8,voltage=223,phase=0.29 1648432611249",
+ "meters,location=California.LosAngeles,groupid=3 current=11.3,voltage=221,phase=0.35 1648432611250"};
TAOS_RES *res = taos_schemaless_insert(taos, lines, 4, TSDB_SML_LINE_PROTOCOL, TSDB_SML_TIMESTAMP_MILLI_SECONDS);
if (taos_errno(res) != 0) {
printf("failed to insert schema-less data, reason: %s\n", taos_errstr(res));
diff --git a/docs-examples/c/multi_bind_example.c b/docs-examples/c/multi_bind_example.c
index fe11df9caad3e216fbd0b1ff2f40a54fe3ba86e5..02e6568e9e88ac8703a4993ed406e770d23c2438 100644
--- a/docs-examples/c/multi_bind_example.c
+++ b/docs-examples/c/multi_bind_example.c
@@ -52,7 +52,7 @@ void insertData(TAOS *taos) {
checkErrorCode(stmt, code, "failed to execute taos_stmt_prepare");
// bind table name and tags
TAOS_BIND tags[2];
- char *location = "Beijing.Chaoyang";
+ char *location = "California.SanFrancisco";
int groupId = 2;
tags[0].buffer_type = TSDB_DATA_TYPE_BINARY;
tags[0].buffer_length = strlen(location);
diff --git a/docs-examples/c/query_example.c b/docs-examples/c/query_example.c
index f88b2467ceb3d9bbeaf6b3beb6a24befd3e398c6..fcae95bcd45a282eaa3ae911b4115e6300c6af8e 100644
--- a/docs-examples/c/query_example.c
+++ b/docs-examples/c/query_example.c
@@ -139,5 +139,5 @@ int main() {
// output:
// ts current voltage phase location groupid
-// 1648432611249 10.300000 219 0.310000 Beijing.Chaoyang 2
-// 1648432611749 12.600000 218 0.330000 Beijing.Chaoyang 2
\ No newline at end of file
+// 1648432611249 10.300000 219 0.310000 California.SanFrancisco 2
+// 1648432611749 12.600000 218 0.330000 California.SanFrancisco 2
\ No newline at end of file
diff --git a/docs-examples/c/stmt_example.c b/docs-examples/c/stmt_example.c
index fab1506f953ef68050e4318406fa2ba1a0202929..28dae5f9d5ea2faec0aa3c0a784d39e252651c65 100644
--- a/docs-examples/c/stmt_example.c
+++ b/docs-examples/c/stmt_example.c
@@ -59,7 +59,7 @@ void insertData(TAOS *taos) {
checkErrorCode(stmt, code, "failed to execute taos_stmt_prepare");
// bind table name and tags
TAOS_BIND tags[2];
- char* location = "Beijing.Chaoyang";
+ char* location = "California.SanFrancisco";
int groupId = 2;
tags[0].buffer_type = TSDB_DATA_TYPE_BINARY;
tags[0].buffer_length = strlen(location);
diff --git a/docs-examples/c/telnet_line_example.c b/docs-examples/c/telnet_line_example.c
index 913d433f6aec07b3bce115d45536ffa4b45a0481..da62da4ba492856b0d73a564c1bf9cdd60b5b742 100644
--- a/docs-examples/c/telnet_line_example.c
+++ b/docs-examples/c/telnet_line_example.c
@@ -28,14 +28,14 @@ int main() {
executeSQL(taos, "CREATE DATABASE test");
executeSQL(taos, "USE test");
char *lines[] = {
- "meters.current 1648432611249 10.3 location=Beijing.Chaoyang groupid=2",
- "meters.current 1648432611250 12.6 location=Beijing.Chaoyang groupid=2",
- "meters.current 1648432611249 10.8 location=Beijing.Haidian groupid=3",
- "meters.current 1648432611250 11.3 location=Beijing.Haidian groupid=3",
- "meters.voltage 1648432611249 219 location=Beijing.Chaoyang groupid=2",
- "meters.voltage 1648432611250 218 location=Beijing.Chaoyang groupid=2",
- "meters.voltage 1648432611249 221 location=Beijing.Haidian groupid=3",
- "meters.voltage 1648432611250 217 location=Beijing.Haidian groupid=3",
+ "meters.current 1648432611249 10.3 location=California.SanFrancisco groupid=2",
+ "meters.current 1648432611250 12.6 location=California.SanFrancisco groupid=2",
+ "meters.current 1648432611249 10.8 location=California.LosAngeles groupid=3",
+ "meters.current 1648432611250 11.3 location=California.LosAngeles groupid=3",
+ "meters.voltage 1648432611249 219 location=California.SanFrancisco groupid=2",
+ "meters.voltage 1648432611250 218 location=California.SanFrancisco groupid=2",
+ "meters.voltage 1648432611249 221 location=California.LosAngeles groupid=3",
+ "meters.voltage 1648432611250 217 location=California.LosAngeles groupid=3",
};
TAOS_RES *res = taos_schemaless_insert(taos, lines, 8, TSDB_SML_TELNET_PROTOCOL, TSDB_SML_TIMESTAMP_NOT_CONFIGURED);
if (taos_errno(res) != 0) {
diff --git a/docs-examples/csharp/AsyncQueryExample.cs b/docs-examples/csharp/AsyncQueryExample.cs
index fe30d21efe82e8d1dc414bd4723227ca93bc944f..3dabbebd1630a207af2e1b1b11cc4ba15bdd94a9 100644
--- a/docs-examples/csharp/AsyncQueryExample.cs
+++ b/docs-examples/csharp/AsyncQueryExample.cs
@@ -224,15 +224,15 @@ namespace TDengineExample
}
//output:
-//Connect to TDengine success
-//8 rows async retrieved
-
-//1538548685000 | 10.3 | 219 | 0.31 | beijing.chaoyang | 2 |
-//1538548695000 | 12.6 | 218 | 0.33 | beijing.chaoyang | 2 |
-//1538548696800 | 12.3 | 221 | 0.31 | beijing.chaoyang | 2 |
-//1538548696650 | 10.3 | 218 | 0.25 | beijing.chaoyang | 3 |
-//1538548685500 | 11.8 | 221 | 0.28 | beijing.haidian | 2 |
-//1538548696600 | 13.4 | 223 | 0.29 | beijing.haidian | 2 |
-//1538548685000 | 10.8 | 223 | 0.29 | beijing.haidian | 3 |
-//1538548686500 | 11.5 | 221 | 0.35 | beijing.haidian | 3 |
-//async retrieve complete.
\ No newline at end of file
+// Connect to TDengine success
+// 8 rows async retrieved
+
+// 1538548685500 | 11.8 | 221 | 0.28 | california.losangeles | 2 |
+// 1538548696600 | 13.4 | 223 | 0.29 | california.losangeles | 2 |
+// 1538548685000 | 10.8 | 223 | 0.29 | california.losangeles | 3 |
+// 1538548686500 | 11.5 | 221 | 0.35 | california.losangeles | 3 |
+// 1538548685000 | 10.3 | 219 | 0.31 | california.sanfrancisco | 2 |
+// 1538548695000 | 12.6 | 218 | 0.33 | california.sanfrancisco | 2 |
+// 1538548696800 | 12.3 | 221 | 0.31 | california.sanfrancisco | 2 |
+// 1538548696650 | 10.3 | 218 | 0.25 | california.sanfrancisco | 3 |
+// async retrieve complete.
\ No newline at end of file
diff --git a/docs-examples/csharp/InfluxDBLineExample.cs b/docs-examples/csharp/InfluxDBLineExample.cs
index 7aad08825209db568d61e5963ec7a00034ab7ca7..7b4453f4ac0b14dd76d166e395bdacb46a5d3fbc 100644
--- a/docs-examples/csharp/InfluxDBLineExample.cs
+++ b/docs-examples/csharp/InfluxDBLineExample.cs
@@ -9,10 +9,10 @@ namespace TDengineExample
IntPtr conn = GetConnection();
PrepareDatabase(conn);
string[] lines = {
- "meters,location=Beijing.Haidian,groupid=2 current=11.8,voltage=221,phase=0.28 1648432611249",
- "meters,location=Beijing.Haidian,groupid=2 current=13.4,voltage=223,phase=0.29 1648432611250",
- "meters,location=Beijing.Haidian,groupid=3 current=10.8,voltage=223,phase=0.29 1648432611249",
- "meters,location=Beijing.Haidian,groupid=3 current=11.3,voltage=221,phase=0.35 1648432611250"
+ "meters,location=California.LosAngeles,groupid=2 current=11.8,voltage=221,phase=0.28 1648432611249",
+ "meters,location=California.LosAngeles,groupid=2 current=13.4,voltage=223,phase=0.29 1648432611250",
+ "meters,location=California.LosAngeles,groupid=3 current=10.8,voltage=223,phase=0.29 1648432611249",
+ "meters,location=California.LosAngeles,groupid=3 current=11.3,voltage=221,phase=0.35 1648432611250"
};
IntPtr res = TDengine.SchemalessInsert(conn, lines, lines.Length, (int)TDengineSchemalessProtocol.TSDB_SML_LINE_PROTOCOL, (int)TDengineSchemalessPrecision.TSDB_SML_TIMESTAMP_MILLI_SECONDS);
if (TDengine.ErrorNo(res) != 0)
diff --git a/docs-examples/csharp/OptsJsonExample.cs b/docs-examples/csharp/OptsJsonExample.cs
index d774a325afa1a8d93eb858f23dcd97dd29f8653d..2c41acc5c9628befda7eb4ad5c30af5b921de948 100644
--- a/docs-examples/csharp/OptsJsonExample.cs
+++ b/docs-examples/csharp/OptsJsonExample.cs
@@ -8,10 +8,10 @@ namespace TDengineExample
{
IntPtr conn = GetConnection();
PrepareDatabase(conn);
- string[] lines = { "[{\"metric\": \"meters.current\", \"timestamp\": 1648432611249, \"value\": 10.3, \"tags\": {\"location\": \"Beijing.Chaoyang\", \"groupid\": 2}}," +
- " {\"metric\": \"meters.voltage\", \"timestamp\": 1648432611249, \"value\": 219, \"tags\": {\"location\": \"Beijing.Haidian\", \"groupid\": 1}}, " +
- "{\"metric\": \"meters.current\", \"timestamp\": 1648432611250, \"value\": 12.6, \"tags\": {\"location\": \"Beijing.Chaoyang\", \"groupid\": 2}}," +
- " {\"metric\": \"meters.voltage\", \"timestamp\": 1648432611250, \"value\": 221, \"tags\": {\"location\": \"Beijing.Haidian\", \"groupid\": 1}}]"
+ string[] lines = { "[{\"metric\": \"meters.current\", \"timestamp\": 1648432611249, \"value\": 10.3, \"tags\": {\"location\": \"California.SanFrancisco\", \"groupid\": 2}}," +
+ " {\"metric\": \"meters.voltage\", \"timestamp\": 1648432611249, \"value\": 219, \"tags\": {\"location\": \"California.LosAngeles\", \"groupid\": 1}}, " +
+ "{\"metric\": \"meters.current\", \"timestamp\": 1648432611250, \"value\": 12.6, \"tags\": {\"location\": \"California.SanFrancisco\", \"groupid\": 2}}," +
+ " {\"metric\": \"meters.voltage\", \"timestamp\": 1648432611250, \"value\": 221, \"tags\": {\"location\": \"California.LosAngeles\", \"groupid\": 1}}]"
};
IntPtr res = TDengine.SchemalessInsert(conn, lines, 1, (int)TDengineSchemalessProtocol.TSDB_SML_JSON_PROTOCOL, (int)TDengineSchemalessPrecision.TSDB_SML_TIMESTAMP_NOT_CONFIGURED);
diff --git a/docs-examples/csharp/OptsTelnetExample.cs b/docs-examples/csharp/OptsTelnetExample.cs
index 81608c32213fa0618a2ca6e0769aacf8e9c8e64d..bb752db1afbbb2ef68df9ca25314c8b91cd9a266 100644
--- a/docs-examples/csharp/OptsTelnetExample.cs
+++ b/docs-examples/csharp/OptsTelnetExample.cs
@@ -9,14 +9,14 @@ namespace TDengineExample
IntPtr conn = GetConnection();
PrepareDatabase(conn);
string[] lines = {
- "meters.current 1648432611249 10.3 location=Beijing.Chaoyang groupid=2",
- "meters.current 1648432611250 12.6 location=Beijing.Chaoyang groupid=2",
- "meters.current 1648432611249 10.8 location=Beijing.Haidian groupid=3",
- "meters.current 1648432611250 11.3 location=Beijing.Haidian groupid=3",
- "meters.voltage 1648432611249 219 location=Beijing.Chaoyang groupid=2",
- "meters.voltage 1648432611250 218 location=Beijing.Chaoyang groupid=2",
- "meters.voltage 1648432611249 221 location=Beijing.Haidian groupid=3",
- "meters.voltage 1648432611250 217 location=Beijing.Haidian groupid=3",
+ "meters.current 1648432611249 10.3 location=California.SanFrancisco groupid=2",
+ "meters.current 1648432611250 12.6 location=California.SanFrancisco groupid=2",
+ "meters.current 1648432611249 10.8 location=California.LosAngeles groupid=3",
+ "meters.current 1648432611250 11.3 location=California.LosAngeles groupid=3",
+ "meters.voltage 1648432611249 219 location=California.SanFrancisco groupid=2",
+ "meters.voltage 1648432611250 218 location=California.SanFrancisco groupid=2",
+ "meters.voltage 1648432611249 221 location=California.LosAngeles groupid=3",
+ "meters.voltage 1648432611250 217 location=California.LosAngeles groupid=3",
};
IntPtr res = TDengine.SchemalessInsert(conn, lines, lines.Length, (int)TDengineSchemalessProtocol.TSDB_SML_TELNET_PROTOCOL, (int)TDengineSchemalessPrecision.TSDB_SML_TIMESTAMP_NOT_CONFIGURED);
if (TDengine.ErrorNo(res) != 0)
diff --git a/docs-examples/csharp/QueryExample.cs b/docs-examples/csharp/QueryExample.cs
index f00e391100c7ce42177e2987f5b0b32dc02262c4..97f0c456d412e2ed608c345ba87469d3f5ccfc15 100644
--- a/docs-examples/csharp/QueryExample.cs
+++ b/docs-examples/csharp/QueryExample.cs
@@ -158,5 +158,5 @@ namespace TDengineExample
// Connect to TDengine success
// fieldCount=6
// ts current voltage phase location groupid
-// 1648432611249 10.3 219 0.31 Beijing.Chaoyang 2
-// 1648432611749 12.6 218 0.33 Beijing.Chaoyang 2
\ No newline at end of file
+// 1648432611249 10.3 219 0.31 California.SanFrancisco 2
+// 1648432611749 12.6 218 0.33 California.SanFrancisco 2
\ No newline at end of file
diff --git a/docs-examples/csharp/SQLInsertExample.cs b/docs-examples/csharp/SQLInsertExample.cs
index fa2e2a50daf06f4d948479e7f5b0df82c517f809..d5462c1062e01fd5c93bac983696d0350117ad92 100644
--- a/docs-examples/csharp/SQLInsertExample.cs
+++ b/docs-examples/csharp/SQLInsertExample.cs
@@ -15,10 +15,10 @@ namespace TDengineExample
CheckRes(conn, res, "failed to change database");
res = TDengine.Query(conn, "CREATE STABLE power.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT)");
CheckRes(conn, res, "failed to create stable");
- var sql = "INSERT INTO d1001 USING meters TAGS(Beijing.Chaoyang, 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000) " +
- "d1002 USING power.meters TAGS(Beijing.Chaoyang, 3) VALUES('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000) " +
- "d1003 USING power.meters TAGS(Beijing.Haidian, 2) VALUES('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000)('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000) " +
- "d1004 USING power.meters TAGS(Beijing.Haidian, 3) VALUES('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000)('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)";
+ var sql = "INSERT INTO d1001 USING meters TAGS(California.SanFrancisco, 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000) " +
+ "d1002 USING power.meters TAGS(California.SanFrancisco, 3) VALUES('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000) " +
+ "d1003 USING power.meters TAGS(California.LosAngeles, 2) VALUES('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000)('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000) " +
+ "d1004 USING power.meters TAGS(California.LosAngeles, 3) VALUES('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000)('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)";
res = TDengine.Query(conn, sql);
CheckRes(conn, res, "failed to insert data");
int affectedRows = TDengine.AffectRows(res);
diff --git a/docs-examples/csharp/StmtInsertExample.cs b/docs-examples/csharp/StmtInsertExample.cs
index d6e00dd4ac54ab8dbfc33b93896d19fc585e7642..6ade424b95d64529b7a40a782de13e3106d0c78a 100644
--- a/docs-examples/csharp/StmtInsertExample.cs
+++ b/docs-examples/csharp/StmtInsertExample.cs
@@ -21,7 +21,7 @@ namespace TDengineExample
CheckStmtRes(res, "failed to prepare stmt");
// 2. bind table name and tags
- TAOS_BIND[] tags = new TAOS_BIND[2] { TaosBind.BindBinary("Beijing.Chaoyang"), TaosBind.BindInt(2) };
+ TAOS_BIND[] tags = new TAOS_BIND[2] { TaosBind.BindBinary("California.SanFrancisco"), TaosBind.BindInt(2) };
res = TDengine.StmtSetTbnameTags(stmt, "d1001", tags);
CheckStmtRes(res, "failed to bind table name and tags");
diff --git a/docs-examples/go/insert/json/main.go b/docs-examples/go/insert/json/main.go
index 47d9e9984adc05896fb9954ad3deffde3764b836..6be375270e32a5091c015f88de52c9dda2246b59 100644
--- a/docs-examples/go/insert/json/main.go
+++ b/docs-examples/go/insert/json/main.go
@@ -25,10 +25,10 @@ func main() {
defer conn.Close()
prepareDatabase(conn)
- payload := `[{"metric": "meters.current", "timestamp": 1648432611249, "value": 10.3, "tags": {"location": "Beijing.Chaoyang", "groupid": 2}},
- {"metric": "meters.voltage", "timestamp": 1648432611249, "value": 219, "tags": {"location": "Beijing.Haidian", "groupid": 1}},
- {"metric": "meters.current", "timestamp": 1648432611250, "value": 12.6, "tags": {"location": "Beijing.Chaoyang", "groupid": 2}},
- {"metric": "meters.voltage", "timestamp": 1648432611250, "value": 221, "tags": {"location": "Beijing.Haidian", "groupid": 1}}]`
+ payload := `[{"metric": "meters.current", "timestamp": 1648432611249, "value": 10.3, "tags": {"location": "California.SanFrancisco", "groupid": 2}},
+ {"metric": "meters.voltage", "timestamp": 1648432611249, "value": 219, "tags": {"location": "California.LosAngeles", "groupid": 1}},
+ {"metric": "meters.current", "timestamp": 1648432611250, "value": 12.6, "tags": {"location": "California.SanFrancisco", "groupid": 2}},
+ {"metric": "meters.voltage", "timestamp": 1648432611250, "value": 221, "tags": {"location": "California.LosAngeles", "groupid": 1}}]`
err = conn.OpenTSDBInsertJsonPayload(payload)
if err != nil {
diff --git a/docs-examples/go/insert/line/main.go b/docs-examples/go/insert/line/main.go
index bbc41468fe5f13d3e6f896445bb88f3eba584d0f..c17e1a5270850e6a8b497e0dbec4ae714ee1e2d6 100644
--- a/docs-examples/go/insert/line/main.go
+++ b/docs-examples/go/insert/line/main.go
@@ -25,10 +25,10 @@ func main() {
defer conn.Close()
prepareDatabase(conn)
var lines = []string{
- "meters,location=Beijing.Haidian,groupid=2 current=11.8,voltage=221,phase=0.28 1648432611249",
- "meters,location=Beijing.Haidian,groupid=2 current=13.4,voltage=223,phase=0.29 1648432611250",
- "meters,location=Beijing.Haidian,groupid=3 current=10.8,voltage=223,phase=0.29 1648432611249",
- "meters,location=Beijing.Haidian,groupid=3 current=11.3,voltage=221,phase=0.35 1648432611250",
+ "meters,location=California.LosAngeles,groupid=2 current=11.8,voltage=221,phase=0.28 1648432611249",
+ "meters,location=California.LosAngeles,groupid=2 current=13.4,voltage=223,phase=0.29 1648432611250",
+ "meters,location=California.LosAngeles,groupid=3 current=10.8,voltage=223,phase=0.29 1648432611249",
+ "meters,location=California.LosAngeles,groupid=3 current=11.3,voltage=221,phase=0.35 1648432611250",
}
err = conn.InfluxDBInsertLines(lines, "ms")
diff --git a/docs-examples/go/insert/sql/main.go b/docs-examples/go/insert/sql/main.go
index 91386855334c1930af721e0b4f43395c6a6d8e82..6cd5f860e65f4fffd139668f69cc1772f5310eae 100644
--- a/docs-examples/go/insert/sql/main.go
+++ b/docs-examples/go/insert/sql/main.go
@@ -19,10 +19,10 @@ func createStable(taos *sql.DB) {
}
func insertData(taos *sql.DB) {
- sql := `INSERT INTO power.d1001 USING power.meters TAGS(Beijing.Chaoyang, 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000)
- power.d1002 USING power.meters TAGS(Beijing.Chaoyang, 3) VALUES ('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000)
- power.d1003 USING power.meters TAGS(Beijing.Haidian, 2) VALUES ('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000) ('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000)
- power.d1004 USING power.meters TAGS(Beijing.Haidian, 3) VALUES ('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000) ('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)`
+ sql := `INSERT INTO power.d1001 USING power.meters TAGS(California.SanFrancisco, 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000)
+ power.d1002 USING power.meters TAGS(California.SanFrancisco, 3) VALUES ('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000)
+ power.d1003 USING power.meters TAGS(California.LosAngeles, 2) VALUES ('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000) ('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000)
+ power.d1004 USING power.meters TAGS(California.LosAngeles, 3) VALUES ('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000) ('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)`
result, err := taos.Exec(sql)
if err != nil {
fmt.Println("failed to insert, err:", err)
diff --git a/docs-examples/go/insert/stmt/main.go b/docs-examples/go/insert/stmt/main.go
index c50200ebb427c4c64c2737cb8fe4c3d287551a34..7093fdf1e52bc5a14fc92cec995fd81e70717d9f 100644
--- a/docs-examples/go/insert/stmt/main.go
+++ b/docs-examples/go/insert/stmt/main.go
@@ -37,7 +37,7 @@ func main() {
checkErr(err, "failed to create prepare statement")
// bind table name and tags
- tagParams := param.NewParam(2).AddBinary([]byte("Beijing.Chaoyang")).AddInt(2)
+ tagParams := param.NewParam(2).AddBinary([]byte("California.SanFrancisco")).AddInt(2)
err = stmt.SetTableNameWithTags("d1001", tagParams)
checkErr(err, "failed to execute SetTableNameWithTags")
diff --git a/docs-examples/go/insert/telnet/main.go b/docs-examples/go/insert/telnet/main.go
index 879e6d5cece74fd0b7c815dd34614dca3c9d4544..91fafbe71adbf60d9341b903f5a25708b7011852 100644
--- a/docs-examples/go/insert/telnet/main.go
+++ b/docs-examples/go/insert/telnet/main.go
@@ -25,14 +25,14 @@ func main() {
defer conn.Close()
prepareDatabase(conn)
var lines = []string{
- "meters.current 1648432611249 10.3 location=Beijing.Chaoyang groupid=2",
- "meters.current 1648432611250 12.6 location=Beijing.Chaoyang groupid=2",
- "meters.current 1648432611249 10.8 location=Beijing.Haidian groupid=3",
- "meters.current 1648432611250 11.3 location=Beijing.Haidian groupid=3",
- "meters.voltage 1648432611249 219 location=Beijing.Chaoyang groupid=2",
- "meters.voltage 1648432611250 218 location=Beijing.Chaoyang groupid=2",
- "meters.voltage 1648432611249 221 location=Beijing.Haidian groupid=3",
- "meters.voltage 1648432611250 217 location=Beijing.Haidian groupid=3",
+ "meters.current 1648432611249 10.3 location=California.SanFrancisco groupid=2",
+ "meters.current 1648432611250 12.6 location=California.SanFrancisco groupid=2",
+ "meters.current 1648432611249 10.8 location=California.LosAngeles groupid=3",
+ "meters.current 1648432611250 11.3 location=California.LosAngeles groupid=3",
+ "meters.voltage 1648432611249 219 location=California.SanFrancisco groupid=2",
+ "meters.voltage 1648432611250 218 location=California.SanFrancisco groupid=2",
+ "meters.voltage 1648432611249 221 location=California.LosAngeles groupid=3",
+ "meters.voltage 1648432611250 217 location=California.LosAngeles groupid=3",
}
err = conn.OpenTSDBInsertTelnetLines(lines)
diff --git a/docs-examples/java/src/main/java/com/taos/example/JSONProtocolExample.java b/docs-examples/java/src/main/java/com/taos/example/JSONProtocolExample.java
index cb83424576a4fd7dfa09ea297294ed77b66bd12d..c8e649482fbd747cdc238daa9e7a237cf63295b6 100644
--- a/docs-examples/java/src/main/java/com/taos/example/JSONProtocolExample.java
+++ b/docs-examples/java/src/main/java/com/taos/example/JSONProtocolExample.java
@@ -23,10 +23,10 @@ public class JSONProtocolExample {
}
private static String getJSONData() {
- return "[{\"metric\": \"meters.current\", \"timestamp\": 1648432611249, \"value\": 10.3, \"tags\": {\"location\": \"Beijing.Chaoyang\", \"groupid\": 2}}," +
- " {\"metric\": \"meters.voltage\", \"timestamp\": 1648432611249, \"value\": 219, \"tags\": {\"location\": \"Beijing.Haidian\", \"groupid\": 1}}, " +
- "{\"metric\": \"meters.current\", \"timestamp\": 1648432611250, \"value\": 12.6, \"tags\": {\"location\": \"Beijing.Chaoyang\", \"groupid\": 2}}," +
- " {\"metric\": \"meters.voltage\", \"timestamp\": 1648432611250, \"value\": 221, \"tags\": {\"location\": \"Beijing.Haidian\", \"groupid\": 1}}]";
+ return "[{\"metric\": \"meters.current\", \"timestamp\": 1648432611249, \"value\": 10.3, \"tags\": {\"location\": \"California.SanFrancisco\", \"groupid\": 2}}," +
+ " {\"metric\": \"meters.voltage\", \"timestamp\": 1648432611249, \"value\": 219, \"tags\": {\"location\": \"California.LosAngeles\", \"groupid\": 1}}, " +
+ "{\"metric\": \"meters.current\", \"timestamp\": 1648432611250, \"value\": 12.6, \"tags\": {\"location\": \"California.SanFrancisco\", \"groupid\": 2}}," +
+ " {\"metric\": \"meters.voltage\", \"timestamp\": 1648432611250, \"value\": 221, \"tags\": {\"location\": \"California.LosAngeles\", \"groupid\": 1}}]";
}
public static void main(String[] args) throws SQLException {
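Rather than hand-assembling the escaped JSON string as in `getJSONData()` above, the same payload can be built from plain data structures and serialized. A minimal Python sketch, reusing the field names from the example:

```python
# Minimal sketch: build the OpenTSDB-JSON payload from plain dicts instead of
# concatenating an escaped string by hand; field names mirror getJSONData().
import json

records = [
    {"metric": "meters.current", "timestamp": 1648432611249, "value": 10.3,
     "tags": {"location": "California.SanFrancisco", "groupid": 2}},
    {"metric": "meters.voltage", "timestamp": 1648432611249, "value": 219,
     "tags": {"location": "California.LosAngeles", "groupid": 1}},
]

payload = json.dumps(records)  # one JSON array, same shape as the string above
print(payload)
```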
diff --git a/docs-examples/java/src/main/java/com/taos/example/LineProtocolExample.java b/docs-examples/java/src/main/java/com/taos/example/LineProtocolExample.java
index 8a2eabe0a91f7966cc3cc6b7dfeeb71b71b88d92..990922b7a516bd32a7e299f5743bd1b5e321868a 100644
--- a/docs-examples/java/src/main/java/com/taos/example/LineProtocolExample.java
+++ b/docs-examples/java/src/main/java/com/taos/example/LineProtocolExample.java
@@ -12,11 +12,11 @@ import java.sql.Statement;
public class LineProtocolExample {
// format: measurement,tag_set field_set timestamp
private static String[] lines = {
- "meters,location=Beijing.Haidian,groupid=2 current=11.8,voltage=221,phase=0.28 1648432611249000", // micro
+ "meters,location=California.LosAngeles,groupid=2 current=11.8,voltage=221,phase=0.28 1648432611249000", // micro
// seconds
- "meters,location=Beijing.Haidian,groupid=2 current=13.4,voltage=223,phase=0.29 1648432611249500",
- "meters,location=Beijing.Haidian,groupid=3 current=10.8,voltage=223,phase=0.29 1648432611249300",
- "meters,location=Beijing.Haidian,groupid=3 current=11.3,voltage=221,phase=0.35 1648432611249800",
+ "meters,location=California.LosAngeles,groupid=2 current=13.4,voltage=223,phase=0.29 1648432611249500",
+ "meters,location=California.LosAngeles,groupid=3 current=10.8,voltage=223,phase=0.29 1648432611249300",
+ "meters,location=California.LosAngeles,groupid=3 current=11.3,voltage=221,phase=0.35 1648432611249800",
};
private static Connection getConnection() throws SQLException {
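The `measurement,tag_set field_set timestamp` layout noted in the comment above is easy to generate mechanically. A minimal Python sketch that formats one such line (`influx_line` is an illustrative helper, not a library function):

```python
# Minimal sketch: compose an InfluxDB line-protocol string in the
# "measurement,tag_set field_set timestamp" layout used above.
def influx_line(measurement, tags, fields, ts):
    tag_set = ",".join(f"{k}={v}" for k, v in tags.items())
    field_set = ",".join(f"{k}={v}" for k, v in fields.items())
    return f"{measurement},{tag_set} {field_set} {ts}"

print(influx_line(
    "meters",
    {"location": "California.LosAngeles", "groupid": 2},
    {"current": 11.8, "voltage": 221, "phase": 0.28},
    1648432611249000,  # microsecond timestamp, as in the Java example
))
# meters,location=California.LosAngeles,groupid=2 current=11.8,voltage=221,phase=0.28 1648432611249000
```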
diff --git a/docs-examples/java/src/main/java/com/taos/example/RestInsertExample.java b/docs-examples/java/src/main/java/com/taos/example/RestInsertExample.java
index de89f26cbe38f9343d60aeb8d3e9ce7f67c2e764..af97fe4373ca964260e5614f133f359e229b0e15 100644
--- a/docs-examples/java/src/main/java/com/taos/example/RestInsertExample.java
+++ b/docs-examples/java/src/main/java/com/taos/example/RestInsertExample.java
@@ -16,28 +16,28 @@ public class RestInsertExample {
private static List<String> getRawData() {
return Arrays.asList(
- "d1001,2018-10-03 14:38:05.000,10.30000,219,0.31000,Beijing.Chaoyang,2",
- "d1001,2018-10-03 14:38:15.000,12.60000,218,0.33000,Beijing.Chaoyang,2",
- "d1001,2018-10-03 14:38:16.800,12.30000,221,0.31000,Beijing.Chaoyang,2",
- "d1002,2018-10-03 14:38:16.650,10.30000,218,0.25000,Beijing.Chaoyang,3",
- "d1003,2018-10-03 14:38:05.500,11.80000,221,0.28000,Beijing.Haidian,2",
- "d1003,2018-10-03 14:38:16.600,13.40000,223,0.29000,Beijing.Haidian,2",
- "d1004,2018-10-03 14:38:05.000,10.80000,223,0.29000,Beijing.Haidian,3",
- "d1004,2018-10-03 14:38:06.500,11.50000,221,0.35000,Beijing.Haidian,3"
+ "d1001,2018-10-03 14:38:05.000,10.30000,219,0.31000,California.SanFrancisco,2",
+ "d1001,2018-10-03 14:38:15.000,12.60000,218,0.33000,California.SanFrancisco,2",
+ "d1001,2018-10-03 14:38:16.800,12.30000,221,0.31000,California.SanFrancisco,2",
+ "d1002,2018-10-03 14:38:16.650,10.30000,218,0.25000,California.SanFrancisco,3",
+ "d1003,2018-10-03 14:38:05.500,11.80000,221,0.28000,California.LosAngeles,2",
+ "d1003,2018-10-03 14:38:16.600,13.40000,223,0.29000,California.LosAngeles,2",
+ "d1004,2018-10-03 14:38:05.000,10.80000,223,0.29000,California.LosAngeles,3",
+ "d1004,2018-10-03 14:38:06.500,11.50000,221,0.35000,California.LosAngeles,3"
);
}
/**
* The generated SQL is:
- * INSERT INTO power.d1001 USING power.meters TAGS(Beijing.Chaoyang, 2) VALUES('2018-10-03 14:38:05.000',10.30000,219,0.31000)
- * power.d1001 USING power.meters TAGS(Beijing.Chaoyang, 2) VALUES('2018-10-03 14:38:15.000',12.60000,218,0.33000)
- * power.d1001 USING power.meters TAGS(Beijing.Chaoyang, 2) VALUES('2018-10-03 14:38:16.800',12.30000,221,0.31000)
- * power.d1002 USING power.meters TAGS(Beijing.Chaoyang, 3) VALUES('2018-10-03 14:38:16.650',10.30000,218,0.25000)
- * power.d1003 USING power.meters TAGS(Beijing.Haidian, 2) VALUES('2018-10-03 14:38:05.500',11.80000,221,0.28000)
- * power.d1003 USING power.meters TAGS(Beijing.Haidian, 2) VALUES('2018-10-03 14:38:16.600',13.40000,223,0.29000)
- * power.d1004 USING power.meters TAGS(Beijing.Haidian, 3) VALUES('2018-10-03 14:38:05.000',10.80000,223,0.29000)
- * power.d1004 USING power.meters TAGS(Beijing.Haidian, 3) VALUES('2018-10-03 14:38:06.500',11.50000,221,0.35000)
+ * INSERT INTO power.d1001 USING power.meters TAGS(California.SanFrancisco, 2) VALUES('2018-10-03 14:38:05.000',10.30000,219,0.31000)
+ * power.d1001 USING power.meters TAGS(California.SanFrancisco, 2) VALUES('2018-10-03 14:38:15.000',12.60000,218,0.33000)
+ * power.d1001 USING power.meters TAGS(California.SanFrancisco, 2) VALUES('2018-10-03 14:38:16.800',12.30000,221,0.31000)
+ * power.d1002 USING power.meters TAGS(California.SanFrancisco, 3) VALUES('2018-10-03 14:38:16.650',10.30000,218,0.25000)
+ * power.d1003 USING power.meters TAGS(California.LosAngeles, 2) VALUES('2018-10-03 14:38:05.500',11.80000,221,0.28000)
+ * power.d1003 USING power.meters TAGS(California.LosAngeles, 2) VALUES('2018-10-03 14:38:16.600',13.40000,223,0.29000)
+ * power.d1004 USING power.meters TAGS(California.LosAngeles, 3) VALUES('2018-10-03 14:38:05.000',10.80000,223,0.29000)
+ * power.d1004 USING power.meters TAGS(California.LosAngeles, 3) VALUES('2018-10-03 14:38:06.500',11.50000,221,0.35000)
*/
private static String getSQL() {
StringBuilder sb = new StringBuilder("INSERT INTO ");
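The doc comment above shows the shape of the generated statement: a single `INSERT INTO` carrying one `USING power.meters TAGS(...)` clause per subtable, so TDengine can create the subtables on the fly. A minimal Python sketch of the same string-building idea (a sketch only, not the class's actual implementation):

```python
# Minimal sketch: build one multi-table INSERT with automatic table creation
# (USING ... TAGS) from CSV-style rows, mirroring the generated SQL above.
rows = [
    "d1001,2018-10-03 14:38:05.000,10.30000,219,0.31000,California.SanFrancisco,2",
    "d1003,2018-10-03 14:38:05.500,11.80000,221,0.28000,California.LosAngeles,2",
]

parts = ["INSERT INTO"]
for row in rows:
    table, ts, current, voltage, phase, location, group_id = row.split(",")
    parts.append(
        f"power.{table} USING power.meters TAGS({location}, {group_id})"
        f" VALUES('{ts}',{current},{voltage},{phase})"
    )
print(" ".join(parts))
```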
diff --git a/docs-examples/java/src/main/java/com/taos/example/RestQueryExample.java b/docs-examples/java/src/main/java/com/taos/example/RestQueryExample.java
index b1a1d224c6d9af2b83ac039726dcdb49a33ec2b0..a3581a1f4733e8bf3e3f561bb6cab5a725d8a1c0 100644
--- a/docs-examples/java/src/main/java/com/taos/example/RestQueryExample.java
+++ b/docs-examples/java/src/main/java/com/taos/example/RestQueryExample.java
@@ -51,5 +51,5 @@ public class RestQueryExample {
// possible output:
// avg(voltage) location
-// 222.0 Beijing.Haidian
-// 219.0 Beijing.Chaoyang
+// 222.0 California.LosAngeles
+// 219.0 California.SanFrancisco
diff --git a/docs-examples/java/src/main/java/com/taos/example/StmtInsertExample.java b/docs-examples/java/src/main/java/com/taos/example/StmtInsertExample.java
index 2a7ccebf41cae1a22d7516966e2c6ffb10011b64..bbcc92b22f67c31384b0fb7a082975eaac2ff2bc 100644
--- a/docs-examples/java/src/main/java/com/taos/example/StmtInsertExample.java
+++ b/docs-examples/java/src/main/java/com/taos/example/StmtInsertExample.java
@@ -30,14 +30,14 @@ public class StmtInsertExample {
private static List<String> getRawData() {
return Arrays.asList(
- "d1001,2018-10-03 14:38:05.000,10.30000,219,0.31000,Beijing.Chaoyang,2",
- "d1001,2018-10-03 14:38:15.000,12.60000,218,0.33000,Beijing.Chaoyang,2",
- "d1001,2018-10-03 14:38:16.800,12.30000,221,0.31000,Beijing.Chaoyang,2",
- "d1002,2018-10-03 14:38:16.650,10.30000,218,0.25000,Beijing.Chaoyang,3",
- "d1003,2018-10-03 14:38:05.500,11.80000,221,0.28000,Beijing.Haidian,2",
- "d1003,2018-10-03 14:38:16.600,13.40000,223,0.29000,Beijing.Haidian,2",
- "d1004,2018-10-03 14:38:05.000,10.80000,223,0.29000,Beijing.Haidian,3",
- "d1004,2018-10-03 14:38:06.500,11.50000,221,0.35000,Beijing.Haidian,3"
+ "d1001,2018-10-03 14:38:05.000,10.30000,219,0.31000,California.SanFrancisco,2",
+ "d1001,2018-10-03 14:38:15.000,12.60000,218,0.33000,California.SanFrancisco,2",
+ "d1001,2018-10-03 14:38:16.800,12.30000,221,0.31000,California.SanFrancisco,2",
+ "d1002,2018-10-03 14:38:16.650,10.30000,218,0.25000,California.SanFrancisco,3",
+ "d1003,2018-10-03 14:38:05.500,11.80000,221,0.28000,California.LosAngeles,2",
+ "d1003,2018-10-03 14:38:16.600,13.40000,223,0.29000,California.LosAngeles,2",
+ "d1004,2018-10-03 14:38:05.000,10.80000,223,0.29000,California.LosAngeles,3",
+ "d1004,2018-10-03 14:38:06.500,11.50000,221,0.35000,California.LosAngeles,3"
);
}
diff --git a/docs-examples/java/src/main/java/com/taos/example/TelnetLineProtocolExample.java b/docs-examples/java/src/main/java/com/taos/example/TelnetLineProtocolExample.java
index 1431eccf16dabaac20f60ae7e971ef49707ba509..4c9368288df74f829121aeab5b925d1d083d29f0 100644
--- a/docs-examples/java/src/main/java/com/taos/example/TelnetLineProtocolExample.java
+++ b/docs-examples/java/src/main/java/com/taos/example/TelnetLineProtocolExample.java
@@ -11,14 +11,14 @@ import java.sql.Statement;
public class TelnetLineProtocolExample {
// format: <metric> <timestamp> <value> <tagk_1>=<tagv_1>[ <tagk_n>=<tagv_n>]
- private static String[] lines = { "meters.current 1648432611249 10.3 location=Beijing.Chaoyang groupid=2",
- "meters.current 1648432611250 12.6 location=Beijing.Chaoyang groupid=2",
- "meters.current 1648432611249 10.8 location=Beijing.Haidian groupid=3",
- "meters.current 1648432611250 11.3 location=Beijing.Haidian groupid=3",
- "meters.voltage 1648432611249 219 location=Beijing.Chaoyang groupid=2",
- "meters.voltage 1648432611250 218 location=Beijing.Chaoyang groupid=2",
- "meters.voltage 1648432611249 221 location=Beijing.Haidian groupid=3",
- "meters.voltage 1648432611250 217 location=Beijing.Haidian groupid=3",
+ private static String[] lines = { "meters.current 1648432611249 10.3 location=California.SanFrancisco groupid=2",
+ "meters.current 1648432611250 12.6 location=California.SanFrancisco groupid=2",
+ "meters.current 1648432611249 10.8 location=California.LosAngeles groupid=3",
+ "meters.current 1648432611250 11.3 location=California.LosAngeles groupid=3",
+ "meters.voltage 1648432611249 219 location=California.SanFrancisco groupid=2",
+ "meters.voltage 1648432611250 218 location=California.SanFrancisco groupid=2",
+ "meters.voltage 1648432611249 221 location=California.LosAngeles groupid=3",
+ "meters.voltage 1648432611250 217 location=California.LosAngeles groupid=3",
};
private static Connection getConnection() throws SQLException {
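Each telnet line follows `<metric> <timestamp> <value> <tagk>=<tagv> ...` and can likewise be generated mechanically. A minimal Python sketch (`telnet_line` is an illustrative helper):

```python
# Minimal sketch: format one OpenTSDB telnet-protocol line,
# "<metric> <timestamp> <value> <tagk>=<tagv> ...", as used above.
def telnet_line(metric, ts, value, **tags):
    tag_part = " ".join(f"{k}={v}" for k, v in tags.items())
    return f"{metric} {ts} {value} {tag_part}"

print(telnet_line("meters.current", 1648432611249, 10.3,
                  location="California.SanFrancisco", groupid=2))
# meters.current 1648432611249 10.3 location=California.SanFrancisco groupid=2
```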
diff --git a/docs-examples/java/src/test/java/com/taos/test/TestAll.java b/docs-examples/java/src/test/java/com/taos/test/TestAll.java
index 92fe14a49d5f5ea5d7ea5f1d809867b3de0cc9d2..42db24485afec05298159f7b0c3a4e15835d98ed 100644
--- a/docs-examples/java/src/test/java/com/taos/test/TestAll.java
+++ b/docs-examples/java/src/test/java/com/taos/test/TestAll.java
@@ -23,16 +23,16 @@ public class TestAll {
String jdbcUrl = "jdbc:TAOS://localhost:6030?user=root&password=taosdata";
try (Connection conn = DriverManager.getConnection(jdbcUrl)) {
try (Statement stmt = conn.createStatement()) {
- String sql = "INSERT INTO power.d1001 USING power.meters TAGS(Beijing.Chaoyang, 2) VALUES('2018-10-03 14:38:05.000',10.30000,219,0.31000)\n" +
- " power.d1001 USING power.meters TAGS(Beijing.Chaoyang, 2) VALUES('2018-10-03 15:38:15.000',12.60000,218,0.33000)\n" +
- " power.d1001 USING power.meters TAGS(Beijing.Chaoyang, 2) VALUES('2018-10-03 15:38:16.800',12.30000,221,0.31000)\n" +
- " power.d1002 USING power.meters TAGS(Beijing.Chaoyang, 3) VALUES('2018-10-03 15:38:16.650',10.30000,218,0.25000)\n" +
- " power.d1003 USING power.meters TAGS(Beijing.Haidian, 2) VALUES('2018-10-03 15:38:05.500',11.80000,221,0.28000)\n" +
- " power.d1003 USING power.meters TAGS(Beijing.Haidian, 2) VALUES('2018-10-03 15:38:16.600',13.40000,223,0.29000)\n" +
- " power.d1004 USING power.meters TAGS(Beijing.Haidian, 3) VALUES('2018-10-03 15:38:05.000',10.80000,223,0.29000)\n" +
- " power.d1004 USING power.meters TAGS(Beijing.Haidian, 3) VALUES('2018-10-03 15:38:06.000',10.80000,223,0.29000)\n" +
- " power.d1004 USING power.meters TAGS(Beijing.Haidian, 3) VALUES('2018-10-03 15:38:07.000',10.80000,223,0.29000)\n" +
- " power.d1004 USING power.meters TAGS(Beijing.Haidian, 3) VALUES('2018-10-03 15:38:08.500',11.50000,221,0.35000)";
+ String sql = "INSERT INTO power.d1001 USING power.meters TAGS(California.SanFrancisco, 2) VALUES('2018-10-03 14:38:05.000',10.30000,219,0.31000)\n" +
+ " power.d1001 USING power.meters TAGS(California.SanFrancisco, 2) VALUES('2018-10-03 15:38:15.000',12.60000,218,0.33000)\n" +
+ " power.d1001 USING power.meters TAGS(California.SanFrancisco, 2) VALUES('2018-10-03 15:38:16.800',12.30000,221,0.31000)\n" +
+ " power.d1002 USING power.meters TAGS(California.SanFrancisco, 3) VALUES('2018-10-03 15:38:16.650',10.30000,218,0.25000)\n" +
+ " power.d1003 USING power.meters TAGS(California.LosAngeles, 2) VALUES('2018-10-03 15:38:05.500',11.80000,221,0.28000)\n" +
+ " power.d1003 USING power.meters TAGS(California.LosAngeles, 2) VALUES('2018-10-03 15:38:16.600',13.40000,223,0.29000)\n" +
+ " power.d1004 USING power.meters TAGS(California.LosAngeles, 3) VALUES('2018-10-03 15:38:05.000',10.80000,223,0.29000)\n" +
+ " power.d1004 USING power.meters TAGS(California.LosAngeles, 3) VALUES('2018-10-03 15:38:06.000',10.80000,223,0.29000)\n" +
+ " power.d1004 USING power.meters TAGS(California.LosAngeles, 3) VALUES('2018-10-03 15:38:07.000',10.80000,223,0.29000)\n" +
+ " power.d1004 USING power.meters TAGS(California.LosAngeles, 3) VALUES('2018-10-03 15:38:08.500',11.50000,221,0.35000)";
stmt.execute(sql);
}
diff --git a/docs-examples/node/nativeexample/influxdb_line_example.js b/docs-examples/node/nativeexample/influxdb_line_example.js
index a9fc6d11df0b335b92bb3292baaa017cb4bc42ea..2050bee54506a3ee6fe7d89de97b3b41334dd4a6 100644
--- a/docs-examples/node/nativeexample/influxdb_line_example.js
+++ b/docs-examples/node/nativeexample/influxdb_line_example.js
@@ -13,10 +13,10 @@ function createDatabase() {
function insertData() {
const lines = [
- "meters,location=Beijing.Haidian,groupid=2 current=11.8,voltage=221,phase=0.28 1648432611249",
- "meters,location=Beijing.Haidian,groupid=2 current=13.4,voltage=223,phase=0.29 1648432611250",
- "meters,location=Beijing.Haidian,groupid=3 current=10.8,voltage=223,phase=0.29 1648432611249",
- "meters,location=Beijing.Haidian,groupid=3 current=11.3,voltage=221,phase=0.35 1648432611250",
+ "meters,location=California.LosAngeles,groupid=2 current=11.8,voltage=221,phase=0.28 1648432611249",
+ "meters,location=California.LosAngeles,groupid=2 current=13.4,voltage=223,phase=0.29 1648432611250",
+ "meters,location=California.LosAngeles,groupid=3 current=10.8,voltage=223,phase=0.29 1648432611249",
+ "meters,location=California.LosAngeles,groupid=3 current=11.3,voltage=221,phase=0.35 1648432611250",
];
cursor.schemalessInsert(
lines,
diff --git a/docs-examples/node/nativeexample/insert_example.js b/docs-examples/node/nativeexample/insert_example.js
index 85a353f889176655654d8c39c9a905054d3b6622..ade9d83158362cbf00a856b43a973de31def7601 100644
--- a/docs-examples/node/nativeexample/insert_example.js
+++ b/docs-examples/node/nativeexample/insert_example.js
@@ -11,10 +11,10 @@ try {
cursor.execute(
"CREATE STABLE meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT)"
);
- var sql = `INSERT INTO power.d1001 USING power.meters TAGS(Beijing.Chaoyang, 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000)
-power.d1002 USING power.meters TAGS(Beijing.Chaoyang, 3) VALUES ('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000)
-power.d1003 USING power.meters TAGS(Beijing.Haidian, 2) VALUES ('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000) ('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000)
-power.d1004 USING power.meters TAGS(Beijing.Haidian, 3) VALUES ('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000) ('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)`;
+ var sql = `INSERT INTO power.d1001 USING power.meters TAGS(California.SanFrancisco, 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000)
+power.d1002 USING power.meters TAGS(California.SanFrancisco, 3) VALUES ('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000)
+power.d1003 USING power.meters TAGS(California.LosAngeles, 2) VALUES ('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000) ('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000)
+power.d1004 USING power.meters TAGS(California.LosAngeles, 3) VALUES ('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000) ('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)`;
cursor.execute(sql);
} finally {
cursor.close();
diff --git a/docs-examples/node/nativeexample/multi_bind_example.js b/docs-examples/node/nativeexample/multi_bind_example.js
index d52581ec8e10c6edfbc8fc8f7ca78512b5c93d74..6ef8b30c097393fef8c6a2837f8683c736b363f1 100644
--- a/docs-examples/node/nativeexample/multi_bind_example.js
+++ b/docs-examples/node/nativeexample/multi_bind_example.js
@@ -25,7 +25,7 @@ function insertData() {
// bind table name and tags
let tagBind = new taos.TaosBind(2);
- tagBind.bindBinary("Beijing.Chaoyang");
+ tagBind.bindBinary("California.SanFrancisco");
tagBind.bindInt(2);
cursor.stmtSetTbnameTags("d1001", tagBind.getBind());
diff --git a/docs-examples/node/nativeexample/opentsdb_json_example.js b/docs-examples/node/nativeexample/opentsdb_json_example.js
index 6d436a8e9ebe0230bba22064e8fb6c180c14b5d1..2d78444a3f805bc77ab5e11925a28dd18fe221fe 100644
--- a/docs-examples/node/nativeexample/opentsdb_json_example.js
+++ b/docs-examples/node/nativeexample/opentsdb_json_example.js
@@ -17,25 +17,25 @@ function insertData() {
metric: "meters.current",
timestamp: 1648432611249,
value: 10.3,
- tags: { location: "Beijing.Chaoyang", groupid: 2 },
+ tags: { location: "California.SanFrancisco", groupid: 2 },
},
{
metric: "meters.voltage",
timestamp: 1648432611249,
value: 219,
- tags: { location: "Beijing.Haidian", groupid: 1 },
+ tags: { location: "California.LosAngeles", groupid: 1 },
},
{
metric: "meters.current",
timestamp: 1648432611250,
value: 12.6,
- tags: { location: "Beijing.Chaoyang", groupid: 2 },
+ tags: { location: "California.SanFrancisco", groupid: 2 },
},
{
metric: "meters.voltage",
timestamp: 1648432611250,
value: 221,
- tags: { location: "Beijing.Haidian", groupid: 1 },
+ tags: { location: "California.LosAngeles", groupid: 1 },
},
];
diff --git a/docs-examples/node/nativeexample/opentsdb_telnet_example.js b/docs-examples/node/nativeexample/opentsdb_telnet_example.js
index 01e79c2dcacd923cd708d1d228959a628d0ff26a..7f80f558838e18f07ad79e580e7d08638b74e940 100644
--- a/docs-examples/node/nativeexample/opentsdb_telnet_example.js
+++ b/docs-examples/node/nativeexample/opentsdb_telnet_example.js
@@ -13,14 +13,14 @@ function createDatabase() {
function insertData() {
const lines = [
- "meters.current 1648432611249 10.3 location=Beijing.Chaoyang groupid=2",
- "meters.current 1648432611250 12.6 location=Beijing.Chaoyang groupid=2",
- "meters.current 1648432611249 10.8 location=Beijing.Haidian groupid=3",
- "meters.current 1648432611250 11.3 location=Beijing.Haidian groupid=3",
- "meters.voltage 1648432611249 219 location=Beijing.Chaoyang groupid=2",
- "meters.voltage 1648432611250 218 location=Beijing.Chaoyang groupid=2",
- "meters.voltage 1648432611249 221 location=Beijing.Haidian groupid=3",
- "meters.voltage 1648432611250 217 location=Beijing.Haidian groupid=3",
+ "meters.current 1648432611249 10.3 location=California.SanFrancisco groupid=2",
+ "meters.current 1648432611250 12.6 location=California.SanFrancisco groupid=2",
+ "meters.current 1648432611249 10.8 location=California.LosAngeles groupid=3",
+ "meters.current 1648432611250 11.3 location=California.LosAngeles groupid=3",
+ "meters.voltage 1648432611249 219 location=California.SanFrancisco groupid=2",
+ "meters.voltage 1648432611250 218 location=California.SanFrancisco groupid=2",
+ "meters.voltage 1648432611249 221 location=California.LosAngeles groupid=3",
+ "meters.voltage 1648432611250 217 location=California.LosAngeles groupid=3",
];
cursor.schemalessInsert(
lines,
diff --git a/docs-examples/node/nativeexample/param_bind_example.js b/docs-examples/node/nativeexample/param_bind_example.js
index 9117f46c3eeabd9009b72fa9d4a8503e65884242..c7e04c71a0d19ff8666f3d43fe09109009741266 100644
--- a/docs-examples/node/nativeexample/param_bind_example.js
+++ b/docs-examples/node/nativeexample/param_bind_example.js
@@ -24,7 +24,7 @@ function insertData() {
// bind table name and tags
let tagBind = new taos.TaosBind(2);
- tagBind.bindBinary("Beijing.Chaoyang");
+ tagBind.bindBinary("California.SanFrancisco");
tagBind.bindInt(2);
cursor.stmtSetTbnameTags("d1001", tagBind.getBind());
diff --git a/docs-examples/php/connect.php b/docs-examples/php/connect.php
index 5af77b9768e5c5ac4b774b433479a4ac8902beda..b825b447805a3923248042d2cdff79c51bdcdbe3 100644
--- a/docs-examples/php/connect.php
+++ b/docs-examples/php/connect.php
@@ -4,7 +4,7 @@ use TDengine\Connection;
use TDengine\Exception\TDengineException;
try {
- // 实例化
+ // instantiate
$host = 'localhost';
$port = 6030;
$username = 'root';
@@ -12,9 +12,9 @@ try {
$dbname = null;
$connection = new Connection($host, $port, $username, $password, $dbname);
- // 连接
+ // connect
$connection->connect();
} catch (TDengineException $e) {
- // 连接失败捕获异常
+ // throw exception
throw $e;
}
diff --git a/docs-examples/php/insert.php b/docs-examples/php/insert.php
index 0d9cfc4843a2ec3e72d0ad128fa4c2650d6b9cf6..6e38fa0c46d31aa0a939d471ccbd255cfa453a16 100644
--- a/docs-examples/php/insert.php
+++ b/docs-examples/php/insert.php
@@ -4,7 +4,7 @@ use TDengine\Connection;
use TDengine\Exception\TDengineException;
try {
- // 实例化
+ // instantiate
$host = 'localhost';
$port = 6030;
$username = 'root';
@@ -12,22 +12,22 @@ try {
$dbname = 'power';
$connection = new Connection($host, $port, $username, $password, $dbname);
- // 连接
+ // connect
$connection->connect();
- // 插入
+ // insert
$connection->query('CREATE DATABASE if not exists power');
$connection->query('CREATE STABLE if not exists meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT)');
$resource = $connection->query(<<<'SQL'
- INSERT INTO power.d1001 USING power.meters TAGS(Beijing.Chaoyang, 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000)
- power.d1002 USING power.meters TAGS(Beijing.Chaoyang, 3) VALUES ('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000)
- power.d1003 USING power.meters TAGS(Beijing.Haidian, 2) VALUES ('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000) ('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000)
- power.d1004 USING power.meters TAGS(Beijing.Haidian, 3) VALUES ('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000) ('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)
+ INSERT INTO power.d1001 USING power.meters TAGS(California.SanFrancisco, 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000)
+ power.d1002 USING power.meters TAGS(California.SanFrancisco, 3) VALUES ('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000)
+ power.d1003 USING power.meters TAGS(California.LosAngeles, 2) VALUES ('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000) ('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000)
+ power.d1004 USING power.meters TAGS(California.LosAngeles, 3) VALUES ('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000) ('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)
SQL);
- // 影响行数
+ // get affected rows
var_dump($resource->affectedRows());
} catch (TDengineException $e) {
- // 捕获异常
+ // throw exception
throw $e;
}
diff --git a/docs-examples/php/insert_stmt.php b/docs-examples/php/insert_stmt.php
index 5d4b4809d215d781807c21172982feff2171fe07..99a9a6aef3f69a8880316355e17396e06ca985c9 100644
--- a/docs-examples/php/insert_stmt.php
+++ b/docs-examples/php/insert_stmt.php
@@ -4,7 +4,7 @@ use TDengine\Connection;
use TDengine\Exception\TDengineException;
try {
- // 实例化
+ // instantiate
$host = 'localhost';
$port = 6030;
$username = 'root';
@@ -12,18 +12,18 @@ try {
$dbname = 'power';
$connection = new Connection($host, $port, $username, $password, $dbname);
- // 连接
+ // connect
$connection->connect();
- // 插入
+ // insert
$connection->query('CREATE DATABASE if not exists power');
$connection->query('CREATE STABLE if not exists meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT)');
$stmt = $connection->prepare('INSERT INTO ? USING meters TAGS(?, ?) VALUES(?, ?, ?, ?)');
- // 设置表名和标签
+ // set table name and tags
$stmt->setTableNameTags('d1001', [
// the supported formats are the same as for parameter binding
- [TDengine\TSDB_DATA_TYPE_BINARY, 'Beijing.Chaoyang'],
+ [TDengine\TSDB_DATA_TYPE_BINARY, 'California.SanFrancisco'],
[TDengine\TSDB_DATA_TYPE_INT, 2],
]);
@@ -41,9 +41,9 @@ try {
]);
$resource = $stmt->execute();
- // 影响行数
+ // get affected rows
var_dump($resource->affectedRows());
} catch (TDengineException $e) {
- // 捕获异常
+ // throw exception
throw $e;
}
diff --git a/docs-examples/php/query.php b/docs-examples/php/query.php
index 4e86a2cec7426887686049977a8647e786ac2744..2607940ea06a70eaa30e4c165c05bd72aa89857c 100644
--- a/docs-examples/php/query.php
+++ b/docs-examples/php/query.php
@@ -4,7 +4,7 @@ use TDengine\Connection;
use TDengine\Exception\TDengineException;
try {
- // 实例化
+ // instantiate
$host = 'localhost';
$port = 6030;
$username = 'root';
@@ -12,12 +12,12 @@ try {
$dbname = 'power';
$connection = new Connection($host, $port, $username, $password, $dbname);
- // 连接
+ // connect
$connection->connect();
$resource = $connection->query('SELECT ts, current FROM meters LIMIT 2');
var_dump($resource->fetch());
} catch (TDengineException $e) {
- // 捕获异常
+ // throw exception
throw $e;
}
diff --git a/docs-examples/python/bind_param_example.py b/docs-examples/python/bind_param_example.py
index 503a2eb5dd91a3516f87a4d3c1c3218cb6505236..6a67434f876f159cf32069a55e9527ca19034640 100644
--- a/docs-examples/python/bind_param_example.py
+++ b/docs-examples/python/bind_param_example.py
@@ -2,14 +2,14 @@ import taos
from datetime import datetime
# note: lines have already been sorted by table name
-lines = [('d1001', '2018-10-03 14:38:05.000', 10.30000, 219, 0.31000, 'Beijing.Chaoyang', 2),
- ('d1001', '2018-10-03 14:38:15.000', 12.60000, 218, 0.33000, 'Beijing.Chaoyang', 2),
- ('d1001', '2018-10-03 14:38:16.800', 12.30000, 221, 0.31000, 'Beijing.Chaoyang', 2),
- ('d1002', '2018-10-03 14:38:16.650', 10.30000, 218, 0.25000, 'Beijing.Chaoyang', 3),
- ('d1003', '2018-10-03 14:38:05.500', 11.80000, 221, 0.28000, 'Beijing.Haidian', 2),
- ('d1003', '2018-10-03 14:38:16.600', 13.40000, 223, 0.29000, 'Beijing.Haidian', 2),
- ('d1004', '2018-10-03 14:38:05.000', 10.80000, 223, 0.29000, 'Beijing.Haidian', 3),
- ('d1004', '2018-10-03 14:38:06.500', 11.50000, 221, 0.35000, 'Beijing.Haidian', 3)]
+lines = [('d1001', '2018-10-03 14:38:05.000', 10.30000, 219, 0.31000, 'California.SanFrancisco', 2),
+ ('d1001', '2018-10-03 14:38:15.000', 12.60000, 218, 0.33000, 'California.SanFrancisco', 2),
+ ('d1001', '2018-10-03 14:38:16.800', 12.30000, 221, 0.31000, 'California.SanFrancisco', 2),
+ ('d1002', '2018-10-03 14:38:16.650', 10.30000, 218, 0.25000, 'California.SanFrancisco', 3),
+ ('d1003', '2018-10-03 14:38:05.500', 11.80000, 221, 0.28000, 'California.LosAngeles', 2),
+ ('d1003', '2018-10-03 14:38:16.600', 13.40000, 223, 0.29000, 'California.LosAngeles', 2),
+ ('d1004', '2018-10-03 14:38:05.000', 10.80000, 223, 0.29000, 'California.LosAngeles', 3),
+ ('d1004', '2018-10-03 14:38:06.500', 11.50000, 221, 0.35000, 'California.LosAngeles', 3)]
def get_ts(ts: str):
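The note that the lines are already sorted by table name matters because parameter binding writes one table at a time. Below is a minimal sketch of grouping the pre-sorted rows per table with the standard library (`bind_and_execute` in the final comment is a hypothetical stand-in for the actual binding code):

```python
# Minimal sketch: group pre-sorted rows by table name so each group can be
# bound and written with one statement per table. Standard library only.
from itertools import groupby

def rows_by_table(lines):
    # lines must already be sorted by table name (tuple element 0),
    # exactly as noted above.
    for table, rows in groupby(lines, key=lambda r: r[0]):
        yield table, list(rows)

# usage: for table, rows in rows_by_table(lines): bind_and_execute(table, rows)
```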
diff --git a/docs-examples/python/conn_native_pandas.py b/docs-examples/python/conn_native_pandas.py
index 314759f7662c7bf4c9df2c8b3396ad3101c91cd4..56942ef57085766cd128b03cabb7a357587eab16 100644
--- a/docs-examples/python/conn_native_pandas.py
+++ b/docs-examples/python/conn_native_pandas.py
@@ -13,7 +13,7 @@ print(df.head(3))
# output:
# RangeIndex(start=0, stop=8, step=1)
#
-# ts current voltage phase location groupid
-# 0 2018-10-03 14:38:05.000 10.3 219 0.31 beijing.chaoyang 2
-# 1 2018-10-03 14:38:15.000 12.6 218 0.33 beijing.chaoyang 2
-# 2 2018-10-03 14:38:16.800 12.3 221 0.31 beijing.chaoyang 2
+# ts current ... location groupid
+# 0 2018-10-03 14:38:05.500 11.8 ... california.losangeles 2
+# 1 2018-10-03 14:38:16.600 13.4 ... california.losangeles 2
+# 2 2018-10-03 14:38:05.000 10.8 ... california.losangeles 3
diff --git a/docs-examples/python/conn_rest_pandas.py b/docs-examples/python/conn_rest_pandas.py
index 143e4275fa4eda685766297e4b90cba3935a574d..0164080cd5a05e72dce40b1d111ea423623ff9b2 100644
--- a/docs-examples/python/conn_rest_pandas.py
+++ b/docs-examples/python/conn_rest_pandas.py
@@ -11,9 +11,9 @@ print(type(df.ts[0]))
print(df.head(3))
# output:
-# <class 'pandas._libs.tslibs.timestamps.Timestamp'>
# RangeIndex(start=0, stop=8, step=1)
-# ts current ... location groupid
-# 0 2018-10-03 14:38:05+08:00 10.3 ... beijing.chaoyang 2
-# 1 2018-10-03 14:38:15+08:00 12.6 ... beijing.chaoyang 2
-# 2 2018-10-03 14:38:16.800000+08:00 12.3 ... beijing.chaoyang 2
+# <class 'pandas._libs.tslibs.timestamps.Timestamp'>
+# ts current ... location groupid
+# 0 2018-10-03 06:38:05.500000+00:00 11.8 ... california.losangeles 2
+# 1 2018-10-03 06:38:16.600000+00:00 13.4 ... california.losangeles 2
+# 2 2018-10-03 06:38:05+00:00 10.8 ... california.losangeles 3
diff --git a/docs-examples/python/connect_rest_examples.py b/docs-examples/python/connect_rest_examples.py
index a043d506b965bc31179dbb6f38749d196ab338ff..3303eb0e194ac28e9486ab153183c3b1f0b639f2 100644
--- a/docs-examples/python/connect_rest_examples.py
+++ b/docs-examples/python/connect_rest_examples.py
@@ -16,10 +16,10 @@ cursor.execute("CREATE DATABASE power")
cursor.execute("CREATE STABLE power.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT)")
# insert data
-cursor.execute("""INSERT INTO power.d1001 USING power.meters TAGS(Beijing.Chaoyang, 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000)
- power.d1002 USING power.meters TAGS(Beijing.Chaoyang, 3) VALUES ('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000)
- power.d1003 USING power.meters TAGS(Beijing.Haidian, 2) VALUES ('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000) ('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000)
- power.d1004 USING power.meters TAGS(Beijing.Haidian, 3) VALUES ('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000) ('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)""")
+cursor.execute("""INSERT INTO power.d1001 USING power.meters TAGS(California.SanFrancisco, 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000)
+ power.d1002 USING power.meters TAGS(California.SanFrancisco, 3) VALUES ('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000)
+ power.d1003 USING power.meters TAGS(California.LosAngeles, 2) VALUES ('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000) ('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000)
+ power.d1004 USING power.meters TAGS(California.LosAngeles, 3) VALUES ('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000) ('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)""")
print("inserted row count:", cursor.rowcount)
# query data
@@ -38,8 +38,7 @@ for row in data:
# inserted row count: 8
# queried row count: 3
# ['ts', 'current', 'voltage', 'phase', 'location', 'groupid']
-# [datetime.datetime(2018, 10, 3, 14, 38, 5, tzinfo=datetime.timezone(datetime.timedelta(seconds=28800), '+08:00')), 10.3, 219, 0.31, 'beijing.chaoyang', 2]
-# [datetime.datetime(2018, 10, 3, 14, 38, 15, tzinfo=datetime.timezone(datetime.timedelta(seconds=28800), '+08:00')), 12.6, 218, 0.33, 'beijing.chaoyang', 2]
-# [datetime.datetime(2018, 10, 3, 14, 38, 16, 800000, tzinfo=datetime.timezone(datetime.timedelta(seconds=28800), '+08:00')), 12.3, 221, 0.31, 'beijing.chaoyang', 2]
-
+# [datetime.datetime(2018, 10, 3, 14, 38, 5, 500000, tzinfo=datetime.timezone(datetime.timedelta(seconds=28800), '+08:00')), 11.8, 221, 0.28, 'california.losangeles', 2]
+# [datetime.datetime(2018, 10, 3, 14, 38, 16, 600000, tzinfo=datetime.timezone(datetime.timedelta(seconds=28800), '+08:00')), 13.4, 223, 0.29, 'california.losangeles', 2]
+# [datetime.datetime(2018, 10, 3, 14, 38, 5, tzinfo=datetime.timezone(datetime.timedelta(seconds=28800), '+08:00')), 10.8, 223, 0.29, 'california.losangeles', 3]
# ANCHOR_END: basic
diff --git a/docs-examples/python/json_protocol_example.py b/docs-examples/python/json_protocol_example.py
index 5bb4d629bccf3d79e74b381d6259de86d6522315..58b38f3ff667bcbbd902434d3409441a4d2c5b45 100644
--- a/docs-examples/python/json_protocol_example.py
+++ b/docs-examples/python/json_protocol_example.py
@@ -3,12 +3,12 @@ import json
import taos
from taos import SmlProtocol, SmlPrecision
-lines = [{"metric": "meters.current", "timestamp": 1648432611249, "value": 10.3, "tags": {"location": "Beijing.Chaoyang", "groupid": 2}},
+lines = [{"metric": "meters.current", "timestamp": 1648432611249, "value": 10.3, "tags": {"location": "California.SanFrancisco", "groupid": 2}},
{"metric": "meters.voltage", "timestamp": 1648432611249, "value": 219,
- "tags": {"location": "Beijing.Haidian", "groupid": 1}},
+ "tags": {"location": "California.LosAngeles", "groupid": 1}},
{"metric": "meters.current", "timestamp": 1648432611250, "value": 12.6,
- "tags": {"location": "Beijing.Chaoyang", "groupid": 2}},
- {"metric": "meters.voltage", "timestamp": 1648432611250, "value": 221, "tags": {"location": "Beijing.Haidian", "groupid": 1}}]
+ "tags": {"location": "California.SanFrancisco", "groupid": 2}},
+ {"metric": "meters.voltage", "timestamp": 1648432611250, "value": 221, "tags": {"location": "California.LosAngeles", "groupid": 1}}]
def get_connection():
diff --git a/docs-examples/python/line_protocol_example.py b/docs-examples/python/line_protocol_example.py
index 02baeb2104f9f48984b4d34afb5e67af641d4e32..735e8e7eb8aed1a8133de7a6de50bd50d076c472 100644
--- a/docs-examples/python/line_protocol_example.py
+++ b/docs-examples/python/line_protocol_example.py
@@ -1,10 +1,10 @@
import taos
from taos import SmlProtocol, SmlPrecision
-lines = ["meters,location=Beijing.Haidian,groupid=2 current=11.8,voltage=221,phase=0.28 1648432611249000",
- "meters,location=Beijing.Haidian,groupid=2 current=13.4,voltage=223,phase=0.29 1648432611249500",
- "meters,location=Beijing.Haidian,groupid=3 current=10.8,voltage=223,phase=0.29 1648432611249300",
- "meters,location=Beijing.Haidian,groupid=3 current=11.3,voltage=221,phase=0.35 1648432611249800",
+lines = ["meters,location=California.LosAngeles,groupid=2 current=11.8,voltage=221,phase=0.28 1648432611249000",
+ "meters,location=California.LosAngeles,groupid=2 current=13.4,voltage=223,phase=0.29 1648432611249500",
+ "meters,location=California.LosAngeles,groupid=3 current=10.8,voltage=223,phase=0.29 1648432611249300",
+ "meters,location=California.LosAngeles,groupid=3 current=11.3,voltage=221,phase=0.35 1648432611249800",
]
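The line-protocol strings above carry 16-digit (microsecond) timestamps, so the precision flag has to say so at insert time. Below is a minimal sketch of feeding such lines into TDengine through the Python connector's schemaless interface, assuming a local instance and a scratch `test` database; it uses the same `SmlProtocol`/`SmlPrecision` enums imported above, and `TELNET_PROTOCOL` / `JSON_PROTOCOL` work the same way for the other two formats.

```python
import taos
from taos import SmlProtocol, SmlPrecision

lines = ["meters,location=California.LosAngeles,groupid=2 current=11.8,voltage=221,phase=0.28 1648432611249000"]

conn = taos.connect(host="localhost")  # host is an assumption for this sketch
conn.execute("CREATE DATABASE IF NOT EXISTS test")
conn.select_db("test")

# the trailing 16-digit timestamps are microseconds, hence MICRO_SECONDS
affected = conn.schemaless_insert(lines, SmlProtocol.LINE_PROTOCOL, SmlPrecision.MICRO_SECONDS)
print("affected rows:", affected)
conn.close()
```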
diff --git a/docs-examples/python/multi_bind_example.py b/docs-examples/python/multi_bind_example.py
index 1714121d72705ab8d619a41f3463af4aa3193871..205ba69fb267ae1781415e4f0995b41f908ceb17 100644
--- a/docs-examples/python/multi_bind_example.py
+++ b/docs-examples/python/multi_bind_example.py
@@ -3,10 +3,10 @@ from datetime import datetime
# ANCHOR: bind_batch
table_tags = {
- "d1001": ('Beijing.Chaoyang', 2),
- "d1002": ('Beijing.Chaoyang', 3),
- "d1003": ('Beijing.Haidian', 2),
- "d1004": ('Beijing.Haidian', 3)
+ "d1001": ('California.SanFrancisco', 2),
+ "d1002": ('California.SanFrancisco', 3),
+ "d1003": ('California.LosAngeles', 2),
+ "d1004": ('California.LosAngeles', 3)
}
table_values = {
diff --git a/docs-examples/python/native_insert_example.py b/docs-examples/python/native_insert_example.py
index 94d4888a8f5330b9e39d5ae051fcb68f9825505f..3b6b73cb2236c8d9d11019349f99f79135a5c1d6 100644
--- a/docs-examples/python/native_insert_example.py
+++ b/docs-examples/python/native_insert_example.py
@@ -1,13 +1,13 @@
import taos
-lines = ["d1001,2018-10-03 14:38:05.000,10.30000,219,0.31000,Beijing.Chaoyang,2",
- "d1004,2018-10-03 14:38:05.000,10.80000,223,0.29000,Beijing.Haidian,3",
- "d1003,2018-10-03 14:38:05.500,11.80000,221,0.28000,Beijing.Haidian,2",
- "d1004,2018-10-03 14:38:06.500,11.50000,221,0.35000,Beijing.Haidian,3",
- "d1002,2018-10-03 14:38:16.650,10.30000,218,0.25000,Beijing.Chaoyang,3",
- "d1001,2018-10-03 14:38:15.000,12.60000,218,0.33000,Beijing.Chaoyang,2",
- "d1001,2018-10-03 14:38:16.800,12.30000,221,0.31000,Beijing.Chaoyang,2",
- "d1003,2018-10-03 14:38:16.600,13.40000,223,0.29000,Beijing.Haidian,2"]
+lines = ["d1001,2018-10-03 14:38:05.000,10.30000,219,0.31000,California.SanFrancisco,2",
+ "d1004,2018-10-03 14:38:05.000,10.80000,223,0.29000,California.LosAngeles,3",
+ "d1003,2018-10-03 14:38:05.500,11.80000,221,0.28000,California.LosAngeles,2",
+ "d1004,2018-10-03 14:38:06.500,11.50000,221,0.35000,California.LosAngeles,3",
+ "d1002,2018-10-03 14:38:16.650,10.30000,218,0.25000,California.SanFrancisco,3",
+ "d1001,2018-10-03 14:38:15.000,12.60000,218,0.33000,California.SanFrancisco,2",
+ "d1001,2018-10-03 14:38:16.800,12.30000,221,0.31000,California.SanFrancisco,2",
+ "d1003,2018-10-03 14:38:16.600,13.40000,223,0.29000,California.LosAngeles,2"]
def get_connection() -> taos.TaosConnection:
@@ -25,10 +25,10 @@ def create_stable(conn: taos.TaosConnection):
# The generated SQL is:
-# INSERT INTO d1001 USING meters TAGS(Beijing.Chaoyang, 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000)
-# d1002 USING meters TAGS(Beijing.Chaoyang, 3) VALUES ('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000)
-# d1003 USING meters TAGS(Beijing.Haidian, 2) VALUES ('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000) ('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000)
-# d1004 USING meters TAGS(Beijing.Haidian, 3) VALUES ('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000) ('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)
+# INSERT INTO d1001 USING meters TAGS(California.SanFrancisco, 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000)
+# d1002 USING meters TAGS(California.SanFrancisco, 3) VALUES ('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000)
+# d1003 USING meters TAGS(California.LosAngeles, 2) VALUES ('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000) ('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000)
+# d1004 USING meters TAGS(California.LosAngeles, 3) VALUES ('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000) ('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)
def get_sql():
global lines
diff --git a/docs-examples/python/query_example.py b/docs-examples/python/query_example.py
index 6d33c49c968d9210b475931b5d8cecca0ceff3e3..8afd7f07358d7e9c9a3677ee04f8eb92aae6856b 100644
--- a/docs-examples/python/query_example.py
+++ b/docs-examples/python/query_example.py
@@ -12,10 +12,10 @@ def query_api_demo(conn: taos.TaosConnection):
# field count: 7
-# meta of files[1]: {name: ts, type: 9, bytes: 8}
+# meta of fields[1]: {name: ts, type: 9, bytes: 8}
# ======================Iterate on result=========================
-# ('d1001', datetime.datetime(2018, 10, 3, 14, 38, 5), 10.300000190734863, 219, 0.3100000023841858, 'Beijing.Chaoyang', 2)
-# ('d1001', datetime.datetime(2018, 10, 3, 14, 38, 15), 12.600000381469727, 218, 0.33000001311302185, 'Beijing.Chaoyang', 2)
+# ('d1003', datetime.datetime(2018, 10, 3, 14, 38, 5, 500000), 11.800000190734863, 221, 0.2800000011920929, 'california.losangeles', 2)
+# ('d1003', datetime.datetime(2018, 10, 3, 14, 38, 16, 600000), 13.399999618530273, 223, 0.28999999165534973, 'california.losangeles', 2)
# ANCHOR_END: iter
# ANCHOR: fetch_all
@@ -29,8 +29,8 @@ def fetch_all_demo(conn: taos.TaosConnection):
# row count: 2
# ===============all data===================
-# [{'ts': datetime.datetime(2018, 10, 3, 14, 38, 5), 'current': 10.300000190734863},
-# {'ts': datetime.datetime(2018, 10, 3, 14, 38, 15), 'current': 12.600000381469727}]
+# [{'ts': datetime.datetime(2018, 10, 3, 14, 38, 5, 500000), 'current': 11.800000190734863},
+# {'ts': datetime.datetime(2018, 10, 3, 14, 38, 16, 600000), 'current': 13.399999618530273}]
# ANCHOR_END: fetch_all
if __name__ == '__main__':
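The two demos differ only in how rows come back: iterating the result yields one tuple per row (as in the output comments above), while `fetch_all_into_dict` materializes the whole result set as one dict per row, which is what produces the dictionary output shown. A minimal sketch of the latter, assuming a local instance with the `power` database populated:

```python
import taos

# host and database are assumptions for this sketch
conn = taos.connect(host="localhost", database="power")
result = conn.query("SELECT ts, current FROM meters LIMIT 2")

data = result.fetch_all_into_dict()  # one dict per row, keyed by column name
print("row count:", len(data))
print(data)
conn.close()
```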
diff --git a/docs-examples/python/telnet_line_protocol_example.py b/docs-examples/python/telnet_line_protocol_example.py
index 072835109ee238940e6fe5880b72b2b04e0157fa..d812e186af86be6811ee7774f10458e46df1f39f 100644
--- a/docs-examples/python/telnet_line_protocol_example.py
+++ b/docs-examples/python/telnet_line_protocol_example.py
@@ -2,14 +2,14 @@ import taos
from taos import SmlProtocol, SmlPrecision
# format: <metric> <timestamp> <value> <tagk_1>=<tagv_1>[ <tagk_n>=<tagv_n>]
-lines = ["meters.current 1648432611249 10.3 location=Beijing.Chaoyang groupid=2",
- "meters.current 1648432611250 12.6 location=Beijing.Chaoyang groupid=2",
- "meters.current 1648432611249 10.8 location=Beijing.Haidian groupid=3",
- "meters.current 1648432611250 11.3 location=Beijing.Haidian groupid=3",
- "meters.voltage 1648432611249 219 location=Beijing.Chaoyang groupid=2",
- "meters.voltage 1648432611250 218 location=Beijing.Chaoyang groupid=2",
- "meters.voltage 1648432611249 221 location=Beijing.Haidian groupid=3",
- "meters.voltage 1648432611250 217 location=Beijing.Haidian groupid=3",
+lines = ["meters.current 1648432611249 10.3 location=California.SanFrancisco groupid=2",
+ "meters.current 1648432611250 12.6 location=California.SanFrancisco groupid=2",
+ "meters.current 1648432611249 10.8 location=California.LosAngeles groupid=3",
+ "meters.current 1648432611250 11.3 location=California.LosAngeles groupid=3",
+ "meters.voltage 1648432611249 219 location=California.SanFrancisco groupid=2",
+ "meters.voltage 1648432611250 218 location=California.SanFrancisco groupid=2",
+ "meters.voltage 1648432611249 221 location=California.LosAngeles groupid=3",
+ "meters.voltage 1648432611250 217 location=California.LosAngeles groupid=3",
]
diff --git a/docs-examples/rust/nativeexample/examples/stmt_example.rs b/docs-examples/rust/nativeexample/examples/stmt_example.rs
index a791a4135984a33dded145e8175d7ade57de8d77..190f8c1ef6d50a8e9c925178c1a9d31c22e3d4df 100644
--- a/docs-examples/rust/nativeexample/examples/stmt_example.rs
+++ b/docs-examples/rust/nativeexample/examples/stmt_example.rs
@@ -12,7 +12,7 @@ async fn main() -> Result<(), Error> {
stmt.set_tbname_tags(
"d1001",
[
- Field::Binary(BString::from("Beijing.Chaoyang")),
+ Field::Binary(BString::from("California.SanFrancisco")),
Field::Int(2),
],
)?;
diff --git a/docs-examples/rust/restexample/examples/insert_example.rs b/docs-examples/rust/restexample/examples/insert_example.rs
index d7acc98d096fb3cd6bea22d6c5f6f0f5caea50af..9261536f627c297fc707708f88f57eed647dbf3e 100644
--- a/docs-examples/rust/restexample/examples/insert_example.rs
+++ b/docs-examples/rust/restexample/examples/insert_example.rs
@@ -5,10 +5,10 @@ async fn main() -> Result<(), Error> {
let taos = TaosCfg::default().connect().expect("fail to connect");
taos.create_database("power").await?;
taos.exec("CREATE STABLE power.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT)").await?;
- let sql = "INSERT INTO power.d1001 USING power.meters TAGS(Beijing.Chaoyang, 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000)
- power.d1002 USING power.meters TAGS(Beijing.Chaoyang, 3) VALUES ('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000)
- power.d1003 USING power.meters TAGS(Beijing.Haidian, 2) VALUES ('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000) ('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000)
- power.d1004 USING power.meters TAGS(Beijing.Haidian, 3) VALUES ('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000) ('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)";
+ let sql = "INSERT INTO power.d1001 USING power.meters TAGS(California.SanFrancisco, 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000)
+ power.d1002 USING power.meters TAGS(California.SanFrancisco, 3) VALUES ('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000)
+ power.d1003 USING power.meters TAGS(California.LosAngeles, 2) VALUES ('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000) ('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000)
+ power.d1004 USING power.meters TAGS(California.LosAngeles, 3) VALUES ('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000) ('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)";
let result = taos.query(sql).await?;
println!("{:?}", result);
Ok(())
diff --git a/docs-examples/rust/schemalessexample/examples/influxdb_line_example.rs b/docs-examples/rust/schemalessexample/examples/influxdb_line_example.rs
index e93888cc83d12f3bec7370a66e8a85d38cec42ad..64d1a3c9ac6037c16e3e1c3be0258e19cce632a0 100644
--- a/docs-examples/rust/schemalessexample/examples/influxdb_line_example.rs
+++ b/docs-examples/rust/schemalessexample/examples/influxdb_line_example.rs
@@ -5,10 +5,10 @@ fn main() {
let taos = TaosCfg::default().connect().expect("fail to connect");
taos.raw_query("CREATE DATABASE test").unwrap();
taos.raw_query("USE test").unwrap();
- let lines = ["meters,location=Beijing.Haidian,groupid=2 current=11.8,voltage=221,phase=0.28 1648432611249",
- "meters,location=Beijing.Haidian,groupid=2 current=13.4,voltage=223,phase=0.29 1648432611250",
- "meters,location=Beijing.Haidian,groupid=3 current=10.8,voltage=223,phase=0.29 1648432611249",
- "meters,location=Beijing.Haidian,groupid=3 current=11.3,voltage=221,phase=0.35 1648432611250"];
+ let lines = ["meters,location=California.LosAngeles,groupid=2 current=11.8,voltage=221,phase=0.28 1648432611249",
+ "meters,location=California.LosAngeles,groupid=2 current=13.4,voltage=223,phase=0.29 1648432611250",
+ "meters,location=California.LosAngeles,groupid=3 current=10.8,voltage=223,phase=0.29 1648432611249",
+ "meters,location=California.LosAngeles,groupid=3 current=11.3,voltage=221,phase=0.35 1648432611250"];
let affected_rows = taos
.schemaless_insert(
&lines,
diff --git a/docs-examples/rust/schemalessexample/examples/opentsdb_json_example.rs b/docs-examples/rust/schemalessexample/examples/opentsdb_json_example.rs
index 1d66bd1f2b1bcbe82dc3ee3e8e25ea4c521c81f0..e61691596704c8aaf979081429802df6e5aa86f9 100644
--- a/docs-examples/rust/schemalessexample/examples/opentsdb_json_example.rs
+++ b/docs-examples/rust/schemalessexample/examples/opentsdb_json_example.rs
@@ -6,10 +6,10 @@ fn main() {
taos.raw_query("CREATE DATABASE test").unwrap();
taos.raw_query("USE test").unwrap();
let lines = [
- r#"[{"metric": "meters.current", "timestamp": 1648432611249, "value": 10.3, "tags": {"location": "Beijing.Chaoyang", "groupid": 2}},
- {"metric": "meters.voltage", "timestamp": 1648432611249, "value": 219, "tags": {"location": "Beijing.Haidian", "groupid": 1}},
- {"metric": "meters.current", "timestamp": 1648432611250, "value": 12.6, "tags": {"location": "Beijing.Chaoyang", "groupid": 2}},
- {"metric": "meters.voltage", "timestamp": 1648432611250, "value": 221, "tags": {"location": "Beijing.Haidian", "groupid": 1}}]"#,
+ r#"[{"metric": "meters.current", "timestamp": 1648432611249, "value": 10.3, "tags": {"location": "California.SanFrancisco", "groupid": 2}},
+ {"metric": "meters.voltage", "timestamp": 1648432611249, "value": 219, "tags": {"location": "California.LosAngeles", "groupid": 1}},
+ {"metric": "meters.current", "timestamp": 1648432611250, "value": 12.6, "tags": {"location": "California.SanFrancisco", "groupid": 2}},
+ {"metric": "meters.voltage", "timestamp": 1648432611250, "value": 221, "tags": {"location": "California.LosAngeles", "groupid": 1}}]"#,
];
let affected_rows = taos
diff --git a/docs-examples/rust/schemalessexample/examples/opentsdb_telnet_example.rs b/docs-examples/rust/schemalessexample/examples/opentsdb_telnet_example.rs
index 18d7500714d9e41b1bebd490199d296ead3dc7c4..c8cab7655a24806e5c7659af80e83da383539c55 100644
--- a/docs-examples/rust/schemalessexample/examples/opentsdb_telnet_example.rs
+++ b/docs-examples/rust/schemalessexample/examples/opentsdb_telnet_example.rs
@@ -6,14 +6,14 @@ fn main() {
taos.raw_query("CREATE DATABASE test").unwrap();
taos.raw_query("USE test").unwrap();
let lines = [
- "meters.current 1648432611249 10.3 location=Beijing.Chaoyang groupid=2",
- "meters.current 1648432611250 12.6 location=Beijing.Chaoyang groupid=2",
- "meters.current 1648432611249 10.8 location=Beijing.Haidian groupid=3",
- "meters.current 1648432611250 11.3 location=Beijing.Haidian groupid=3",
- "meters.voltage 1648432611249 219 location=Beijing.Chaoyang groupid=2",
- "meters.voltage 1648432611250 218 location=Beijing.Chaoyang groupid=2",
- "meters.voltage 1648432611249 221 location=Beijing.Haidian groupid=3",
- "meters.voltage 1648432611250 217 location=Beijing.Haidian groupid=3",
+ "meters.current 1648432611249 10.3 location=California.SanFrancisco groupid=2",
+ "meters.current 1648432611250 12.6 location=California.SanFrancisco groupid=2",
+ "meters.current 1648432611249 10.8 location=California.LosAngeles groupid=3",
+ "meters.current 1648432611250 11.3 location=California.LosAngeles groupid=3",
+ "meters.voltage 1648432611249 219 location=California.SanFrancisco groupid=2",
+ "meters.voltage 1648432611250 218 location=California.SanFrancisco groupid=2",
+ "meters.voltage 1648432611249 221 location=California.LosAngeles groupid=3",
+ "meters.voltage 1648432611250 217 location=California.LosAngeles groupid=3",
];
let affected_rows = taos
.schemaless_insert(
diff --git a/src/os/inc/osDir.h b/src/os/inc/osDir.h
index 079f2aca644165d9d871c77c47c1c08d2523f515..899b99a182aeb09cadbe3560f0976c885e609a20 100644
--- a/src/os/inc/osDir.h
+++ b/src/os/inc/osDir.h
@@ -22,7 +22,7 @@ extern "C" {
void taosRemoveDir(char *rootDir);
bool taosDirExist(const char* dirname);
-int32_t taosMkdirP(const char *pathname);
+int32_t taosMkdirP(const char *pathname, int keepBase);
int32_t taosMkDir(const char *pathname, mode_t mode);
void taosRemoveOldLogFiles(char *rootDir, int32_t keepDays);
int32_t taosRename(char *oldName, char *newName);
diff --git a/src/os/src/detail/osDir.c b/src/os/src/detail/osDir.c
index 3c07266d45558282872c42de1a70b3de5f9a193c..17c844ed863c227fe1178b7d99fee4a300a0b3e2 100644
--- a/src/os/src/detail/osDir.c
+++ b/src/os/src/detail/osDir.c
@@ -49,7 +49,7 @@ bool taosDirExist(const char* dirname) {
return access(dirname, F_OK) == 0;
}
-int32_t taosMkdirP(const char *dir) {
+int32_t taosMkdirP(const char *dir, int keepLast) {
char tmp[256];
char *p = NULL;
size_t len;
@@ -57,11 +57,13 @@ int32_t taosMkdirP(const char *dir) {
snprintf(tmp, sizeof(tmp),"%s",dir);
len = strlen(tmp);
- for (i = len - 1; i > 0; --i)
- if (tmp[i] == '/') {
- tmp[i] = 0;
- break;
- }
+ if (!keepLast) {
+ for (i = len - 1; i > 0; --i)
+ if (tmp[i] == '/') {
+ tmp[i] = 0;
+ break;
+ }
+ }
for (p = tmp + 1; *p; p++)
if (*p == '/') {
diff --git a/src/tsdb/inc/tsdbFile.h b/src/tsdb/inc/tsdbFile.h
index 6872385a8aad329fdcd5517886e974115bdd365a..75e95631513e354960df5119b25ac3b6620a29d8 100644
--- a/src/tsdb/inc/tsdbFile.h
+++ b/src/tsdb/inc/tsdbFile.h
@@ -288,7 +288,7 @@ static FORCE_INLINE int64_t tsdbReadDFile(SDFile* pDFile, void* buf, int64_t nby
static FORCE_INLINE int tsdbCopyDFile(SDFile* pSrc, SDFile* pDest) {
if (tfscopy(TSDB_FILE_F(pSrc), TSDB_FILE_F(pDest)) < 0) {
- int32_t ret = taosMkdirP(TSDB_FILE_FULL_NAME(pDest));
+ int32_t ret = taosMkdirP(TSDB_FILE_FULL_NAME(pDest), 0);
if (ret < 0 || tfscopy(TSDB_FILE_F(pSrc), TSDB_FILE_F(pDest)) < 0) {
terrno = TAOS_SYSTEM_ERROR(errno);
return -1;
diff --git a/src/wal/src/walMgmt.c b/src/wal/src/walMgmt.c
index 05324d31eec56ee74b81c70dc451eadf83d518d2..f50cf4c6df67db23fc3d0e9732b9a0eda53fca2f 100644
--- a/src/wal/src/walMgmt.c
+++ b/src/wal/src/walMgmt.c
@@ -139,7 +139,7 @@ void walClose(void *handle) {
}
static int32_t walInitObj(SWal *pWal) {
- if (taosMkDir(pWal->path, 0755) != 0) {
+ if (taosMkdirP(pWal->path, 1) != 0) {
wError("vgId:%d, path:%s, failed to create directory since %s", pWal->vgId, pWal->path, strerror(errno));
return TAOS_SYSTEM_ERROR(errno);
}
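The net effect of the C changes above: `taosMkdirP` gains a flag (declared `keepBase` in the header, `keepLast` in the implementation) that decides whether the final path component is created along with its parents, and `walInitObj` now calls `taosMkdirP(pWal->path, 1)` so the whole WAL directory chain is created recursively, where the old single-level `taosMkDir` failed if any parent was missing. A rough Python analogy of the two modes, for illustration only (the paths are placeholders):

```python
import os

def mkdir_p(path: str, keep_last: bool) -> None:
    """Rough analogy of taosMkdirP: create all missing parents, and the
    final path component too when keep_last is set (like `mkdir -p`)."""
    target = path if keep_last else os.path.dirname(path)
    if target:
        os.makedirs(target, exist_ok=True)

# taosMkdirP(path, 0): create parents only, as tsdbCopyDFile does for a file path
mkdir_p("/tmp/tsdb/vnode2/data.file", keep_last=False)  # creates /tmp/tsdb/vnode2
# taosMkdirP(path, 1): create the full path, as walInitObj now does for a directory
mkdir_p("/tmp/wal/vnode2", keep_last=True)              # creates /tmp/wal/vnode2
```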