3. UPDATE set to 2 means partially updating the columns of a row is allowed; columns for which no value is specified keep their previous values.
3. The maximum length of a database name is 33 bytes.
4. The maximum length of a SQL statement is 65,480 bytes.
5. For more parameters that can be used when creating a database, such as cache, blocks, days, keep, minRows, maxRows, wal, fsync, update, cacheLast, replica, quorum, maxVgroupsPerDb, ctime, comp, and prec, please refer to [Configuration Parameters](/reference/config/).
5. Below are the parameters that can be used when creating a database:
6. Please note that all of the parameters mentioned in this section can be configured in the server-side configuration file `taos.cfg` and are used by default; they can be overridden if specified explicitly in the `create database` statement.
Aggregation over time windows is supported in TDengine. For example, if each temperature sensor reports its temperature once per second, the average temperature over each 10-minute window can be retrieved with a time-window query.
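For illustration, such a query could be issued non-interactively with the `taos` CLI described later in this document (a sketch; the `sensors` super table and `temperature` column are hypothetical names, not taken from the original examples):

```bash
# Average temperature per 10-minute window over the last day (hypothetical table/column names)
taos -s "SELECT AVG(temperature) FROM sensors WHERE ts > NOW - 1d INTERVAL(10m);"
```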
There are about 200 keywords reserved by TDengine. They cannot be used as the name of a database, STable, or table, whether written in upper case, lower case, or mixed case.
When the client is unable to access the server, the network connection between the client side and the server side needs to be checked to find out the root cause and resolve problems.
taosAdapter is a TDengine companion tool that acts as a bridge and adapter between TDengine clusters and applications. It provides an easy-to-use and efficient way to ingest data directly from data collection agent software such as Telegraf, StatsD, collectd, etc. It also provides an InfluxDB/OpenTSDB compatible data ingestion interface that allows InfluxDB/OpenTSDB applications to be seamlessly ported to TDengine.
taosAdapter provides the following features:

- RESTful interface
- Compatible with InfluxDB v1 write interface
- Compatible with OpenTSDB JSON and telnet format writes
- Seamless connection to Telegraf
- Seamless connection to collectd
- Seamless connection to StatsD
- Support for Prometheus remote_read and remote_write
taosAdapter has been part of the TDengine server software since TDengine v2.4.0.0. If you use the TDengine server, you don't need any additional steps to install taosAdapter. You can download the TDengine server installation package (taosAdapter is included in v2.4.0.0 and above) from the [TAOS Data official website](https://taosdata.com/cn/all-downloads/). If you need to deploy taosAdapter separately, on a server other than the TDengine server, you should install the full TDengine package on that server to install taosAdapter. If you need to build taosAdapter from source code, you can refer to the [Building taosAdapter](https://github.com/taosdata/taosadapter/blob/develop/BUILD-CN.md) documentation.
On Linux systems, the taosAdapter service is managed by systemd by default. You can use the command `systemctl start taosadapter` to start the taosAdapter service and use the command `systemctl stop taosadapter` to stop the taosAdapter service.
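For example, on a systemd-managed host the service can be started, checked, and stopped as follows (standard systemd commands, shown as a usage sketch):

```bash
# Start the taosAdapter service
systemctl start taosadapter
# Check whether it is running
systemctl status taosadapter
# Stop the service
systemctl stop taosadapter
```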
### Remove taosAdapter
Use the command `rmtaos` to remove the TDengine server software, including taosAdapter.
### Upgrade taosAdapter
taosAdapter and TDengine server must be the same version. Please upgrade taosAdapter by upgrading the TDengine server.
A taosAdapter deployed separately from taosd must be upgraded by upgrading the TDengine server installed on its host.
taosAdapter supports configuration via command-line arguments, environment variables, and configuration files. The default configuration file is `/etc/taos/taosadapter.toml`.
Command-line arguments take precedence over environment variables, which take precedence over configuration files. The command-line usage is `arg=val`, e.g., `taosadapter -p=30000 --debug=true`. The detailed list is as follows:
```shell
Usage of taosAdapter:
...
...
--version Print the version and exit
```
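As an illustration of the precedence order described above, the log level can be set in all three ways and the command-line flag wins (a sketch; the `--logLevel` option and the `TAOS_ADAPTER_LOG_LEVEL` environment variable are documented later in this section, while the `logLevel` key in `taosadapter.toml` is assumed to match the flag name):

```bash
# 1. Configuration file (lowest precedence), e.g. in /etc/taos/taosadapter.toml:
#      logLevel = "info"
# 2. Environment variable (overrides the configuration file)
export TAOS_ADAPTER_LOG_LEVEL=warn
# 3. Command-line flag (highest precedence) -- taosAdapter runs with log level "debug"
taosadapter --logLevel=debug
```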
Note:
Please set the following Cross-Origin Resource Sharing (CORS) parameters according to the actual situation when using a browser for interface calls.
```text
AllowAllOrigins
...
...
AllowCredentials
AllowWebSockets
```
You do not need to care about these configurations if you do not make interface calls through the browser.
For details on the CORS protocol, please refer to: [https://www.w3.org/wiki/CORS_Enabled](https://www.w3.org/wiki/CORS_Enabled) or [https://developer.mozilla.org/zh-CN/docs/Web/HTTP/CORS](https://developer.mozilla.org/zh-CN/docs/Web/HTTP/CORS).
See [example/config/taosadapter.toml](https://github.com/taosdata/taosadapter/blob/develop/example/config/taosadapter.toml) for sample configuration files.
StatsD is a simple yet powerful daemon for aggregating statistical information. Please visit [https://github.com/statsd/statsd](https://github.com/statsd/statsd) for more information.
- Seamless connection with icinga2
icinga2 is software that collects check result metrics and performance data. Please visit [https://icinga.com/docs/icinga-2/latest/doc/14-features/#opentsdb-writer](https://icinga.com/docs/icinga-2/latest/doc/14-features/#opentsdb-writer) for more information.
- Seamless connection to TCollector
TCollector is a client process that collects data from a local collector and pushes the data to OpenTSDB. Please visit [http://opentsdb.net/docs/build/html/user_guide/utilities/tcollector.html](http://opentsdb.net/docs/build/html/user_guide/utilities/tcollector.html) for more information.
- Seamless connection to node_exporter
node_exporter is an exporter for machine metrics. Please visit [https://github.com/prometheus/node_exporter](https://github.com/prometheus/node_exporter) for more information.
- Support for Prometheus remote_read and remote_write
remote_read and remote_write are clustering solutions for separating Prometheus data reads and writes. Please visit [https://prometheus.io/blog/2019/10/10/remote-read-meets-streaming/#remote-apis](https://prometheus.io/blog/2019/10/10/remote-read-meets-streaming/#remote-apis) for more information.
You can use any client that supports the http protocol to write data to or query data from TDengine by accessing the RESTful interface address `http://<fqdn>:6041/<APIEndPoint>`. See the [official documentation](/reference/connector#restful) for details. The following EndPoint is supported.
You can use any client that supports the http protocol to access the RESTful interface address `http://<fqdn>:6041/<APIEndPoint>` to write data in InfluxDB compatible format to TDengine. The EndPoint is as follows:
You can use any client that supports the http protocol to access the RESTful interface address `http://<fqdn>:6041/<APIEndPoint>` to write data in OpenTSDB compatible format to TDengine. The EndPoints are as follows:
```text
/opentsdb/v1/put/json/:db
...
...
```
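For example, an OpenTSDB-style JSON data point can be written through this endpoint with curl (a sketch assuming a database named `test` already exists and the default `root`/`taosdata` credentials are in use):

```bash
# Write one OpenTSDB JSON data point into database "test" via taosAdapter
curl -u root:taosdata -X POST \
  -d '{"metric":"sys.cpu.usage","timestamp":1648432611249,"value":18.3,"tags":{"host":"web01"}}' \
  http://localhost:6041/opentsdb/v1/put/json/test
```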
### node_exporter
Exporter of hardware and OS metrics exposed by the \*NIX kernel used by Prometheus
- Enable the taosAdapter configuration `node_exporter.enable`
- Set the relevant configuration for node_exporter
- Restart taosAdapter
taosAdapter monitors its memory usage during operation and adjusts its behavior based on two thresholds. Valid values are integers from -1 to 100, interpreted as a percentage of the system's physical memory.
- pauseQueryMemoryThreshold
- pauseAllMemoryThreshold
Stops processing query requests when the pauseQueryMemoryThreshold threshold is exceeded.
HTTP response:
- code 503
- body "query memory exceeds threshold"
Stops processing all write and query requests when the pauseAllMemoryThreshold threshold is exceeded.
HTTP response:
- code 503
- body "memory exceeds threshold"
Resume the corresponding function when the memory falls back below the threshold.
- monitor.collectDuration: monitoring interval; environment variable "TAOS_MONITOR_COLLECT_DURATION" (default value 3s)
- monitor.incgroup: whether it is running in a cgroup (set to true when running in a container); environment variable "TAOS_MONITOR_INCGROUP"
- monitor.pauseAllMemoryThreshold: memory threshold above which inserts and queries are no longer accepted; environment variable "TAOS_MONITOR_PAUSE_ALL_MEMORY_THRESHOLD" (default 80)
- monitor.pauseQueryMemoryThreshold: memory threshold above which queries are no longer accepted; environment variable "TAOS_MONITOR_PAUSE_QUERY_MEMORY_THRESHOLD" (default 70)
You can adjust these thresholds according to your application scenario and operations strategy; it is recommended to use operations monitoring software to monitor the system memory status in a timely manner. A load balancer can also check the running status of taosAdapter through this interface.
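For example, the two thresholds can be set through the environment variables listed above when launching taosAdapter (a sketch using the documented variable names):

```bash
# Pause queries at 70% and all writes/queries at 80% of physical memory
TAOS_MONITOR_PAUSE_QUERY_MEMORY_THRESHOLD=70 \
TAOS_MONITOR_PAUSE_ALL_MEMORY_THRESHOLD=80 \
taosadapter
```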
## taosAdapter Monitoring Metrics
taosAdapter collects http-related metrics, CPU percentage, and memory percentage.
You can also adjust the level of the taosAdapter log output by setting the --logLevel parameter or the environment variable TAOS_ADAPTER_LOG_LEVEL. Valid values are: panic, fatal, error, warn, warning, info, debug and trace.
## How to migrate from older TDengine versions to taosAdapter
In TDengine server 2.2.x.x or earlier, the taosd process contains an embedded http service. As mentioned earlier, taosAdapter is standalone software managed by systemd and has its own process. There are some configuration parameters and behaviors that differ between the two; see the following table.
| # | embedded httpd | taosAdapter | comment |
| --- | --- | --- | --- |
| 2 | httpMaxThreads | n/a | taosAdapter automatically manages thread pools; this parameter is not needed |
| 3 | telegrafUseFieldNum | See the taosAdapter telegraf configuration method | |
| 4 | restfulRowLimit | restfulRowLimit | The embedded httpd outputs 10240 rows of data by default, with a maximum of 102400 allowed. taosAdapter also provides restfulRowLimit but does not impose a limit by default. You can configure it according to the actual scenario. |
| 5 | httpDebugFlag | Not applicable | httpDebugFlag does not work for taosAdapter |
| 6 | httpDBNameMandatory | N/A | taosAdapter requires the database name to be specified in the URL |
taosBenchmark (formerly taosdemo) is a tool for testing the performance of TDengine products. taosBenchmark can test the performance of TDengine's insert, query, and subscription functions and simulate large amounts of data generated by many devices. taosBenchmark can flexibly control the number of databases, super tables, and tag columns, the number and types of data columns, the number of sub-tables, the amount of data per sub-table, the time interval for inserting data, the number of worker threads, whether and how to insert disordered data, and so on. The installer provides taosdemo as a soft link to taosBenchmark for compatibility with past users.
- Installing the official TDengine installer will automatically install taosBenchmark. Please refer to [TDengine installation](/operation/pkg-install) for details.
taosBenchmark supports two configuration methods: [command line arguments](# command line arguments detailed) and a [JSON configuration file](# configuration file arguments detailed). The two methods are mutually exclusive: when using a configuration file, the only allowed command-line parameter is `-f <json file>`, which specifies the configuration file; when running taosBenchmark with command-line arguments to control its behavior, users should use other parameters rather than the `-f` parameter. In addition, taosBenchmark offers a special way of running without any parameters.
taosBenchmark supports complete performance testing of TDengine across three categories of functionality: write, query, and subscribe. These three functions are mutually exclusive, and users can select only one of them each time taosBenchmark runs. Note that the type of functionality to be tested is not configurable when using the command-line configuration method, which can only test write performance. To test the query and subscription performance of TDengine, you must use the configuration file method and specify the function type to test via the `filetype` parameter in the configuration file.
**Make sure that the TDengine cluster is up and running correctly before running taosBenchmark.**
Execute the following commands to quickly experience taosBenchmark's default configuration-based write performance testing of TDengine.
```bash
taosBenchmark
```
When run without parameters, taosBenchmark by default connects to the TDengine cluster specified in `/etc/taos`, creates a database named `test` in TDengine, creates a super table named `meters` under the test database, and creates 10,000 sub-tables under the super table with 10,000 records written to each table. Note that if a `test` database already exists, this command will delete it first and create a new `test` database.
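After the run finishes, the result can be checked from the TDengine CLI, for example (a usage sketch):

```bash
# Count the rows written into the default test.meters super table
taos -s "SELECT COUNT(*) FROM test.meters;"
```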
### Run with command line configuration parameters
The `-f <json file>` argument cannot be used when running taosBenchmark with command-line parameters and controlling its behavior. Users must specify all configuration parameters from the command line. The following is an example of testing taosBenchmark writing performance using the command line approach.
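A command consistent with the description below might look like the following (a sketch assembled from the parameters documented in this section; the exact original example may differ):

```bash
# 100 sub-tables (-t), 200 records per sub-table (-n), parameter-binding insert mode (-I stmt)
taosBenchmark -I stmt -t 100 -n 200
```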
With the above command, `taosBenchmark` will create a database named `test`, create a super table `meters` in it, create 100 sub-tables in the super table, and insert 200 records into each sub-table using parameter binding.
Specify the configuration file to use. This file includes all parameters, and users should not use this parameter together with other command-line parameters. There is no default value.
- **-c/--config-dir <dir\>** :
Specify the directory where the TDengine cluster configuration file is located. The default path is `/etc/taos`.
- **-h/--host <host\>** :
Specify the FQDN of the TDengine server to connect to. The default value is localhost.
- **-P/--port <port\>** :
The port number of the TDengine server to connect to; the default value is 6030.
Insert mode. Options are taosc, rest, stmt, sml, sml-rest, corresponding to normal write, restful interface writing, parameter binding interface writing, schemaless interface writing, RESTful schemaless interface writing (provided by taosAdapter). The default value is taosc.
- **-u/--user <user\>** :
The user name used to connect to the TDengine server. The default is root.
- **-p/--password <passwd\>** :
The password used to connect to the TDengine server. The default value is `taosdata`.
- **-o/--output <file\>** :
Specify the path of the result output file. The default value is `./output.txt`.
- **-T/--thread <threadNum\>** :
The number of threads to insert data. The default is 8.
Enables interleaved insertion mode and specifies the number of rows of data to be inserted into each child table. Interleaved insertion mode means inserting the number of rows specified by this parameter into each sub-table and repeating the process until all sub-tables have been inserted. The default value is 0, i.e., data is inserted into one sub-table before the next sub-table is inserted.
Specify the insert interval in `ms` for interleaved insert mode. The default value is 0. It only works if `-B/--interlace-rows` is greater than 0. That means that after inserting interlaced rows for each child table, the data insertion with multiple threads will wait for the interval specified by this value before proceeding to the next round of writes.
- **-r/--rec-per-req <rowNum\>** :
The number of rows of records written per request to TDengine. The default value is 30000.
- **-t/--tables <tableNum\>** :
Specify the number of sub-tables. The default is 10000.
- **-S/--timestampstep <stepLength\>** :
The timestamp step for inserting data into each sub-table, in ms. The default value is 1.
- **-n/--records <recordNum\>** :
The number of records inserted into each sub-table. The default value is 10000.
- **-d/--database <dbName\>** :
The name of the database to use. The default value is `test`.
Specify the number of data columns in the super table. If both this parameter and `-b/--data-type` are set, the final number of columns is the greater of the two. If the number specified by this parameter is greater than the number of columns specified by `-b/--data-type`, the unspecified column types default to INT; for example, `-l 5 -b float,double` results in the columns `FLOAT,DOUBLE,INT,INT,INT`. If the number of columns specified is less than or equal to the number of columns specified by `-b/--data-type`, the result is the columns and types specified by `-b/--data-type`; for example, `-l 3 -b float,double,float,bigint` results in the columns `FLOAT,DOUBLE,FLOAT,BIGINT`.
- **-A/--tag-type <tagType\>** :
The tag column types of the super table. nchar and binary types can both set the length, for example:
```
taosBenchmark -A INT,DOUBLE,NCHAR,BINARY(16)
```
If users do not set tag types, the default is two tags, whose types are INT and BINARY(16).
Note: In some shells, such as bash, "()" needs to be escaped, so the above command should be:
```
taosBenchmark -A INT,DOUBLE,NCHAR,BINARY\(16\)
```
- **-w/--binwidth <length\>** :
Specify the default length for nchar and binary types. The default value is 64.
- **-m/--table-prefix <tablePrefix\>** :
The prefix of the sub-table names. The default value is "d".
- **-E/--escape-character** :
A switch parameter specifying whether to use escape characters in super table and sub-table names. Escape characters are not used by default.
- **-C/--chinese** :
A switch parameter specifying whether nchar and binary use Unicode Chinese characters. They are not used by default.
This parameter indicates that taosBenchmark will create only normal tables instead of super tables. The default value is false. It can be used if the insert mode is taosc, stmt, or rest.
This parameter indicates writing data with random values. The default is false. If users use this parameter, taosBenchmark will generate the random values. For tag/data columns of numeric type, the value is a random value within the range of values of that type. For NCHAR and BINARY type tag columns/data columns, the value is the random string within the specified length range.
- **-x/--aggr-func** :
A switch parameter indicating that aggregate functions are queried after insertion. The default value is false.
- **-y/--answer-yes** :
A switch parameter that requires the user to confirm at the prompt before continuing. The default value is false.
- **-O/--disorder <Percentage\>** :
Specify the percentage probability of disordered data, with a value range of [0,50]. The default is 0, i.e., there is no disordered data.
Specify the timestamp range for the disordered data. The resulting disordered timestamp is the timestamp that would otherwise be used minus a random value in this range. Valid only if the percentage of disordered data specified by `-O/--disorder` is greater than 0.
- **-F/--prepare_rand <Num\>** :
Specify the number of unique values in the generated random data. A value of 1 means that all data are the same. The default value is 10000.
- **-a/--replica <replicaNum\>** :
Specify the number of replicas when creating the database. The default value is 1.
- **-V/--version** :
Show version information and exit. Users should not use it with other parameters.
- **-?/--help** :
Show help information and exit. Users should not use it with other parameters.
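Putting several of the parameters above together, a more specific write test could look like this (a sketch using only the documented options; the values are arbitrary):

```bash
# 8 insert threads (-T), 100 sub-tables (-t), 1000 rows per sub-table (-n),
# 1000 ms timestamp step (-S), database "mydb" (-d), 5000 rows per write request (-r)
taosBenchmark -T 8 -t 100 -n 1000 -S 1000 -d mydb -r 5000
```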
## Configuration file parameters in detail
### General configuration parameters
The parameters listed in this section apply to all function modes.
- **filetype** : The function to be tested, with optional values `insert`, `query` and `subscribe`. These correspond to the insert, query, and subscribe functions, respectively. Users can specify only one of these in each configuration file.
- **cfgdir** : Specify the directory of the TDengine cluster configuration file. The default path is /etc/taos.
The parameters related to database creation are configured in `dbinfo` in the JSON configuration file, as follows. These parameters correspond to the database parameters specified when executing `create database` in TDengine.
- **name** : Specify the name of the database.
- **drop** : Indicate whether to delete the database before inserting. The default is true.
- **replica** : Specify the number of replicas when creating the database.
- **days** : Specify the time span for storing data in a single data file. The default is 10.
- **cache** : Specify the size of the cache blocks in MB. The default value is 16.
- **blocks** : Specify the number of cache blocks in each vnode. The default is 6.
- **precision** : Specify the database time precision. The default value is "ms".
- **keep** : Specify the number of days to keep the data. The default value is 3650.
- **minRows** : Specify the minimum number of records in a file block. The default value is 100.
- **maxRows** : Specify the maximum number of records in a file block. The default value is 4096.
- **comp** : Specify the file compression level. The default value is 2.
- **auto_create_table** : Effective only when insert_mode is taosc, rest, or stmt, and childtable_exists is "no". "yes" means taosBenchmark will automatically create non-existent tables when inserting data; "no" means that taosBenchmark will create all tables before inserting.
- **batch_create_tbl_num** : The number of tables created per batch when creating sub-tables; the default is 10. Note: the actual number of batches may not match this value; when the executed SQL statement exceeds the maximum supported length, it is automatically truncated and re-executed to continue creating tables.
- **data_source** : Specify the source of the generated data. The default is data randomly generated by taosBenchmark. Users can configure it as "rand" or "sample". When "sample" is used, taosBenchmark will use the data in the file specified by the `sample_file` parameter.
- **non_stop_mode** : Specify whether to keep writing. If "yes", insert_rows is disabled, and writing does not stop until Ctrl + C stops the program. The default value is "no", i.e., taosBenchmark stops writing after the specified number of rows is written. Note: even in continuous write mode, insert_rows must be configured as a non-zero positive integer.
- **tcp_transfer** : The communication protocol in telnet mode. It only takes effect when insert_mode is sml-rest and line_protocol is telnet. If not configured, the default protocol is http.
- **insert_rows** : The number of rows inserted per sub-table. The default is 0.
- **childtable_offset** : Effective only if childtable_exists is yes; specifies the offset when fetching the list of child tables from the super table, i.e., which child table to start from.
- **childtable_limit** : Effective only when childtable_exists is yes; specifies the upper limit when fetching the list of child tables from the super table.
- **interlace_rows** : Enables interleaved insertion mode and specifies the number of rows of data to be inserted into each child table at a time. Interleaved insertion mode means inserting the number of rows specified by this parameter into each sub-table and repeating the process until all sub-tables have been inserted. The default value is 0, i.e., data is inserted into one sub-table completely before the next sub-table is inserted.
- **insert_interval** : Specifies the insertion interval in ms for interleaved insertion mode. The default value is 0. It only works if `interlace_rows` is greater than 0. After inserting interlaced rows for each child table, the data insertion thread will wait for the interval specified by this value before proceeding to the next round of writes.
- **partial_col_num** : If this value is a positive number n, data is written only to the first n columns; this takes effect only when insert_mode is taosc or rest. If n is 0, data is written to all columns.
- **disorder_ratio** : Specifies the percentage probability of disordered data, in the value range [0,50]. The default is 0, which means there is no disordered data.
- **disorder_range** : Specifies the timestamp fallback range for the disordered data. The generated disordered timestamp is the timestamp that would be used in the non-disordered case minus a random value in this range. Valid only if the percentage of disordered data specified by `disorder_ratio` is greater than 0.
- **timestamp_step** : The timestamp step for inserting data into each child table, in units consistent with the `precision` of the database; the default value is 1.
- **start_timestamp** : The starting timestamp of each sub-table. The default value is now.
- **sample_format** : The type of the sample data file; currently only "csv" is supported.
- **sample_file** : Specify a CSV format file as the data source. It only works when data_source is "sample". If the number of rows in the CSV file is less than or equal to prepared_rand, taosBenchmark will read the CSV file data cyclically until it reaches prepared_rand rows; otherwise, taosBenchmark will read only prepared_rand rows. The final number of rows of data generated is the smaller of the two.
- **use_sample_ts** : Effective only when data_source is "sample"; indicates whether the CSV file specified by sample_file contains a first timestamp column. The default is no. If set to yes, the first column of the CSV file is used as the timestamp. Since timestamps within the same sub-table cannot be repeated, the amount of data generated depends on the number of data rows in the CSV file, and insert_rows is ignored.
- **tags_file** : Only works when insert_mode is taosc or rest. The final tag values are related to childtable_count. If the number of tag data rows in the CSV file is smaller than the given number of child tables, taosBenchmark will read the CSV file data cyclically until the number of child tables specified by childtable_count is generated; otherwise, taosBenchmark will read only childtable_count rows of tag data. The final number of child tables generated is the smaller of the two.
The configuration parameters for specifying super table tag columns and data columns are in `tags` and `columns` in `super_tables`, respectively.
- **type** : Specify the column type. For available values, please refer to the data types supported by TDengine.
Note: The JSON data type is special and can only be used for tags. When using the JSON type as a tag, there must be exactly one tag. In this case, `count` and `len` represent the number of key-value pairs within the JSON tag and the length of the value of each key-value pair, respectively. The value is a string by default.
- **len** : Specifies the length of this data type, valid for NCHAR, BINARY, and JSON data types. If this parameter is configured for other data types, a value of 0 means that the column is always written with a null value; any other value is ignored.
- **name** : The name of the column. If used together with count, e.g. "name": "current", "count": 3, then the names of the 3 columns are current, current_2, and current_3.
- **min** : The minimum value of the column/tag of this data type.
- **max** : The maximum value of the column/tag of this data type.
- **values** : The value domain of nchar/binary columns/tags; values will be chosen randomly from this set.
#### Insertion behavior configuration parameters
- **thread_count** : The number of threads to insert data. The default is 8.
- **create_table_thread_count** : The number of threads for creating tables. The default is 8.
- **connection_pool_size** : The number of pre-established connections to the TDengine server. If not configured, it is the same as the number of threads specified.
- **result_file** : The path to the result output file. The default value is ./output.txt.
- **interlace_rows** : Enables interleaved insertion mode and specifies the number of rows of data to be inserted into each child table at a time. Interleaved insertion mode means inserting the number of rows specified by this parameter into each sub-table and repeating the process until all sub-tables are inserted. The default value is 0, which means that data will be inserted into the following child table only after data is inserted into one child table.
This parameter can also be configured in `super_tables`, and if so, the configuration in `super_tables` takes precedence and overrides the global setting.
Specifies the insertion interval in ms for interleaved insertion mode. The default value is 0. Only works if `-B/--interlace-rows` is greater than 0. It means that after inserting interlace rows for each child table, the data insertion thread will wait for the interval specified by this value before proceeding to the next round of writes.
This parameter can also be configured in `super_tables`, and if configured, the configuration in `super_tables` takes high priority, overriding the global setting.
The number of rows of data written per request to TDengine; the default value is 30000. If it is set too large, the TDengine client driver will return an error message, and you will need to lower this parameter to meet the write requirements.
- **prepare_rand** : The number of unique values in the generated random data. A value of 1 means that all data are the same. The default value is 10000.
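To illustrate how the general, `dbinfo`, and `super_tables` parameters above fit together, the following is a minimal, illustrative skeleton of an insert-scenario configuration file written and run from the shell. The exact nesting (the `databases` array, `childtable_prefix`, `childtable_count`, and the connection keys) is an assumption based on the parameter names described above; consult the example configuration files shipped with taosBenchmark for the authoritative format.

```bash
# Write a minimal insert-scenario JSON configuration and run it (illustrative sketch only)
cat > insert-demo.json <<'EOF'
{
  "filetype": "insert",
  "cfgdir": "/etc/taos",
  "host": "127.0.0.1",
  "port": 6030,
  "user": "root",
  "password": "taosdata",
  "thread_count": 8,
  "create_table_thread_count": 8,
  "result_file": "./output.txt",
  "databases": [{
    "dbinfo": { "name": "test", "drop": "yes", "precision": "ms" },
    "super_tables": [{
      "name": "meters",
      "childtable_count": 100,
      "childtable_prefix": "d",
      "insert_mode": "taosc",
      "insert_rows": 1000,
      "timestamp_step": 1000,
      "start_timestamp": "2022-01-01 00:00:00.000",
      "columns": [{ "type": "FLOAT", "name": "current" }, { "type": "INT", "name": "voltage" }],
      "tags": [{ "type": "INT", "name": "groupid" }, { "type": "BINARY", "len": 16, "name": "location" }]
    }]
  }]
}
EOF
taosBenchmark -f insert-demo.json
```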
`filetype` must be set to `query` in the query scenario. See [General configuration parameters](#general-configuration-parameters) for details of this parameter and other general parameters.
#### Configuration parameters for executing the specified query statement
The configuration parameters for querying the sub-tables or the normal tables are set in `specified_table_query`.
- **query_interval** : The query interval in seconds. The default value is 0.
- **threads** : The number of threads to execute the query SQL. The default value is 1.
- **sqls** :
  - **sql** : The SQL command to be executed; required.
  - **result** : The file to save the query result. If it is unspecified, taosBenchmark will not save the result.
#### Configuration parameters for querying super tables
The configuration parameters of the super table query are set in `super_table_query`.
- **stblname** : Specify the name of the super table to be queried; required.
- **query_interval** : The query interval in seconds. The default value is 0.
- **threads** : The number of threads to execute the query SQL. The default value is 1.
- **sql** : The SQL command to be executed. For a super table query SQL, keep "xxxx" in the SQL command; the program will automatically replace it with all the sub-table names of the super table.
- **result** : The file to save the query result. If not specified, taosBenchmark will not save the result.
`filetype` must be set to `subscribe` in the subscription scenario. See [General configuration parameters](#general-configuration-parameters) for details of this parameter and other general parameters.
#### Configuration parameters for executing the specified subscription statement
The configuration parameters for subscribing to a sub-table or a normal table are set in `specified_table_query`.
- **threads** : The number of threads to execute SQL. The default is 1.
- **interval** : The time interval for executing the subscription, in seconds. The default is 0.
- **resubAfterConsume** : "yes" means cancel the previous subscription and then subscribe again; "no" means continue the previous subscription. The default value is "no".
- **sqls** :
  - **sql** : The SQL command to be executed; required.
  - **result** : The file to save the query result. If not specified, it will not be saved.
#### Configuration parameters for subscribing to super tables
The configuration parameters for subscribing to a super table are set in `super_table_query`.
- **stblname** : The name of the super table to subscribe to; required.
- **threads** : The number of threads to execute SQL. The default is 1.
- **interval** : The time interval for executing the subscription, in seconds. The default is 0.
- **resubAfterConsume** : "yes" means cancel the previous subscription and then subscribe again; "no" means continue the previous subscription. The default value is "no".
- **sql** : The SQL command to be executed; required. For a super table subscription SQL, keep "xxxx" in the SQL command; the program will automatically replace it with all the sub-table names of the super table.
- **result** : The file to save the query result. If not specified, it will not be saved.
taosdump is a tool application that supports backing up data from a running TDengine cluster and restoring the backed up data to the same or another running TDengine cluster.
taosdump can back up a database, a super table, or a normal table as a logical data unit or backup data records in the database, super tables, and normal tables. When using taosdump, you can specify the directory path for data backup. If you do not specify a directory, taosdump will back up the data to the current directory by default.
If the specified location already has data files, taosdump will prompt the user and exit immediately to avoid overwriting data. This means that the same path can only be used for one backup.
taosdump is a logical backup tool; users should not use it to back up raw data, environment settings, hardware information, server configuration, or cluster topology. taosdump uses [Apache AVRO](https://avro.apache.org/) as the data file format to store backup data.
- Install the official taosTools installer. Please find taosTools on the [All download links](https://www.taosdata.com/all-downloads) page, then download and install it.
1. Back up all databases: specify the `-A` or `--all-databases` parameter.
2. Back up multiple specified databases: use the `-D db1,db2,...` parameter.
3. Back up some super or normal tables in the specified database: use the `dbname stbname1 stbname2 tbname1 tbname2 ...` parameters. Note that the first parameter of this input sequence is the database name, and only one database is supported. The second and subsequent parameters are the names of super or normal tables in that database, separated by spaces.
4. Back up the system log database: TDengine clusters usually contain a system database named `log`. The data in this database is generated by TDengine's own operation, and taosdump will not back up the log database by default. If users need to back up the log database, they can use the `-a` or `--allow-sys` command-line parameter.
5. Loose mode backup: taosdump version 1.4.1 onwards provides the `-n` and `-L` parameters for backing up data without using escape characters ("loose" mode), which can reduce backup time and the backup data footprint if table names, column names, and tag names do not use escape characters. If you are unsure whether the conditions for using `-n` and `-L` are met, please use the default parameters for "strict" mode backup. See the [official documentation](/taos-sql/escape) for a description of escaped characters.
- taosdump versions after 1.4.1 provide the `-I` argument for parsing Avro file schema and data. If users specify `-s`, taosdump will only parse the schema.
- Backups with taosdump versions after 1.4.2 use the batch count specified by the `-B` parameter; the default value is 16384. If "Error actual dump ... batch ..." occurs in some environments due to low network speed or disk performance, you can try setting the `-B` parameter to a smaller value.
Restore the data files from the specified path: use the `-i` parameter plus the path to the data files. You should not use the same directory to back up different data sets, nor back up the same data set multiple times in the same path; otherwise, the backup data will be overwritten or backed up multiple times.
taosdump internally uses TDengine stmt binding API for writing recovery data and currently uses 16384 as one write batch for better data recovery performance. If there are more columns in the backup data, it may cause a "WAL size exceeds limit" error. You can try to adjust to a smaller value by using the `-B` parameter.
:::
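A few typical invocations of the parameters described above (a sketch; `./dump_dir` is a hypothetical backup directory created by a previous run):

```bash
# Back up all databases into the current directory (the default output location)
taosdump -A
# Back up only two named databases
taosdump -D db1,db2
# Restore from a previously created backup directory (hypothetical path)
taosdump -i ./dump_dir
```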
## Detailed command line parameter list
The following is a detailed list of taosdump command line arguments.
The TDengine command-line application (hereafter referred to as TDengine CLI) is the cleanest and most common way for users to manipulate and interact with TDengine instances.
If executed on the TDengine server-side, there is no need for additional installation as it is already installed automatically. To run on the non-TDengine server-side, the TDengine client driver needs to be installed. For details, please refer to [connector](/reference/connector/).
## Execution
To access the TDengine CLI, you can execute `taos` from a Linux terminal or Windows terminal.
TDengine will display a welcome message and version information if the connection to the service succeeds. If it fails, TDengine will print an error message (see [FAQ](/train-faq/faq) for how to resolve a terminal connection failure to the server). The TDengine CLI prompt symbol is as follows:
```cmd
taos>
```
After entering the CLI, you can execute various SQL statements, including inserts, queries, and administrative commands.
## Execute SQL scripts
Run SQL command scripts in the TDengine CLI via the `source` command.
```sql
taos> source <filename>;
```
## Modify display character width online
Users can adjust the character display width in TDengine CLI with the following command:
```sql
taos> SET MAX_BINARY_DISPLAY_WIDTH <nn>;
```
If the displayed content ends with `...`, the content has been truncated; you can use this command to change the display character width and show the full content.
## Command Line Parameters
You can change the behavior of TDengine CLI by configuring command-line parameters. The following command-line arguments are commonly used.
-c, --config-dir: Specify the configuration file directory. The default is `/etc/taos`, and the default name of the configuration file in this directory is taos.cfg
-C, --dump-config: Print the configuration parameters of taos.cfg in the directory specified by -c
-d, --database=DATABASE: Specify the database to use when connecting to the server
-D, --directory=DIRECTORY: Import the SQL script file in the specified path
-f, --file=FILE: Execute the SQL script file in non-interactive mode
-k, --check=CHECK: Specify the table to be checked
-l, --pktlen=PKTLEN: Test package size to be used for network testing
-n, --netrole=NETROLE: Test scope for the network connection test. The default is `startup`. The value can be `client`, `server`, `rpc`, `startup`, `sync`, `speed`, or `fqdn`
-r, --raw-time: Output the time as a uint64_t value
-s, --commands=COMMAND: Execute SQL commands in non-interactive mode
-S, --pkttype=PKTTYPE: Specify the packet type used for network testing. The default is TCP. TCP or UDP can be specified only when `netrole` is set to `speed`
-T, --thread=THREADNUM: The number of threads to import data in multi-threaded mode
-z, --timezone=TIMEZONE: Specify time zone. Default is local
-V, --version: Print out the current version number
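For example, the documented `-s` and `-f` options run SQL without entering the interactive prompt (a usage sketch; `dump.sql` is a hypothetical script file):

```bash
# Run one SQL command non-interactively
taos -s "SHOW DATABASES;"
# Execute a SQL script file in non-interactive mode
taos -f dump.sql
```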
- You can use the up and down arrow keys to browse the history of commands previously entered
- Change user password: use the `alter user` command in the TDengine CLI. The default password is `taosdata`.
- Press Ctrl+C to stop a query in progress
- Execute `RESET QUERY CACHE` to clear the local cache of table schemas
- Execute SQL statements in batches: you can store a series of SQL statements (ending with `;`, one line per SQL statement) in a file and execute the command `source <file-name>` in the TDengine CLI to run all SQL statements in that file automatically
| /var/lib/taos | TDengine's default data file directory. The location can be changed via the configuration file. |
| /var/log/taos | TDengine's default log file directory. The location can be changed via the configuration file. |
## Executable files
All executable files of TDengine are stored in the _/usr/local/taos/bin_ directory by default. These include:
- _taosd_: TDengine server-side executable files
- _taos_: TDengine shell executable
- _taosdump_: data import and export tool
- _taosBenchmark_: TDengine testing tool
- _remove.sh_: script to uninstall TDengine. Please execute it carefully. It is linked to the **rmtaos** command in the /usr/bin directory. It will remove the TDengine installation directory /usr/local/taos but keep /etc/taos, /var/lib/taos, and /var/log/taos
- _taosadapter_: server-side executable that provides RESTful services and accepts writing requests from a variety of other software
- _tarbitrator_: provides arbitration for two-node cluster deployments
- _run_taosd_and_taosadapter.sh_: script to start both taosd and taosAdapter
- _TDinsight.sh_: script to download TDinsight and install it
- _set_core.sh_: script for setting up the system to generate core dump files for easy debugging
- _taosd-dump-cfg.gdb_: gdb script to facilitate debugging of taosd