Unverified commit 43ad67a6, authored by arielyangpan, committed by GitHub

Merge branch 'develop' into docs/typo-collection

......@@ -11,7 +11,7 @@
# Introduction to TDengine
TDengine is a high-performance, distributed time-series database with SQL support. Beyond its time-series database features, it also provides caching, data subscription, stream computing and other functions to minimize the complexity of development and operation, and its core code, including the cluster feature, is fully open source (under the AGPL v3.0 license). Compared with other time-series databases, TDengine has the following characteristics:
TDengine is a high-performance, distributed time-series database (Time-Series Database) with SQL support. Beyond its time-series database features, it also provides caching, data subscription, stream computing and other functions to minimize the complexity of development and operation, and its core code, including the cluster feature, is fully open source (under the AGPL v3.0 license). Compared with other time-series databases, TDengine has the following characteristics:
- **High performance**: Thanks to its innovative storage engine design, TDengine is more than 10 times faster than general-purpose databases for both data ingestion and queries, far exceeds other time-series databases, and also saves a great deal of storage space.
......
......@@ -19,25 +19,6 @@ MESSAGE(STATUS "Project binary files output path: " ${PROJECT_BINARY_DIR})
MESSAGE(STATUS "Project executable files output path: " ${EXECUTABLE_OUTPUT_PATH})
MESSAGE(STATUS "Project library files output path: " ${LIBRARY_OUTPUT_PATH})
find_package(Git QUIET)
if(GIT_FOUND AND EXISTS "${TD_COMMUNITY_DIR}/.git")
# Update submodules as needed
option(GIT_SUBMODULE "Check submodules during build" ON)
if(GIT_SUBMODULE)
message(STATUS "Submodule update")
execute_process(COMMAND ${GIT_EXECUTABLE} submodule update --init --recursive
WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}
RESULT_VARIABLE GIT_SUBMOD_RESULT)
if(NOT GIT_SUBMOD_RESULT EQUAL "0")
message(WARNING "git submodule update --init --recursive failed with ${GIT_SUBMOD_RESULT}, please checkout submodules")
endif()
endif()
endif()
if(NOT EXISTS "${TD_COMMUNITY_DIR}/deps/jemalloc/Makefile.in")
message(WARNING "The submodules were not downloaded! GIT_SUBMODULE was turned off or failed. Please update submodules manually if you need build them.")
endif()
IF (TD_BUILD_JDBC)
FIND_PROGRAM(TD_MVN_INSTALLED mvn)
IF (TD_MVN_INSTALLED)
......
......@@ -4,9 +4,13 @@ title: Product Introduction
toc_max_heading_level: 2
---
## Major Features of TDengine
TDengine is a high-performance, distributed time-series database with SQL support, and its core code, including the cluster feature, is fully open source (under the AGPL v3.0 license). TDengine can be widely used in IoT, Industrial Internet, Internet of Vehicles, IT operations, finance and other fields. Besides the core time-series database features, TDengine also provides [caching](/develop/cache/), [data subscription](/develop/subscribe), [stream computing](/develop/continuous-query) and other functions required by a big data platform, minimizing the complexity of development and operation.
TDengine is a high-performance, distributed time-series database with SQL support, and its core code, including the cluster feature, is fully open source (under the AGPL v3.0 license). TDengine can be widely used in IoT, Industrial Internet, Internet of Vehicles, IT operations, finance and other fields. Besides the core time-series database features, TDengine also provides [caching](/develop/cache/), [data subscription](/develop/subscribe), [stream computing](/develop/continuous-query) and other functions required by a big data platform, minimizing the complexity of development and operation. The major features are as follows:
This chapter introduces the major features, competitive advantages, suited scenarios, and benchmark comparisons with other databases, to give you a high-level picture of TDengine.
## Major Features
The major features of TDengine are as follows:
1. High-speed data writing. Besides [SQL writing](/develop/insert-data/sql-writing), it also supports [schemaless writing](/reference/schemaless/), covering the [InfluxDB LINE protocol](/develop/insert-data/influxdb-line), [OpenTSDB Telnet](/develop/insert-data/opentsdb-telnet), [OpenTSDB JSON](/develop/insert-data/opentsdb-json) and other protocols;
2. Third-party data collection tools such as [Telegraf](/third-party/telegraf), [Prometheus](/third-party/prometheus), [StatsD](/third-party/statsd), [collectd](/third-party/collectd), [icinga2](/third-party/icinga2), [TCollector](/third-party/tcollector), [EMQ](/third-party/emq-broker) and [HiveMQ](/third-party/hive-mq-broker) can write data into TDengine after simple configuration, without a single line of code;
......@@ -26,7 +30,7 @@ TDengine is a high-performance, distributed time-series database with SQL suppo
For more minor features, please read through the whole documentation.
## Major Highlights of TDengine
## Competitive Advantages
Because TDengine makes full use of [the characteristics of time-series data](https://www.taosdata.com/blog/2019/07/09/105.html), such as being structured, requiring no transactions, rarely being deleted or updated, and being write-heavy but read-light, it designed brand-new storage and computing engines dedicated to time-series data. Compared with other time-series databases, TDengine therefore has the following characteristics:
......@@ -53,7 +57,7 @@ TDengine is a high-performance, distributed time-series database with SQL suppo
3. Because of its all-in-one design, system complexity is reduced, which lowers development cost
4. Because operation and maintenance are simple, operating costs can be greatly reduced
## Technical Ecosystem of TDengine
## Technical Ecosystem
In a complete time-series big data platform, TDengine plays a role as follows:
......@@ -111,7 +115,7 @@ TDengine is a high-performance, distributed time-series database with SQL suppo
| Controllable operation and maintenance learning cost | | | √ | Same as above. |
| A large talent pool in the market | √ | | | As a new-generation product, TDengine still has a limited number of experienced users in the talent market. However, the learning cost is low, and as the vendor we also provide training and support services. |
## Benchmark Comparison between TDengine and Other Databases
## Comparison with Other Databases
- [Comparing InfluxDB and TDengine with InfluxDB's open-source performance benchmark tool](https://www.taosdata.com/blog/2020/01/13/1105.html)
- [Benchmark comparison between TDengine and OpenTSDB](https://www.taosdata.com/blog/2019/08/21/621.html)
......
......@@ -137,7 +137,18 @@ TDengine recommends using the name of the data collection point (like D1001 in
A STable (super table) is the set of a specific type of data collection point. Data collection points of the same type have exactly the same table structure, but each table (data collection point) has its own static attributes (tags). To describe a STable (the set of a specific type of data collection point), besides defining the table structure for the collected metrics, the schema of its tags must also be defined. The data type of a tag can be integer, float or string; there can be multiple tags, and they can be added, deleted or modified afterward. If the whole system has N different types of data collection points, N STables need to be created.
In the design of TDengine, **a table represents a specific data collection point, and a STable represents a set of data collection points of the same type**. When creating a table for a specific data collection point, the user takes the definition of a STable as a template and specifies the tag values of that specific collection point (table). Compared with a traditional relational database, a table (one data collection point) carries static tags, and these tags can be added, deleted and modified afterward. The relationship between a STable and the subtables created based on it is as follows:
In the design of TDengine, **a table represents a specific data collection point, and a STable represents a set of data collection points of the same type**.
## Subtable
When creating a table for a specific data collection point, the user can take the definition of a STable as a template and specify the tag values of that specific collection point (table) to create it. **A table created from a STable is called a subtable.** The differences between a regular table and a subtable are:
1. A subtable is a table, so all SQL operations on a regular table can be executed on a subtable.
2. A subtable extends a regular table: it carries static tags, and these tags can be added, deleted and modified afterward, while a regular table has none.
3. A subtable always belongs to exactly one STable, while a regular table does not belong to any STable.
4. A regular table cannot be converted into a subtable, and vice versa.
The relationship between a STable and the subtables created based on it is as follows:
1. A STable contains multiple subtables that share the same metric schema but carry different tag values.
2. The schema of the data or tags cannot be adjusted through a subtable; schema changes to a STable take effect immediately for all its subtables.
......@@ -145,6 +156,8 @@ TDengine recommends using the name of the data collection point (like D1001 in
Queries can run on a table or on a STable. For a query on a STable, TDengine treats the data in all its subtables as one data set: it first finds the tables that satisfy the tag filter conditions, then scans only the time-series data of those tables to perform the aggregation. This greatly reduces the data set to be scanned and thus significantly improves query performance. In essence, through its support of STable queries, TDengine achieves efficient aggregation over multiple data collection points of the same type.
TDengine recommends creating the table for a data collection point via a STable rather than as a regular table.
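The following is a hedged sketch (not part of the original page) of this recommendation: it creates a STable and then a subtable from it with the Python connector; the database, table and tag names are illustrative assumptions.

```python
import taos

conn = taos.connect(host="localhost", user="root", password="taosdata")
conn.execute("CREATE DATABASE IF NOT EXISTS power")
# A STable defines the metric schema plus the tag schema.
conn.execute(
    "CREATE STABLE IF NOT EXISTS power.meters "
    "(ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) "
    "TAGS (location BINARY(64), groupid INT)"
)
# A subtable is created from the STable template with concrete tag values.
conn.execute(
    "CREATE TABLE IF NOT EXISTS power.d1001 "
    "USING power.meters TAGS ('Beijing.Chaoyang', 2)"
)
conn.close()
```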
## Database
A database is a collection of tables. TDengine allows one running instance to have multiple databases, and each database can be configured with its own storage strategy. Different types of data collection points often have different data characteristics, such as data collection rate, retention period, number of replicas, data block size, and whether updates are allowed. So that TDengine can work at maximum efficiency in every scenario, TDengine recommends creating STables with different data characteristics in different databases.
......
label: Insert Data
link:
type: generated-index
slug: /insert-data/
description: "TDengine 支持多种写入协议,包括 SQL,InfluxDB Line 协议, OpenTSDB Telnet 协议,OpenTSDB JSON 格式协议。数据可以单条插入,也可以批量插入,可以插入一个数据采集点的数据,也可以同时插入多个数据采集点的数据。同时,TDengine 支持多线程插入,支持时间乱序数据插入,也支持历史数据插入。InfluxDB Line 协议、OpenTSDB Telnet 协议和 OpenTSDB JSON 格式协议是 TDengine 支持的三种无模式写入协议。使用无模式方式写入无需提前创建超级表和子表,并且引擎能自适用数据对表结构做调整。"
label: Insert Data
\ No newline at end of file
---
title: Insert Data
---
TDengine supports multiple writing protocols, including SQL, the InfluxDB Line protocol, the OpenTSDB Telnet protocol, and the OpenTSDB JSON protocol. Data can be inserted row by row or in batches, for a single data collection point or for multiple data collection points at once. TDengine also supports multi-threaded insertion, out-of-order data insertion, and historical data insertion. The InfluxDB Line protocol, OpenTSDB Telnet protocol and OpenTSDB JSON protocol are the three schemaless writing protocols supported by TDengine. With schemaless writing there is no need to create STables and subtables in advance, and the engine automatically adapts the table structure to the data.
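As a hedged illustration of SQL writing (not part of the original page), the sketch below inserts one row and then a batch covering two data collection points with the Python connector; the database and tables are assumed to exist already:

```python
import taos

conn = taos.connect(host="localhost")  # assumes a local server with default credentials
conn.execute("USE power")
# Insert a single row into one data collection point (subtable d1001)
conn.execute("INSERT INTO d1001 VALUES (NOW, 10.3, 219, 0.31)")
# One statement can batch rows for multiple data collection points
conn.execute(
    "INSERT INTO d1001 VALUES ('2018-10-03 14:38:05.000', 10.2, 220, 0.23) "
    "d1002 VALUES ('2018-10-03 14:38:16.650', 10.3, 218, 0.25)"
)
conn.close()
```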
```mdx-code-block
import DocCardList from '@theme/DocCardList';
import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
<DocCardList items={useCurrentSidebarCategory().items}/>
```
\ No newline at end of file
label: Developer Guide
link:
type: generated-index
slug: /develop
description: "开始指南是对开发者友好的使用教程,既包括数据建模、写入、查询等基础功能的使用,也包括数据订阅、连续查询等高级功能的使用。对于每个主题,都配有各编程语言的连接器的示例代码,方便开发者快速上手。如果想更深入地了解各连接器的使用,请阅读连接器参考指南。"
label: Developer Guide
\ No newline at end of file
---
title: Developer Guide
---
The Developer Guide is a developer-friendly tutorial covering both basic functions such as data modeling, writing and querying, and advanced functions such as data subscription and continuous query. Each topic comes with connector sample code in multiple programming languages to help developers get started quickly. To learn more about using each connector, please read the Connector Reference Guide.
```mdx-code-block
import DocCardList from '@theme/DocCardList';
import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
<DocCardList items={useCurrentSidebarCategory().items}/>
```
\ No newline at end of file
label: Cluster Management
link:
type: generated-index
slug: /cluster/
description: "TDengine支持以集群方式部署,以提升系统的处理能力和高可用性。TDengine集群支持任意数据的多副本从而提升高可用性,并自动实现负载均衡。同时TDengine集群具有很好的横向扩展能力以处理更多的数据采集点和更大的数据量。"
keywords:
[
cluster,
high availability,
load balancing,
horizontal scaling
]
---
title: Cluster Management
---
TDengine can be deployed as a cluster to increase processing capacity and availability. A TDengine cluster supports multiple replicas of any data to improve availability and performs load balancing automatically. A TDengine cluster also scales out well to handle more data collection points and larger data volumes.
```mdx-code-block
import DocCardList from '@theme/DocCardList';
import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
<DocCardList items={useCurrentSidebarCategory().items}/>
```
\ No newline at end of file
......@@ -19,7 +19,23 @@ CREATE DATABASE [IF NOT EXISTS] db_name [KEEP keep] [DAYS days] [UPDATE 1];
4. For more on the usage of the UPDATE parameter, please refer to the [FAQ](/train-faq/faq)
3. The maximum length of a database name is 33 characters;
4. The maximum length of a single SQL statement is 65480 characters;
5. There are more database-related configuration parameters, such as cache, blocks, days, keep, minRows, maxRows, wal, fsync, update, cacheLast, replica, quorum, maxVgroupsPerDb, ctime, comp, prec; for details please refer to the [Configuration Parameters](/reference/config/) chapter.
5. The parameters available when creating a database are:
- cache: [Description](/reference/config/#cache)
- blocks: [Description](/reference/config/#blocks)
- days: [Description](/reference/config/#days)
- keep: [Description](/reference/config/#keep)
- minRows: [Description](/reference/config/#minrows)
- maxRows: [Description](/reference/config/#maxrows)
- wal: [Description](/reference/config/#wallevel)
- fsync: [Description](/reference/config/#fsync)
- update: [Description](/reference/config/#update)
- cacheLast: [Description](/reference/config/#cachelast)
- replica: [Description](/reference/config/#replica)
- quorum: [Description](/reference/config/#quorum)
- maxVgroupsPerDb: [Description](/reference/config/#maxvgroupsperdb)
- comp: [Description](/reference/config/#comp)
- precision: [Description](/reference/config/#precision)
6. Please note that all the parameters listed above can be set in the configuration file `taos.cfg` as the defaults used when creating a database; parameters explicitly specified in `create database` override the settings in the configuration file (see the sketch after this note).
:::
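A hedged sketch of the above (not from the original page): creating a database with several of these parameters set explicitly, so they override the defaults from the configuration file; the database name and values are illustrative.

```python
import taos

conn = taos.connect(host="localhost")  # assumes a local server with default credentials
# KEEP/DAYS/BLOCKS/UPDATE/PRECISION are creation-time parameters listed above
conn.execute(
    "CREATE DATABASE IF NOT EXISTS power "
    "KEEP 365 DAYS 10 BLOCKS 6 UPDATE 1 PRECISION 'ms'"
)
conn.close()
```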
......
......@@ -30,4 +30,11 @@ taos> DESCRIBE meters;
groupid | INT | 4 | TAG |
```
The data set contains data from four smart meters, which correspond to four subtables according to TDengine's modeling rules, named d1001, d1002, d1003 and d1004.
\ No newline at end of file
The data set contains data from four smart meters, which correspond to four subtables according to TDengine's modeling rules, named d1001, d1002, d1003 and d1004.
```mdx-code-block
import DocCardList from '@theme/DocCardList';
import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
<DocCardList items={useCurrentSidebarCategory().items}/>
```
\ No newline at end of file
label: Operation Guide
link:
slug: /operation/
type: generated-index
---
title: Operation Guide
---
```mdx-code-block
import DocCardList from '@theme/DocCardList';
import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
<DocCardList items={useCurrentSidebarCategory().items}/>
```
\ No newline at end of file
......@@ -4,7 +4,9 @@ title: REST API
To support development on all kinds of platforms, TDengine provides an API that conforms to REST design standards: the REST API. To minimize the learning cost, and unlike the REST API designs of other databases, TDengine operates the database directly through the SQL statement contained in the BODY of an HTTP POST request, so only a URL is needed. See the [video tutorial](https://www.taosdata.com/blog/2020/11/11/1965.html) for how to use the REST connector.
Note: One difference from the native connectors is that the RESTful interface is stateless, so the `USE db_name` command has no effect; all references to table names and STable names must carry a database name prefix. (Starting from version 2.2.0.0, db_name can be specified in the RESTful url; in that case, if the SQL statement does not specify a database name prefix, the db_name given in the url is used. Starting from version 2.4.0.0, the RESTful interface is provided by taosAdapter by default and requires that db_name be specified in the url.)
:::note
One difference from the native connectors is that the RESTful interface is stateless, so the `USE db_name` command has no effect; all references to table names and STable names must carry a database name prefix. Starting from version 2.2.0.0, db_name can be specified in the RESTful URL; in that case, if the SQL statement does not specify a database name prefix, the db_name given in the URL is used. Starting from version 2.4.0.0, the RESTful interface is provided by taosAdapter by default and requires that db_name be specified in the URL.
:::
## Installation
......@@ -16,11 +18,10 @@ The RESTful interface does not rely on any TDengine library, so the client does
The following uses the curl tool in an Ubuntu environment (make sure it is installed) to verify that the RESTful interface works.
The following example lists all databases; replace h1.taosdata.com and 6041 (the default) with the fqdn and port number of the actual running TDengine service:
The following example lists all databases; replace h1.taosdata.com and 6041 (the default) with the FQDN and port number of the actual running TDengine service:
```bash
curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'show databases;'
h1.taosdata.com:6041/rest/sql
curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'show databases;' h1.taosdata.com:6041/rest/sql
```
A response like the following indicates that the verification passed:
......@@ -84,7 +85,7 @@ http://<fqdn>:<port>/rest/sql/[db_name]
- port: the httpPort configuration item in the configuration file, default 6041
- db_name: optional parameter, the default database name for the SQL statement to be executed. (Supported since version 2.2.0.0)
For example: http://h1.taos.com:6041/rest/sql/test is a url pointing to h1.taos.com:6041 and sets the default database name to test
For example: `http://h1.taos.com:6041/rest/sql/test` is a URL pointing to `h1.taos.com:6041` and sets the default database name to `test`
The header of the HTTP request must carry authentication information. TDengine supports Basic authentication and custom authentication; later versions will provide a standard, secure digital-signature mechanism for authentication.
......@@ -100,9 +101,9 @@ The header of the HTTP request must carry authentication information. TDengine
Authorization: Basic <TOKEN>
```
The BODY of the HTTP request is a complete SQL statement. Data tables in the SQL statement should carry a database prefix, e.g. \<db_name>.\<tb_name>. If a table name has no database prefix and no database name is specified in the url, the system returns an error, because the HTTP module is just a simple forwarder with no concept of a current DB.
The BODY of the HTTP request is a complete SQL statement. Data tables in the SQL statement should carry a database prefix, e.g. db_name.tb_name. If a table name has no database prefix and no database name is specified in the URL, the system returns an error, because the HTTP module is just a simple forwarder with no concept of a current DB.
Use curl to issue an HTTP request with custom authentication, with the following syntax:
Use `curl` to issue an HTTP request with custom authentication, with the following syntax:
```bash
curl -H 'Authorization: Basic <TOKEN>' -d '<SQL>' <ip>:<PORT>/rest/sql/[db_name]
......@@ -136,7 +137,7 @@ curl -u username:password -d '<SQL>' <ip>:<PORT>/rest/sql/[db_name]
Description:
- status: indicates whether the operation succeeded or failed.
- head: the table's definition; if no result set is returned, there is only one column, "affected_rows". (Starting from version 2.0.17.0, it is recommended not to rely on the head field to determine the data types of columns; use column_meta instead. head may be removed from the return value in a future version.)
- head: the table's definition; if no result set is returned, there is only one column, "affected_rows". (Starting from version 2.0.17.0, it is recommended not to rely on the head field to determine the data types of columns; use column_meta instead. head may be removed from the return value in a later version.)
- column_meta: added to the return value since version 2.0.17.0 to describe the data type of each column in data. Each column is described by three values: column name, column type, and type length. For example, `["current",6,4]` means the column name is "current", the column type is 6, i.e. float, and the type length is 4, i.e. a float represented by 4 bytes. If the column type is binary or nchar, the type length indicates the maximum content length the column can store, not the actual length of the data returned. For nchar columns, the type length indicates the number of Unicode characters that can be stored, not bytes.
- data: the returned data, presented row by row; if no result set is returned, it is just [[affected_rows]]. The order of the columns in each row of data is exactly the same as the column order described in column_meta.
- rows: the total number of rows of data returned; a sketch of consuming these fields follows below.
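As a hedged sketch (not from the original page), the response fields described above can be consumed like this from Python, using the documented default credentials; the host and SQL are illustrative:

```python
import requests

resp = requests.post(
    "http://h1.taosdata.com:6041/rest/sql",
    data="select ts, current from demo.d1001",
    auth=("root", "taosdata"),  # sent as Authorization: Basic cm9vdDp0YW9zZGF0YQ==
)
body = resp.json()
if body["status"] == "succ":
    # column_meta (name, type id, type length) is the recommended way to
    # interpret the columns of each row in data
    names = [meta[0] for meta in body["column_meta"]]
    for row in body["data"]:
        print(dict(zip(names, row)))
```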
......@@ -162,7 +163,7 @@ The HTTP request must carry the authorization token `<TOKEN>` for identification
curl http://<fqdn>:<port>/rest/login/<username>/<password>
```
Here, `fqdn` is the fqdn or ip address of the TDengine database, port is the port number of the TDengine service, `username` is the database user name, and `password` is the database password; the return value is in `JSON` format, with the fields described below:
Here, `fqdn` is the FQDN or IP address of the TDengine database, `port` is the port number of the TDengine service, `username` is the database user name, and `password` is the database password; the return value is in JSON format, with the fields described below:
- status: flag indicating the result of the request (see the sketch below)
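A hedged sketch (not from the original page) of obtaining the token over HTTP; the assumption that the token is returned in the `desc` field is mine, and host/credentials are illustrative:

```python
import requests

resp = requests.get("http://h1.taosdata.com:6041/rest/login/root/taosdata")
body = resp.json()
if body["status"] == "succ":
    token = body["desc"]  # assumed field name for the authorization token
    print(token)          # use it in the Authorization header as shown above
```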
......@@ -236,13 +237,13 @@ curl http://192.168.0.1:6041/rest/login/root/taosdata
### Result Set with Unix Timestamps
When the HTTP request URL uses `sqlt`, the timestamps in the returned result set are in Unix timestamp format, for example
When the HTTP request URL uses `/rest/sqlt`, the timestamps in the returned result set are in Unix timestamp format, for example
```bash
curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'select * from demo.d1001' 192.168.0.1:6041/rest/sqlt
```
Response
Response result
```json
{
......@@ -264,7 +265,7 @@ curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'select * from demo.d1001
### Result Set with UTC Time Strings
When the HTTP request URL uses `sqlutc`, the timestamps in the returned result set are in UTC time-string format, for example
When the HTTP request URL uses `/rest/sqlutc`, the timestamps in the returned result set are in UTC time-string format, for example
```bash
curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'select * from demo.t1' 192.168.0.1:6041/rest/sqlutc
......@@ -298,10 +299,9 @@ When the HTTP request URL uses `sqlutc`, the timestamps in the returned result
- httpMaxThreads: the number of threads to start, default 2 (since version 2.0.17.0, the default is half the number of CPU cores, rounded down).
- restfulRowLimit: the maximum number of rows in a returned result set (JSON format), default 10240.
- httpEnableCompress: whether compression is supported, disabled by default; currently TDengine only supports the gzip format.
- httpDebugFlag: logging switch, default 131. 131: errors and alarms only, 135: debug messages, 143: very detailed debug messages, default 131.
- httpDbNameMandatory: whether a default database name must be specified in the RESTful url. Default is 0, i.e. the check is disabled. If set to 1, every RESTful url must carry a default database name; otherwise, whether or not the SQL statement needs a database, an execution error is returned and the SQL statement is rejected.
:::note
If you use the REST API provided by taosd, the above configuration items must be written in taosd's configuration file taos.cfg. If you use the REST API provided by taosAdaper, please refer to taosAdaper's [configuration method](/reference/taosadapter/).
- httpDebugFlag: logging switch, default 131. 131: errors and alarms only, 135: debug messages, 143: very detailed debug messages.
- httpDbNameMandatory: whether a default database name must be specified in the RESTful URL. Default is 0, i.e. the check is disabled. If set to 1, every RESTful URL must carry a default database name; otherwise, whether or not the SQL statement needs a database, an execution error is returned and the SQL statement is rejected.
:::note
If you use the REST API provided by taosd, the above configuration items must be written in taosd's configuration file taos.cfg. If you use the REST API provided by taosAdapter, please refer to taosAdapter's [configuration method](/reference/taosadapter/).
:::
......@@ -51,12 +51,12 @@ For installation of the TDengine client driver, please refer to the [Installati
taos_cleanup();
```
In the sample code above, `taos_connect` establishes a connection to port 6030 on the host where the client program runs, `taos_close` closes the current connection, and `taos_cleanup` releases the resources requested and used by the client driver.
In the sample code above, `taos_connect()` establishes a connection to port 6030 on the host where the client program runs, `taos_close()` closes the current connection, and `taos_cleanup()` releases the resources requested and used by the client driver.
:::note
- Unless otherwise specified, when an API returns an integer, _0_ means success and any other value is an error code indicating the failure reason; when it returns a pointer, _NULL_ means failure.
- All error codes and their descriptions are in the taoserror.h file.
- All error codes and their descriptions are in the `taoserror.h` file.
:::
......@@ -120,8 +120,8 @@ For installation of the TDengine client driver, please refer to the [Installati
</details>
:::info
For more sample code and downloads, see [github](https://github.com/taosdata/TDengine/tree/develop/examples/c)
They can also be found under the examples/c path in the installation directory. That directory contains a makefile; on Linux, simply run make to build the executables.
For more sample code and downloads, see [GitHub](https://github.com/taosdata/TDengine/tree/develop/examples/c).
They can also be found under the `examples/c` path in the installation directory. That directory contains a makefile; on Linux, simply run make to build the executables.
**Tip:** When building in an ARM environment, remove `-msse4.2` from the makefile; this option is only supported on x64/x86 hardware platforms.
:::
......@@ -362,7 +362,7 @@ All asynchronous APIs of TDengine use a non-blocking calling mode. Applications
(Added in version 2.1.3.0)
Used to obtain error information when another STMT API returns an error (an error code or a null pointer).
### Schemaless Writing API
### Schemaless Writing API
Besides writing data with SQL or with the parameter-binding API, you can also write with the schemaless approach. Schemaless writing avoids having to create the data structures of STables/subtables in advance: data is written directly, and TDengine automatically creates and maintains the required table structures based on the content written. See the [Schemaless Writing](/reference/schemaless/) chapter for how to use it; this section introduces the accompanying C/C++ APIs.
......@@ -390,7 +390,7 @@ All asynchronous APIs of TDengine use a non-blocking calling mode. Applications
- TSDB_SML_TELNET_PROTOCOL: OpenTSDB Telnet text line protocol
- TSDB_SML_JSON_PROTOCOL: OpenTSDB JSON protocol format
The timestamp resolutions are defined in the taos.h file, as follows:
The timestamp resolutions are defined in the `taos.h` file, as follows:
- TSDB_SML_TIMESTAMP_NOT_CONFIGURED = 0,
- TSDB_SML_TIMESTAMP_HOURS,
......@@ -448,3 +448,4 @@ TDengine 的异步 API 均采用非阻塞调用模式。应用程序可以用多
- `void taos_unsubscribe(TAOS_SUB *tsub, int keepProgress)`
Cancel a subscription. If the parameter `keepProgress` is not 0, the API retains the subscription progress, and a later call to `taos_subscribe()` can resume from that progress; otherwise the progress information is deleted and data can only be read from the beginning again.
......@@ -9,7 +9,7 @@ description: The TDengine Java connector is implemented on the standard JDBC API
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
`taos-jdbcdriver` is TDengine's official Java connector, through which Java developers can build applications that access TDengine databases. `taos-jdbcdriver` implements the standard JDBC driver interfaces and provides two forms of connection: one connects to a TDengine instance natively through the TDengine client driver (taosc), supporting data writing, querying, subscription, the schemaless interface and the parameter-binding interface; the other connects to a TDengine instance through the REST interface provided by taosAdapter (version 2.0.18 and later). The feature set of REST connections differs slightly from that of native connections.
`taos-jdbcdriver` is TDengine's official Java connector, through which Java developers can build applications that access TDengine databases. `taos-jdbcdriver` implements the standard JDBC driver interfaces and provides two forms of connection: one connects to a TDengine instance natively through the TDengine client driver (taosc), supporting data writing, querying, subscription, the schemaless interface and the parameter-binding interface; the other connects to a TDengine instance through the REST interface provided by taosAdapter (version 2.4.0.0 and later). The feature set of REST connections differs slightly from that of native connections.
![tdengine-connector](tdengine-jdbc-connector.png)
......@@ -804,7 +804,7 @@ Query OK, 1 row(s) in set (0.000141s)
Please refer to: [JDBC example](https://github.com/taosdata/TDengine/tree/develop/examples/JDBC)
## Important Updates
## Recent Updates
| taos-jdbcdriver version | Major changes |
| :------------------: | :----------------------------: |
......@@ -814,7 +814,7 @@ Query OK, 1 row(s) in set (0.000141s)
## Frequently Asked Questions
1. Why does using Statement's `addBatch` and `executeBatch` for "batch writing/updating" not bring a performance improvement?
1. Why does using Statement's `addBatch()` and `executeBatch()` for "batch writing/updating" not bring a performance improvement?
**Cause**: In TDengine's JDBC implementation, SQL statements submitted via the `addBatch` method are executed one by one in the order they were added. This does not reduce the number of interactions with the server and brings no performance improvement.
......
......@@ -122,7 +122,7 @@ Requirement already satisfied: taospy in c:\users\username\appdata\local\program
<Tabs>
<TabItem value="native" label="原生连接">
Please make sure the TDengine cluster is up and that the FQDNs of the machines in the cluster (for a standalone deployment the FQDN defaults to the hostname) are resolvable on the local machine; you can test with the ping command:
Please make sure the TDengine cluster is up and that the FQDNs of the machines in the cluster (for a standalone deployment the FQDN defaults to the hostname) are resolvable on the local machine; you can test with the `ping` command:
```
ping <FQDN>
......@@ -197,7 +197,7 @@ curl -u root:taosdata http://<FQDN>:<PORT>/rest/sql -d "select server_version()"
{{#include docs-examples/python/connect_rest_examples.py:connect}}
```
All arguments of the `connect` function are optional keyword arguments. The connection parameters are described below:
All arguments of the `connect()` function are optional keyword arguments. The connection parameters are described below:
- `host`: the host to connect to. Default is localhost.
- `user`: the TDengine user name. Default is root.
......@@ -205,10 +205,6 @@ curl -u root:taosdata http://<FQDN>:<PORT>/rest/sql -d "select server_version()"
- `port`: the port on which the taosAdapter REST service listens. Default is 6041.
- `timeout`: the HTTP request timeout in seconds. Default is `socket._GLOBAL_DEFAULT_TIMEOUT`. Usually no need to configure (a connection sketch follows below).
:::note
:::
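A hedged sketch of a REST connection built from the keyword arguments documented above; the module name `taosrest` (from the taospy package) is an assumption here, and all values are the documented defaults:

```python
from taosrest import connect  # assumed entry point provided by taospy

conn = connect(host="localhost", user="root", password="taosdata",
               port=6041, timeout=30)
```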
</TabItem>
</Tabs>
......@@ -232,12 +228,12 @@ curl -u root:taosdata http://<FQDN>:<PORT>/rest/sql -d "select server_version()"
```
:::tip
A query result can only be fetched once. For example, in the sample above only one of `featch_all` and `fetch_all_into_dict` can be used; fetching again yields an empty list.
A query result can only be fetched once. For example, in the sample above only one of `fetch_all()` and `fetch_all_into_dict()` can be used; fetching again yields an empty list.
:::
##### Using the TaosResult Class
In the `TaosConnection` usage example above, we have shown two ways to fetch query results: `featch_all` and `fetch_all_into_dict`. In addition, `TaosResult` provides methods to iterate through the result set by row (`rows_iter`) or by data block (`blocks_iter`). When the amount of queried data is large, these two methods are more efficient.
In the `TaosConnection` usage example above, we have shown two ways to fetch query results: `fetch_all()` and `fetch_all_into_dict()`. In addition, `TaosResult` provides methods to iterate through the result set by row (`rows_iter`) or by data block (`blocks_iter`). When the amount of queried data is large, these two methods are more efficient.
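As a hedged sketch (separate from the included example below), iterating the result set by row with `rows_iter` might look like this, assuming a local server and an existing table:

```python
import taos

conn = taos.connect(host="localhost")  # assumed local deployment, default credentials
result = conn.query("SELECT ts, current FROM power.d1001")
# rows_iter() streams rows instead of materializing the whole result set
for row in result.rows_iter():
    print(row)
conn.close()
```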
```python title="blocks_iter method"
{{#include docs-examples/python/result_set_examples.py}}
......
......@@ -8,7 +8,7 @@ import Prometheus from "./_prometheus.mdx"
import CollectD from "./_collectd.mdx"
import StatsD from "./_statsd.mdx"
import Icinga2 from "./_icinga2.mdx"
import Tcollector from "./_tcollector.mdx"
import TCollector from "./_tcollector.mdx"
taosAdapter is a companion tool for TDengine that serves as a bridge and adapter between TDengine clusters and applications. It provides an easy and efficient way to ingest data directly from data collection agents (such as Telegraf, StatsD, collectd, etc.). It also provides InfluxDB/OpenTSDB-compatible data ingestion interfaces, allowing InfluxDB/OpenTSDB applications to be seamlessly ported to TDengine.
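As a hedged sketch (not from the original page) of the InfluxDB-compatible ingestion just mentioned: the path `/influxdb/v1/write` and its `db`/`u`/`p` query parameters follow taosAdapter's compatibility interface as I understand it; host, credentials and the sample point are illustrative.

```python
import requests

line = "meters,location=Beijing.Haidian,groupid=2 current=11.8,voltage=221 1648432611249000000"
resp = requests.post(
    "http://localhost:6041/influxdb/v1/write",
    params={"db": "test", "u": "root", "p": "taosdata"},
    data=line,
)
print(resp.status_code)  # 204 indicates success in InfluxDB's v1 write API
```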
......@@ -225,7 +225,7 @@ AllowWebSockets
### TCollector
<Tcollector />
<TCollector />
### node_exporter
......
......@@ -400,7 +400,7 @@ taosBenchmark -A INT,DOUBLE,NCHAR,BINARY\(16\)
- **threads** : the number of threads executing SQL, default 1.
- **interva** : the interval between subscription executions, in seconds, default 0.
- **interval** : the interval between subscription executions, in seconds, default 0.
- **restart** : "yes" means starting a new subscription, "no" means continuing the previous subscription, default "no".
......@@ -420,7 +420,7 @@ taosBenchmark -A INT,DOUBLE,NCHAR,BINARY\(16\)
- **threads** : the number of threads executing SQL, default 1.
- **interva** : the interval between subscription executions, in seconds, default 0.
- **interval** : the interval between subscription executions, in seconds, default 0.
- **restart** : "yes" means starting a new subscription, "no" means continuing the previous subscription, default "no".
......
......@@ -8,7 +8,7 @@ The TDengine command-line program (hereinafter the TDengine CLI) is the tool for users to operate TDengine
## Installation
If you run it on the TDengine server, no installation is needed: it is installed automatically. To run it on a machine other than the TDengine server, the TDengine client driver needs to be installed; for details, please refer to [Connectors](/reference/connector/)
If you run it on the TDengine server, no installation is needed: the TDengine CLI is installed automatically. To run it on a machine other than the TDengine server, the TDengine client driver package needs to be installed; for details, please refer to [Connectors](/reference/connector/)
## Execution
......@@ -18,17 +18,17 @@ The TDengine command-line program (hereinafter the TDengine CLI) is the tool for users to operate TDengine
taos
```
If the connection to the service succeeds, a welcome message and version information are printed. If it fails, an error message is printed out (please refer to the [FAQ](/train-faq/faq) to troubleshoot the failure of the terminal connecting to the server). The TDengine CLI prompt is as follows:
If the connection to the service succeeds, a welcome message and version information are printed. If it fails, an error message is printed (please refer to the [FAQ](/train-faq/faq) to troubleshoot the failure of the terminal connecting to the server). The TDengine CLI prompt is as follows:
```cmd
taos>
```
Once inside the CLI, you can execute various SQL statements, including inserts, queries, and administrative commands.
Once inside the TDengine CLI, you can execute various SQL statements, including inserts, queries, and administrative commands.
## Running SQL Scripts
In the TDengine CLI, you can run a SQL command script with the `source` command
In the TDengine CLI, you can run the multiple SQL commands in a script file with the `source` command
```sql
taos> source <filename>;
......@@ -42,7 +42,7 @@ taos> source <filename>;
taos> SET MAX_BINARY_DISPLAY_WIDTH <nn>;
```
If the displayed content ends with..., it has been truncated; you can use this command to change the display width so that the full content is shown.
If the displayed content ends with ..., it has been truncated; you can use this command to change the display width so that the full content is shown.
## Command-Line Parameters
......@@ -56,20 +56,20 @@ taos> SET MAX_BINARY_DISPLAY_WIDTH <nn>;
There are more parameters:
- -c, --config-dir: specifies the configuration file directory, default /etc/taos; the default configuration file name in that directory is taos.cfg
- -C, --dump-config: prints the configuration parameters of taos.cfg in the directory specified by -c
- -c, --config-dir: specifies the configuration file directory, default `/etc/taos` on Linux; the default configuration file name in that directory is `taos.cfg`
- -C, --dump-config: prints the configuration parameters of `taos.cfg` in the directory specified by -c
- -d, --database=DATABASE: specifies the database to use when connecting to the server
- -D, --directory=DIRECTORY: imports the SQL script files in the specified path
- -f, --file=FILE: executes an SQL script file in non-interactive mode
- -f, --file=FILE: executes an SQL script file in non-interactive mode. Each SQL statement in the file must occupy one line
- -k, --check=CHECK: specifies the table to check
- -l, --pktlen=PKTLEN: the test packet size used in network testing
- -n, --netrole=NETROLE: the scope of the network connectivity test, default startup; options are client, server, rpc, startup, sync, speed, fqdn
- -r, --raw-time: outputs time as uint64_t
- -n, --netrole=NETROLE: the scope of the network connectivity test, default `startup`; options are `client`, `server`, `rpc`, `startup`, `sync`, `speed` and `fqdn`
- -r, --raw-time: outputs time as an unsigned 64-bit integer (i.e. uint64_t in the C language)
- -s, --commands=COMMAND: the SQL command to execute in non-interactive mode
- -S, --pkttype=PKTTYPE: specifies the packet type used in network testing, default TCP. Only when netrole is speed can it be set to either TCP or UDP
- -S, --pkttype=PKTTYPE: specifies the packet type used in network testing, default TCP. Only when netrole is `speed` can it be set to either TCP or UDP
- -T, --thread=THREADNUM: the number of threads when importing data in multi-threaded mode
- -s, --commands: runs TDengine commands without entering the terminal
- -z, --timezone=TIMEZONE: specifies the time zone, default is local
- -z, --timezone=TIMEZONE: specifies the time zone, default is the local time zone
- -V, --version: prints the current version number
Example:
......@@ -81,8 +81,8 @@ taos -h h1.taos.com -s "use db; show tables;"
## TDengine CLI Tips
- Use the up/down arrow keys to browse previously entered commands
- Change a user's password: use the `alter user` command in the shell; the default password is taosdata
- ctrl+c aborts a query in progress
- Run `RESET QUERY CACHE` to clear the locally cached table schemas
- Execute SQL statements in batch: store a series of shell commands (each ending with an ASCII ;, one SQL statement per line) in a file, then run `source <file-name>` in the shell to execute all SQL statements in that file automatically
- Type q and press Enter to exit the taos shell
- In the TDengine CLI, use the `alter user` command to change a user's password; the default password is `taosdata`
- Ctrl+C aborts a query in progress
- Run `RESET QUERY CACHE` to clear the local cache of table schemas
- Execute SQL statements in batch: store a series of TDengine CLI commands (each ending with an ASCII ;, one SQL statement per line) in a file, then run `source <file-name>` in the TDengine CLI to execute all SQL statements in that file automatically
- Type `q`, `quit` or `exit` and press Enter to exit the TDengine CLI
......@@ -47,19 +47,19 @@ taos --dump-config
### firstEp
| Property | Description |
| -------- | -------------------------------------------------------------- |
| Scope | Both server and client |
| Property | Description |
| -------- | --------------------------------------------------------------- |
| Scope | Both server and client |
| Meaning | the endpoint of the first dnode in the cluster that taosd or taos actively connects to at startup |
| Default | localhost:6030 |
| Default | localhost:6030 |
### secondEp
| Property | Description |
| -------- | ------------------------------------------------------------------------------------- |
| Scope | Both server and client |
| Property | Description |
| -------- | -------------------------------------------------------------------------------------- |
| Scope | Both server and client |
| Meaning | the endpoint of the second dnode in the cluster to try when firstEp cannot be reached at taosd or taos startup |
| Default | None |
| Default | None |
### fqdn
......@@ -476,14 +476,23 @@ The valid value of charset is UTF-8.
### arbitrator
| Property | Description |
| -------- | ----------------------------------------- |
| Scope | Server only |
| Property | Description |
| -------- | ------------------------------------------ |
| Scope | Server only |
| Meaning | the endpoint of the arbitrator in the system, in the same format as firstEp |
| Default | Empty |
| Default | Empty |
## Time Related
### precision
| Property | Description |
| -------- | ------------------------------------------------- |
| Scope | Server only |
| Meaning | the time precision used when creating a database |
| Value range | ms: millisecond; us: microsecond; ns: nanosecond |
| Default | ms |
### rpcTimer
| Property | Description |
......
label: Reference Guide
link:
slug: /reference/
type: generated-index
description: "参考指南是对 TDengine 本身、 TDengine 各语言连接器及自带的工具最详细的介绍。"
label: Reference Guide
\ No newline at end of file
---
title: Reference Guide
---
The Reference Guide is the most detailed introduction to TDengine itself, its connectors in various programming languages, and its built-in tools.
```mdx-code-block
import DocCardList from '@theme/DocCardList';
import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
<DocCardList items={useCurrentSidebarCategory().items}/>
```
\ No newline at end of file
......@@ -3,7 +3,7 @@ sidebar_label: TCollector
title: TCollector Writing
---
import Tcollector from "../14-reference/_tcollector.mdx"
import TCollector from "../14-reference/_tcollector.mdx"
TCollector is part of openTSDB; it collects client logs and sends them to the database.
......@@ -17,7 +17,7 @@ TCollector is part of openTSDB; it collects client logs and sends them to the d
- TCollector is installed. To install TCollector, please refer to the [official documentation](http://opentsdb.net/docs/build/html/user_guide/utilities/tcollector.html#installation-of-tcollector)
## Configuration Steps
<Tcollector />
<TCollector />
## Verification
......
......@@ -35,7 +35,7 @@ MQTT is a popular IoT data transmission protocol; [EMQX](https://github.com/emqx/em
CREATE TABLE sensor_data (ts timestamp, temperature float, humidity float, volume float, PM10 float, pm25 float, SO2 float, NO2 float, CO float, sensor_id NCHAR(255), area TINYINT, coll_time timestamp);
```
Note: The table schema here follows the blog post [Data Transmission, Storage and Visualization: Building an MQTT IoT Data Visualization Platform with EMQ X + TDengine](https://www.taosdata.com/blog/2020/08/04/1722.html). All subsequent operations use the scenario in that post as the example; please adapt them to your actual application scenario.
Note: The table schema here follows the blog post [Data Transmission, Storage and Visualization: Building an MQTT IoT Data Visualization Platform with EMQX + TDengine](https://www.taosdata.com/blog/2020/08/04/1722.html). All subsequent operations use the scenario in that post as the example; please adapt them to your actual application scenario.
## Configure EMQX Rules
......@@ -188,5 +188,5 @@ node mock.js
![img](./emqx/check-result-in-taos.png)
For detailed usage of TDengine, please refer to the [TDengine official documentation](https://docs.taosdata.com/)
For detailed usage of EMQX, please refer to the [EMQ official documentation](https://www.emqx.io/docs/zh/v4.4/rule/rule-engine.html)
For detailed usage of EMQX, please refer to the [EMQX official documentation](https://www.emqx.io/docs/zh/v4.4/rule/rule-engine.html)
label: Third-Party Tools
link:
type: generated-index
slug: /third-party/
description: TDengine's support for standard SQL commands, common database connector standards (e.g. JDBC), ORMs, and other popular time-series database writing protocols (e.g. InfluxDB Line Protocol, OpenTSDB JSON, OpenTSDB Telnet) makes it very easy to use TDengine together with third-party tools.
label: Third-Party Tools
\ No newline at end of file
---
title: Third-Party Tools
---
TDengine's support for standard SQL commands, common database connector standards (e.g. JDBC), ORMs, and other popular time-series database writing protocols (e.g. InfluxDB Line Protocol, OpenTSDB JSON, OpenTSDB Telnet) makes it very easy to use TDengine together with third-party tools.
```mdx-code-block
import DocCardList from '@theme/DocCardList';
import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
<DocCardList items={useCurrentSidebarCategory().items}/>
```
\ No newline at end of file
label: Inside TDengine
link:
slug: /tdinternal/
type: generated-index
\ No newline at end of file
label: Inside TDengine
\ No newline at end of file
---
title: Inside TDengine
---
```mdx-code-block
import DocCardList from '@theme/DocCardList';
import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
<DocCardList items={useCurrentSidebarCategory().items}/>
```
\ No newline at end of file
label: Application Practice
link:
slug: /application/
type: generated-index
---
title: Application Practice
---
```mdx-code-block
import DocCardList from '@theme/DocCardList';
import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
<DocCardList items={useCurrentSidebarCategory().items}/>
```
\ No newline at end of file
label: FAQ, Tutorials and Others
link:
slug: /train-faq/
type: generated-index
label: FAQ and Others
---
title: FAQ and Others
---
```mdx-code-block
import DocCardList from '@theme/DocCardList';
import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
<DocCardList items={useCurrentSidebarCategory().items}/>
```
\ No newline at end of file
......@@ -3,19 +3,23 @@ title: Introduction
toc_max_heading_level: 2
---
## TDengine Major Features
TDengine is a high-performance, scalable time-series database with SQL support. Its code, including its cluster feature, is open source under GNU AGPL v3.0. Besides the database engine, it provides [caching](/develop/cache), [stream processing](/develop/continuous-query), [data subscription](/develop/subscribe) and other functionalities to reduce the complexity and cost of development and operation.
TDengine is a high-performance, scalable time-series database with SQL support. Its code, including its cluster feature, is open source under GNU AGPL v3.0. Besides the database engine, it provides [caching](/develop/cache), [stream processing](/develop/continuous-query), [data subscription](/develop/subscribe) and other functionalities to reduce the complexity and cost of development and operation. The major features are listed below:
This section introduces the major features, competitive advantages, suited scenarios and benchmarks to help you get a high-level picture of TDengine.
## Major Features
The major features are listed below:
1. Besides [using SQL to insert](/develop/insert-data/sql-writing), it supports [schemaless writing](/reference/schemaless/), and supports [InfluxDB LINE](/develop/insert-data/influxdb-line), [OpenTSDB Telnet](/develop/insert-data/opentsdb-telnet), [OpenTSDB JSON](/develop/insert-data/opentsdb-json) and other protocols.
2. Support seamless integration with third-party data collection agents like [Telegraf](/third-party/telegraf), [Prometheus](/third-party/prometheus), [StatsD](/third-party/statsd), [collectd](/third-party/collectd), [icinga2](/third-party/icinga2), [Tcollector](/third-party/tcollector), [EMQ](/third-party/emq-broker), [HiveMQ](/third-party/hive-mq-broker). Without a line of code, those agents can write data points into TDengine just by configuration.
2. Support seamless integration with third-party data collection agents like [Telegraf](/third-party/telegraf), [Prometheus](/third-party/prometheus), [StatsD](/third-party/statsd), [collectd](/third-party/collectd), [icinga2](/third-party/icinga2), [TCollector](/third-party/tcollector), [EMQX](/third-party/emq-broker), [HiveMQ](/third-party/hive-mq-broker). Without a line of code, those agents can write data points into TDengine just by configuration.
3. Support [all kinds of queries](/query-data), including aggregation, nested query, downsampling, interpolation, etc.
4. Support [user-defined functions](/develop/udf)
5. Support [caching](/develop/cache). TDengine always saves the last data point in cache, so Redis is not needed in some scenarios.
6. Support [continuous query](/develop/continuous-query).
7. Support [data subscription](/develop/subscribe), and filter conditions can be specified.
8. Support [cluster](/cluster/), so more processing power can be gained by adding more nodes. High availability is supported by replication.
9. Provide interactive [command line interface](/reference/taos-shell) for management, maintenance and ad-hoc queries.
9. Provide interactive [command-line interface](/reference/taos-shell) for management, maintenance and ad-hoc queries.
10. Provide many ways to [import](/operation/import) and [export](/operation/export) data.
11. Provide [monitoring](/operation/monitor) on TDengine running instances.
12. Provide [connectors](/reference/connector/) for [C/C++](/reference/connector/cpp), [Java](/reference/connector/java), [Python](/reference/connector/python), [Go](/reference/connector/go), [Rust](/reference/connector/rust), [Node.js](/reference/connector/node) and other programming languages.
......@@ -25,7 +29,7 @@ TDengine is a high-performance, scalable time-series database with SQL support.
For more detailed features, please read through the whole document.
## TDengine Highlights
## Competitive Advantages
TDengine makes full use of [the characteristics of time series data](https://tdengine.com/2019/07/09/86.html), such as being structured, having no transactions, and rarely being deleted or updated, and builds its own innovative storage engine and computing engine to differentiate itself from other TSDBs, with the following advantages.
......@@ -54,7 +58,7 @@ In the time-series data processing platform, TDengine stands in a role like this
<center>Figure 1. TDengine Technical Ecosystem</center>
On the left side, there are data collection agents like OPC-UA, MQTT, Telegraf and Kafka. On the right side, visualization/BI tools, HMI, Python/R, and IoT Apps can be connected. TDengine itself provides an interactive command line interface and a web interface for management and maintenance.
On the left side, there are data collection agents like OPC-UA, MQTT, Telegraf and Kafka. On the right side, visualization/BI tools, HMI, Python/R, and IoT Apps can be connected. TDengine itself provides an interactive command-line interface and a web interface for management and maintenance.
## Suited Scenarios for TDengine
......@@ -101,7 +105,7 @@ From the perspective of data sources, designers can analyze the applicability of
| Minimize learning and maintenance costs | | | √ | In addition to being easily configurable, standard SQL support and the Taos shell for ad-hoc queries make maintenance simpler, allow reuse and reduce learning costs.|
| Abundant talent supply | √ | | | Given the above, and given the extensive training and professional services provided by TDengine, it is easy to migrate from existing solutions or create a new and lasting solution based on TDengine.|
## Benchmark comparison between TDengine and other databases
## Comparison with other databases
- [Writing Performance Comparison of TDengine and InfluxDB ](https://tdengine.com/2022/02/23/4975.html)
- [Query Performance Comparison of TDengine and InfluxDB](https://tdengine.com/2022/02/24/5120.html)
......
......@@ -135,13 +135,25 @@ The design of one table for one data collection point will require a huge number
A STable is the set for one type of data collection point. A STable contains a set of data collection points (tables) that have the same schema or data structure but different static attributes (tags). To describe a STable, in addition to defining the table structure of the metrics, it is also necessary to define the schema of its tags. The data type of a tag can be int, float or string; there can be multiple tags, which can be added, deleted, or modified afterward. If the whole system has N different types of data collection points, N STables need to be established.
In the design of TDengine, **a table is used to represent a specific data collection point, and STable is used to represent a set of data collection points of the same type**. When creating a table for a specific data collection point, the user uses a STable as a template and specifies the tag value of the specific DCP (table). Compared with the traditional relational database, the table (a DCP) has static tags, and these tags can be added, deleted, and updated afterward. The relationship between the STable and the tables created based on the STable is as follows:
In the design of TDengine, **a table is used to represent a specific data collection point, and STable is used to represent a set of data collection points of the same type**.
1. A STable contains multiple tables with the same metric schema but with different tag values.
2. The schema of metrics or labels cannot be adjusted through tables; it can only be changed via the STable. Changes to the schema of a STable take effect immediately for all its tables.
3. A STable defines only one template and does not store any data or label information by itself. Therefore, data cannot be written to a STable, only to tables.
## Subtable
Query can be executed on both table and STable. For a query on a STable, TDengine will treat the data in all belonged tables as a whole data set for processing. TDengine will first find out the tables that meet the tag filter conditions, then scan the time-series data of these tables to perform aggregation operation, which can greatly reduce the data sets to be scanned, thus greatly improving the performance of data aggregation across multiple DCPs.
When creating a table for a specific data collection point, the user can use a STable as a template and specify the tag values of this specific DCP to create it. **A table created by using a STable as the template is called a subtable** in the TDengine system. The differences between a regular table and a subtable are:
1. A subtable is a table; all SQL commands that apply to a regular table can be applied to a subtable.
2. A subtable is a table with extensions: it has static tags (labels), and these tags can be added, deleted, and updated afterward, while a regular table does not have tags.
3. A subtable belongs to exactly one STable, and a STable may have many subtables; a regular table does not belong to any STable.
4. A regular table cannot be converted into a subtable, and vice versa.
The relationship between a STable and the subtables created based on this STable is as follows:
1. A STable contains multiple subtables with the same metric schema but with different tag values.
2. The schema of metrics or labels cannot be adjusted through subtables; it can only be changed via the STable. Changes to the schema of a STable take effect immediately for all its subtables.
3. A STable defines only one template and does not store any data or label information by itself. Therefore, data cannot be written to a STable, only to subtables.
Query can be executed on both tables (subtables) and STables. For a query on a STable, TDengine will treat the data in all its subtables as a whole data set for processing. TDengine will first find out the subtables that meet the tag filter conditions, then scan the time-series data of these subtables to perform aggregation, which can greatly reduce the data sets to be scanned, thus greatly improving the performance of data aggregation across multiple DCPs.
In the TDengine system, it is recommended to use a subtable instead of a regular table for a DCP.
## Database
......
......@@ -10,7 +10,7 @@ import AptGetInstall from "./\_apt_get_install.mdx";
## Quick Install
The full package of TDengine includes the server (taosd), taosAdapter for connecting with third-party systems and providing a RESTful interface, the application driver (taosc), the command line program (CLI, taos) and some tools. For the current version, the server taosd and taosAdapter can only be installed and run on Linux systems, and will support Windows, macOS and other systems in the future. The application driver taosc and the TDengine CLI can be installed and run on Windows or Linux. In addition to the RESTful interface, TDengine also provides connectors for a number of programming languages. In versions before 2.4, there is no taosAdapter, and the RESTful interface is provided by the built-in HTTP service of taosd.
The full package of TDengine includes the server (taosd), taosAdapter for connecting with third-party systems and providing a RESTful interface, the client driver (taosc), the command-line program (CLI, taos) and some tools. For the current version, the server taosd and taosAdapter can only be installed and run on Linux systems. In the future taosd and taosAdapter will also be supported on Windows, macOS and other systems. The client driver taosc and the TDengine CLI can be installed and run on Windows or Linux. In addition to the RESTful interface, TDengine also provides connectors for a number of programming languages. In versions before 2.4, there is no taosAdapter, and the RESTful interface is provided by the built-in HTTP service of taosd.
TDengine supports X64/ARM64/MIPS64/Alpha64 hardware platforms, and will support ARM32, RISC-V and other CPU architectures in the future.
......@@ -71,13 +71,13 @@ Check if taosd is running:
systemctl status taosd
```
If everything is fine, you can run the TDengine command line interface `taos` to access TDengine and play around with it.
If everything is fine, you can run the TDengine command-line interface `taos` to access TDengine and test it out yourself.
:::info
- systemctl requires _root_ privileges; if you are not _root_, please add sudo before the command.
- To get feedback and keep polishing the product, TDengine is collecting some basic usage information, but you can turn it off by setting telemetryReporting to 0 in the configuration file taos.cfg.
- TDengine uses FQDN (usually the hostname) as the ID for a node. To make the system work, you need to configure the FQDN for the server running taosd, and configure the DNS service or hosts file on the machine where the application or TDengine CLI runs to ensure that the FQDN can be resolved.
- To get feedback and keep improving the product, TDengine is collecting some basic usage information, but you can turn it off by setting telemetryReporting to 0 in the configuration file taos.cfg.
- TDengine uses FQDN (usually the hostname) as the ID for a node. To make the system work, you need to configure the FQDN for the server running taosd, and configure the DNS service or hosts file on the machine where the application or TDengine CLI runs to ensure that the FQDN can be resolved.
- `systemctl stop taosd` won't stop the server right away; it will wait until all the data in memory is flushed to disk. It may take some time depending on the cache size.
TDengine supports installation on systems that run [`systemd`](https://en.wikipedia.org/wiki/Systemd) for process management; use `which systemctl` to check if the system has `systemd` installed:
......@@ -92,7 +92,7 @@ If the system does not have `systemd`,you can start TDengine manually by execu
## Command Line Interface
To manage the running TDengine instance, or execute ad-hoc queries, TDengine provides a Command Line Interface(hereinafter referred to as TDengine CLI), taos. To enter the interactive CLI, execute `taos` on a Linux terminal where TDengine is installed.
To manage the running TDengine instance, or execute ad-hoc queries, TDengine provides a Command Line Interface (hereinafter referred to as TDengine CLI), taos. To enter the interactive CLI, execute `taos` on a Linux terminal where TDengine is installed.
```bash
taos
......@@ -120,25 +120,25 @@ select * from t;
Query OK, 2 row(s) in set (0.003128s)
```
Besides executing SQL commands, system administrators can check running status, add/drop user accounts and manage the running instances. The TAOS CLI with the application driver can be installed and run on either Linux or Windows machines. For more details on the CLI, please [check here](../reference/taos-shell/).
Besides executing SQL commands, system administrators can check running status, add/drop user accounts and manage the running instances. The TAOS CLI with the client driver can be installed and run on either Linux or Windows machines. For more details on the CLI, please [check here](../reference/taos-shell/).
## Experience the blazing fast speed
After the TDengine server is running, execute `taosBenchmark` (named as taosdemo before) from a Linux terminal:
After the TDengine server is running, execute `taosBenchmark` (previously named taosdemo) from a Linux terminal:
```bash
taosBenchmark
```
This command will create a super table "meters" under database "test". Under "meters", 10000 tables are created with name from "d0" to "d9999". Each table has 10000 rows and each row has four columns (ts, current, voltage, phase). The timestamps range from "2017-07-14 10:40:00 000" to "2017-07-14 10:40:09 999". Each table has tags "location" and "groupId": groupId is set randomly from 1 to 10, and location is set to "beijing" or "shanghai".
This command will create a super table "meters" under database "test". Under "meters", 10000 tables are created with names from "d0" to "d9999". Each table has 10000 rows and each row has four columns (ts, current, voltage, phase). The timestamps range from "2017-07-14 10:40:00 000" to "2017-07-14 10:40:09 999". Each table has tags "location" and "groupId": groupId is set randomly from 1 to 10, and location is set to "beijing" or "shanghai".
This command will insert 100 million rows into database quickly. Depends on the hardware configuration, it only takes a dozen seconds for a regular PC server.
This command will insert 100 million rows into the database quickly. The time to insert depends on the hardware configuration; it only takes about a dozen seconds on a regular PC server.
taosBenchmark provides you command line options and configuration file to customize the scenarios, like number of tables, number of rows per table, number of columns and more. Please execute `taosBenchmark --help` to list them. For details on running taosBenchmark, please check [reference for taosBenchmark](/reference/taosbenchmark)
taosBenchmark provides command-line options and a configuration file to customize the scenarios, like number of tables, number of rows per table, number of columns and more. Please execute `taosBenchmark --help` to list them. For details on running taosBenchmark, please check [reference for taosBenchmark](/reference/taosbenchmark)
## Experience query speed
After using taosBenchmark to insert a number of rows of data, you can execute queries from the TDengine CLI to experience the lightning query speed.
After using taosBenchmark to insert a number of rows of data, you can execute queries from the TDengine CLI to experience the lightning-fast query speed.
Query the total number of rows under super table "meters":
......
---
sidebar_label: Connect
sidebar_label: Connection
title: Connect to TDengine
description: "This document explains how to establish connection to TDengine, and briefly introduce how to install and use TDengine connectors."
---
......@@ -19,25 +19,25 @@ import InstallOnLinux from "../../14-reference/03-connector/\_windows_install.md
import VerifyLinux from "../../14-reference/03-connector/\_verify_linux.mdx";
import VerifyWindows from "../../14-reference/03-connector/\_verify_windows.mdx";
Any application program running on any kind of platform can access TDengine through the REST API provided by TDengine. For the details please refer to [REST API](/reference/rest-api/). Besides, application programs can use the connectors of multiple languages to access TDengine, including C/C++, Java, Python, Go, Node.js, C#, and Rust. This chapter describes how to establish a connection to TDengine and briefly introduces how to install and use connectors. For details about the connectors please refer to [Connectors](/reference/connector/)
Any application program running on any kind of platform can access TDengine through the REST API provided by TDengine. For the details, please refer to [REST API](/reference/rest-api/). Besides, application programs can use the connectors of multiple programming languages to access TDengine, including C/C++, Java, Python, Go, Node.js, C#, and Rust. This chapter describes how to establish a connection to TDengine and briefly introduces how to install and use connectors. For details about the connectors, please refer to [Connectors](/reference/connector/)
## Establish Connection
There are two ways to establish connections to TDengine:
There are two ways for a connector to establish connections to TDengine:
1. Connection to taosd can be established through the REST API provided by taosAdapter component, this way is called "REST connection" hereinafter.
2. Connection to taosd can be established through the client side driver taosc, this way is called "Native connection" hereinafter.
1. Connection through the REST API provided by the taosAdapter component; this way is called "REST connection" hereinafter.
2. Connection through the TDengine client driver (taosc); this way is called "Native connection" hereinafter.
Either way, the same or similar APIs are provided by the connectors to access the database and execute SQL statements; no obvious difference can be observed.
Key differences:
1. With REST connection, it's not necessary to install the client side driver taosc; it's more friendly for cross-platform use, at the cost of about a 30% performance downgrade.
2. With native connection, the full capabilities of TDengine can be utilized, like [Parameter Binding](/reference/connector/cpp#Parameter Binding-api), [Subscription](reference/connector/cpp#Subscription), etc.
1. With REST connection, it's not necessary to install the TDengine client driver (taosc); it's more friendly for cross-platform use, at the cost of about a 30% performance downgrade. When taosc is upgraded, the application does not need to change.
2. With native connection, the full capabilities of TDengine can be utilized, like [Parameter Binding](/reference/connector/cpp#Parameter Binding-api), [Subscription](reference/connector/cpp#Subscription), etc. But taosc has to be installed, and some platforms may not be supported.
## Install Client Driver taosc
If choosing to use native connection and the client program is not on the same host as the TDengine server, the TDengine client driver needs to be installed on the host where the client program runs. If choosing to use REST connection or the client is on the same host as the server, this step can be skipped. It's better to use the same version of the client as the server.
If choosing to use native connection and the application is not on the same host as the TDengine server, the TDengine client driver taosc needs to be installed on the host where the application runs. If choosing to use REST connection or the application is on the same host as the server, this step can be skipped. It's better to use the same version of taosc as the server.
### Install
......@@ -52,7 +52,7 @@ If choosing to use native connection and the client program is not on the same h
### Verify
After the above installation and configuration are done, and after making sure the TDengine service is already started and in service, the Shell command `taos` can be launched to access TDengine.
After the above installation and configuration are done, and after making sure the TDengine service is already started and in service, the TDengine command-line interface `taos` can be launched to access TDengine.
<Tabs defaultValue="linux" groupId="os">
<TabItem value="linux" label="Linux">
......@@ -198,7 +198,7 @@ install.packages("RJDBC")
</TabItem>
<TabItem label="C" value="c">
If the client driver taosc is already installed, then the C connector is already available.
If the client driver (taosc) is already installed, then the C connector is already available.
<br/>
</TabItem>
......
---
sidebar_label: Data Model
slug: /model
title: Data Model
---
......@@ -43,13 +42,12 @@ If you are using versions prior to 2.0.15, the `STable` keyword needs to be repl
:::
Similar to creating a normal table, when creating a STable, a name and schema need to be provided. In the STable schema, the first column must be timestamp (like ts in the example), and the other columns (like current, voltage and phase in the example) are the collected data. The type of a column can be integer, floating point number, string, etc. Besides, the schema for tags needs to be provided, like location and groupId in the example. The type of a tag can be integer, floating point number, string, etc. The static properties of a data collection point can be defined as tags, like the location, device type, device group ID, manager ID, etc. Tags in the schema can be added, removed or updated. Please refer to [STable](/taos-sql/stable) for more details.
Similar to creating a regular table, when creating a STable, a name and schema need to be provided. In the STable schema, the first column must be timestamp (like ts in the example), and the other columns (like current, voltage and phase in the example) are the collected data. The type of a column can be integer, float, double, string, etc. Besides, the schema for tags needs to be provided, like location and groupId in the example. The type of a tag can be integer, float, string, etc. The static properties of a data collection point can be defined as tags, like the location, device type, device group ID, manager ID, etc. Tags in the schema can be added, removed or updated. Please refer to [STable](/taos-sql/stable) for more details.
Each kind of data collection point needs a corresponding STable to be created, so there may be many STables in an application. For an electrical power system, we need to create a STable for each of meters, transformers, busbars and switches. There may be multiple kinds of data collection points on a single device; for example, there may be one data collection point for electrical data like current and voltage and another for environmental data like temperature, humidity and wind direction, so multiple STables are required for such a device.
For each kind of data collection point, a corresponding STable must be created. There may be many STables in an application. For an electrical power system, we need to create a STable for each of meters, transformers, busbars and switches. There may be multiple kinds of data collection points on a single device; for example, there may be one data collection point for electrical data like current and voltage and another for environmental data like temperature, humidity and wind direction, so multiple STables are required for such a device.
At most 4096 (or 1024 prior to version 2.1.7.0) columns are allowed in a STable. If there are more than 4096 metrics to be collected for a data collection point, multiple STables are required for it. There can be multiple databases in a system, while one or more STables can exist in a database.
## Create Table
A specific table needs to be created for each data collection point. Similar to RDBMS, a table name and schema are required to create a table. Besides, one or more tags can be created for each table. To create a table, a STable needs to be used as a template and the values need to be specified for the tags. For example, for the meters in [Table 1](/tdinternal/arch#model_table1), the table can be created using the SQL statement below.
......@@ -60,6 +58,7 @@ CREATE TABLE d1001 USING meters TAGS ("Beijing.Chaoyang", 2);
In the above SQL statement, "d1001" is the table name, "meters" is the STable name, followed by the value of tag "Location" and the value of tag "groupId", which are "Beijing.Chaoyang" and "2" respectively in the example. The tag values can be updated after the table is created. Please refer to [Tables](/taos-sql/table) for details.
In the TDengine system, it's recommended to create a table for a data collection point via a STable. A table created via a STable is called a subtable in some parts of the TDengine documentation. All SQL commands that apply to a regular table can be applied to a subtable.
:::warning
It's not recommended to create a table in a database while using a STable from another database as template.
......
......@@ -22,7 +22,7 @@ import CStmt from "./_c_stmt.mdx";
## Introduction
Application programs can execute `INSERT` statements through connectors to insert rows. The TAOS Shell can be launched manually to insert data too.
Application programs can execute `INSERT` statements through connectors to insert rows. The TAOS CLI can be launched manually to insert data too.
### Insert Single Row
......@@ -53,7 +53,7 @@ For more details about `INSERT` please refer to [INSERT](/taos-sql/insert).
:::info
- Inserting in batches can gain better performance. Normally, the higher the batch size, the better the performance. Please note that a single row can't exceed 16K bytes and a single SQL statement can't exceed 1M bytes.
- Inserting with multiple threads can gain better performance too. However, depending on the system resources on the client side and the server side, as the number of inserting threads grows beyond a certain point, the performance may drop instead of growing. The proper number of threads needs to be tested in a specific environment to find the best number.
- Inserting with multiple threads can gain better performance too. However, depending on the system resources on the application side and the server side, as the number of inserting threads grows beyond a certain point, the performance may drop instead of growing. The proper number of threads needs to be tested in a specific environment to find the best number (see the sketch after this note).
:::
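As a hedged sketch of these two notes (not from the original docs), the code below combines batching with multiple inserting threads; table names, batch size and thread count are illustrative:

```python
import threading
import taos

def insert_worker(table: str) -> None:
    # one connection per thread; a single multi-row INSERT per statement
    # means far fewer round trips than inserting row by row
    conn = taos.connect(host="localhost")
    values = " ".join(f"(NOW + {i + 1}a, 10.1, 219, 0.31)" for i in range(100))
    conn.execute(f"INSERT INTO power.{table} VALUES {values}")
    conn.close()

threads = [threading.Thread(target=insert_worker, args=(f"d100{i}",)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```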
......
......@@ -15,7 +15,7 @@ import CJson from "./_c_opts_json.mdx";
## Introduction
A JSON string is sued in OpenTSDB JSON to represent one or more rows of data, for exmaple:
A JSON string is used in OpenTSDB JSON to represent one or more rows of data, for example:
```json
[
......
label: Insert
link:
type: generated-index
slug: /insert-data/
description: "TDengine supports multiple protocols of inserting data, including SQL, InfluxDB Line protocol, OpenTSDB Telnet protocol, OpenTSDB JSON protocol. Data can be inserted row by row, or in batch. Data from one or more collecting points can be inserted simultaneously. In the meantime, data can be inserted with multiple threads, out of order data and historical data can be inserted too. InfluxDB Line protocol, OpenTSDB Telnet protocol and OpenTSDB JSON protocol are the 3 kinds of schemaless insert protocols supported by TDengine. It's not necessary to create stable and table in advance if using schemaless protocols, and the schemas can be adjusted automatically according to the data to be inserted."
label: Insert Data
---
title: Insert
---
TDengine supports multiple protocols for inserting data, including SQL, InfluxDB Line protocol, OpenTSDB Telnet protocol, and OpenTSDB JSON protocol. Data can be inserted row by row or in batch. Data from one or more collection points can be inserted simultaneously. In the meantime, data can be inserted with multiple threads, and out-of-order data and historical data can be inserted too. InfluxDB Line protocol, OpenTSDB Telnet protocol and OpenTSDB JSON protocol are the 3 kinds of schemaless insert protocols supported by TDengine. It's not necessary to create STables and tables in advance if using schemaless protocols, and the schemas can be adjusted automatically according to the data to be inserted.
```mdx-code-block
import DocCardList from '@theme/DocCardList';
import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
<DocCardList items={useCurrentSidebarCategory().items}/>
```
\ No newline at end of file
......@@ -21,7 +21,7 @@ import CAsync from "./_c_async.mdx";
## Introduction
SQL is used by TDengine as the language for query. Application programs can send SQL statements to TDengine through REST API or connectors. TDengine CLI `taos` can also be used to execute SQL Ad-Hoc query. Here is the list of major query functionalities supported by TDengine:
SQL is used by TDengine as the query language. Application programs can send SQL statements to TDengine through the REST API or connectors. The TDengine CLI `taos` can also be used to execute ad-hoc SQL queries. Here is the list of major query functionalities supported by TDengine:
- Query on single column or multiple columns
- Filter on tags or data columns: >, <, =, <\>, like
......@@ -47,13 +47,15 @@ taos> select * from d1001 where voltage > 215 order by ts desc limit 2;
Query OK, 2 row(s) in set (0.001100s)
```
To meet the requirements in IoT use cases, some special functions have been added in TDengine, for example `twa` (Time Weighted Average), `spared` (The difference between the maximum and the minimum), `last_row` (the last row), more and more functions will be added to better perform in IoT use cases. Furthermore, continuous query is also supported in TDengine.
To meet the requirements in many use cases, some special functions have been added in TDengine, for example `twa` (Time Weighted Average), `spread` (the difference between the maximum and the minimum), and `last_row` (the last row); more functions will be added to better serve various use cases. Furthermore, continuous query is also supported in TDengine.
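The statements below sketch how these functions can be used on the subtable d1001 from the sample data; the time range is illustrative, and note that `twa` needs an explicit time range in the WHERE clause:

```sql
-- time weighted average of current over one day
SELECT TWA(current) FROM d1001
  WHERE ts >= '2018-10-03 00:00:00.000' AND ts <= '2018-10-03 23:59:59.999';
-- difference between the maximum and the minimum voltage
SELECT SPREAD(voltage) FROM d1001;
-- the most recent row
SELECT LAST_ROW(*) FROM d1001;
```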
For detailed query syntax please refer to [Select](/taos-sql/select).
## Join Query
## Aggregation among Tables
In IoT use cases, there are always multiple data collection points of same kind. A new concept, called STable (abbreviated for super table), is used in TDengine to represent a kind of data collection points, and a table is used to represent a specific data collection point. Tags are used by TDengine to represent the static properties of data collection points. A specific data collection point has its own values for static properties. By specifying filter conditions on tags, join query can be performed efficiently between all the tables belonging to same STable, i.e. same kind of data collection points, can be. Aggregate functions applicable for tables can be used directly on STables, syntax is exactly same.
In many use cases, there are multiple kinds of data collection points. A new concept, called STable (short for super table), is used in TDengine to represent one kind of data collection point, and a table is used to represent a specific data collection point. Tags are used by TDengine to represent the static properties of data collection points. A specific data collection point has its own values for its static properties. By specifying filter conditions on tags, aggregation can be performed efficiently among all the subtables created via the same STable, i.e. the same kind of data collection points. Aggregate functions applicable to tables can be used directly on STables; the syntax is exactly the same.
In summary, for a STable, its subtables can be aggregated by a simple query on the STable; it's kind of a join operation. However, tables belonging to different STables can not be aggregated.
### Example 1
......@@ -109,7 +111,7 @@ taos> SELECT SUM(current) FROM meters where location like "Beijing%" INTERVAL(1s
Query OK, 5 row(s) in set (0.001538s)
```
Down sample also supports time offset. For example, below SQL statement can be used to get the sum of current from all meters but each time window must start at the boundary of 500 milliseconds.
Down sampling also supports time offset. For example, the SQL statement below can be used to get the sum of current from all meters, where each time window starts at the boundary of 500 milliseconds.
```
taos> SELECT SUM(current) FROM meters INTERVAL(1s, 500a);
......@@ -123,7 +125,7 @@ taos> SELECT SUM(current) FROM meters INTERVAL(1s, 500a);
Query OK, 5 row(s) in set (0.001521s)
```
In IoT use cases, it's hard to align the timestamp of the data collected by each collection point. However, a lot of algorithms like FFT require the data to be aligned with same time interval and application programs have to handle by themselves in many systems. In TDengine, it's easy to achieve the alignment using down sampling.
In many use cases, it's hard to align the timestamps of the data collected by different collection points. However, a lot of algorithms, like FFT, require the data to be aligned on the same time interval, and in many systems application programs have to handle this by themselves. In TDengine, it's easy to achieve the alignment using down sampling.
Interpolation can be performed in TDengine if there is no data in a time range.
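For instance, the query below combines down sampling with interpolation: windows inside the given range that contain no data are filled by carrying the previous value forward (the FILL mode and the time range are illustrative):

```sql
-- 1-second windows; empty windows reuse the previous window's value
SELECT AVG(current) FROM meters
  WHERE ts >= '2018-10-03 14:38:00.000' AND ts <= '2018-10-03 14:40:00.000'
  INTERVAL(1s) FILL(PREV);
```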
......
label: Develop
link:
type: generated-index
slug: /develop
description: "The guide is for developers to quickly learn about the functionalities of TDengine, including fundamentals like data model, inserting data, query and advanced features like data subscription, continuous query. For each functionality, sample code of multiple programming languages are provided for developers to get started quickly."
label: Develop
\ No newline at end of file
---
title: Develop
---
This guide is for developers to quickly learn about the functionalities of TDengine, including fundamentals like the data model, inserting data and querying, as well as advanced features like data subscription and continuous query. For each functionality, sample code in multiple programming languages is provided for developers to get started quickly.
```mdx-code-block
import DocCardList from '@theme/DocCardList';
import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
<DocCardList items={useCurrentSidebarCategory().items}/>
```
\ No newline at end of file
---
title: Deploy
title: Deployment
---
## Prerequisites
......
......@@ -6,7 +6,7 @@ title: Manage DNODEs
It has been introduced how to deploy and start a cluster from scratch. Once a cluster is ready, the status of each dnode in the cluster can be shown at any time, a new dnode can be added to scale out the cluster, an existing dnode can be removed, and even load balancing can be performed manually.
:::note
All the commands to be introduced in this chapter need to be run after login TDengine, sometimes it's necessary to use root privilege.
All the commands to be introduced in this chapter need to be run through the TDengine CLI; sometimes it's necessary to use root privileges.
:::
......@@ -67,7 +67,7 @@ Query OK, 8 row(s) in set (0.001154s)
## Add DNODE
Launch TDengine CLI `taos` and execute to add the end point of a new dnode into the EPI (end point) list of the cluster. "fqdn:port" must be quoted using double quotes.
Launch the TDengine CLI `taos` and execute the command below to add the end point of a new dnode into the EP (end point) list of the cluster. "fqdn:port" must be quoted using double quotes.
```sql
CREATE DNODE "fqdn:port";
......@@ -100,7 +100,7 @@ Query OK, 2 row(s) in set (0.001316s)
## Drop DNODE
Launch TDengine CLI `taos` and execute command below to drop or remove a dndoe from the cluster. In the command, `dnodeId` can be gotten from `show dnodes`.
Launch the TDengine CLI `taos` and execute the command below to drop or remove a dnode from the cluster. In the command, `dnodeId` can be obtained from `show dnodes`.
```sql
DROP DNODE "fqdn:port";
......
......@@ -7,7 +7,7 @@ title: High Availability and Load Balancing
High availability of vnode and mnode can be achieved through replicas in TDengine.
The number of vnodes is associated with each DB, there can be multiple DBs in a TDengine cluster. For the purpose of operation, different number of replicas can be configured properly for each DB. When creating a database, the parameter `replica` is used to specify the number of replicas, the default value is 1. With single replica, the reliability of the system can't be guaranteed. Whenever one node is down, data service would be unavailable. The number of dnodes in the cluster must NOT be lower than the number of replicas set for any DB, otherwise the `create table` operation would fail with error "more dnodes are needed". Below SQL statement is used to create a database named as "demo" with 3 replicas.
The number of vnodes is associated with each DB; there can be multiple DBs in a TDengine cluster. For operational purposes, a different number of replicas can be configured for each DB. When creating a database, the parameter `replica` is used to specify the number of replicas, and the default value is 1. With a single replica, the high availability of the system can't be guaranteed: whenever one node is down, the data service becomes unavailable. The number of dnodes in the cluster must NOT be lower than the number of replicas set for any DB, otherwise the `create table` operation would fail with the error "more dnodes are needed". The SQL statement below creates a database named "demo" with 3 replicas.
```sql
CREATE DATABASE demo replica 3;
......@@ -40,7 +40,7 @@ If high availability is important for your system, both vnode and mnode must be
Load balancing will be triggered in 3 cases without manual intervention.
- When a new dnode is joined in the cluster, automatic load balancing may be triggered, some data from some dnodes may be transferred to the new dnode automatically.
- When a new dnode is joined in the cluster, automatic load balancing may be triggered, some data from some dnodes may be transferred to the new dnode automatically.
- When a dnode is removed from the cluster, the data from this dnode will be transferred to other dnodes automatically.
- When a dnode is too hot, i.e. too much data has been stored in it, automatic load balancing may be triggered to migrate some vnodes from this dnode to other dnodes.
:::tip
......
label: Cluster
link:
type: generated-index
slug: /cluster/
description: "TDengine can be deployed in cluster mode to increase the processing capability and high availability. In cluster mode, any data can have multiple replications for the purpose of high availability and load balance. TDengine cluster can be scaled out easily to support more data collecting points and more data."
keywords:
[
cluster,
high availability,
load balance,
scale out
]
---
title: Cluster
keywords: ["cluster", "high availability", "load balance", "scale out"]
---
TDengine can be deployed in cluster mode to increase processing capability and availability. In cluster mode, any piece of data can have multiple replicas for the purpose of high availability and load balancing. A TDengine cluster can be scaled out easily to support more data collection points and more data.
```mdx-code-block
import DocCardList from '@theme/DocCardList';
import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
<DocCardList items={useCurrentSidebarCategory().items}/>
```
\ No newline at end of file
---
sidebar_label: Data Types
title: Data Types
description: "The data types supported by TDengine include timestamp, float, JSON, etc"
---
......
......@@ -19,8 +19,24 @@ CREATE DATABASE [IF NOT EXISTS] db_name [KEEP keep] [DAYS days] [UPDATE 1];
3. UPDATE set to 2 means updating a part of the columns in a row is allowed; the columns for which no value is specified will be kept unchanged
4. The maximum length of a database name is 33 bytes.
5. The maximum length of a SQL statement is 65,480 bytes.
5. For more parameters that can be used when creating a database, like cache, blocks, days, keep, minRows, maxRows, wal, fsync, update, cacheLast, replica, quorum, maxVgroupsPerDb, ctime, comp, prec, Please refer to [Configuration Parameters](/reference/config/).
6. Below are the parameters that can be used when creating a database:
- cache: [Description](/reference/config/#cache)
- blocks: [Description](/reference/config/#blocks)
- days: [Description](/reference/config/#days)
- keep: [Description](/reference/config/#keep)
- minRows: [Description](/reference/config/#minrows)
- maxRows: [Description](/reference/config/#maxrows)
- wal: [Description](/reference/config/#wallevel)
- fsync: [Description](/reference/config/#fsync)
- update: [Description](/reference/config/#update)
- cacheLast: [Description](/reference/config/#cachelast)
- replica: [Description](/reference/config/#replica)
- quorum: [Description](/reference/config/#quorum)
- maxVgroupsPerDb: [Description](/reference/config/#maxvgroupsperdb)
- comp: [Description](/reference/config/#comp)
- precision: [Description](/reference/config/#precision)
7. Please note that all of the parameters mentioned in this section can be configured in the configuration file `taos.cfg` on the server side and take effect by default; they can be overridden if specified explicitly in the `create database` statement.
:::
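As a minimal sketch combining some of the options above (the database name and the values are illustrative, not recommendations):

```sql
-- keep data for 365 days, store 10 days of data per file, allow updating rows
CREATE DATABASE IF NOT EXISTS power KEEP 365 DAYS 10 UPDATE 1;
```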
## Show Current Configuration
......
......@@ -22,15 +22,15 @@ CREATE TABLE [IF NOT EXISTS] tb_name (timestamp_field_name TIMESTAMP, field1_nam
:::
### Create Table Using STable As Template
### Create Subtable Using STable As Template
```
CREATE TABLE [IF NOT EXISTS] tb_name USING stb_name TAGS (tag_value1, ...);
```
The above command creates a sub table using the specified super table as template and the specified tab values.
The above command creates a subtable using the specified super table as template and the specified tag values.
### Create Table Using STable As Template With A Part of Tags
### Create Subtable Using STable As Template With A Part of Tags
```
CREATE TABLE [IF NOT EXISTS] tb_name USING stb_name (tag_name1, ...) TAGS (tag_value1, ...);
......
---
sidebar_label: Insert
title: Insert
---
......@@ -106,7 +105,7 @@ Then data in this file can be inserted by below SQL statement:
INSERT INTO d1001 FILE '/tmp/csvfile.csv';
```
## CreateTables Automatically and Insert Rows From File
## Create Tables Automatically and Insert Rows From File
From version 2.1.5.0, tables can be created automatically using a super table as template when inserting data from a CSV file, like below:
......
---
sidebar_label: Select
title: Select
---
......
---
sidebar_label: Functions
title: Functions
---
......
---
sidebar_label: Window
title: Aggregate by Window
sidebar_label: Interval
title: Aggregate by Time Window
---
Aggregation by time window is supported in TDengine. For example, if each temperature sensor reports the temperature every second, the average temperature every 10 minutes can be retrieved by a query with a time window.
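A minimal sketch of such a query, assuming a STable named sensors with a temperature column:

```sql
-- average temperature per 10-minute window across all temperature sensors
SELECT AVG(temperature) FROM sensors INTERVAL(10m);
```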
......
---
sidebar_label: Limits
title: Limits and Restrictions
title: Limits & Restrictions
---
## Naming Rules
......
---
sidebar_label: JSON
title: JSON Type
---
......
---
sidebar-label: Escape
title: Escape
title: Escape Characters
---
## Escape Characters
The table below lists the escape characters used in TDengine.
| Escape Character | **Actual Meaning** |
| :--------------: | ------------------------ |
......
---
sidebar_label: Keywords
title: Reserved Keywords
title: Keywords
---
## Reserved Keywords
There are about 200 keywords reserved by TDengine. They can't be used as the name of a database, STable or table, whether in upper case, lower case or mixed case.
**Keywords List**
......
---
title: TAOS SQL
description: "The syntax, select, functions and tips supported by TAOS SQL "
description: "The syntax supported by TAOS SQL "
---
This document explains the syntax, select, functions and some tips that can be used in TAOS SQL. It would be easier to understand with some fundamental knowledge of SQL.
This document explains the syntax for operating databases, tables and STables, inserting data, selecting data, and using functions, plus some tips that can be used in TAOS SQL.
TAOS SQL is the major interface for users to write data into or query from TDengine. For ease of use, syntax similar to standard SQL is provided. However, please note that TAOS SQL is not standard SQL. Besides, because TDengine doesn't provide the functionality of deleting time series data, the corresponding statements are not provided in TAOS SQL.
......
---
sidebar_label: Planning
title: Capacity Planning
title: Resource Planning
---
Computing and storage resources need to be planned when using TDengine to build an IoT platform. This chapter describes how to plan the CPU, memory and disk resources required.
......
---
sidebar_label: Tolerance
title: Tolerance and Disaster Recovery
sidebar_label: Fault Tolerance
title: Fault Tolerance & Disaster Recovery
---
## Tolerance
## Fault Tolerance
TDengine uses **WAL**, i.e. Write Ahead Log, to achieve fault tolerance and make sure high availability.
TDengine uses **WAL**, i.e. Write Ahead Log, to achieve fault tolerance and high reliability.
When a data block is received by TDengine, the original data block is firstly written into WAL. The log in WAL will be deleted only after the data has been written into data files in the database. Data can be recovered from WAL in case the server is stopped abnormally due to any reason and then restarted.
......@@ -14,7 +14,7 @@ There are 2 configuration parameters related to WAL:
- walLevel: 0: wal is disabled; 1: wal is enabled without fsync; 2: wal is enabled with fsync.
- fsync: only valid when walLevel is set to 2; it specifies the interval of invoking fsync. If set to 0, fsync is invoked immediately once WAL is written.
To achieve absolutely no data loss, walLevel needs to be set to 2 and fsync needs to be set to 1. The penalty is the speed of data insertion downgrades. However, if the concurrent threads of data insertion on the client side can reach a big enough number, for example 50, the data insertion performance would be still good enough, our verification shows that the drop is only 30% compared to fsync is set to 3,000 milliseconds.
To achieve absolutely no data loss, walLevel needs to be set to 2 and fsync needs to be set to 1. The penalty is that the performance of data ingestion degrades. However, if the number of concurrent data insertion threads on the client side is big enough, for example 50, the data ingestion performance would still be good enough; our verification shows that the drop is only 30% compared to fsync set to 3,000 milliseconds.
## Disaster Recovery
......@@ -24,6 +24,6 @@ TDengine cluster is managed by mnode. To make sure the high availability of mnod
The number of replicas for time series data in TDengine is associated with each database, there can be a lot of databases in a cluster while each database can be configured with a different number of replicas. When creating a database, parameter `replica` is used to configure the number of replications. To achieve high availability, `replica` needs to be higher than 1.
The number of dnodes in a TDengine cluster must NOT be lower than the number of replicas for any database, otherwise it would fail when trying to create table.
The number of dnodes in a TDengine cluster must NOT be lower than the number of replicas for any database, otherwise it would fail when trying to create table.
As long as the dnodes of a TDengine cluster are deployed on different physical machines and replica number is set to bigger than 1, high availability can be achieved without any other assistance. If dnodes of TDengine cluster are deployed in geographically different data centers, disaster recovery can be achieved too.
---
sidebar_label: Import
title: Import Data
title: Data Import
---
There are multiple ways of importing data provided by TDengine: import with script, import from a data file, and import using `taosdump`.
......
---
sidebar_label: Export
title: Export Data
title: Data Export
---
There are two ways of exporting data from a TDengine cluster: one is using SQL statements in the TDengine CLI, the other is using `taosdump`.
......
---
sidebar_label: Monitor
title: Monitor TDengine
title: TDengine Monitoring
---
After TDengine is started, a database named `log` for monitoring is created automatically. Information about CPU, memory, disk, bandwidth, number of requests, disk I/O speed and slow queries is written into the `log` database at a predefined interval. Besides, some important system operations, like logon, creating users and dropping databases, as well as alerts and warnings generated in TDengine, are written into the `log` database too. A system operator can view the data in the `log` database from the TDengine CLI or from a web console.
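For a quick look from the TDengine CLI, the monitoring tables can be listed directly; the exact set of tables depends on the TDengine version:

```sql
USE log;
SHOW TABLES;
```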
......
---
sidebar_label: Optimize
title: Optimize Performance
title: Performance Optimization
---
After a TDengine cluster has been running for a long enough time, data files may become fragmented because of data updates, table deletions and removal of expired data, and query performance may be impacted. To resolve the problem of fragmentation, from version 2.1.3.0 a new SQL command `COMPACT` can be used to defragment the data files.
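A sketch of the defragmentation flow in the TDengine CLI; the vgroup ids passed to `COMPACT` are illustrative and can be looked up with `SHOW VGROUPS` first:

```sql
-- select a database first, list its vgroups, then compact the chosen ones
USE demo;
SHOW VGROUPS;
COMPACT VNODES IN (1, 2);
```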
......
---
sidebar_label: Diagnose
title: Diagnose Problems
title: Problem Diagnostics
---
## Diagnose Network Connection
## Network Connection Diagnostics
When the client is unable to access the server, the network connection between the client side and the server side needs to be checked to find out the root cause and resolve problems.
......
label: Administration
link:
slug: /operation/
type: generated-index
---
title: Administration
---
```mdx-code-block
import DocCardList from '@theme/DocCardList';
import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
<DocCardList items={useCurrentSidebarCategory().items}/>
```
\ No newline at end of file
......@@ -2,28 +2,29 @@
title: REST API
---
为支持各种不同类型平台的开发,TDengine 提供符合 REST 设计标准的 API,即 REST API。为最大程度降低学习成本,不同于其他数据库 REST API 的设计方法,TDengine 直接通过 HTTP POST 请求 BODY 中包含的 SQL 语句来操作数据库,仅需要一个 URL。REST 连接器的使用参见[视频教程](https://www.taosdata.com/blog/2020/11/11/1965.html)。
To support the development of various types of platforms, TDengine provides an API that conforms to REST principles, namely the REST API. To minimize the learning cost, unlike the REST APIs of other databases, TDengine operates the database directly through SQL commands contained in the BODY of HTTP POST requests, and only a URL is required.
注意:与原生连接器的一个区别是,RESTful 接口是无状态的,因此 `USE db_name` 指令没有效果,所有对表名、超级表名的引用都需要指定数据库名前缀。(从 2.2.0.0 版本开始,支持在 RESTful url 中指定 db_name,这时如果 SQL 语句中没有指定数据库名前缀的话,会使用 url 中指定的这个 db_name。从 2.4.0.0 版本开始,RESTful 默认由 taosAdapter 提供,要求必须在 url 中指定 db_name。)
:::note
One difference from the native connector is that the REST interface is stateless, so the `USE db_name` command has no effect. All references to table names and super table names need to specify the database name prefix. (Since version 2.2.0.0, specifying db_name in the RESTful URL is supported. If the database name prefix is not specified in the SQL command, the `db_name` specified in the URL will be used. Since version 2.4.0.0, the REST service is provided by taosAdapter by default, and it requires that `db_name` be specified in the URL.)
:::
## 安装
## Installation
RESTful 接口不依赖于任何 TDengine 的库,因此客户端不需要安装任何 TDengine 的库,只要客户端的开发语言支持 HTTP 协议即可。
The REST interface does not rely on any TDengine native library, so the client application does not need to install any TDengine libraries. It is enough that the client application's development language supports the HTTP protocol.
## 验证
## Verification
在已经安装 TDengine 服务器端的情况下,可以按照如下方式进行验证。
If the TDengine server is already installed, it can be verified as follows:
下面以 Ubuntu 环境中使用 curl 工具(确认已经安装)来验证 RESTful 接口的正常。
The following uses the `curl` tool (confirm that it is installed) in an Ubuntu environment to verify that the REST interface is working.
下面示例是列出所有的数据库,请把 h1.taosdata.com 和 6041(缺省值)替换为实际运行的 TDengine 服务 fqdn 和端口号:
The following example lists all databases, replacing `h1.taosdata.com` and `6041` (the default port) with the actual running TDengine service FQDN and port number.
```bash
curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'show databases;'
h1.taosdata.com:6041/rest/sql
curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'show databases;' h1.taosdata.com:6041/rest/sql
```
返回值结果如下表示验证通过:
The following return value indicates that the verification passed.
```json
{
......@@ -72,111 +73,111 @@ h1.taosdata.com:6041/rest/sql
}
```
## HTTP 请求格式
## HTTP Request URL Format
```
http://<fqdn>:<port>/rest/sql/[db_name]
```
参数说明:
Parameter Description:
- fqnd: 集群中的任一台主机 FQDN 或 IP 地址
- port: 配置文件中 httpPort 配置项,缺省为 6041
- db_name: 可选参数,指定本次所执行的 SQL 语句的默认数据库库名。(从 2.2.0.0 版本开始支持)
- fqdn: FQDN or IP address of any host in the cluster
- port: httpPort configuration item in the configuration file, default is 6041
- db_name: Optional parameter that specifies the default database name for the executed SQL command. (supported since version 2.2.0.0)
例如:http://h1.taos.com:6041/rest/sql/test 是指向地址为 h1.taos.com:6041 的 url,并将默认使用的数据库库名设置为 test。
For example, `http://h1.taos.com:6041/rest/sql/test` is a URL to `h1.taos.com:6041` and sets the default database name to `test`.
HTTP 请求的 Header 里需带有身份认证信息,TDengine 支持 Basic 认证与自定义认证两种机制,后续版本将提供标准安全的数字签名机制来做身份验证。
The header of the HTTP request must carry the authentication information. TDengine supports both Basic authentication and custom authentication mechanisms, and subsequent versions will provide a standard secure digital signature mechanism for authentication.
- 自定义身份认证信息如下所示(token 稍后介绍)
- The custom authentication information is as follows (the token is introduced later)
```
Authorization: Taosd <TOKEN>
```
- Basic 身份认证信息如下所示
- Basic authentication information is shown below
```
Authorization: Basic <TOKEN>
```
HTTP 请求的 BODY 里就是一个完整的 SQL 语句,SQL 语句中的数据表应提供数据库前缀,例如 \<db_name>.\<tb_name>。如果表名不带数据库前缀,又没有在 url 中指定数据库名的话,系统会返回错误。因为 HTTP 模块只是一个简单的转发,没有当前 DB 的概念。
The BODY of the HTTP request is a complete SQL command, and the data table in the SQL statement should be provided with a database prefix, e.g., `db_name.tb_name`. If the table name does not have a database prefix and the database name is not specified in the URL, the system will return an error, because the HTTP module is a simple forwarder and has no awareness of the current DB.
使用 curl 通过自定义身份认证方式来发起一个 HTTP Request,语法如下:
Use `curl` to initiate an HTTP request with a custom authentication method, with the following syntax.
```bash
curl -H 'Authorization: Basic <TOKEN>' -d '<SQL>' <ip>:<PORT>/rest/sql/[db_name]
```
或者
Or
```bash
curl -u username:password -d '<SQL>' <ip>:<PORT>/rest/sql/[db_name]
```
其中,`TOKEN` 为 `{username}:{password}` 经过 Base64 编码之后的字符串,例如 `root:taosdata` 编码后为 `cm9vdDp0YW9zZGF0YQ==`
where `TOKEN` is the string after Base64 encoding of `{username}:{password}`, e.g. `root:taosdata` is encoded as `cm9vdDp0YW9zZGF0YQ==`.
## HTTP 返回格式
## HTTP Return Format
返回值为 JSON 格式,如下:
The return result is in JSON format, as follows:
```json
{
"status": "succ",
"head": ["ts","current", …],
"column_meta": [["ts",9,8],["current",6,4], ],
"head": ["ts", "current", ...],
"column_meta": [["ts",9,8],["current",6,4], ...],
"data": [
["2018-10-03 14:38:05.000", 10.3, ],
["2018-10-03 14:38:15.000", 12.6, ]
["2018-10-03 14:38:05.000", 10.3, ...],
["2018-10-03 14:38:15.000", 12.6, ...]
],
"rows": 2
}
```
说明:
Description:
- status: 告知操作结果是成功还是失败。
- head: 表的定义,如果不返回结果集,则仅有一列 “affected_rows”。(从 2.0.17.0 版本开始,建议不要依赖 head 返回值来判断数据列类型,而推荐使用 column_meta。在未来版本中,有可能会从返回值中去掉 head 这一项。)
- column_meta: 从 2.0.17.0 版本开始,返回值中增加这一项来说明 data 里每一列的数据类型。具体每个列会用三个值来说明,分别为:列名、列类型、类型长度。例如`["current",6,4]`表示列名为“current”;列类型为 6,也即 float 类型;类型长度为 4,也即对应 4 个字节表示的 float。如果列类型为 binary 或 nchar,则类型长度表示该列最多可以保存的内容长度,而不是本次返回值中的具体数据长度。当列类型是 nchar 的时候,其类型长度表示可以保存的 unicode 字符数量,而不是 bytes。
- data: 具体返回的数据,一行一行的呈现,如果不返回结果集,那么就仅有 [[affected_rows]]。data 中每一行的数据列顺序,与 column_meta 中描述数据列的顺序完全一致。
- rows: 表明总共多少行数据。
- status: tells whether the operation result is success or failure.
- head: the definition of the table, or just one column "affected_rows" if no result set is returned. (As of version 2.0.17.0, it is recommended not to rely on the head return value to determine the data column type but rather use column_meta. In later versions, the head item may be removed from the return value.)
- column_meta: this item is added to the return value to indicate the data type of each column in the data, with version 2.0.17.0 and later versions. Each column is described by three values: column name, column type, and type length. For example, `["current",6,4]` means that the column name is "current", the column type is 6, i.e. the float type, and the type length is 4, i.e. a float is represented by 4 bytes. If the column type is binary or nchar, the type length indicates the maximum length of content that can be stored in the column, not the length of the specific data in this return value. When the column type is nchar, the type length indicates the number of Unicode characters that can be saved, not bytes.
- data: The exact data returned, presented row by row, or just [[affected_rows]] if no result set is returned. The order of the data columns in each row of data is the same as that of the data columns described in column_meta.
- rows: Indicates how many rows of data there are.
column_meta 中的列类型说明:
The column types in column_meta are described as follows:
- 1:BOOL
- 2:TINYINT
- 3:SMALLINT
- 4:INT
- 5:BIGINT
- 6:FLOAT
- 7:DOUBLE
- 8:BINARY
- 9:TIMESTAMP
- 10:NCHAR
- 1:BOOL
- 2:TINYINT
- 3:SMALLINT
- 4:INT
- 5:BIGINT
- 6:FLOAT
- 7:DOUBLE
- 8:BINARY
- 9:TIMESTAMP
- 10:NCHAR
## 自定义授权码
## Custom Authorization Code
HTTP 请求中需要带有授权码 `<TOKEN>`,用于身份识别。授权码通常由管理员提供,可简单的通过发送 `HTTP GET` 请求来获取授权码,操作如下:
HTTP requests require an authorization code `<TOKEN>` for identification purposes. The administrator usually provides the authorization code, and it can be obtained simply by sending an `HTTP GET` request as follows:
```bash
curl http://<fqnd>:<port>/rest/login/<username>/<password>
```
其中,`fqdn` 是 TDengine 数据库的 fqdn 或 ip 地址,port 是 TDengine 服务的端口号,`username` 为数据库用户名,`password` 为数据库密码,返回值为 `JSON` 格式,各字段含义如下:
Where `fqdn` is the FQDN or IP address of the TDengine database. `port` is the port number of the TDengine service. `username` is the database username. `password` is the database password. The return value is in `JSON` format, and the meaning of each field is as follows.
- status:请求结果的标志位
- status: flag bit of the request result
- code:返回值代码
- code: return value code
- desc:授权码
- desc: authorization code
获取授权码示例:
Example of getting an authorization code:
```bash
curl http://192.168.0.1:6041/rest/login/root/taosdata
```
返回值:
Response body:
```json
{
......@@ -186,15 +187,15 @@ curl http://192.168.0.1:6041/rest/login/root/taosdata
}
```
## 使用示例
## Usage Examples
- 在 demo 库里查询表 d1001 的所有记录:
- Query all records from table d1001 of database demo:
```bash
curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'select * from demo.d1001' 192.168.0.1:6041/rest/sql
```
返回值:
Response body:
```json
{
......@@ -214,13 +215,13 @@ curl http://192.168.0.1:6041/rest/login/root/taosdata
}
```
- 创建库 demo:
- Create database demo:
```bash
curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'create database demo' 192.168.0.1:6041/rest/sql
```
返回值:
Response body:
```json
{
......@@ -232,17 +233,17 @@ curl http://192.168.0.1:6041/rest/login/root/taosdata
}
```
## 其他用法
## Other Uses
### 结果集采用 Unix 时间戳
### Unix timestamps for result sets
HTTP 请求 URL 采用 `sqlt` 时,返回结果集的时间戳将采用 Unix 时间戳格式表示,例如
When the HTTP request URL uses `/rest/sqlt`, the returned result set's timestamp value will be in Unix timestamp format, for example:
```bash
curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'select * from demo.d1001' 192.168.0.1:6041/rest/sqlt
```
返回值:
Response body:
```json
{
......@@ -262,15 +263,15 @@ curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'select * from demo.d1001
}
```
### 结果集采用 UTC 时间字符串
### UTC format for the result set
HTTP 请求 URL 采用 `sqlutc` 时,返回结果集的时间戳将采用 UTC 时间字符串表示,例如
When the HTTP request URL uses `/rest/sqlutc`, the timestamp of the returned result set will be expressed in UTC format, for example:
```bash
curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'select * from demo.t1' 192.168.0.1:6041/rest/sqlutc
```
返回值:
Response body:
```json
{
......@@ -290,18 +291,17 @@ HTTP 请求 URL 采用 `sqlutc` 时,返回结果集的时间戳将采用 UTC
}
```
## 重要配置项
## Important Configuration Items
下面仅列出一些与 RESTful 接口有关的配置参数,其他系统参数请看配置文件里的说明。
Only some configuration parameters related to the RESTful interface are listed below. Please see the description in the configuration file for other system parameters.
- 对外提供 RESTful 服务的端口号,默认绑定到 6041(实际取值是 serverPort + 11,因此可以通过修改 serverPort 参数的设置来修改)。
- httpMaxThreads: 启动的线程数量,默认为 2(2.0.17.0 版本开始,默认值改为 CPU 核数的一半向下取整)。
- restfulRowLimit: 返回结果集(JSON 格式)的最大条数,默认值为 10240。
- httpEnableCompress: 是否支持压缩,默认不支持,目前 TDengine 仅支持 gzip 压缩格式。
- httpDebugFlag: 日志开关,默认 131。131:仅错误和报警信息,135:调试信息,143:非常详细的调试信息,默认 131。
- httpDbNameMandatory: 是否必须在 RESTful url 中指定默认的数据库名。默认为 0,即关闭此检查。如果设置为 1,那么每个 RESTful url 中都必须设置一个默认数据库名,否则无论此时执行的 SQL 语句是否需要指定数据库,都会返回一个执行错误,拒绝执行此 SQL 语句。
- The port number of the external RESTful service is bound to 6041 by default (the actual value is serverPort + 11, so it can be changed by modifying the setting of the serverPort parameter).
- httpMaxThreads: the number of threads to start, default is 2 (the default value is rounded down to half of the CPU cores with version 2.0.17.0 and later versions).
- restfulRowLimit: the maximum number of rows in the returned result set (in JSON format). The default value is 10240.
- httpEnableCompress: whether to support compression, the default is not supported. Currently, TDengine only supports the gzip compression format.
- httpDebugFlag: logging switch, default is 131. 131: error and alarm messages only, 135: debug messages, 143: very detailed debug messages.
- httpDbNameMandatory: whether users must specify the default database name in the RESTful URL. The default is 0, which turns off this check. If set to 1, users must put a default database name in every RESTful URL; otherwise, an execution error is returned and the SQL statement is rejected, regardless of whether the SQL statement executed at this time requires a specified database.
:::note
如果使用 taosd 提供的 REST API, 那么以上配置需要写在 taosd 的配置文件 taos.cfg 中。如果使用 taosAdaper 提供的 REST API, 那么需要参考 taosAdaper [对应的配置方法](/reference/taosadapter/)。
If you are using the REST API provided by taosd, the above configuration should be written in taosd's configuration file taos.cfg. If you use the REST API of taosAdapter, please refer to taosAdapter's [corresponding configuration method](/reference/taosadapter/).
:::
--
---
title: Connector
---
TDengine provides a rich application development interface. To facilitate users to develop their own applications quickly, TDengine supports connectors for multiple programming languages, including official connectors for C/C++, Java, Python, Go, Node.js, C#, and Rust. These connectors support connecting to TDengine clusters using both native interfaces (taosc) and REST interfaces (not supported in some languages yet). Community developers have also contributed several unofficial connectors, such as the ADO.NET connector, the Lua connector, and the PHP connector.
TDengine provides a rich set of APIs (application development interfaces). To facilitate users to develop their applications quickly, TDengine supports connectors for multiple programming languages, including official connectors for C/C++, Java, Python, Go, Node.js, C#, and Rust. These connectors support connecting to TDengine clusters using both native interfaces (taosc) and REST interfaces (not supported in a few languages yet). Community developers have also contributed several unofficial connectors, such as the ADO.NET connector, the Lua connector, and the PHP connector.
! [image-connector](/img/connector.png)
![image-connector](/img/connector.png)
## Supported platforms
Currently, TDengine's native interface connectors support hardware platforms such as X64/X86/ARM64/ARM32/MIPS/Alpha and development environments such as Linux/Win64/Win32. The comparison matrix is as follows.
| **CPU** | **OS** | **JDBC** | **Python** | **Go** | **Node.js** | **C#** | **Rust** | C/C++ |
| -------------- | --------- | -------- | ---------- | ------ | ----------- | ------ | -------- | ----- |
| | **X86 64bit** | **Linux** | ● | ● | ● | ● | ● | ● | ● | ●
| **X86 64bit** | **Win64** | ● | ● | ● | ● | ● | ● | ● | ●
| **X86 64bit** | **Win32** | ● | ● ● | ○ | ○ | ●
| **X86 32bit** | **Win32** | ○ | ○ | ○ | ○ | ○ | ●
| **ARM64** | **Linux** | ● | ● ● | ○ | ○ | ●
| **ARM32** | **Linux** | ● ● ● ● ● ● | ○ | ○ | ●
| **MIPS Longcore** | **Linux** | ○ | ○ | ○ | ○ | ○ | ○ | ○
| **Alpha Shenwei** | **Linux** | ○ | ○ ○ | -- | -- | --- | --- | ○ |
| **X86 Haiguang** | **Linux** | ○ | ○ | ○ | -- | -- | --- | ○ |
| ------- | ------ | -------- | ---------- | ------ | ----------- | ------ | -------- | ----- |
| **X86 64bit** | **Linux** | ● | ● | ● | ● | ● | ● | ● |
| **X86 64bit** | **Win64** | ● | ● | ● | ● | ● | ● | ● |
| **X86 64bit** | **Win32** | ● | ● | ● | ● | ○ | ○ | ● |
| **X86 32bit** | **Win32** | ○ | ○ | ○ | ○ | ○ | ○ | ● |
| **ARM64** | **Linux** | ● | ● | ● | ● | ○ | ○ | ● |
| **ARM32** | **Linux** | ● | ● | ● | ● | ○ | ○ | ● |
| **MIPS** | **Linux** | ○ | ○ | ○ | ○ | ○ | ○ | ○ |
Where ● means the official test verification passed, ○ means the unofficial test verification passed, -- means no assurance.
......@@ -46,13 +44,13 @@ Comparing the connector support for TDengine functional features as follows.
| **Functional Features** | **Java** | **Python** | **Go** | **C#** | **Node.js** | **Rust** |
| -------------- | -------- | ---------- | ------ | ------ | ----------- | -------- |
| **Connection Management** | Support | Support | Support | Support | Support | Support | Support
| Support | Support | Support | Support | Support | Support | Support | Support | Support
| Support | Support | Support | Support | Support | Support | Support | Support | Support
| **Parameter Binding** | Support | Support | Support | Support | Support | Support | Support
| Support | Support | Support | Support | Support | Support | Support | Not Supported
| **Schemaless** | Support | Support | Support | Support | Support | Support | Support
| **DataFrame** | Not Supported | Support | Not Supported | Not Supported | Not Supported | Not Supported
| **Connection Management** | Support | Support | Support | Support | Support | Support |
| **Regular Query** | Support | Support | Support | Support | Support | Support |
| **Continuous Query** | Support | Support | Support | Support | Support | Support |
| **Parameter Binding** | Support | Support | Support | Support | Support | Support |
| **Subscription** | Support | Support | Support | Support | Support | Not Supported |
| **Schemaless** | Support | Support | Support | Support | Support | Support |
| **DataFrame** | Not Supported | Support | Not Supported | Not Supported | Not Supported | Not Supported |
:::info
Due to the different database framework specifications for various programming languages, it does not mean that every C/C++ interface needs a corresponding wrapper.
......@@ -62,14 +60,14 @@ The different database framework specifications for various programming language
| **Functional Features** | **Java** | **Python** | **Go** | **C# (not supported yet)** | **Node.js** | **Rust** |
| ------------------------------ | -------- | ---------- | -------- | ------------------ | ----------- | -------- |
| **Connection Management** | Support | Support | Support | N/A | Support | Support | Support
| Support | Support | N/A | Support | Support | Support
| Support | Support | N/A | Support | Support | Support
| Support | N/A | Support | N/A | Support | N/A
| | N/A | Support | N/A | Support | N/A
| **Schemaless** | Not supported | N/A | Not supported | Not supported | N/A
| N/A | Not Supported | Not Supported | N/A
| **DataFrame** | Not supported | Support | Not supported | N/A | Not supported | Not supported
| **Connection Management** | Support | Support | Support | N/A | Support | Support |
| **Regular Query** | Support | Support | Support | N/A | Support | Support |
| **Continuous Query** | Support | Support | Support | N/A | Support | Support |
| **Parameter Binding** | Not Supported | Not Supported | Not Supported | N/A | Not Supported | Not Supported |
| **Subscription** | Not Supported | Not Supported | Not Supported | N/A | Not Supported | Not Supported |
| **Schemaless** | Not Supported | Not Supported | Not Supported | N/A | Not Supported | Not Supported |
| **Bulk Pulling (based on WebSocket)** | Support | Not Supported | Not Supported | N/A | Not Supported | Not Supported |
| **DataFrame** | Not Supported | Support | Not Supported | N/A | Not Supported | Not Supported |
:::warning
......@@ -79,12 +77,12 @@ The different database framework specifications for various programming language
import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";
import InstallOnWindows from ". /_linux_install.mdx";
import InstallOnLinux from ". /_windows_install.mdx";
import VerifyWindows from ". /_verify_windows.mdx";
import VerifyLinux from ". /_verify_linux.mdx";
import InstallOnWindows from "./_linux_install.mdx";
import InstallOnLinux from "./_windows_install.mdx";
import VerifyWindows from "./_verify_windows.mdx";
import VerifyLinux from "./_verify_linux.mdx";
## Install client driver
## Install Client Driver
:::info
The client driver needs to be installed if you use the native interface connector on a system that does not have the TDengine server software installed.
......
......@@ -15,17 +15,19 @@ import PkgList from "/components/PkgList";
Once the package is unzipped, you will see the following files in the directory:
- _install_client.sh_: install script
- _ taos.tar.gz_: application driver package
- _ driver_: TDengine application driver
- _taos.tar.gz_: client driver package
- _driver_: TDengine client driver
- _examples_: some example programs of different programming languages (C/C#/go/JDBC/MATLAB/python/R)
You can run `install_client.sh` to install it.
4. Edit taos.cfg
Edit the `taos.cfg` file (the full path is `/etc/taos/taos.cfg` by default), and modify `firstEP` with the actual TDengine server's end point, for example `h1.tdengine.com:6030`
:::tip
1. If the computer does not run the TDengine service but installs the TDengine application driver, then you need to config `firstEP` in `taos.cfg` only, and there is no need to configure `FQDN`;
1. If the computer does not run the TDengine service but has the TDengine client driver installed, then you only need to configure `firstEP` in `taos.cfg`, and there is no need to configure `FQDN`;
2. If you encounter the "Unable to resolve FQDN" error, please make sure the FQDN in the `/etc/hosts` file of the current computer is correctly configured, or the DNS service is correctly configured.
......
......@@ -4,7 +4,7 @@
Since the TDengine client driver is written in C, using the native connection requires loading the client driver shared library file, which is usually included in the TDengine installer. You can install either the standard TDengine server installation package or the [TDengine client installation package](/get-started/). For Windows development, you need to install the corresponding [Windows client](https://www.taosdata.com/cn/all-downloads/#TDengine-Windows-Client) for TDengine.
- libtaos.so: After successful installation of TDengine on a Linux system, the dependent Linux version of the client driver libtaos.so file will be automatically copied to /usr/lib/libtaos.so, which is included in the Linux scanable path and does not need to be specified separately.
- libtaos.so: After successful installation of TDengine on a Linux system, the dependent Linux version of the client driver `libtaos.so` file will be automatically linked to `/usr/lib/libtaos.so`, which is included in the default library search path on Linux and does not need to be specified separately.
- taos.dll: After installing the client on Windows, the dependent Windows version of the client driver taos.dll file will be automatically copied to the system default search path C:/Windows/System32, again without the need to specify it separately.
:::
Execute `taos` directly under the Linux shell to connect to the TDengine service and enter the TDengine CLI interface, as shown in the following example.
Execute TDengine CLI program `taos` directly from the Linux shell to connect to the TDengine service and enter the TDengine CLI interface, as shown in the following example.
```text
$ taos
......
Go to the C:\TDengine directory under cmd and execute `taos.exe` directly to connect to the TDengine service and enter the TDengine CLI interface, for example, as follows:
Go to the `C:\TDengine` directory from `cmd` and execute TDengine CLI program `taos.exe` directly to connect to the TDengine service and enter the TDengine CLI interface, for example, as follows:
```text
C:\TDengine>taos
......
......@@ -4,16 +4,16 @@ import PkgList from "/components/PkgList";
<PkgList type={1} sys="Windows" />
[all downloads](https://www.taosdata.com/cn/all-downloads/)
[All downloads](https://www.taosdata.com/cn/all-downloads/)
2. Execute the installer, select the default value as prompted, and complete the installation
3. Installation path
The default installation path is C:\TDengine, including the following files (directories).
- _taos.exe_ : TDengine CLI command line program
- _taos.exe_ : TDengine CLI command-line program
- _cfg_ : configuration file directory
- _driver_: application driver dynamic link library
- _driver_: client driver dynamic link library
- _examples_: sample programs bash/C/C#/go/JDBC/Python/Node.js
- _include_: header files
- _log_ : log file
......@@ -25,7 +25,7 @@ import PkgList from "/components/PkgList";
:::tip
1. If you use FQDN to connect to the server, you must ensure the local network environment DNS is configured, or add FQDN addressing records in the `hosts` file, e.g., edit C:\Windows\system32\drivers\etc\hosts and add a record like the following: `192.168.1.99 h1.tados.com`. 2.
2. Uninstall: Run unins000.exe to uninstall the TDengine application driver.
1. If you use an FQDN to connect to the server, you must ensure the local network environment's DNS is configured, or add FQDN addressing records in the `hosts` file, e.g., edit C:\Windows\system32\drivers\etc\hosts and add a record like the following: `192.168.1.99 h1.taos.com`.
2. Uninstall: Run unins000.exe to uninstall the TDengine client driver.
:::
......@@ -4,7 +4,7 @@ sidebar_label: C/C++
title: C/C++ Connector
---
C/C++ developers can use TDengine's client driver, the C/C++ connector (hereafter referred to as the TDengine client driver), to develop their applications to connect to TDengine clusters for data storing, querying, and other functions. To use it, you need to include the TDengine header file _taos.h_, which lists the function prototypes of the provided APIs; the application also needs to link to the corresponding dynamic libraries on the platform where it is located.
C/C++ developers can use TDengine's client driver, i.e. the C/C++ connector, to develop their applications to connect to TDengine clusters for data writing, querying, and other functions. To use it, you need to include the TDengine header file _taos.h_, which lists the function prototypes of the provided APIs; the application also needs to link to the corresponding dynamic library on the platform where it is located.
```c
#include <taos.h>
......@@ -22,19 +22,19 @@ The dynamic libraries for the TDengine client driver are located in.
## Supported platforms
Please refer to [list of supported platforms](/reference/connector#supported platforms)
Please refer to [list of supported platforms](/reference/connector#supported-platforms)
## Supported versions
The version number of the TDengine client driver and the version number of the TDengine server require one-to-one correspondence and recommend using the same client driver as the TDengine server. Although a lower version of the client driver is compatible with a higher version of the server, if the first three version numbers are the same (i.e., only the fourth version number is different), it is not recommended. It is strongly discouraged to use a high version of the client driver to access a low version of the server.
The version number of the TDengine client driver corresponds one-to-one with the version number of the TDengine server, and it is recommended to use a client driver of exactly the same version as the TDengine server. Although a lower version of the client driver can work with a higher version of the server if the first three version numbers are the same (i.e., only the fourth version number is different), this is not recommended. It is strongly discouraged to use a higher version of the client driver to access a lower version of the TDengine server.
## Installation steps
Please refer to the [Installation Guide](/reference/connector#installation steps) for TDengine client driver installation
Please refer to the [Installation Steps](/reference/connector#installation-steps) for TDengine client driver installation
## Establishing a connection
The basic process of accessing a TDengine cluster using the client driver is to establish a connection, query and write, close the connection, and clear the resource.
The basic process of accessing a TDengine cluster using the client driver is to establish a connection, query and write data, close the connection, and clear the resource.
The following is sample code for establishing a connection; it omits the query and write sections and shows how to establish a connection, close a connection, and clear resources.
......@@ -51,12 +51,12 @@ The following is sample code for establishing a connection, which omits the quer
taos_cleanup();
```
In the above example code, `taos_connect` establishes a connection to port 6030 on the host where the client application is located, `taos_close` closes the current connection, and `taos_cleanup` clears the resources requested and used by the client driver.
In the above example code, `taos_connect()` establishes a connection to port 6030 on the host where the client application is located, `taos_close()` closes the current connection, and `taos_cleanup()` clears the resources requested and used by the client driver.
:::note
- Unless otherwise specified, when the return value of an API is an integer, _0_ means success and other values are error codes representing the reason for failure; when the return value is a pointer, _NULL_ means failure.
- All error codes and their corresponding causes are described in the taoserror.h file.
- All error codes and their corresponding causes are described in the `taoserror.h` file.
:::
......@@ -66,7 +66,7 @@ This section shows sample code for standard access methods to TDengine clusters
### Synchronous query example
<details
<details>
<summary>Synchronous query</summary>
```c
......@@ -120,15 +120,15 @@ This section shows sample code for standard access methods to TDengine clusters
</details>
:::info
More example code and downloads are available at [github](https://github.com/taosdata/TDengine/tree/develop/examples/c)
You can find it in the installation directory under the examples/c path. This directory has a makefile and can be compiled under Linux by executing make directly.
**Hint:** When compiling in an ARM environment, please remove `-msse4.2` from the makefile. This option is only supported on x64/x86 hardware platforms.
More example code and downloads are available at [GitHub](https://github.com/taosdata/TDengine/tree/develop/examples/c).
You can find it in the installation directory under the `examples/c` path. This directory has a makefile and can be compiled under Linux by executing `make` directly.
**Hint:** When compiling in an ARM environment, please remove `-msse4.2` from the makefile. This option is only supported on the x64/x86 hardware platforms.
:::
## API reference
The following describes the basic API, synchronous API, asynchronous API, subscription API, and modeless write API of TDengine client driver, respectively.
The following describes the basic API, synchronous API, asynchronous API, subscription API, and schemaless write API of the TDengine client driver, respectively.
### Basic API
......@@ -154,7 +154,7 @@ The base API is used to do things like create database connections and provide a
- `TAOS *taos_connect(const char *host, const char *user, const char *pass, const char *db, int port)`
Creates a database connection and initializes the connection context. Among the parameters required from the user are
Creates a database connection and initializes the connection context. Among the parameters required from the user are:
- host: FQDN of any node in the TDengine cluster
- user: user name
......@@ -162,7 +162,7 @@ The base API is used to do things like create database connections and provide a
- db: database name. If the user does not provide one, the connection can still be established correctly and the user can create a new database through this connection. If the user provides a database name, it means the database has already been created and it will be used by default.
- port: the port the taosd program is listening on
A null return value indicates a failure. The application needs to save the returned parameters for subsequent use.
NULL indicates a failure. The application needs to save the returned parameters for subsequent use.
:::info
The same process can connect to multiple TDengine clusters using different host/port pairs
......@@ -187,7 +187,7 @@ The APIs described in this subsection are all synchronous interfaces. After bein
- `TAOS_RES* taos_query(TAOS *taos, const char *sql)`
Executes an SQL statement, either a DQL, DML, or DDL statement. The `taos` parameter is a handle obtained with `taos_connect()`. You can't tell if the result failed by whether the return value is `NULL`, but by parsing the error code in the result set with the `taos_errno()` function.
Executes an SQL command, either a DQL, DML, or DDL statement. The `taos` parameter is a handle obtained with `taos_connect()`. Whether the execution failed can NOT be determined by checking whether the return value is `NULL`; instead, parse the error code in the result set with the `taos_errno()` function.
- `int taos_result_precision(TAOS_RES *res)`
......@@ -248,15 +248,15 @@ TDengine version 2.0 and above recommends that each thread of a database applica
### Asynchronous query API
TDengine also provides a higher performance asynchronous API to handle data insertion and query operations. Given the same hardware and software environment, the asynchronous API can run data insertion 2 to 4 times faster than the synchronous API. The asynchronous API is called non-blocking and returns immediately before the system completes a specific database operation. The calling thread can go to work on other tasks, which can improve the performance of the whole application. Asynchronous APIs are particularly advantageous in the case of severe network latency.
TDengine also provides a set of asynchronous APIs to handle data insertion and query operations with higher performance. Given the same hardware and software environment, the asynchronous APIs can run data insertion 2 to 4 times faster than the synchronous APIs. The asynchronous APIs are non-blocking and return immediately, before the system completes a specific database operation. The calling thread can go to work on other tasks, which can improve the performance of the whole application. Asynchronous APIs are particularly advantageous in the case of severe network latency.
The asynchronous APIs require the application to provide a callback function with the following parameters: the first two parameters are consistent, and the third parameter depends on the API. The first parameter, param, is provided to the system when the application calls the asynchronous API. It is used for the callback so that the application can retrieve the context of the specific operation, depending on the implementation. The second parameter is the result set of the SQL operation. If it is empty, such as insert operation, it means that there are no records returned, and if it is not empty, such as select operation, it means that there are records returned.
The asynchronous APIs require the application to provide a callback function with the following parameters: the first two parameters are the same across APIs, and the third parameter depends on the API. The first parameter, `param`, is provided to the system when the application calls the asynchronous API, and is passed back in the callback so that the application can retrieve the context of the specific operation, depending on the implementation. The second parameter is the result set of the SQL operation. If it is NULL, as for an insert operation, no records are returned; if it is not NULL, as for a select operation, records are returned.
The asynchronous APIs place higher demands on the user, so apply them selectively according to the specific application scenario. The following are two important asynchronous APIs.
- `void taos_query_a(TAOS *taos, const char *sql, void (*fp)(void *param, TAOS_RES *, int code), void *param);`
Execute SQL statements asynchronously.
Executes an SQL command asynchronously.
- taos: the database connection returned by calling `taos_connect()`
- sql: the SQL statement to be executed
......@@ -270,13 +270,13 @@ The asynchronous API has relatively high user requirements, so users can use it
- res: the result set returned by the `taos_query_a()` callback
- fp: callback function. Its parameter `param` is a user-definable parameter structure passed to the callback function; `numOfRows` is the number of rows of the fetched data (not of the entire query result set). In the callback function, the application can iterate forward to fetch each row of records in the batch by calling `taos_fetch_row()`. After reading all the rows in a block, the application needs to continue calling `taos_fetch_rows_a()` in the callback function to get the next batch of rows for processing, until the number of rows returned, `numOfRows`, is zero (all results have been returned) or negative (query error).
TDengine's asynchronous APIs all use a non-blocking call pattern. Applications can open multiple tables simultaneously using multiple threads and perform queries or inserts on each open table at the same time. It is important to note that **client applications must ensure that operations on the same table are fully serialized**. i.e., no second insert or query operation can be performed while an insert or query operation on the same table is incomplete (not returned).
All TDengine's asynchronous APIs use a non-blocking call pattern. Applications can open multiple tables simultaneously using multiple threads and perform queries or inserts on each open table at the same time. It is important to note that **client applications must ensure that operations on the same table are fully serialized**. i.e., no second insert or query operation can be performed while an insert or query operation on the same table is incomplete (not returned).
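The sketch below shows how `taos_query_a()` and `taos_fetch_rows_a()` chain together (the table name is hypothetical, and a real program must keep the process alive until the callbacks finish):

```c
#include <stdio.h>
#include <taos.h>

// Row-batch callback: keep requesting batches until numOfRows is zero
// (all results returned) or negative (query error).
void fetch_cb(void *param, TAOS_RES *res, int numOfRows) {
  if (numOfRows > 0) {
    for (int i = 0; i < numOfRows; i++) {
      TAOS_ROW row = taos_fetch_row(res);  // iterate over this batch
      (void)row;
    }
    taos_fetch_rows_a(res, fetch_cb, param);  // request the next batch
  } else {
    if (numOfRows < 0) printf("async fetch error: %d\n", numOfRows);
    taos_free_result(res);
  }
}

// Query callback: invoked once the query itself has completed.
void query_cb(void *param, TAOS_RES *res, int code) {
  if (code != 0) {
    printf("async query failed: %s\n", taos_errstr(res));
    taos_free_result(res);
    return;
  }
  taos_fetch_rows_a(res, fetch_cb, param);
}

void run_async_query(TAOS *taos) {
  // Returns immediately; results are delivered through query_cb.
  taos_query_a(taos, "SELECT * FROM test.meters LIMIT 100", query_cb, NULL);
}
```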
### Parameter Binding API
In addition to direct calls to `taos_query()` to perform queries, TDengine also provides a Prepare API that supports parameter binding, similar in style to MySQL, and currently only supports using a question mark `? ` to represent the parameter to be bound.
In addition to direct calls to `taos_query()` to perform queries, TDengine also provides a set of `bind` APIs that support parameter binding, similar in style to MySQL. Currently, only a question mark `?` can be used to represent the parameter to be bound.
Starting with versions 2.1.1.0 and 2.1.2.0, TDengine has significantly improved the parameter binding interface's support for data writing (INSERT) scenarios. This avoids the resource consumption of SQL syntax parsing when writing data through the parameter binding interface, thus significantly improving write performance in most cases. A typical operation, in this case, is as follows.
Starting with versions 2.1.1.0 and 2.1.2.0, TDengine has significantly improved the bind APIs' support for data writing (INSERT) scenarios. Writing data through the parameter binding interface avoids the resource consumption of SQL syntax parsing, thus significantly improving write performance in most cases. A typical operation is as follows.
1. call `taos_stmt_init()` to create the parameter binding object.
2. call `taos_stmt_prepare()` to parse the INSERT statement.
......@@ -288,7 +288,7 @@ Starting with versions 2.1.1.0 and 2.1.2.0, TDengine has significantly improved
8. call `taos_stmt_execute()` to execute the prepared batch instructions.
9. When execution is complete, call `taos_stmt_close()` to release all resources.
Note: If `taos_stmt_execute()` succeeds, you can reuse the parsed result of `taos_stmt_prepare()` to bind new data in steps 3 to 6 if you don't need to change the SQL statement. However, if there is an execution error, it is not recommended to continue working in the current context but release the resources and start again with `taos_stmt_init()` steps.
Note: If `taos_stmt_execute()` succeeds, you can reuse the parsed result of `taos_stmt_prepare()` to bind new data in steps 3 to 6 if you don't need to change the SQL command. However, if there is an execution error, it is not recommended to continue working in the current context but release the resources and start again with `taos_stmt_init()` steps.
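A condensed sketch of this flow follows (it assumes an existing sub-table `test.d1001` with columns `(ts timestamp, current float)`; the `TAOS_BIND` field usage mirrors the prepare.c example referenced below):

```c
#include <stdio.h>
#include <string.h>
#include <taos.h>

// Bind and insert one row following steps 1-9 above.
int insert_with_stmt(TAOS *taos) {
  TAOS_STMT *stmt = taos_stmt_init(taos);
  // The table name is a `?` placeholder bound later via taos_stmt_set_tbname().
  if (taos_stmt_prepare(stmt, "INSERT INTO ? VALUES(?, ?)", 0) != 0) goto fail;
  if (taos_stmt_set_tbname(stmt, "test.d1001") != 0) goto fail;

  int64_t ts = 1626861392589;  // timestamp in milliseconds
  float current = 10.3f;
  TAOS_BIND params[2];
  memset(params, 0, sizeof(params));
  params[0].buffer_type   = TSDB_DATA_TYPE_TIMESTAMP;
  params[0].buffer        = &ts;
  params[0].buffer_length = sizeof(ts);
  params[0].length        = &params[0].buffer_length;
  params[1].buffer_type   = TSDB_DATA_TYPE_FLOAT;
  params[1].buffer        = &current;
  params[1].buffer_length = sizeof(current);
  params[1].length        = &params[1].buffer_length;

  if (taos_stmt_bind_param(stmt, params) != 0) goto fail;
  if (taos_stmt_add_batch(stmt) != 0) goto fail;
  if (taos_stmt_execute(stmt) != 0) goto fail;
  taos_stmt_close(stmt);
  return 0;

fail:
  printf("stmt error: %s\n", taos_stmt_errstr(stmt));
  taos_stmt_close(stmt);
  return -1;
}
```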
The specific functions related to the interface are as follows (see also the [prepare.c](https://github.com/taosdata/TDengine/blob/develop/examples/c/prepare.c) file for the way to use the corresponding functions)
......@@ -298,7 +298,7 @@ The specific functions related to the interface are as follows (see also the [pr
- `int taos_stmt_prepare(TAOS_STMT *stmt, const char *sql, unsigned long length)`
Parse a SQL statement, and bind the parsed result and parameter information to stmt. If the parameter length is greater than 0, use this parameter as the length of the SQL statement. If it is equal to 0, the length of the SQL statement will be determined automatically.
Parse a SQL command, and bind the parsed result and parameter information to `stmt`. If the parameter length is greater than 0, use this parameter as the length of the SQL command. If it is equal to 0, the length of the SQL command will be determined automatically.
- `int taos_stmt_bind_param(TAOS_STMT *stmt, TAOS_BIND *bind)`
......@@ -319,17 +319,17 @@ The specific functions related to the interface are as follows (see also the [pr
- `int taos_stmt_set_tbname(TAOS_STMT* stmt, const char* name)`
(New in version 2.1.1.0, only supported for replacing parameter values in INSERT statements)
When the table name in the SQL statement uses `? ` placeholder, you can use this function to bind a specific table name.
(Available in 2.1.1.0 and later versions, only supported for replacing parameter values in INSERT statements)
When the table name in the SQL command uses the `?` placeholder, you can use this function to bind a specific table name.
- `int taos_stmt_set_tbname_tags(TAOS_STMT* stmt, const char* name, TAOS_BIND* tags)`
(New in version 2.1.2.0, only supported for replacing parameter values in INSERT statements)
When the table name and TAGS in the SQL statement both use `? `, you can use this function to bind the specific table name and the specific TAGS value. The most typical usage scenario is an INSERT statement that uses the automatic table building function (the current version does not support specifying specific TAGS columns.) The number of columns in the TAGS parameter needs to be the same as the number of TAGS requested in the SQL statement.
(Available in 2.1.2.0 and later versions, only supported for replacing parameter values in INSERT statements)
When the table name and TAGS in the SQL command both use `?`, you can use this function to bind the specific table name and the specific TAGS values. The most typical usage scenario is an INSERT statement that uses the automatic table creation function (the current version does not support specifying specific TAGS columns). The number of columns in the TAGS parameter needs to be the same as the number of TAGS requested in the SQL command.
- `int taos_stmt_bind_param_batch(TAOS_STMT* stmt, TAOS_MULTI_BIND* bind)`
(new in version 2.1.1.0, only supported for replacing parameter values in INSERT statements)
(Available in 2.1.1.0 and later versions, only supported for replacing parameter values in INSERT statements)
To pass the data to be bound in a multi-column manner, it is necessary to ensure that the order and number of the data columns given here are the same as those in the VALUES clause of the SQL statement. The specific definition of TAOS_MULTI_BIND is as follows.
```c
......@@ -345,7 +345,7 @@ The specific functions related to the interface are as follows (see also the [pr
- `int taos_stmt_add_batch(TAOS_STMT *stmt)`
Adds the currently bound parameter to the batch. After calling this function, you can call `taos_stmt_bind_param()` or `taos_stmt_bind_param_batch()` again to bind a new parameter. Note that this function only supports INSERT/IMPORT statements. Other SQL statements such as SELECT will return an error.
Adds the currently bound parameters to the batch. After calling this function, you can call `taos_stmt_bind_param()` or `taos_stmt_bind_param_batch()` again to bind new parameters. Note that this function only supports INSERT/IMPORT statements; other SQL commands such as SELECT will return an error.
- `int taos_stmt_execute(TAOS_STMT *stmt)`
......@@ -361,12 +361,12 @@ The specific functions related to the interface are as follows (see also the [pr
- `char *taos_stmt_errstr(TAOS_STMT *stmt)`
(new in version 2.1.3.0)
(Available in 2.1.3.0 and later versions)
Used to get error information if other STMT APIs return errors (return error codes or null pointers).
### Write-without-mode API
### Schemaless Writing API
In addition to writing data using the SQL method or the parameter binding API, writing can also be done using Schemaless, which eliminates the need to create a super table/data sub-table structure in advance and writes the data directly. The TDengine system automatically creates and maintains the required table structure based on the written data content. The use of Schemaless is described in the chapter [Schemaless Writing](/reference/schemaless/), and the C/C++ API used with it is described here.
In addition to writing data using the SQL method or the parameter binding API, writing can also be done using schemaless writing, which eliminates the need to create a super table/data sub-table structure in advance and writes the data directly. The TDengine system automatically creates and maintains the required table structure based on the written data content. The use of schemaless writing is described in the chapter [Schemaless Writing](/reference/schemaless/), and the C/C++ API used with it is described here.
- `TAOS_RES* taos_schemaless_insert(TAOS* taos, const char* lines[], int numLines, int protocol, int precision)`
......@@ -374,11 +374,11 @@ In addition to writing data using the SQL method or the parameter binding API, w
This interface writes the text data of the line protocol to TDengine.
**Parameter description**
taos: database connection, established by the `taos_connect()` function.
lines: text data. A pattern-free text string that meets the parsing format requirements.
numLines: the number of lines of text data, cannot be 0.
protocol: the protocol type of the lines, used to identify the text data format.
precision: precision string for the timestamp in the text data.
- taos: database connection, established by the `taos_connect()` function.
- lines: text data. A pattern-free text string that meets the parsing format requirements.
- numLines: the number of lines of text data, cannot be 0.
- protocol: the protocol type of the lines, used to identify the text data format.
- precision: precision string for the timestamp in the text data.
**Return value**
A TAOS_RES structure; the application can get the error message with `taos_errstr()` and the error code with `taos_errno()`.
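A write sketch using the InfluxDB line protocol follows (the measurement name is hypothetical, and the `TSDB_SML_*` constants are the enum names found in `taos.h`; verify them against your client version):

```c
#include <stdio.h>
#include <taos.h>

// Insert two line-protocol records; the super table "meters" is created
// automatically from the data if it does not already exist.
void schemaless_demo(TAOS *taos) {
  char *lines[] = {
      "meters,location=beijing,groupid=2 current=10.3,voltage=219 1626861392589000000",
      "meters,location=beijing,groupid=2 current=12.6,voltage=218 1626861392590000000",
  };
  TAOS_RES *res = taos_schemaless_insert(taos, lines, 2, TSDB_SML_LINE_PROTOCOL,
                                         TSDB_SML_TIMESTAMP_NANO_SECONDS);
  if (taos_errno(res) != 0) {
    printf("schemaless insert failed: %s\n", taos_errstr(res));
  }
  taos_free_result(res);
}
```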
......@@ -419,7 +419,7 @@ The Subscription API currently supports subscribing to one or more tables and co
- taos: the database connection that has been established
- restart: if the subscription already exists, whether to restart or continue the previous subscription
- topic: the topic of the subscription (i.e., the name). This parameter is the unique identifier of the subscription
- sql: the query statement of the subscription, this statement can only be `select` statement, only the original data should be queried, only the data can be queried in time order
- sql: the query statement of the subscription. This statement can only be a _select_ statement; only the original data should be queried, and data can only be queried in time order
- fp: the callback function when the query result is received (the function prototype will be introduced later), only used when called asynchronously. This parameter should be passed `NULL` when called synchronously
- param: additional parameter when calling the callback function, the system API will pass it to the callback function as it is, without any processing
- interval: polling period in milliseconds. When called asynchronously, the callback function is invoked periodically according to this parameter; to avoid impacting system performance, it is not recommended to set this parameter too small. When called synchronously, if the interval between two calls to `taos_consume()` is less than this period, the API blocks until the interval exceeds it.
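A synchronous polling sketch follows (the topic and table names are hypothetical; passing `NULL` for `fp` selects the synchronous mode, in which the application drives consumption with `taos_consume()`, and `taos_unsubscribe()` is the matching teardown call):

```c
#include <stdio.h>
#include <taos.h>

// Create a subscription and poll it synchronously a few times.
void subscribe_demo(TAOS *taos) {
  TAOS_SUB *sub = taos_subscribe(taos, 1, "topic-demo",
                                 "SELECT * FROM test.meters", NULL, NULL, 1000);
  if (sub == NULL) return;
  for (int i = 0; i < 10; i++) {
    // Blocks until the 1000 ms polling period since the last call has elapsed.
    TAOS_RES *res = taos_consume(sub);
    TAOS_ROW row;
    while (res != NULL && (row = taos_fetch_row(res)) != NULL) {
      printf("received a new row\n");
    }
  }
  taos_unsubscribe(sub, 0);  // 0: do not keep the subscription progress
}
```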
......
......@@ -17,9 +17,9 @@ import CSQuery from "../../04-develop/04-query-data/_cs.mdx"
import CSAsyncQuery from "../../04-develop/04-query-data/_cs_async.mdx"
TDengine.Connector` is a C# language connector provided by TDengine that allows C# developers to develop C# applications that access TDengine cluster data.
`TDengine.Connector` is a C# language connector provided by TDengine that allows C# developers to develop C# applications that access TDengine cluster data.
The `TDengine.Connector` connector supports connection to TDengine runtime instances via the TDengine client driver (taosc), providing data writing, querying, subscription, schemaless data writing, bind interface data writing, etc. The `TDengine.Connector` currently does not provide a REST connection. REST connection is not yet available. Users can write their RESTful APIs by referring to the [RESTful APIs](https://docs.taosdata.com//reference/restful-api/) documentation.
The `TDengine.Connector` connector supports connecting to TDengine instances via the TDengine client driver (taosc), providing data writing, querying, subscription, schemaless writing, the bind interface, etc. `TDengine.Connector` currently does not provide a REST connection interface. Developers can write their own RESTful applications by referring to the [RESTful APIs](https://docs.taosdata.com//reference/restful-api/) documentation.
This article describes how to install `TDengine.Connector` in a Linux or Windows environment and connect to TDengine clusters via `TDengine.Connector` to perform basic operations such as data writing and querying.
......@@ -31,15 +31,15 @@ The supported platforms are the same as those supported by the TDengine client d
## Version support
Please refer to [version support list](/reference/connector#version support)
Please refer to [version support list](/reference/connector#version-support)
## Supported features
1. connection management
2. general query
3. continuous query
4. parameter binding
5. subscription function
1. Connection Management
2. General Query
3. Continuous Query
4. Parameter Binding
5. Subscription
6. Schemaless
## Installation Steps
......@@ -50,7 +50,7 @@ Please refer to [version support list](/reference/connector#version support)
* [Nuget Client](https://docs.microsoft.com/en-us/nuget/install-nuget-client-tools) (optional installation)
* Install the TDengine client driver; please refer to [Install client driver](/reference/connector#Install-Client-Driver) for details
### Install using dotnet CLI
### Install via dotnet CLI
<Tabs defaultValue="CLI">
<TabItem value="CLI" label="Get C# driver using dotnet CLI">
......@@ -61,7 +61,7 @@ You can reference the `TDengine.Connector` published in Nuget to the current pro
dotnet add package TDengine.Connector
```
</TabItem
</TabItem>
<TabItem value="source" label="Use source code to get C# driver">
You can download TDengine's source code and directly reference the latest version of the TDengine.Connector library
......@@ -179,7 +179,7 @@ namespace TDengineExample
1. "Unable to establish connection", "Unable to resolve FQDN"
Usually, because the FQDN configuration is incorrect, you can refer to [How to understand TDengine's FQDN thoroughly](https://www.taosdata.com/blog/2021/07/29/2741.html) to solve it. 2.
Usually, it is caused by an incorrect FQDN configuration. You can refer to [How to understand TDengine's FQDN (Chinese)](https://www.taosdata.com/blog/2021/07/29/2741.html) to solve it.
2. Unhandled exception. System.DllNotFoundException: Unable to load DLL 'taos' or one of its dependencies: The specified module cannot be found.
......
......@@ -15,9 +15,9 @@ import GoOpenTSDBTelnet from "../../04-develop/03-insert-data/_go_opts_telnet.md
import GoOpenTSDBJson from "../../04-develop/03-insert-data/_go_opts_json.mdx"
import GoQuery from "../../04-develop/04-query-data/_go.mdx"
`driver-go` is the official Go language connector for TDengine, which implements the interface to the Go language [ database/sql ](https://golang.org/pkg/database/sql/) package. Go developers can use it to develop applications that access TDengine cluster data.
`driver-go` is the official Go language connector for TDengine, which implements the interface to the Go language [database/sql](https://golang.org/pkg/database/sql/) package. Go developers can use it to develop applications that access TDengine cluster data.
`driver-go` provides two ways to establish connections. One is **native connection**, which connects to TDengine runtime instances natively through the TDengine client driver (taosc), supporting data writing, querying, subscriptions, schemaless interface, and parameter binding interface. The other is the **REST connection**, which connects to TDengine runtime instances via the REST interface provided by taosAdapter. The set of features implemented by the REST connection differs slightly from the native connection.
`driver-go` provides two ways to establish connections. One is **native connection**, which connects to TDengine instances natively through the TDengine client driver (taosc), supporting data writing, querying, subscriptions, schemaless writing, and bind interface. The other is the **REST connection**, which connects to TDengine instances via the REST interface provided by taosAdapter. The set of features implemented by the REST connection differs slightly from the native connection.
This article describes how to install `driver-go` and connect to TDengine clusters and perform basic operations such as data query and data writing through `driver-go`.
......@@ -30,13 +30,13 @@ REST connections are supported on all platforms that can run Go.
## Version support
Please refer to [version support list](/reference/connector#version support)
Please refer to [version support list](/reference/connector#version-support)
## Supported features
### Native connections
A "native connection" is established by the connector directly to the TDengine runtime instance via the TDengine client driver (taosc). The supported functional features are
A "native connection" is established by the connector directly to the TDengine instance via the TDengine client driver (taosc). The supported functional features are:
* Normal queries
* Continuous queries
......@@ -46,7 +46,7 @@ A "native connection" is established by the connector directly to the TDengine r
### REST connection
A "REST connection" is a connection between a connector and a TDengine runtime instance via the REST API provided by the taosAdapter component. The following features are supported.
A "REST connection" is a connection between the application and the TDengine instance via the REST API provided by the taosAdapter component. The following features are supported:
* General queries
* Continuous queries
......@@ -75,7 +75,7 @@ Configure the environment variables and check the command.
go mod init taos-demo
```
2. Introduce taosSql: ``text
2. Introduce taosSql
```go
import (
......@@ -101,7 +101,7 @@ Configure the environment variables and check the command.
### Data source name (DSN)
Data source names have a standard format, e.g. [PEAR DB](http://pear.php.net/manual/en/package.database.db.intro-dsn.php), but no type prefix (square brackets indicate optionally): the
Data source names have a standard format, e.g. [PEAR DB](http://pear.php.net/manual/en/package.database.db.intro-dsn.php), but no type prefix (square brackets indicate optional fields):
```text
[username[:password]@][protocol[(address)]]/[dbname][?param1=value1&...&paramN=valueN]
......@@ -112,7 +112,8 @@ DSN in full form.
```text
username:password@protocol(address)/dbname?param=value
```
### Connecting using connectors
### Connecting via connector
<Tabs defaultValue="native">
<TabItem value="native" label="native connection">
......@@ -121,7 +122,7 @@ _taosSql_ implements Go's `database/sql/driver` interface via cgo. You can use t
Use `taosSql` as `driverName` and a correct [DSN](#DSN) as `dataSourceName`. The DSN supports the following parameters.
* configPath specifies the taos.cfg directory
* configPath specifies the `taos.cfg` directory
Example:
......@@ -145,7 +146,7 @@ func main() {
}
```
</TabItem
</TabItem>
<TabItem value="rest" label="REST connection">
_taosRestful_ implements Go's `database/sql/driver` interface via `http client`. You can use the [`database/sql`](https://golang.org/pkg/database/sql/) interface by simply introducing the driver.
......@@ -210,7 +211,7 @@ func main() {
## Usage limitations
Since the REST interface is stateless, the `use db` syntax will not work. You need to put the db name into the SQL statement, e.g. `create table if not exists tb1 (ts timestamp, a int)` to `create table if not exists test.tb1 (ts timestamp, a int)` otherwise it will report the error `[0x217] Database not specified or available`.
Since the REST interface is stateless, the `use db` syntax will not work. You need to put the db name into the SQL command, e.g. `create table if not exists tb1 (ts timestamp, a int)` to `create table if not exists test.tb1 (ts timestamp, a int)` otherwise it will report the error `[0x217] Database not specified or available`.
You can also put the db name in the DSN by changing `root:taosdata@http(localhost:6041)/` to `root:taosdata@http(localhost:6041)/test`. This method is supported by taosAdapter since TDengine 2.4.0.5. Executing the `create database` statement when the specified db does not exist will not report an error, while executing other queries or writes against that db will report an error.
......@@ -268,27 +269,27 @@ func main() {
1. Cannot find the package `github.com/taosdata/driver-go/v2/taosRestful`
Change the reference to `github.com/taosdata/driver-go/v2` in the require block in `go.mod` to `github.com/taosdata/driver-go/v2 develop`, then execute `go mod tidy`.
Change the `github.com/taosdata/driver-go/v2` line in the require block of the `go.mod` file to `github.com/taosdata/driver-go/v2 develop`, then execute `go mod tidy`.
2. stmt (parameter binding) related interface in database/sql crashes
2. bind interface in database/sql crashes
REST does not support the parameter binding interface. It is recommended to use `db.Exec` and `db.Query`.
3. error `[0x217] Database not specified or available` after executing other statements with `use db` statement
The execution of SQL statements in the REST interface is not contextual, so using `use db` statement will not work, see the usage restrictions section above.
The execution of SQL commands via the REST interface is not contextual, so the `use db` statement will not work; see the usage restrictions section above.
4. use taosSql without error use taosRestful with error `[0x217] Database not specified or available`
4. use `taosSql` without error but use `taosRestful` with error `[0x217] Database not specified or available`
Because the REST interface is stateless, using the `use db` statement will not take effect. See the usage restrictions section above.
5. Upgrade `github.com/taosdata/driver-go/v2/taosRestful`
Change the reference to `github.com/taosdata/driver-go/v2` in the `go.mod` file to `github.com/taosdata/driver-go/v2 develop`, then execute `go mod tidy`.
Change the `github.com/taosdata/driver-go/v2` line in the `go.mod` file to `github.com/taosdata/driver-go/v2 develop`, then execute `go mod tidy`.
6. `readBufferSize` parameter has no significant effect after being increased
If you increase `readBufferSize` will reduce the number of `syscall` calls when fetching results. If the query result is more petite, modifying this parameter will not improve significantly. If you increase the parameter too much, the bottleneck will be parsing JSON data. If you need to optimize the query speed, you must adjust the value according to the actual situation to achieve the best query result.
Increasing `readBufferSize` reduces the number of `syscall` calls when fetching results. If the query result set is small, modifying this parameter will not bring a significant improvement. If you increase the parameter value too much, the bottleneck becomes parsing JSON data. If you need to optimize the query speed, you must adjust the value according to the actual situation to achieve the best query result.
7. Query efficiency is reduced when the `disableCompression` parameter is set to `false`
......@@ -408,4 +409,4 @@ The `af` package encapsulates TDengine advanced functions such as connection man
## API Reference
Full API see [driver-go documentation](https://pkg.go.dev/github.com/taosdata/driver-go/v2)
\ No newline at end of file
For the full API, see the [driver-go documentation](https://pkg.go.dev/github.com/taosdata/driver-go/v2)
......@@ -9,16 +9,16 @@ description: TDengine Java based on JDBC API and provide both native and REST co
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
'taos-jdbcdriver' is TDengine's official Java language connector, which allows Java developers to develop applications that access the TDengine database. 'taos-jdbcdriver' implements the interface of the JDBC driver standard and provides two forms of connectors. One is to connect to a TDengine instance natively through the TDengine client driver (taosc), which supports functions such as data writing, querying, subscription, schemaless interface, and parameter binding interface. And the other is to connect to a TDengine instance through the REST interface provided by taosAdapter (2.0.18 and later). Rest connections implement small differences between the set of features implemented and native connections.
'taos-jdbcdriver' is TDengine's official Java language connector, which allows Java developers to develop applications that access the TDengine database. 'taos-jdbcdriver' implements the interface of the JDBC driver standard and provides two forms of connectors. One is to connect to a TDengine instance natively through the TDengine client driver (taosc), which supports functions including data writing, querying, subscription, schemaless writing, and the bind interface. The other is to connect to a TDengine instance through the REST interface provided by taosAdapter (2.4.0.0 and later). The set of features implemented by the REST connection differs slightly from that of the native connection.
! [tdengine-connector] (tdengine-jdbc-connector.png)
![tdengine-connector](tdengine-jdbc-connector.png)
The preceding diagram shows two ways for a Java app to access TDengine using connectors:
The preceding diagram shows two ways for a Java app to access TDengine via connector:
- JDBC native connectivity: Java applications use TSDBDriver on physical node 1 (pnode1) to call client-driven directly (libtaos.so or taos.dll) APIs to send writing and query requests to taosd instances located on physical node 2 (pnode2).
- JDBC REST connection: The Java application encapsulates the SQL as a REST request via RestfulDriver, sends it to the REST server of physical node 2 (taosAdapter), requests taosd through the REST server, and returns the result.
- JDBC native connection: Java applications use TSDBDriver on physical node 1 (pnode1) to call the client driver (`libtaos.so` or `taos.dll`) APIs directly, sending writing and query requests to the taosd instance located on physical node 2 (pnode2).
- JDBC REST connection: The Java application encapsulates the SQL as a REST request via RestfulDriver and sends it to the REST server (taosAdapter) of physical node 2, which forwards the request to the TDengine server and returns the result.
Using REST connectivity, which does not rely on TDengine client drivers, can be cross-platform, more convenient, and flexible but has about 30% lower performance than native connectors.
A REST connection does not rely on the TDengine client driver, so it is cross-platform, more convenient, and more flexible, but it has about 30% lower performance than a native connection.
:::info
TDengine's JDBC driver implementation is as consistent as possible with relational database drivers. Still, there are differences in the usage scenarios and technical characteristics between TDengine and relational databases, so 'taos-jdbcdriver' also has some differences from traditional JDBC drivers. You need to pay attention to the following points when using it:
......@@ -30,12 +30,12 @@ TDengine's JDBC driver implementation is as consistent as possible with the rela
## Supported platforms
Native connectivity supports the same platform as TDengine client-driven support.
REST connectivity supports all platforms that can run Java.
Native connections support the same platforms as the TDengine client driver.
REST connection supports all platforms that can run Java.
## Version support
Please refer to [Version Support List] (/reference/connector# version support)
Please refer to [Version Support List](/reference/connector#version-support).
## TDengine DataType vs. Java DataType
......@@ -55,17 +55,27 @@ TDengine currently supports timestamp, number, character, Boolean type, and the
| NCHAR | java.lang.String | java.lang.String |
| JSON | - | java.lang.String |
In the above example, JDBC uses the client's configuration file to establish a connection to a hostname of cluster_node1, port 6030, and a database named test. When the firstEp node fails, JDBC attempts to connect to the cluster using secondEp.
**Note**: Only TAG columns support the JSON type
In TDengine, as long as one node in firstEp and secondEp is valid, the connection to the cluster can be established and worked.
## Installation steps
> **Note**: The configuration file here refers to the configuration file on the machine where the application that calls the JDBC Connector is located, the default value of /etc/taos/taos .cfg on Linux OS, and the default value of C://TDengine/cfg/taos.cfg on Windows OS.
### Pre-installation preparation
Before using the Java Connector to connect to the database, the following conditions are required:
- Java 1.8 or above runtime environment and Maven 3.6 or above installed
- TDengine client driver installed (required for native connections, not required for REST connections), please refer to [Installing Client Driver](/reference/connector#Install-Client-Driver)
### Install the connectors
<Tabs defaultValue="maven">
<TabItem value="maven" label="install via Maven">
- [sonatype](https://search.maven.org/artifact/com.taosdata.jdbc/taos-jdbcdriver)
- [mvnrepository](https://mvnrepository.com/artifact/com.taosdata.jdbc/taos-jdbcdriver)
- [maven.aliyun](https://maven.aliyun.com/mvn/search)
Add following dependency in the pom.xml file of your Maven project:
Add the following dependency in the `pom.xml` file of your Maven project:
```xml-dtd
<dependency>
......@@ -99,7 +109,7 @@ TDengine's JDBC URL specification format is:
For establishing connections, native connections differ slightly from REST connections.
<Tabs defaultValue="native">
<TabItem value="native" label="Native connection">
```java
Class.forName("com.taosdata.jdbc.TSDBDriver");
......@@ -107,9 +117,9 @@ String jdbcUrl = "jdbc:TAOS://taosdemo.com:6030/test?user=root&password=taosdata
Connection conn = DriverManager.getConnection(jdbcUrl);
```
In the above example, TSDBDriver, which uses a JDBC native connection, establishes a connection to a hostname of taosdemo.com, port 6030 (the default port for TDengine), and a database named test. In this URL, the user name (user) is specified as root, and the password (password) is taosdata.
In the above example, TSDBDriver, which uses a JDBC native connection, establishes a connection to a hostname `taosdemo.com`, port `6030` (the default port for TDengine), and a database named `test`. In this URL, the user name `user` is specified as `root`, and the `password` is `taosdata`.
Note: With JDBC native connections, taos-jdbcdriver relies on the client driver (libtaos.so under Linux; taos .dll under Windows).
Note: With JDBC native connections, taos-jdbcdriver relies on the client driver (`libtaos.so` on Linux; `taos.dll` on Windows).
The configuration parameters in the URL are as follows:
......@@ -119,14 +129,14 @@ The configuration parameters in the URL are as follows:
- charset: the character set used by the client; the default value is the system character set.
- locale: the client locale; by default, the system's current locale is used.
- timezone: the time zone used by the client; the default value is the system's current time zone.
- batchfetch: true: pulls result sets in batches when executing queries; false: pulls result sets row by row. The default value is: false. Enabling batch pull and obtaining a batch of data can improve query performance when the query data volume is large.
- batchfetch: true: pulls result sets in batches when executing queries; false: pulls result sets row by row. The default value is: false. Enabling batch pulling and obtaining a batch of data can improve query performance when the query data volume is large.
- batchErrorIgnore: true: when `executeBatch` is executed on a Statement and one SQL statement fails in the middle, the following SQL statements will continue to be executed; false: no more statements after the failed SQL are executed. The default value is false.
For more information about JDBC native connections, see [Video Tutorial] (https://www.taosdata.com/blog/2020/11/11/1955.html).
For more information about JDBC native connections, see [Video Tutorial](https://www.taosdata.com/blog/2020/11/11/1955.html).
**Connect using the TDengine client driver configuration file**
When you use a JDBC native connection to connect to a TDengine cluster, you can use the TDengine client-driven configuration file to specify parameters such as firstEp and secondEp of the cluster in the configuration file as below:
When you use a JDBC native connection to connect to a TDengine cluster, you can use the TDengine client driver configuration file to specify parameters such as `firstEp` and `secondEp` of the cluster in the configuration file as below:
1. Do not specify hostname and port in Java applications.
......@@ -159,14 +169,14 @@ secondEp cluster_node2:6030
# locale en_US.UTF-8
```
In the above example, JDBC uses the client's configuration file to establish a connection to a hostname of cluster_node1, port 6030, and a database named test. When the firstEp node in the cluster fails, JDBC attempts to connect to the cluster using secondEp.
In the above example, JDBC uses the client's configuration file to establish a connection to a hostname `cluster_node1`, port 6030, and a database named `test`. When the firstEp node in the cluster fails, JDBC attempts to connect to the cluster using secondEp.
In TDengine, as long as one node in firstEp and secondEp is valid, the connection to the cluster can be established normally.
> **Note**: The configuration file here refers to the configuration file on the machine where the application that calls the JDBC Connector is located, the default value of /etc/taos/taos .cfg on Linux OS, and the default value of C://TDengine/cfg/taos.cfg on Windows OS.
> **Note**: The configuration file here refers to the configuration file on the machine where the application that calls the JDBC Connector is located, the default path is `/etc/taos/taos.cfg` on Linux, and the default path is `C://TDengine/cfg/taos.cfg` on Windows.
</TabItem>
<TabItem value="rest" label="REST connection">
```java
Class.forName("com.taosdata.jdbc.rs.RestfulDriver");
......@@ -174,12 +184,12 @@ String jdbcUrl = "jdbc:TAOS-RS://taosdemo.com:6041/test?user=root&password=taosd
Connection conn = DriverManager.getConnection(jdbcUrl);
```
In the above example, a RestfulDriver with a JDBC REST connection is used to establish a connection to a database named test with hostname taosdemo.com on port 6041. The URL specifies the user name (user) as root and the password (password) as taosdata.
In the above example, a RestfulDriver with a JDBC REST connection is used to establish a connection to a database named `test` with hostname `taosdemo.com` on port `6041`. The URL specifies the user name as `root` and the password as `taosdata`.
There is no dependency on the client driver when using a JDBC REST connection. Compared to a JDBC native connection, only the following are required:
1. driverClass specified as "com.taosdata.jdbc.rs.RestfulDriver".
2. jdbcUrl starting with "jdbc:TAOS-RS://". 3.
2. jdbcUrl starting with "jdbc:TAOS-RS://".
3. use 6041 as the connection port.
The configuration parameters in the URL are as follows.
......@@ -199,7 +209,7 @@ The configuration parameters in the URL are as follows.
INSERT INTO test.t1 USING test.weather (ts, temperature) TAGS('beijing') VALUES(now, 24.6);
```
- Starting from taos-jdbcdriver-2.0.36 and TDengine 2.2.0.0, if dbname is specified in the url, JDBC REST connections will use /rest/sql/dbname as the url for restful requests by default, and there is no need to specify dbname in SQL. For example, if the url is jdbc:TAOS-RS://127.0.0.1:6041/test, then the sql can be executed: insert into t1 using weather(ts, temperature) tags('beijing') values(now, 24.6);
- Starting from taos-jdbcdriver-2.0.36 and TDengine 2.2.0.0, if dbname is specified in the URL, JDBC REST connections will use `/rest/sql/dbname` as the URL for REST requests by default, and there is no need to specify dbname in SQL. For example, if the URL is `jdbc:TAOS-RS://127.0.0.1:6041/test`, then the SQL can be executed: insert into t1 using weather(ts, temperature) tags('beijing') values(now, 24.6);
:::
......@@ -239,7 +249,7 @@ public Connection getRestConn() throws Exception{
}
```
In the above example, a connection is established to taosdemo.com with hostname taosdemo.com, port 6030/6041, and database named test. The connection specifies the user name (user) as root and the password (password) as taosdata in the URL and specifies the character set, language environment, time zone, and whether to enable bulk fetching in the connProps.
In the above example, a connection is established to `taosdemo.com` on port 6030/6041, to a database named `test`. The connection specifies the user name as `root` and the password as `taosdata` in the URL, and specifies the character set, language environment, time zone, and whether to enable bulk fetching in connProps.
The configuration parameters in properties are as follows.
......@@ -251,11 +261,11 @@ The configuration parameters in properties are as follows.
- TSDBDriver.PROPERTY_KEY_CHARSET: takes effect only when using a JDBC native connection. The character set used by the client; the default value is the system character set.
- TSDBDriver.PROPERTY_KEY_LOCALE: takes effect only when using a JDBC native connection. The client language environment; the default value is the system's current locale.
- TSDBDriver.PROPERTY_KEY_TIME_ZONE: takes effect only when using a JDBC native connection. The time zone used by the client; the default value is the system's current time zone.
For JDBC native connections, you can specify other parameters, such as log level, SQL length, etc., by specifying URL and Properties. For more detailed configuration, please refer to [Client Configuration](/reference/config/#Client only).
For JDBC native connections, you can specify other parameters, such as log level, SQL length, etc., by specifying URL and Properties. For more detailed configuration, please refer to [Client Configuration](/reference/config/#Client-Only).
### Priority of configuration parameters
If the configuration parameters are duplicated in the URL, Properties, or client configuration file, the `priority` of the parameters, from highest to lowest, are as follows.
If the configuration parameters are duplicated in the URL, Properties, or client configuration file, the `priority` of the parameters, from highest to lowest, are as follows:
1. JDBC URL parameters, as described above, can be specified in the parameters of the JDBC URL.
2. Properties connProps
......@@ -340,16 +350,16 @@ There are three types of error codes that the JDBC connector can report:
For specific error codes, please refer to:
- [TDengine Java Connector](https://github.com/taosdata/TDengine/blob/develop/src/connector/jdbc/src/main/java/com/taosdata/jdbc/ TSDBErrorNumbers.java)
- [TDengine Java Connector](https://github.com/taosdata/TDengine/blob/develop/src/connector/jdbc/src/main/java/com/taosdata/jdbc/TSDBErrorNumbers.java)
- [TDengine_ERROR_CODE](https://github.com/taosdata/TDengine/blob/develop/src/inc/taoserror.h)
### Writing data via parameter binding
TDengine's native JDBC connection implementation has significantly improved its support for data writing (INSERT) scenarios via parameter binding with version 2.1.2.0 and later versions. Writing data in this way avoids the resource consumption of SQL syntax parsing, resulting in significant write performance improvements in many cases.
TDengine's native JDBC connection implementation has significantly improved its support for data writing (INSERT) scenarios via bind interface with version 2.1.2.0 and later versions. Writing data in this way avoids the resource consumption of SQL syntax parsing, resulting in significant write performance improvements in many cases.
**Note**:
- JDBC REST connections do not currently support parameter binding
- JDBC REST connections do not currently support bind interface
- The following sample code is based on taos-jdbcdriver-2.0.36
- The setString method should be called for binary type data, and the setNString method should be called for nchar type data
- both setString and setNString require the user to declare the width of the corresponding column in the size parameter of the table definition
......@@ -611,9 +621,9 @@ public void setString(int columnIndex, ArrayList<String> list, int size) throws
public void setNString(int columnIndex, ArrayList<String> list, int size) throws SQLException
```
### Writing without mode
### Schemaless Writing
Starting with version 2.2.0.0, TDengine has added the ability to write without mode. It is compatible with InfluxDB's Line Protocol, OpenTSDB's telnet line protocol, and OpenTSDB's JSON format protocol. See [schemaless writing](/reference/schemaless/) for details.
Starting with version 2.2.0.0, TDengine has added the ability to perform schemaless writing. It is compatible with InfluxDB's Line Protocol, OpenTSDB's telnet line protocol, and OpenTSDB's JSON format protocol. See [schemaless writing](/reference/schemaless/) for details.
**Note**:
......@@ -659,13 +669,13 @@ The TDengine Java Connector supports subscription functionality with the followi
TSDBSubscribe sub = ((TSDBConnection)conn).subscribe("topic", "select * from meters", false);
```
The three parameters of the `subscribe` method have the following meanings.
The three parameters of the `subscribe()` method have the following meanings.
- topic: the subscribed topic (i.e., name). This parameter is the unique identifier of the subscription
- sql: the query statement of the subscription. This statement can only be a `select` statement; only the original data should be queried, and data can only be queried in time order
- restart: if the subscription already exists, whether to restart or continue the previous subscription
The above example will use the SQL statement `select * from meters` to create a subscription named `topic`. If the subscription exists, it will continue the progress of the previous query instead of consuming all the data from the beginning.
The above example will use the SQL command `select * from meters` to create a subscription named `topic`. If the subscription exists, it will continue the progress of the previous query instead of consuming all the data from the beginning.
#### Subscribe to consume data
......@@ -683,7 +693,7 @@ while(true) {
}
```
The `consume` method returns a result set containing all new data from the last `consume`. Be sure to choose a reasonable frequency for calling `consume` as needed (e.g. `Thread.sleep(1000)` in the example). Otherwise, it will cause unnecessary stress on the server-side.
The `consume()` method returns a result set containing all new data from the last `consume()`. Be sure to choose a reasonable frequency for calling `consume()` as needed (e.g. `Thread.sleep(1000)` in the example). Otherwise, it will cause unnecessary stress on the server-side.
#### Close subscriptions
......@@ -691,7 +701,7 @@ The `consume` method returns a result set containing all new data from the last
sub.close(true);
```
The ``close`` method closes a subscription. If its argument is ``true`'' it means that the subscription progress information is retained, and the subscription with the same name can be created to continue consuming data; if it is ``false`'' it does not retain the subscription progress.
The `close()` method closes a subscription. If its argument is `true` it means that the subscription progress information is retained, and the subscription with the same name can be created to continue consuming data; if it is `false` it does not retain the subscription progress.
### Closing resources
......@@ -701,9 +711,9 @@ stmt.close();
conn.close();
```
> ``Be sure to close the connection``, otherwise, there will be a connection leak.
> **Be sure to close the connection**, otherwise, there will be a connection leak.
### Use with connection pools
### Use with connection pool
#### HikariCP
......@@ -796,7 +806,7 @@ The source code of the sample application is under `TDengine/examples/JDBC`:
Please refer to: [JDBC example](https://github.com/taosdata/TDengine/tree/develop/examples/JDBC)
## Important update logs
## Recent update logs
| taos-jdbcdriver version | major changes |
| :------------------: | :----------------------------: |
......@@ -806,17 +816,17 @@ Please refer to: [JDBC example](https://github.com/taosdata/TDengine/tree/develo
## Frequently Asked Questions
1. Why is there no performance improvement when using Statement's `addBatch` and `executeBatch` to perform `batch writes/reviews`?
1. Why is there no performance improvement when using Statement's `addBatch()` and `executeBatch()` to perform `batch data writing/update`?
**Cause**: In TDengine's JDBC implementation, SQL statements submitted by `addBatch` method are executed sequentially in the order they are added, which does not reduce the number of interactions with the server and does not bring performance improvement.
**Cause**: In TDengine's JDBC implementation, SQL statements submitted by `addBatch()` method are executed sequentially in the order they are added, which does not reduce the number of interactions with the server and does not bring performance improvement.
**Solution**: 1. concatenate multiple values into a single insert statement; 2. use multi-threaded concurrent insertion; 3. use parameter binding to write data
2. java.lang.UnsatisfiedLinkError: no taos in java.library.path
**Cause**: The program did not find the dependent native library taos.
**Cause**: The program did not find the dependent native library `taos`.
**Solution**: Under Windows you can copy C:\TDengine\driver\taos.dll to the C:\Windows\System32\ directory, under Linux the following softlink will be created `ln -s /usr/local/taos/driver/libtaos.so.x.x. x.x /usr/lib/libtaos.so` will work.
**Solution**: On Windows you can copy `C:\TDengine\driver\taos.dll` to the `C:\Windows\System32` directory, on Linux the following soft link will be created `ln -s /usr/local/taos/driver/libtaos.so.x.x.x.x /usr/lib/libtaos.so` will work.
3. java.lang.UnsatisfiedLinkError: taos.dll Can't load AMD 64 bit on an IA 32-bit platform
......
......@@ -16,86 +16,86 @@ import NodeOpenTSDBJson from "../../04-develop/03-insert-data/_js_opts_json.mdx"
import NodeQuery from "../../04-develop/04-query-data/_js.mdx";
import NodeAsyncQuery from "../../04-develop/04-query-data/_js_async.mdx";
`td2.0-connector` and `td2.0-rest-connector` are the official Node.js language connectors for TDengine. Node.js developers can develop applications to access TDengine instance data.
`td2.0-connector` is a **native connector** that connects to TDengine instances via the TDengine client driver (taosc) and supports data writing, querying, subscriptions, schemaless writing, and bind interface. The `td2.0-rest-connector` is a **REST connector** that connects to TDengine instances via the REST interface provided by taosAdapter. The REST connector can run on any platform, but performance is slightly degraded, and the interface implements a somewhat different set of functional features than the native interface.
The Node.js connector source code is hosted on [GitHub](https://github.com/taosdata/taos-connector-node).
## Supported Platforms
The platforms supported by the native connector are the same as those supported by the TDengine client driver.
The REST connector supports all platforms that can run Node.js.
## Version support
Please refer to [version support list](/reference/connector#version-support)
## Supported features
### Native connectors
1. Connection management
2. General query
3. Continuous query
4. Parameter binding
5. Subscription
6. Schemaless
### REST Connector
1. Connection management
2. General query
3. Continuous query
## Installation steps
### Pre-installation
- Install the Node.js development environment
- If you are using the REST connector, skip this step. However, if you use the native connector, please install the TDengine client driver. Please refer to [Install Client Driver](/reference/connector#Install-Client-Driver) for more details. We use [node-gyp](https://github.com/nodejs/node-gyp) to interact with TDengine instances and also need to install some dependencies mentioned below depending on the specific OS.
<Tabs defaultValue="Linux">
<TabItem value="Linux" label="Linux system installation dependencies">
- `python` (`v2.7` is recommended; `v3.x.x` is currently not supported)
- `td2.0-connector` 2.0.6 supports Node.js LTS v10.9.0 or later, Node.js LTS v12.8.0 or later; 2.0.5 and earlier support Node.js LTS v10.x versions. Other versions may have package compatibility issues
- `make`
- C compiler, [GCC](https://gcc.gnu.org) v4.8.5 or higher
</TabItem>
<TabItem value="Windows" label="Windows system installation dependencies">
- Installation method 1
Use Microsoft's [windows-build-tools](https://github.com/felixrieseberg/windows-build-tools) and execute `npm install --global --production windows-build-tools` from the `cmd` command-line interface to install all the necessary tools.
- Installation method 2
Manually install the following tools.
- Install Visual Studio related: [Visual Studio Build Tools](https://visualstudio.microsoft.com/thank-you-downloading-visual-studio/?sku=BuildTools) or [Visual Studio 2017 Community](https://visualstudio.microsoft.com/pl/thank-you-downloading-visual-studio/?sku=Community)
- Install [Python](https://www.python.org/downloads/) 2.7 (`v3.x.x` is not supported) and execute `npm config set python python2.7`.
- Go to the `cmd` command-line interface, `npm config set msvs_version 2017`
Refer to Microsoft's Node.js user manual [Microsoft's Node.js Guidelines for Windows](https://github.com/Microsoft/nodejs-guidelines/blob/master/windows-environment.md#compiling-native-addon-modules).
If using ARM64 Node.js on Windows 10 ARM, you must add "Visual C++ compilers and libraries for ARM64" and "Visual C++ ATL for ARM64".
</TabItem>
</Tabs>
### Install via npm
<Tabs defaultValue="install_native">
<TabItem value="install_native" label="Install native connector">
```bash
npm install td2.0-connector
```
</TabItem>
<TabItem value="install_rest" label="Install REST connector">
```bash
npm i td2.0-rest-connector
......@@ -104,15 +104,15 @@ npm i td2.0-rest-connector
</TabItem>
</Tabs>
### Installation verification
After installing the TDengine client, use the `nodejsChecker.js` program to verify that the current environment supports Node.js access to TDengine.
Verification method:
- Create a new installation verification directory, e.g. `~/tdengine-test`, and download the [nodejsChecker.js source code](https://github.com/taosdata/TDengine/tree/develop/examples/nodejs/nodejsChecker.js) from GitHub to the working directory.
- Execute the following command from the command-line.
```bash
npm init -y
......@@ -120,16 +120,16 @@ npm install td2.0-connector
node nodejsChecker.js host=localhost
```
- After executing the above steps, the command-line will output the result of `nodejsChecker.js` connecting to the TDengine instance and performing a simple insert and query.
## Establishing a connection
Please choose one of the connectors.
<Tabs defaultValue="native">
<TabItem value="native" label="Native connection">
Install and refer to `td2.0-connector` package:
```javascript
//A cursor also needs to be initialized in order to interact with TDengine from Node.js.
const taos = require("td2.0-connector");
// assumed connection parameters for a local TDengine server with defaults
var conn = taos.connect({ host: "127.0.0.1", user: "root", password: "taosdata", config: "/etc/taos", port: 0 });
var cursor = conn.cursor();
conn.close();
```
</TabItem>
<TabItem value="rest" label="REST 连接">
<TabItem value="rest" label="REST connection">
Install and require the `td2.0-rest-connector` package:
```javascript
//A cursor also needs to be initialized in order to interact with TDengine from Node.js.
import { options, connect } from "td2.0-rest-connector";
options.path = "/rest/sqlt";
options.host = "localhost"; // assumed host
let conn = connect(options);
let cursor = conn.cursor();
```
</TabItem>
</Tabs>
## Usage examples
### Write data
#### SQL Writing
<NodeInsert />
#### InfluxDB line protocol writing
<NodeInfluxLine />
#### OpenTSDB Telnet line protocol writing
<NodeOpenTSDBTelnet />
#### OpenTSDB JSON line protocol writing
<NodeOpenTSDBJson />
### Query data
#### Synchronous queries
<NodeQuery />
#### Asynchronous queries
<NodeAsyncQuery />
## More Sample Programs
| Sample Programs | Sample Program Description |
| ------------------------------------------------------------------------------------------------------------------------------------------ | -------------------------------------- |
| [connection](https://github.com/taosdata/taos-connector-node/tree/develop/nodejs/examples/cursorClose.js) | Example of establishing a connection. |
| [stmtBindBatch](https://github.com/taosdata/taos-connector-node/tree/develop/nodejs/examples/stmtBindParamBatchSample.js) | Example of binding multiple rows of parameters for insertion. |
| [stmtBind](https://github.com/taosdata/taos-connector-node/tree/develop/nodejs/examples/stmtBindParamSample.js) | Example of binding parameters row by row for insertion. |
| [stmtBindSingleParamBatch](https://github.com/taosdata/taos-connector-node/tree/develop/nodejs/examples/stmtBindSingleParamBatchSample.js) | Example of binding parameters by column. |
| [stmtUseResult](https://github.com/taosdata/taos-connector-node/tree/develop/nodejs/examples/stmtUseResultSample.js) | Example of a bound parameter query. |
| [json tag](https://github.com/taosdata/taos-connector-node/tree/develop/nodejs/examples/testJsonTag.js) | Example of using JSON tags. |
| [Nanosecond](https://github.com/taosdata/taos-connector-node/tree/develop/nodejs/examples/testNanoseconds.js) | Example of using timestamps with nanosecond precision. |
| [Microsecond](https://github.com/taosdata/taos-connector-node/tree/develop/nodejs/examples/testMicroseconds.js) | Example of using timestamps with microsecond precision. |
| [schemaless insert](https://github.com/taosdata/taos-connector-node/tree/develop/nodejs/examples/testSchemalessInsert.js) | Example of a schemaless insert. |
| [subscribe](https://github.com/taosdata/taos-connector-node/tree/develop/nodejs/examples/testSubscribe.js) | Example of using subscribe. |
| [asyncQuery](https://github.com/taosdata/taos-connector-node/tree/develop/nodejs/examples/tset.js) | Example of using asynchronous queries. |
| [REST](https://github.com/taosdata/taos-connector-node/blob/develop/typescript-rest/example/example.ts) | Example of using TypeScript with REST connections. |
## Usage restrictions
Node.js Connector >= v2.0.6 currently supports Node.js versions >=v12.8.0 <= v12.9.1 || >=v10.20.0 <= v10.9.0; 2.0.5 and earlier versions support v10.x, and other versions may have package compatibility issues.
## Other notes
See [video tutorial](https://www.taosdata.com/blog/2020/11/11/1957.html) for the Node.js connector usage.
## Frequently Asked Questions
1. Using a REST connection requires starting taosAdapter.
```bash
sudo systemctl start taosadapter
```
2. Node.js version

   Connector >v2.0.6 is currently compatible with Node.js versions >=v10.20.0 <= v10.9.0 || >=v12.8.0 <= v12.9.1.

3. "Unable to establish connection", "Unable to resolve FQDN"

   This is usually because the FQDN is not configured correctly. You can refer to [How to understand TDengine's FQDN (In Chinese)](https://www.taosdata.com/blog/2021/07/29/2741.html).

## Important Updates

### Native connectors
| td2.0-connector version | description |
| ----------------------- | ---------------------------------------------------------------- |
| 2.0.12 | Fixed a bug where cursor.close() reported an error. |
| 2.0.11 | Support for binding parameters, JSON tags, the schemaless interface, etc. |
| 2.0.10 | Support for connection management, ordinary queries, continuous queries, obtaining system information, the subscribe function, etc. |
### REST Connector
| td2.0-rest-connector version | Description |
| ---------------------------- | ---------------------------------------------------------------- |
| 1.0.3 | Support for connection management, ordinary queries, obtaining system information, error messages, continuous queries, etc. |
## API Reference
[API Reference](https://docs.taosdata.com/api/td2.0-connector/)
---
sidebar_position: 3
sidebar_label: Python
title: TDengine Python Connector
description: "taospy 是 TDengine 的官方 Python 连接器。taospy 提供了丰富的 API, 使得 Python 应用可以很方便地使用 TDengine。tasopy 对 TDengine 的原生接口和 REST 接口都进行了封装, 分别对应 tasopy 的两个子模块:tasos 和 taosrest。除了对原生接口和 REST 接口的封装,taospy 还提供了符合 Python 数据访问规范(PEP 249)的编程接口。这使得 taospy 和很多第三方工具集成变得简单,比如 SQLAlchemy 和 pandas"
description: "taospy is the official Python connector for TDengine. taospy provides a rich API that makes it easy for Python applications to use TDengine. tasopy wraps both the native and REST interfaces of TDengine, corresponding to the two submodules of tasopy: taos and taosrest. In addition to wrapping the native and REST interfaces, taospy also provides a programming interface that conforms to the Python Data Access Specification (PEP 249), making it easy to integrate taospy with many third-party tools, such as SQLAlchemy and pandas."
---
import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";
`taospy` is the official Python connector for TDengine. `taospy` provides a rich set of APIs that makes it easy for Python applications to access TDengine. `taospy` wraps both the [native interface](/reference/connector/cpp) and [REST interface](/reference/rest-api) of TDengine, which correspond to the `taos` and `taosrest` modules of the `taospy` package, respectively.
In addition to wrapping the native and REST interfaces, `taospy` also provides a set of programming interfaces that conform to the [Python Data Access Specification (PEP 249)](https://peps.python.org/pep-0249/). It is easy to integrate `taospy` with many third-party tools, such as [SQLAlchemy](https://www.sqlalchemy.org/) and [pandas](https://pandas.pydata.org/).
The connection to the server directly using the native interface provided by the client driver is referred to hereinafter as a "native connection"; the connection to the server using the REST interface provided by taosAdapter is referred to hereinafter as a "REST connection".
The source code for the Python connector is hosted on [GitHub](https://github.com/taosdata/taos-connector-python).
## Version selection
We recommend using the latest version of `taospy`, regardless of the version of TDengine you are using.
## Supported features
- Native connections support all the core features of TDengine, including connection management, SQL execution, parameter binding, subscriptions, and schemaless writing.
- REST connections support features such as connection management and SQL execution. (SQL execution allows you to manage databases, tables, and supertables, write data, query data, create continuous queries, and so on.)
## Installation
### Preparation
1. Install Python. Python >= 3.6 is recommended. If Python is not available on your system, refer to the [Python BeginnersGuide](https://wiki.python.org/moin/BeginnersGuide/Download) to install it.
2. Install [pip](https://pypi.org/project/pip/). In most cases, the Python installer comes with the pip utility. If not, please refer to the [pip documentation](https://pip.pypa.io/en/stable/installation/) to install it.
If you use a native connection, you will also need to [Install Client Driver](/reference/connector#Install-Client-Driver). The client install package includes the TDengine client dynamic link library (`libtaos.so` or `taos.dll`) and the TDengine CLI.
### Install via pip
#### Uninstalling an older version
If you have installed an older version of the Python Connector, please uninstall it beforehand.
```
pip3 uninstall taos taospy
```
:::note
Earlier TDengine client software includes the Python connector. If the Python connector is installed from the client package's installation directory, the corresponding Python package name is `taos`. So the above uninstall command includes `taos`, and it doesn't matter if it doesn't exist.
:::
<Tabs>
<TabItem label="Install from PyPI" value="pypi">
Install the latest version of:
```
pip3 install taospy
```
You can also specify a specific version to install:
```
pip3 install taospy==2.3.0
```
</TabItem>
<TabItem label="Install from GitHub" value="github">
```
pip3 install git+https://github.com/taosdata/taos-connector-python.git
```

</TabItem>
</Tabs>

### Installation verification
<Tabs groupId="connect" default="native">
<TabItem value="native" label="native connection">
For native connections, you need to verify that both the client driver and the Python connector itself are installed correctly. The client driver and Python connector have been installed properly if you can successfully import the `taos` module. In the Python Interactive Shell, you can type:
```python
import taos
```
</TabItem>
<TabItem value="rest" label="REST connection">
For REST connections, verifying that the `taosrest` module can be imported successfully can be done in the Python Interactive Shell by typing:
```python
import taosrest
```

</TabItem>
</Tabs>

:::note
If you have multiple versions of Python on your system, you may have various `pip` commands such as `pip3`. Make sure the `pip` command you use corresponds to the intended Python version, e.g.:

```
C:\> pip3 install taospy
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Requirement already satisfied: taospy in c:\users\username\appdata\local\programs\python\python310\lib\site-packages (2.3.0)
```
:::
Before establishing a connection with the connector, we recommend testing the connectivity of the local TDengine CLI to the TDengine cluster.
<Tabs>
<TabItem value="native" label="native connection">
Ensure that the TDengine instance is up and that the FQDN of the machines in the cluster (the FQDN defaults to hostname if you are starting a standalone version) can be resolved locally, by testing with the `ping` command.
```
ping <FQDN>
```
Then test if the cluster can be appropriately connected with TDengine CLI:
```
taos -h <FQDN> -p <PORT>
```
The FQDN above can be the FQDN of any dnode in the cluster, and the PORT is the serverPort corresponding to this dnode.
</TabItem>
<TabItem value="rest" label="REST connection" groupId="connect">
For a REST connection, in addition to making sure the cluster is up, make sure the taosAdapter component is running. This can be tested using the following `curl` command.
```
curl -u root:taosdata http://<FQDN>:<PORT>/rest/sql -d "select server_version()"
```
The FQDN above is the FQDN of the machine running taosAdapter, and PORT is the port on which taosAdapter is listening (default `6041`).
If the test is successful, it will output the server version information, e.g.
```json
{
  "status": "succ",
  "head": ["server_version()"],
  "column_meta": [["server_version()", 8, 8]],
  "data": [["2.4.0.16"]],
  "rows": 1
}
```

</TabItem>
</Tabs>
The following example code assumes that TDengine is installed locally and that the default configuration is used for both FQDN and serverPort.
<Tabs>
<TabItem value="native" label="native connection" groupId="connect">
```python
{{#include docs-examples/python/connect_native_reference.py}}
```
All arguments of the `connect()` function are optional keyword arguments. The following connection parameters can be specified:
- `host` : The FQDN of the node to connect to. There is no default value. If this parameter is not provided, the firstEP in the client configuration file will be connected.
- `user` : The TDengine user name. The default value is `root`.
- `password` : The TDengine password. The default value is `taosdata`.
- `config` : The path of the client configuration file. The default value is `/etc/taos` on Linux.
- `timezone` : The timezone used to convert the TIMESTAMP data in the query results to python `datetime` objects. The default is the local timezone.
:::warning
`config` and `timezone` are both process-level configurations. We recommend that all connections made by a process use the same parameter values. Otherwise, unpredictable errors may occur.
:::
:::tip
The `connect()` function returns a `taos.TaosConnection` instance. In client-side multi-threaded scenarios, we recommend that each thread request a separate connection instance rather than sharing a connection between multiple threads.
:::
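As an illustration, here is a minimal sketch (not part of the original example files; the hostname and credentials are assumptions matching the defaults above) of passing these keyword arguments to `connect()` and issuing a statement through a PEP 249 cursor:

```python
import taos

# All keyword arguments are optional; anything omitted falls back to the
# defaults described above (firstEP from the client configuration, root/taosdata).
conn = taos.connect(host="localhost", user="root", password="taosdata")

cursor = conn.cursor()            # PEP 249 style cursor
cursor.execute("SHOW DATABASES")  # run a simple statement
for row in cursor.fetchall():     # fetch the full result set
    print(row)

cursor.close()
conn.close()
```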
</TabItem>

<TabItem value="rest" label="REST connection">

```python
{{#include docs-examples/python/connect_rest_examples.py:connect}}
```
All arguments to the `connect()` function are optional keyword arguments. The following connection parameters can be specified:
- `host`: The host to connect to. The default is localhost.
- `user`: TDengine user name. The default is `root`.
- `password`: TDengine user password. The default is `taosdata`.
- `port`: The port on which the taosAdapter REST service listens. Default is 6041.
- `timeout`: HTTP request timeout in seconds. The default is `socket._GLOBAL_DEFAULT_TIMEOUT`. Usually, no configuration is needed.
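For illustration, a minimal sketch (not from the original example files; host, port, and credentials are assumptions matching the defaults above) of a REST connection and a simple query through its PEP 249 cursor:

```python
import taosrest

# All keyword arguments are optional; the defaults are listed above.
conn = taosrest.connect(host="localhost", port=6041, user="root", password="taosdata")

cursor = conn.cursor()
cursor.execute("SELECT server_version()")
print(cursor.fetchall())
```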
</TabItem>
</Tabs>
## Sample program
### Basic Usage
<Tabs default="native" groupId="connect">
<TabItem value="native" label="native connection">
##### TaosConnection class
The `TaosConnection` class contains both an implementation of the PEP249 Connection interface (e.g., the `cursor()` method and the `close()` method) and many extensions (e.g., the `execute()`, `query()`, `schemaless_insert()`, and `subscribe()` methods).
```python title="execute method"
{{#include docs-examples/python/connection_usage_native_reference.py:insert}}
```

```python title="query method"
{{#include docs-examples/python/connection_usage_native_reference.py:query}}
```
:::tip
The queried results can only be fetched once. For example, only one of `fetch_all()` and `fetch_all_into_dict()` can be used in the example above. Repeated fetches will result in an empty list.
:::
##### Use of TaosResult class
In the above example of using the `TaosConnection` class, we have shown two ways to get the result of a query: `fetch_all()` and `fetch_all_into_dict()`. In addition, `TaosResult` also provides methods to iterate through the result set by rows (`rows_iter`) or by data blocks (`blocks_iter`). Using these two methods will be more efficient in scenarios where the query has a large amount of data.
```python title="blocks_iter method"
{{#include docs-examples/python/result_set_examples.py}}
```

##### Use of the TaosCursor class

:::note
The TaosCursor class uses native connections for write and query operations. In a client-side multi-threaded scenario, this cursor should not be shared by multiple threads.
:::
</TabItem>
<TabItem value="rest" label="REST connection">
##### Use of TaosRestCursor class
The `TaosRestCursor` class is an implementation of the PEP249 Cursor interface.
##### Use of the RestClient class
The `RestClient` class is a direct wrapper for the [REST API](/reference/rest-api). It contains only a `sql()` method for executing arbitrary SQL statements and returning the result.
```python title="Use of RestClient"
{{#include docs-examples/python/rest_client_example.py}}
```
For a more detailed description of the `sql()` method, please refer to [RestClient](https://docs.taosdata.com/api/taospy/taosrest/restclient.html).
</TabItem>
</Tabs>
### Used with pandas

<Tabs default="native" groupId="connect">
<TabItem value="native" label="native connection">

```python
{{#include docs-examples/python/conn_native_pandas.py}}
```
</TabItem>
<TabItem value="rest" label="REST connection">
```python
{{#include docs-examples/python/conn_rest_pandas.py}}
```

</TabItem>
</Tabs>

### Other sample programs

| Example program links | Example program content |
| ------------------------------------------------------------------------------------------------------------- | ------------------------ |
| [bind_multi.py](https://github.com/taosdata/taos-connector-python/blob/main/examples/bind-multi.py) | parameter binding, bind multiple rows at once |
| [bind_row.py](https://github.com/taosdata/taos-connector-python/blob/main/examples/bind-row.py) | parameter binding, bind one row at a time |
| [insert_lines.py](https://github.com/taosdata/taos-connector-python/blob/main/examples/insert-lines.py) | InfluxDB line protocol writing |
| [json_tag.py](https://github.com/taosdata/taos-connector-python/blob/main/examples/json-tag.py) | Use JSON type tags |
| [subscribe-async.py](https://github.com/taosdata/taos-connector-python/blob/main/examples/subscribe-async.py) | Asynchronous subscription |
| [subscribe-sync.py](https://github.com/taosdata/taos-connector-python/blob/main/examples/subscribe-sync.py) | synchronous subscription |
import RustOpenTSDBTelnet from "../../04-develop/03-insert-data/_rust_opts_telnet.mdx"
import RustOpenTSDBJson from "../../04-develop/03-insert-data/_rust_opts_json.mdx"
import RustQuery from "../../04-develop/04-query-data/_rust.mdx"
`libtaos` is the official Rust language connector for TDengine. Rust developers can use it to develop applications that access TDengine.
`libtaos` provides two ways to establish connections. One is the **Native Connection**, which connects to TDengine instances via the TDengine client driver (taosc). The other is **REST connection**, which connects to TDengine instances via taosAdapter's REST interface.
The source code for `libtaos` is hosted on [GitHub](https://github.com/taosdata/libtaos-rs).
REST connections are supported on all platforms that can run Rust.
## Version support
Please refer to [version support list](/reference/connector#version-support).
The Rust connector is still under rapid development and is not guaranteed to be backward compatible before 1.0. We recommend using TDengine version 2.4 or higher to avoid known issues.
## Installation
### Pre-installation
* Install the Rust development toolchain
* If using the native connection, please install the TDengine client driver. Please refer to [install client driver](/reference/connector#install-client-driver)
### Adding libtaos dependencies
Add [libtaos][libtaos] to the `Cargo.toml` file.

<Tabs defaultValue="native">
<TabItem value="native" label="native connection">

```toml
[dependencies]
# use default feature
libtaos = "*"
```

</TabItem>
<TabItem value="rest" label="REST connection">
Add [libtaos][libtaos] to the `Cargo.toml` file and enable the `rest` feature.
```toml
[dependencies]
# use rest feature
libtaos = { version = "*", features = ["rest"] }
```

</TabItem>
</Tabs>
The [Builder Pattern](https://doc.rust-lang.org/1.0.0/style/ownership/builders.html) is Rust's solution for handling complex data types or optional configuration. The [libtaos] implementation uses the connection constructor [TaosCfgBuilder] as the entry point for the TDengine Rust connector. [TaosCfgBuilder] provides optional configuration of the server, port, database, username, password, and so on.
Using the `default()` method, you can construct a [TaosCfg] with default parameters for subsequent connections to the database or establishing connection pools.
```rust
let cfg = TaosCfgBuilder::default().build()?;
```
### Connection pooling
In complex applications, we recommend enabling connection pooling. Connection pooling for [libtaos] is implemented using [r2d2].
A connection pool with default parameters can be generated as follows:
```rust
let pool = r2d2::Pool::new(cfg)?;
```

You can set the same connection pool parameters using the connection pool's constructor:

```rust
let pool = r2d2::Pool::builder()
    .max_size(10000) // maximum number of connections
.build(cfg);
```
In the application code, use `pool.get()?` to get a connection object [Taos].
```rust
let taos = pool.get()?;
```
The [Taos] structure is the connection manager in [libtaos] and provides two main APIs:
1. `exec`: Execute some non-query SQL statements, such as `CREATE`, `ALTER`, `INSERT`, etc.
```rust
taos.exec().await?
```
2. `query`: Execute the query statement and return the [TaosQueryData] object.
```rust
let q = taos.query("select * from log.logs").await?
```

Note that Rust asynchronous functions and an asynchronous runtime are required.

[Taos] also provides shortcut methods for several common operations:
- `.create_database(database: &str)`: Executes the `CREATE DATABASE` statement.
- `.use_database(database: &str)`: Executes the `USE` statement.
In addition, this structure is the entry point for the [Bind Interface](#bind-interface) and the [Line Protocol Interface](#line-protocol-interface). Please refer to the specific API descriptions for usage.
### Bind Interface
Similar to the C interface, Rust provides a wrapper for the parameter binding (bind) interface. First, create a bind object [Stmt] for a SQL statement from the [Taos] object:
```rust
let mut stmt: Stmt = taos.stmt("insert into ? values(?, ?)")?;
```
The bind object provides a set of interfaces for implementing parameter binding.
##### `.set_tbname(tbname: impl ToCString)`
To bind a table name.

Once the parameters are bound, execute the statement:

```rust
stmt.execute()?;
//stmt.execute()?;
```
### Line protocol interface
The line protocol interface supports multiple modes and different timestamp precisions, and it requires importing constants from the `schemaless` module:
```rust
use libtaos::*;
use libtaos::schemaless::*;
```
- InfluxDB line protocol
```rust
let lines = [
    // ... one or more InfluxDB line protocol records ...
];
```
---
title: "taosAdapter"
description: "taosAdapter 是一个 TDengine 的配套工具,是 TDengine 集群和应用程序之间的桥梁和适配器。它提供了一种易于使用和高效的方式来直接从数据收集代理软件(如 Telegraf、StatsD、collectd 等)摄取数据。它还提供了 InfluxDB/OpenTSDB 兼容的数据摄取接口,允许 InfluxDB/OpenTSDB 应用程序无缝移植到 TDengine"
description: "taosAdapter is a TDengine companion tool that acts as a bridge and adapter between TDengine clusters and applications. It provides an easy-to-use and efficient way to ingest data directly from data collection agent software such as Telegraf, StatsD, collectd, etc. It also provides an InfluxDB/OpenTSDB compatible data ingestion interface, allowing InfluxDB/OpenTSDB applications to be seamlessly ported to TDengine."
sidebar_label: "taosAdapter"
---
import Prometheus from "./_prometheus.mdx"
import CollectD from "./_collectd.mdx"
import StatsD from "./_statsd.mdx"
import Icinga2 from "./_icinga2.mdx"
import TCollector from "./_tcollector.mdx"
taosAdapter is a TDengine companion tool that acts as a bridge and adapter between TDengine clusters and applications. It provides an easy-to-use and efficient way to ingest data directly from data collection agent software such as Telegraf, StatsD, collectd, etc. It also provides an InfluxDB/OpenTSDB compatible data ingestion interface that allows InfluxDB/OpenTSDB applications to be seamlessly ported to TDengine.
taosAdapter provides the following features:
- RESTful interface
- InfluxDB v1 compliant write interface
- Compatible with OpenTSDB JSON and telnet format writes
- Seamless connection to Telegraf
- Seamless connection to collectd
- Seamless connection to StatsD
- Supports Prometheus remote_read and remote_write
## taosAdapter architecture diagram
![taosAdapter Architecture](taosAdapter-architecture.png)
## taosAdapter Deployment Method
### Install taosAdapter
taosAdapter has been part of the TDengine server software since TDengine v2.4.0.0. If you use the TDengine server, you do not need any additional steps to install taosAdapter. You can download the TDengine server installation package (taosAdapter is included in v2.4.0.0 and later versions) from the [TAOSData official website](https://taosdata.com/en/all-downloads/). If you need to deploy taosAdapter separately on a server other than the TDengine server, you should install the full TDengine on that server to install taosAdapter. If you need to build taosAdapter from source code, you can refer to the [Building taosAdapter](https://github.com/taosdata/taosadapter/blob/develop/BUILD.md) documentation.
### Start/Stop taosAdapter
On Linux systems, the taosAdapter service is managed by `systemd` by default. You can use the command `systemctl start taosadapter` to start the taosAdapter service and use the command `systemctl stop taosadapter` to stop the taosAdapter service.
### Remove taosAdapter
Use the command `rmtaos` (if you installed from a tar.gz package) or your package management command, such as rpm or apt (if you installed from a package), to remove the TDengine server software, including taosAdapter.
### Upgrade taosAdapter
taosAdapter and the TDengine server must be the same version. Upgrade taosAdapter by upgrading the TDengine server. A taosAdapter deployed separately from taosd must be upgraded by upgrading the TDengine server on its own host.
## taosAdapter parameter list
taosAdapter is configurable via command-line arguments, environment variables, and configuration files. The default configuration file is `/etc/taos/taosadapter.toml` on Linux.
Command-line arguments take precedence over environment variables, which take precedence over configuration files. The command-line usage is `arg=val`, e.g., `taosadapter -p=30000 --debug=true`. The detailed list is as follows:
```shell
Usage of taosAdapter:
  ...
--version Print the version and exit
```
Note:
Please set the following Cross-Origin Resource Sharing (CORS) parameters according to your actual situation when making interface calls from a browser:
```text
AllowAllOrigins
AllowOrigins
AllowHeaders
ExposeHeaders
AllowCredentials
AllowWebSockets
```
You do not need to care about these configurations if you do not make interface calls through the browser.
For details on the CORS protocol, please refer to: [https://www.w3.org/wiki/CORS_Enabled](https://www.w3.org/wiki/CORS_Enabled) or [https://developer.mozilla.org/zh-CN/docs/Web/HTTP/CORS](https://developer.mozilla.org/zh-CN/docs/Web/HTTP/CORS).
See [example/config/taosadapter.toml](https://github.com/taosdata/taosadapter/blob/develop/example/config/taosadapter.toml) for sample configuration files.
## Feature List
- Compatible with RESTful interfaces
[https://www.taosdata.com/cn/documentation/connector#restful](https://www.taosdata.com/cn/documentation/connector#restful)
- Compatible with InfluxDB v1 write interface
[https://docs.influxdata.com/influxdb/v2.0/reference/api/influxdb-1x/write/](https://docs.influxdata.com/influxdb/v2.0/reference/api/influxdb-1x/write/)
- Compatible with OpenTSDB JSON and telnet format writes
- <http://opentsdb.net/docs/build/html/api_http/put.html>
- <http://opentsdb.net/docs/build/html/api_telnet/put.html>
- Seamless connection to collectd
collectd is a system statistics collection daemon, please visit [https://collectd.org/](https://collectd.org/) for more information.
- Seamless connection with StatsD
StatsD is a simple yet powerful daemon for aggregating statistical information. Please visit [https://github.com/statsd/statsd](https://github.com/statsd/statsd) for more information.
- Seamless connection with icinga2
icinga2 is a software that collects inspection result metrics and performance data. Please visit [https://icinga.com/docs/icinga-2/latest/doc/14-features/#opentsdb-writer](https://icinga.com/docs/icinga-2/latest/doc/14-features/#opentsdb-writer) for more information.
- Seamless connection to TCollector
TCollector is a client process that collects data from a local collector and pushes the data to OpenTSDB. Please visit [http://opentsdb.net/docs/build/html/user_guide/utilities/tcollector.html](http://opentsdb.net/docs/build/html/user_guide/utilities/tcollector.html) for more information.
- Seamless connection to node_exporter
node_exporter is an exporter for machine metrics. Please visit [https://github.com/prometheus/node_exporter](https://github.com/prometheus/node_exporter) for more information.
- Support for Prometheus remote_read and remote_write
remote_read and remote_write are interfaces for Prometheus to read data from and write data to other data storage solutions. Please visit [https://prometheus.io/blog/2019/10/10/remote-read-meets-streaming/#remote-apis](https://prometheus.io/blog/2019/10/10/remote-read-meets-streaming/#remote-apis) for more information.
## Interfaces
### TDengine RESTful interface
You can use any client that supports the HTTP protocol to write data to or query data from TDengine by accessing the REST interface address `http://<fqdn>:6041/<APIEndPoint>`. See the [official documentation](/reference/connector#restful) for details. The following EndPoints are supported:
```text
/rest/sql
/rest/sqlt
/rest/sqlutc
```
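For illustration, here is a minimal sketch (not part of the original text; hostname, port, and credentials are assumptions matching the defaults) of calling the `/rest/sql` EndPoint from Python with the `requests` library:

```python
import requests

# The SQL statement goes in the request body; authentication is HTTP Basic.
response = requests.post(
    "http://localhost:6041/rest/sql",
    data="SELECT server_version()",
    auth=("root", "taosdata"),
)
print(response.json())
```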
### InfluxDB
You can use any client that supports the http protocol to access the Restful interface address `http://<fqdn>:6041/<APIEndPoint>` to write data in InfluxDB compatible format to TDengine. The EndPoint is as follows:
```text
/influxdb/v1/write
```
The following InfluxDB query parameters are supported:
- `db` Specifies the database name used by TDengine
- `precision` The time precision used by TDengine
- `u` TDengine user name
- `p` TDengine password
Note: InfluxDB token authorization is not supported at present. Only Basic authorization and query parameter validation are supported.
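As a sketch of how these query parameters are used (assuming a running taosAdapter on `localhost:6041`, an existing database named `test`, and default credentials):

```python
import requests

# One record in InfluxDB line protocol: measurement,tags fields timestamp
line = "meters,location=California,groupid=2 current=11.3,voltage=221 1648432611249000000"

response = requests.post(
    "http://localhost:6041/influxdb/v1/write",
    params={"db": "test", "u": "root", "p": "taosdata"},
    data=line,
)
print(response.status_code)
```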
### OpenTSDB
You can use any client that supports the HTTP protocol to access the RESTful interface address `http://<fqdn>:6041/<APIEndPoint>` to write data in OpenTSDB compatible format to TDengine. The EndPoints are as follows:
```text
/opentsdb/v1/put/json/:db
/opentsdb/v1/put/telnet/:db
```
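For illustration, a minimal sketch (same assumptions as above: running taosAdapter, an existing `test` database, default credentials) of writing one data point in OpenTSDB JSON format:

```python
import requests

# A single data point in standard OpenTSDB JSON format.
point = {
    "metric": "sys.cpu.nice",
    "timestamp": 1648432611249,
    "value": 18.0,
    "tags": {"host": "web01", "dc": "lga"},
}

response = requests.post(
    "http://localhost:6041/opentsdb/v1/put/json/test",
    json=[point],
    auth=("root", "taosdata"),
)
print(response.status_code)
```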
### TCollector
<TCollector />
### node_exporter
node_exporter is an exporter, used by Prometheus, of hardware and OS metrics exposed by the \*NIX kernel.
- Enable the taosAdapter configuration `node_exporter.enable`
- Set the configuration of the node_exporter
- Restart taosAdapter
### Prometheus
<Prometheus />
## Memory usage optimization methods
taosAdapter monitors its memory usage during operation and adjusts its behavior via two thresholds. Valid values are integers from -1 to 100, expressed as a percentage of the system's physical memory.
- pauseQueryMemoryThreshold
- pauseAllMemoryThreshold
Stops processing query requests when the `pauseQueryMemoryThreshold` threshold is exceeded.
HTTP response content:
- code 503
- body "query memory exceeds threshold"
Stops processing all write and query requests when the `pauseAllMemoryThreshold` threshold is exceeded.
HTTP response content:
- code 503
- body "memory exceeds threshold"
The corresponding functions are resumed when memory usage falls back below the threshold.
Status check interface `http://<fqdn>:6041/-/ping`
- Returns `code 200` normally.
- With no parameter: returns `code 503` if memory exceeds `pauseAllMemoryThreshold`.
- With request parameter `action=query`: returns `code 503` if memory exceeds `pauseQueryMemoryThreshold` or `pauseAllMemoryThreshold`.
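For example, a load balancer health probe against the status check interface could look like this sketch (hostname and port are assumptions):

```python
import requests

# Plain liveness check: 200 when healthy, 503 when pauseAllMemoryThreshold is exceeded.
alive = requests.get("http://localhost:6041/-/ping")
print(alive.status_code)

# Query-capacity check: 503 also when pauseQueryMemoryThreshold is exceeded.
can_query = requests.get("http://localhost:6041/-/ping", params={"action": "query"})
print(can_query.status_code)
```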
Corresponding configuration parameters:
```text
monitor.collectDuration           monitoring interval                                                  environment variable `TAOS_MONITOR_COLLECT_DURATION` (default value 3s)
monitor.incgroup                  whether to run in a cgroup (set to true when running in a container) environment variable `TAOS_MONITOR_INCGROUP`
monitor.pauseAllMemoryThreshold   memory threshold for stopping inserts and queries                    environment variable `TAOS_MONITOR_PAUSE_ALL_MEMORY_THRESHOLD` (default 80)
monitor.pauseQueryMemoryThreshold memory threshold for stopping queries                                environment variable `TAOS_MONITOR_PAUSE_QUERY_MEMORY_THRESHOLD` (default 70)
```
You can adjust these thresholds according to your specific application scenario and operation strategy; we recommend using operation monitoring software to monitor the system memory status in a timely manner. A load balancer can also check the taosAdapter running status through this interface.
## taosAdapter Monitoring Metrics
taosAdapter collects HTTP-related metrics, CPU percentage, and memory percentage.
### HTTP interface
An interface conforming to [OpenMetrics](https://github.com/OpenObservability/OpenMetrics/blob/main/specification/OpenMetrics.md) is provided at:
```text
http://<fqdn>:6041/metrics
```
### Write to TDengine
taosAdapter supports writing the metrics of HTTP monitoring, CPU percentage, and memory percentage to TDengine.
The related configuration parameters are as follows:
| **Configuration items** | **Description** | **Default values** |
| ----------------------- | --------------------------------------------------------- | ---------- |
| monitor.collectDuration | CPU and memory collection interval | 3s |
| monitor.identity | Identifier of the current taosAdapter; if not set, 'hostname:port' will be used | |
| monitor.incgroup | whether it is running in a cgroup (set to true for running in a container) | false |
| monitor.writeToTD | Whether to write to TDengine | true |
| monitor.user | TDengine connection username | root |
| monitor.password | TDengine connection password | taosdata |
| monitor.writeInterval | Write to TDengine interval | 30s |
## Limit the number of results returned
taosAdapter controls the number of results returned via the parameter `restfulRowLimit`; -1 means no limit, and the default is no limit.
This parameter controls the number of results returned by the following interfaces:
- `http://<fqdn>:6041/rest/sql`
- `http://<fqdn>:6041/rest/sqlt`
- `http://<fqdn>:6041/rest/sqlutc`
- `http://<fqdn>:6041/prometheus/v1/remote_read/:db`
## Troubleshooting
You can check the taosAdapter running status with the `systemctl status taosadapter` command.
You can also adjust the verbosity of the taosAdapter log output by setting the `--logLevel` parameter or the environment variable `TAOS_ADAPTER_LOG_LEVEL`. Valid values are: panic, fatal, error, warn, warning, info, debug, and trace.
## How to migrate from older TDengine versions to taosAdapter
In TDengine server 2.2.x.x or earlier, the TDengine server process (taosd) contained an embedded HTTP service. As mentioned earlier, taosAdapter is standalone software managed by `systemd`, with its own process. Some configuration parameters and behaviors differ between the two; see the following table for details.
| **#** | **embedded httpd**  | **taosAdapter**                                   | **comment**                                                                                                                                  |
| ----- | ------------------- | ------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------- |
| 1     | httpEnableRecordSql | --logLevel=debug                                  |                                                                                                                                              |
| 2     | httpMaxThreads      | n/a                                               | taosAdapter automatically manages thread pools; this parameter is not needed                                                                 |
| 3     | telegrafUseFieldNum | See the taosAdapter telegraf configuration method |                                                                                                                                              |
| 4     | restfulRowLimit     | restfulRowLimit                                   | The embedded httpd outputs 10240 rows of data by default, with a maximum allowed value of 102400. taosAdapter also provides restfulRowLimit but does not limit it by default. You can configure it according to the actual scenario. |
| 5     | httpDebugFlag       | Not applicable                                    | httpdDebugFlag does not work for taosAdapter                                                                                                 |
| 6     | httpDBNameMandatory | N/A                                               | taosAdapter requires the database name to be specified in the URL                                                                            |
---
title: taosBenchmark
sidebar_label: taosBenchmark
toc_max_heading_level: 4
description: "taosBenchmark (曾用名 taosdemo ) 是一个用于测试 TDengine 产品性能的工具"
description: "taosBenchmark (once called taosdemo ) is a tool for testing the performance of TDengine."
---
## Introduction
taosBenchmark (formerly taosdemo) is a tool for testing the performance of TDengine products. It can test the performance of TDengine's insert, query, and subscription functions and can simulate large amounts of data generated by many devices. taosBenchmark can flexibly control the number and types of databases, supertables, tag columns, and data columns, the number of sub-tables, the amount of data per sub-table, the time interval for inserting data, the number of worker threads, and whether and how to insert out-of-order data, among other things. The installer provides taosdemo as a soft link to taosBenchmark for compatibility with past users.
## Installation
There are two ways to install taosBenchmark:
- Installing the official TDengine installation package will automatically install taosBenchmark. Please refer to [TDengine installation](/operation/pkg-install) for details.
- Compile taos-tools separately and install them. Please refer to the [taos-tools](https://github.com/taosdata/taos-tools) repository for details.
## Run
### Configuration and running methods
taosBenchmark supports two configuration methods: [command-line arguments](#command-line-arguments-in-detail) and a [JSON configuration file](#configuration-file-arguments-in-detail). These two methods are mutually exclusive: when using a configuration file, the only allowed command-line argument is `-f <json file>`, which specifies the configuration file. When running taosBenchmark with command-line arguments to control its behavior, the `-f` parameter cannot be used; all other parameters must be given on the command-line. In addition, taosBenchmark offers a special way of running without any parameters.
taosBenchmark supports complete performance testing of TDengine. The TDengine features it supports fall into three categories: writing, querying, and subscribing. These functions are mutually exclusive, and users can select only one of them each time taosBenchmark runs. Note that the type of functionality to be tested is not configurable when using the command-line method, which can only test write performance. To test the query and subscription performance of TDengine, you must use the configuration file method and specify the function type to test via the `filetype` parameter in the configuration file.
**Make sure that the TDengine cluster is running correctly before running taosBenchmark.**
### Run without command-line arguments
Execute the following commands to quickly experience taosBenchmark's default configuration-based write performance testing of TDengine.
```bash
taosBenchmark
```
When run without parameters, taosBenchmark connects to the TDengine cluster specified in `/etc/taos` by default and creates a database named `test` in TDengine, a supertable named `meters` under the `test` database, and 10,000 sub-tables under the supertable, with 10,000 records written to each sub-table. Note that if a `test` database already exists, this command deletes it first and creates a new `test` database.
### Run with command-line configuration parameters
The `-f <json file>` argument cannot be used when running taosBenchmark with command-line parameters to control its behavior; all configuration parameters must be specified on the command-line. The following is an example of testing taosBenchmark write performance using the command-line approach.
```bash
taosBenchmark -I stmt -n 200 -t 100
```
上面的命令 `taosBenchmark` 将创建一个名为`test`的数据库,在其中建立一张超级表`meters`,在该超级表中建立 100 张子表并使用参数绑定的方式为每张子表插入 200 条记录。
The above `taosBenchmark` command will create a database named `test`, create a super table `meters` in it, create 100 sub-tables under the super table, and insert 200 records into each sub-table using parameter binding.
### 使用配置文件运行
### Run with the configuration file
taosBenchmark 安装包中提供了配置文件的示例,位于 `<install_directory>/examples/taosbenchmark-json`
A sample configuration file is provided in the taosBenchmark installation package under `<install_directory>/examples/taosbenchmark-json`.
使用如下命令行即可运行 taosBenchmark 并通过配置文件控制其行为。
Use the following command-line to run taosBenchmark and control its behavior via a configuration file.
```bash
taosBenchmark -f <json file>
```
**下面是几个配置文件的示例:**
**Here are a few examples of configuration files:**
#### 插入场景 JSON 配置文件示例
#### Example of an insert scenario JSON configuration file
<details>
<summary>insert.json</summary>
......@@ -70,7 +70,7 @@ taosBenchmark -f <json file>
</details>
#### 查询场景 JSON 配置文件示例
#### Example of a query scenario JSON configuration file
<details>
<summary>query.json</summary>
......@@ -81,7 +81,7 @@ taosBenchmark -f <json file>
</details>
#### 订阅场景 JSON 配置文件示例
#### Example of a subscription scenario JSON configuration file
<details>
<summary>subscribe.json</summary>
......@@ -92,343 +92,343 @@ taosBenchmark -f <json file>
</details>
## 命令行参数详解
## Command-line arguments in detail
- **-f/--file <json file\>** :
要使用的 JSON 配置文件,由该文件指定所有参数,本参数与命令行其他参数不能同时使用。没有默认值。
Specify the JSON configuration file to use; all parameters are then taken from that file. This parameter cannot be combined with any other command-line parameter. There is no default value.
- **-c/--config-dir <dir\>** :
TDengine 集群配置文件所在的目录,默认路径是 /etc/taos 。
Specify the directory where the TDengine cluster configuration file is located. The default path is `/etc/taos`.
- **-h/--host <host\>** :
指定要连接的 TDengine 服务端的 FQDN,默认值为 localhost 。
Specify the FQDN of the TDengine server to connect to. The default value is localhost.
- **-P/--port <port\>** :
要连接的 TDengine 服务器的端口号,默认值为 6030 。
The port number of the TDengine server to connect to, the default value is 6030.
- **-I/--interface <insertMode\>** :
插入模式,可选项有 taosc, rest, stmt, sml, sml-rest, 分别对应普通写入、restful 接口写入、参数绑定接口写入、schemaless 接口写入、restful schemaless 接口写入 (由 taosAdapter 提供)。默认值为 taosc。
Insert mode. Options are taosc, rest, stmt, sml, sml-rest, corresponding to normal write, restful interface writing, parameter binding interface writing, schemaless interface writing, RESTful schemaless interface writing (provided by taosAdapter). The default value is taosc.
- **-u/--user <user\>** :
用于连接 TDengine 服务端的用户名,默认为 root 。
User name to connect to the TDengine server. Default is root.
- **-p/--password <passwd\>** :
用于连接 TDengine 服务端的密码,默认值为 taosdata。
The password to use when connecting to the TDengine server. The default value is `taosdata`.
- **-o/--output <file\>** :
结果输出文件的路径,默认值为 ./output.txt。
Specify the path of the result output file. The default value is `./output.txt`.
- **-T/--thread <threadNum\>** :
插入数据的线程数量,默认为 8 。
The number of threads to insert data. Default is 8.
- **-B/--interlace-rows <rowNum\>** :
启用交错插入模式并同时指定向每个子表每次插入的数据行数。交错插入模式是指依次向每张子表插入由本参数所指定的行数并重复这个过程,直到所有子表的数据都插入完成。默认值为 0, 即向一张子表完成数据插入后才会向下一张子表进行数据插入。
Enables interleaved insertion mode and specifies the number of rows to insert into each child table per round. Interleaved insertion mode means inserting the number of rows specified by this parameter into each sub-table in turn and repeating the process until all sub-tables have received all of their data. The default value is 0, i.e., data insertion into the next sub-table starts only after one sub-table has been fully populated.
- **-i/--insert-interval <timeInterval\>** :
指定交错插入模式的插入间隔,单位为 ms,默认值为 0。 只有当 `-B/--interlace-rows` 大于 0 时才起作用。意味着数据插入线程在为每个子表插入隔行扫描记录后,会等待该值指定的时间间隔后再进行下一轮写入。
Specify the insert interval in `ms` for interleaved insert mode. The default value is 0. It only works if `-B/--interlace-rows` is greater than 0. It means that after inserting the interlaced rows for every child table, each data insertion thread waits for the interval specified by this value before proceeding to the next round of writes.
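For example, the following command (a sketch; the table and row counts are arbitrary) inserts 10 rows into each of 100 sub-tables per round, waiting 50 ms between rounds, until every sub-table holds 1,000 rows:

```bash
taosBenchmark -t 100 -n 1000 -B 10 -i 50
```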
- **-r/--rec-per-req <rowNum\>** :
每次向 TDengine 请求写入的数据行数,默认值为 30000 。
The number of rows written to TDengine per request. The default value is 30000.
- **-t/--tables <tableNum\>** :
指定子表的数量,默认为 10000 。
Specify the number of sub-tables. The default is 10000.
- **-S/--timestampstep <stepLength\>** :
每个子表中插入数据的时间戳步长,单位是 ms,默认值是 1。
Timestamp step for inserting data in each child table in ms, default is 1.
- **-n/--records <recordNum\>** :
每个子表插入的记录数,默认值为 10000 。
The number of records inserted into each sub-table. The default value is 10000.
- **-d/--database <dbName\>** :
所使用的数据库的名称,默认值为 test 。
The name of the database used, the default value is `test`.
- **-b/--data-type <colType\>** :
超级表的数据列的类型。如果不使用则默认为有三个数据列,其类型分别为 FLOAT, INT, FLOAT 。
Specify the types of the super table's data columns. If not set, the default is three data columns of type FLOAT, INT, and FLOAT.
- **-l/--columns <colNum\>** :
超级表的数据列的总数量。如果同时设置了该参数和 `-b/--data-type`,则最后的结果列数为两者取大。如果本参数指定的数量大于 `-b/--data-type` 指定的列数,则未指定的列类型默认为 INT, 例如: `-l 5 -b float,double`, 那么最后的列为 `FLOAT,DOUBLE,INT,INT,INT`。如果 columns 指定的数量小于或等于 `-b/--data-type` 指定的列数,则结果为 `-b/--data-type` 指定的列和类型,例如: `-l 3 -b float,double,float,bigint`,那么最后的列为 `FLOAT,DOUBLE,FLOAT,BIGINT`
Specify the total number of data columns in the super table. If both this parameter and `-b/--data-type` are set, the resulting number of columns is the greater of the two. If the number specified here is greater than the number of columns given by `-b/--data-type`, the unspecified columns default to INT; for example, `-l 5 -b float,double` yields the columns `FLOAT,DOUBLE,INT,INT,INT`. If the number specified is less than or equal to the number of columns given by `-b/--data-type`, the result is exactly the columns and types specified by `-b/--data-type`; for example, `-l 3 -b float,double,float,bigint` yields the columns `FLOAT,DOUBLE,FLOAT,BIGINT`.
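As a sketch of the interplay described above, the following hypothetical command produces the five columns `FLOAT,DOUBLE,INT,INT,INT`:

```bash
taosBenchmark -l 5 -b float,double -t 10 -n 100
```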
- **-A/--tag-type <tagType\>** :
超级表的标签列类型。nchar 和 binary 类型可以同时设置长度,例如:
The tag column types of the super table. For the nchar and binary types, a length can also be specified, for example:
```
taosBenchmark -A INT,DOUBLE,NCHAR,BINARY(16)
```
如果没有设置标签类型,默认是两个标签,其类型分别为 INT 和 BINARY(16)。
注意:在有的 shell 比如 bash 命令里面 “()” 需要转义,则上述指令应为:
If no tag types are set, the default is two tags, of types INT and BINARY(16).
Note: in some shells, such as bash, "()" needs to be escaped, so the above command should be:
```
taosBenchmark -A INT,DOUBLE,NCHAR,BINARY\(16\)
```
- **-w/--binwidth <length\>**:
nchar 和 binary 类型的默认长度,默认值为 64。
specify the default length for nchar and binary types. The default value is 64.
- **-m/--table-prefix <tablePrefix\>** :
子表名称的前缀,默认值为 "d"。
The prefix of the sub-table name, the default value is "d".
- **-E/--escape-character** :
开关参数,指定在超级表和子表名称中是否使用转义字符。默认值为不使用。
Switch parameter specifying whether to use escape characters in the super table and sub-table names. It is not used by default.
- **-C/--chinese** :
开关参数,指定 nchar 和 binary 是否使用 Unicode 中文字符。默认值为不使用。
Switch parameter specifying whether to use Unicode Chinese characters in nchar and binary data. It is not used by default.
- **-N/--normal-table** :
开关参数,指定只创建普通表,不创建超级表。默认值为 false。仅当插入模式为 taosc, stmt, rest 模式下可以使用。
Switch parameter indicating that taosBenchmark will create only normal tables instead of super tables. The default value is false. It can be used only when the insert mode is taosc, stmt, or rest.
- **-M/--random** :
开关参数,插入数据为生成的随机值。默认值为 false。若配置此参数,则随机生成要插入的数据。对于数值类型的 标签列/数据列,其值为该类型取值范围内的随机值。对于 NCHAR 和 BINARY 类型的 标签列/数据列,其值为指定长度范围内的随机字符串。
Switch parameter indicating that the inserted data are randomly generated values. The default is false. If this parameter is set, the data to be inserted are generated randomly: for tag/data columns of numeric types, the value is a random value within the value range of that type; for NCHAR and BINARY tag/data columns, the value is a random string within the specified length range.
- **-x/--aggr-func** :
开关参数,指示插入后查询聚合函数。默认值为 false。
Switch parameter to indicate query aggregation function after insertion. The default value is false.
- **-y/--answer-yes** :
开关参数,要求用户在提示后确认才能继续。默认值为 false 。
Switch parameter that requires the user to confirm at the prompt to continue. The default value is false.
- **-O/--disorder <Percentage\>** :
指定乱序数据的百分比概率,其值域为 [0,50]。默认为 0,即没有乱序数据。
Specify the percentage probability of disordered data, with a value range of [0,50]. The default is 0, i.e., there is no disordered data.
- **-R/--disorder-range <timeRange\>** :
指定乱序数据的时间戳回退范围。所生成的乱序时间戳为非乱序情况下应该使用的时间戳减去这个范围内的一个随机值。仅在 `-O/--disorder` 指定的乱序数据百分比大于 0 时有效。
Specify the timestamp fallback range for disordered data. Each generated disordered timestamp is the timestamp that would otherwise have been used minus a random value within this range. Valid only if the percentage of disordered data specified by `-O/--disorder` is greater than 0.
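For example, the following sketch makes roughly 10% of the generated rows disordered, with their timestamps pulled back by up to 1000 ms (the values are illustrative):

```bash
taosBenchmark -t 100 -n 1000 -O 10 -R 1000
```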
- **-F/--prepare_rand <Num\>** :
生成的随机数据中唯一值的数量。若为 1 则表示所有数据都相同。默认值为 10000 。
Specify the number of unique values in the generated random data. A value of 1 means that all data are equal. The default value is 10000.
- **-a/--replica <replicaNum\>** :
创建数据库时指定其副本数,默认值为 1 。
Specify the number of replicas when creating the database. The default value is 1.
- **-V/--version** :
显示版本信息并退出。不能与其它参数混用。
Show version information and exit. Do not use it with other parameters.
- **-?/--help** :
显示帮助信息并退出。不能与其它参数混用。
Show help information and exit. Do not use it with other parameters.
## 配置文件参数详解
## Configuration file parameters in detail
### 通用配置参数
### General configuration parameters
本节所列参数适用于所有功能模式。
The parameters listed in this section apply to all function modes.
- **filetype** : 要测试的功能,可选值为 `insert`, `query``subscribe`。分别对应插入、查询和订阅功能。每个配置文件中只能指定其中之一。
- **cfgdir** : TDengine 集群配置文件所在的目录,默认路径是 /etc/taos 。
- **filetype** : The function to be tested, with optional values `insert`, `query` and `subscribe`. These correspond to the insert, query, and subscribe functions, respectively. Users can specify only one of these in each configuration file.
- **cfgdir**: Specify the directory where the TDengine cluster configuration file is located. The default path is /etc/taos.
- **host** : 指定要连接的 TDengine 服务端的 FQDN,默认值为 localhost。
- **host**: Specify the FQDN of the TDengine server to connect. The default value is `localhost`.
- **port** : 要连接的 TDengine 服务器的端口号,默认值为 6030。
- **port**: The port number of the TDengine server to connect to, the default value is `6030`.
- **user** : 用于连接 TDengine 服务端的用户名,默认为 root。
- **user**: The user name of the TDengine server to connect to, the default is `root`.
- **password** : 用于连接 TDengine 服务端的密码,默认值为 taosdata。
- **password**: The password to connect to the TDengine server, the default value is `taosdata`.
### 插入场景配置参数
### Insert scenario configuration parameters
插入场景下 `filetype` 必须设置为 `insert`,该参数及其它通用参数详见[通用配置参数](#通用配置参数)
`filetype` must be set to `insert` in the insert scenario. See [General configuration parameters](#general-configuration-parameters) for details of this parameter and other general parameters.
#### 数据库相关配置参数
#### Database related configuration parameters
创建数据库时的相关参数在 json 配置文件中的 `dbinfo` 中配置,具体参数如下。这些参数与 TDengine 中 `create database` 时所指定的数据库参数相对应。
The parameters for creating a database are configured under `dbinfo` in the JSON configuration file, as listed below. They correspond to the database parameters that can be specified in TDengine's `create database` statement. A configuration sketch follows the parameter list.
- **name** : 数据库名。
- **name**: specify the name of the database.
- **drop** : 插入前是否删除数据库,默认为 true。
- **drop**: indicate whether to delete the database before inserting. The default is true.
- **replica** : 创建数据库时指定的副本数。
- **replica**: specify the number of replicas when creating the database.
- **days** : 单个数据文件中存储数据的时间跨度,默认值为 10。
- **days**: specify the time span for storing data in a single data file. The default is 10.
- **cache** : 缓存块的大小,单位是 MB,默认值是 16。
- **cache**: specify the size of the cache blocks in MB. The default value is 16.
- **blocks** : 每个 vnode 中缓存块的数量,默认为 6。
- **blocks**: specify the number of cache blocks in each vnode. The default is 6.
- **precision** : 数据库时间精度,默认值为 "ms"。
- **precision**: specify the database time precision. The default value is "ms".
- **keep** : 保留数据的天数,默认值为 3650。
- **keep**: specify the number of days to keep the data. The default value is 3650.
- **minRows** : 文件块中的最小记录数,默认值为 100。
- **minRows**: specify the minimum number of records in the file block. The default value is 100.
- **maxRows** : 文件块中的最大记录数,默认值为 4096。
- **maxRows**: specify the maximum number of records in the file block. The default value is 4096.
- **comp** : 文件压缩标志,默认值为 2。
- **comp**: specify the file compression level. The default value is 2.
- **walLevel** : WAL 级别,默认为 1。
- **walLevel** : specify WAL level, default is 1.
- **cacheLast** : 是否允许将每个表的最后一条记录保留在内存中,默认值为 0,可选值为 0,1,2,3。
- **cacheLast**: indicate whether to allow the last record of each table to be kept in memory. The default value is 0. The value can be 0, 1, 2, or 3.
- **quorum** : 多副本模式下的写确认数量,默认值为 1。
- **quorum**: specify the number of writing acknowledgments in multi-replica mode. The default value is 1.
- **fsync** : 当 wal 设置为 2 时,fsync 的间隔时间,单位为 ms,默认值为 3000。
- **fsync**: specify the interval of fsync in ms when users set WAL to 2. The default value is 3000.
- **update** : 是否支持数据更新,默认值为 0, 可选值为 0, 1, 2。
- **update** : Indicate whether data updates are supported. The default value is 0; the value can be 0, 1, or 2.
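Below is a minimal sketch of a `dbinfo` section. The field values are illustrative, not recommendations; such a fragment belongs inside a complete insert-scenario configuration file like the sample `insert.json` shipped with taosBenchmark.

```bash
# Print a hypothetical dbinfo fragment of an insert-scenario configuration file;
# all values are placeholders only.
cat <<'EOF'
"dbinfo": {
    "name":      "test",
    "drop":      "yes",
    "replica":   1,
    "days":      10,
    "precision": "ms",
    "keep":      3650,
    "comp":      2,
    "update":    0
}
EOF
```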
#### 超级表相关配置参数
#### Super table related configuration parameters
创建超级表时的相关参数在 json 配置文件中的 `super_tables` 中配置,具体参数如下表。
The parameters for creating super tables are configured in `super_tables` in the json configuration file, as shown below.
- **name**: 超级表名,必须配置,没有默认值。
- **child_table_exists** : 子表是否已经存在,默认值为 "no",可选值为 "yes" 或 "no"。
- **name**: Super table name, mandatory, no default value.
- **child_table_exists** : Whether the child tables already exist. The default value is "no"; the value can be "yes" or "no".
- **child_table_count** : 子表的数量,默认值为 10。
- **child_table_count** : The number of child tables, the default value is 10.
- **child_table_prefix** : 子表名称的前缀,必选配置项,没有默认值。
- **child_table_prefix** : The prefix of the child table name, mandatory configuration item, no default value.
- **escape_character** : 超级表和子表名称中是否包含转义字符,默认值为 "no",可选值为 "yes" 或 "no"。
- **escape_character**: Specify whether the super table and child table names contain escape characters. The default value is "no"; the value can be "yes" or "no".
- **auto_create_table** : 仅当 insert_mode 为 taosc, rest, stmt 并且 childtable_exists 为 "no" 时生效,该参数为 "yes" 表示 taosBenchmark 在插入数据时会自动创建不存在的表;为 "no" 则表示先提前建好所有表再进行插入。
- **auto_create_table**: Takes effect only when insert_mode is taosc, rest, or stmt and child_table_exists is "no". "yes" means taosBenchmark creates non-existent tables automatically while inserting data; "no" means all tables are created in advance before inserting starts.
- **batch_create_tbl_num** : 创建子表时每批次的建表数量,默认为 10。注:实际的批数不一定与该值相同,当执行的 SQL 语句大于支持的最大长度时,会自动截断再执行,继续创建。
- **batch_create_tbl_num** : The number of tables created per batch when creating sub-tables. The default is 10. Note: the actual batch size may differ from this value; when the generated SQL statement exceeds the maximum supported length, it is automatically truncated and executed, and table creation continues with the remainder.
- **data_source** : 数据的来源,默认为 taosBenchmark 随机产生,可以配置为 "rand" 和 "sample"。为 "sample" 时使用 sample_file 参数指定的文件内的数据。
- **data_source**: Specify the source of the data. The default is data randomly generated by taosBenchmark. The value can be "rand" or "sample"; with "sample", taosBenchmark uses the data in the file specified by the `sample_file` parameter.
- **insert_mode** : 插入模式,可选项有 taosc, rest, stmt, sml, sml-rest, 分别对应普通写入、restful 接口写入、参数绑定接口写入、schemaless 接口写入、restful schemaless 接口写入 (由 taosAdapter 提供)。默认值为 taosc 。
- **insert_mode**: insertion mode with options taosc, rest, stmt, sml, sml-rest, corresponding to normal write, restful interface write, parameter binding interface write, schemaless interface write, restful schemaless interface write (provided by taosAdapter). The default value is taosc.
- **non_stop_mode** : 指定是否持续写入,若为 "yes" 则 insert_rows 失效,直到 Ctrl + C 停止程序,写入才会停止。默认值为 "no",即写入指定数量的记录后停止。注:即使在持续写入模式下 insert_rows 失效,但其也必须被配置为一个非零正整数。
- **non_stop_mode**: Specify whether to keep writing. If "yes", insert_rows is ignored, and writing does not stop until the program is terminated with Ctrl + C. The default value is "no", i.e., taosBenchmark stops after the specified number of rows has been written. Note: even though insert_rows is ignored in continuous write mode, it must still be configured as a non-zero positive integer.
- **line_protocol** : 使用行协议插入数据,仅当 insert_mode 为 sml 或 sml-rest 时生效,可选项为 line, telnet, json。
- **line_protocol**: Insert data using line protocol. Only works when insert_mode is sml or sml-rest. The value can be `line`, `telnet`, or `json`.
- **tcp_transfer** : telnet 模式下的通信协议,仅当 insert_mode 为 sml-rest 并且 line_protocol 为 telnet 时生效。如果不配置,则默认为 http 协议。
- **tcp_transfer**: The communication protocol in telnet mode. Takes effect only when insert_mode is sml-rest and line_protocol is telnet. If not configured, the default is the http protocol.
- **insert_rows** : 每个子表插入的记录数,默认为 0 。
- **insert_rows** : The number of inserted rows per child table, default is 0.
- **childtable_offset** : 仅当 childtable_exists 为 yes 时生效,指定从超级表获取子表列表时的偏移量,即从第几个子表开始。
- **childtable_offset**: Takes effect only when childtable_exists is yes. Specifies the offset used when fetching the list of child tables from the super table, i.e., from which child table to start.
- **childtable_limit** : 仅当 childtable_exists 为 yes 时生效,指定从超级表获取子表列表的上限。
- **childtable_limit**: Takes effect only when childtable_exists is yes. Specifies the maximum number of child tables fetched from the super table.
- **interlace_rows** : 启用交错插入模式并同时指定向每个子表每次插入的数据行数。交错插入模式是指依次向每张子表插入由本参数所指定的行数并重复这个过程,直到所有子表的数据都插入完成。默认值为 0, 即向一张子表完成数据插入后才会向下一张子表进行数据插入。
- **interlace_rows**: Enables interleaved insertion mode and specifies the number of rows to insert into each child table per round. Interleaved insertion mode means inserting the number of rows specified by this parameter into each sub-table in turn and repeating the process until all sub-tables have received all of their data. The default value is 0, i.e., data is inserted into one sub-table completely before the next sub-table is populated.
- **insert_interval** : 指定交错插入模式的插入间隔,单位为 ms,默认值为 0。 只有当 `-B/--interlace-rows` 大于 0 时才起作用。意味着数据插入线程在为每个子表插入隔行扫描记录后,会等待该值指定的时间间隔后再进行下一轮写入。
- **insert_interval** : Specifies the insertion interval in ms for interleaved insertion mode. The default value is 0. It only works if `-B/--interlace-rows` is greater than 0. After inserting interlaced rows for each child table, the data insertion thread will wait for the interval specified by this value before proceeding to the next round of writes.
- **partial_col_num** : 若该值为正数 n 时, 则仅向前 n 列写入,仅当 insert_mode 为 taosc 和 rest 时生效,如果 n 为 0 则是向全部列写入。
- **partial_col_num**: If this value is a positive number n, data is written only to the first n columns; if n is 0, data is written to all columns. Takes effect only when insert_mode is taosc or rest.
- **disorder_ratio** : 指定乱序数据的百分比概率,其值域为 [0,50]。默认为 0,即没有乱序数据。
- **disorder_ratio** : Specifies the percentage probability of disordered data in the value range [0,50]. The default is 0, which means there is no disorder data.
- **disorder_range** : 指定乱序数据的时间戳回退范围。所生成的乱序时间戳为非乱序情况下应该使用的时间戳减去这个范围内的一个随机值。仅在 `-O/--disorder` 指定的乱序数据百分比大于 0 时有效。
- **disorder_range** : Specifies the timestamp fallback range for the disordered data. The generated disorder timestamp is the timestamp that should be used in the non-disorder case minus a random value in this range. Valid only if the percentage of disordered data specified by `-O/--disorder` is greater than 0.
- **timestamp_step** : 每个子表中插入数据的时间戳步长,单位与数据库的 `precision` 一致,默认值是 1。
- **timestamp_step**: The timestamp step for inserting data in each child table, in units consistent with the `precision` of the database, the default value is 1.
- **start_timestamp** : 每个子表的时间戳起始值,默认值是 now。
- **start_timestamp** : The timestamp start value of each sub-table, the default value is now.
- **sample_format** : 样本数据文件的类型,现在只支持 "csv" 。
- **sample_format**: The type of the sample data file, now only "csv" is supported.
- **sample_file** : 指定 csv 格式的文件作为数据源,仅当 data_source 为 sample 时生效。若 csv 文件内的数据行数小于等于 prepared_rand,那么会循环读取 csv 文件数据直到与 prepared_rand 相同;否则则会只读取 prepared_rand 个数的行的数据。也即最终生成的数据行数为二者取小。
- **sample_file**: Specify a CSV file as the data source. Takes effect only when data_source is sample. If the number of data rows in the CSV file is less than or equal to prepared_rand, the CSV data is read cyclically until prepared_rand rows have been obtained; otherwise only the first prepared_rand rows are read. In other words, the final number of data rows is the smaller of the two.
- **use_sample_ts** : 仅当 data_source 为 sample 时生效,表示 sample_file 指定的 csv 文件内是否包含第一列时间戳,默认为 no。 若设置为 yes, 则使用 csv 文件第一列作为时间戳,由于同一子表时间戳不能重复,生成的数据量取决于 csv 文件内的数据行数相同,此时 insert_rows 失效。
- **use_sample_ts**: Takes effect only when data_source is sample. Indicates whether the CSV file specified by sample_file contains a timestamp as its first column. The default is no. If set to yes, the first column of the CSV file is used as the timestamp. Since timestamps cannot repeat within the same sub-table, the amount of data generated equals the number of data rows in the CSV file, and insert_rows is ignored.
- **tags_file** : 仅当 insert_mode 为 taosc, rest 的模式下生效。 最终的 tag 的数值与 childtable_count 有关,如果 csv 文件内的 tag 数据行小于给定的子表数量,那么会循环读取 csv 文件数据直到生成 childtable_count 指定的子表数量;否则则只会读取 childtable_count 行 tag 数据。也即最终生成的子表数量为二者取小。
- **tags_file** : Takes effect only when insert_mode is taosc or rest. The final number of tag values is related to childtable_count: if the number of tag rows in the CSV file is smaller than the given number of child tables, the CSV data is read cyclically until the childtable_count child tables have been generated; otherwise only the first childtable_count rows of tag data are read. In other words, the final number of child tables generated is the smaller of the two.
#### 标签列与数据列配置参数
#### Tag and Data Column Configuration Parameters
指定超级表标签列与数据列的配置参数分别在 `super_tables` 中的 `columns``tag` 中。
The configuration parameters for specifying super table tag columns and data columns are in `columns` and `tag` in `super_tables`, respectively.
- **type** : 指定列类型,可选值请参考 TDengine 支持的数据类型。
注:JSON 数据类型比较特殊,只能用于标签,当使用 JSON 类型作为 tag 时有且只能有这一个标签,此时 count 和 len 代表的意义分别是 JSON tag 内的 key-value pair 的个数和每个 KV pair 的 value 的值的长度,value 默认为 string。
- **type**: Specify the column type. For optional values, please refer to the data types supported by TDengine.
Note: the JSON data type is special and can only be used for tags. When a JSON type is used as a tag, it must be the one and only tag. In that case, `count` and `len` indicate, respectively, the number of key-value pairs inside the JSON tag and the length of each pair's value; the values are strings by default.
- **len** : 指定该数据类型的长度,对 NCHAR,BINARY 和 JSON 数据类型有效。如果对其他数据类型配置了该参数,若为 0 , 则代表该列始终都是以 null 值写入;如果不为 0 则被忽略。
- **len**: Specifies the length of this data type, valid for NCHAR, BINARY, and JSON data types. If this parameter is configured for other data types, a value of 0 means that the column is always written with a null value; if it is not 0, it is ignored.
- **count** : 指定该类型列连续出现的数量,例如 "count": 4096 即可生成 4096 个指定类型的列。
- **count**: Specifies the number of consecutive occurrences of the column type, e.g., "count": 4096 generates 4096 columns of the specified type.
- **name** : 列的名字,若与 count 同时使用,比如 "name":"current", "count":3, 则 3 个列的名字分别为 current、current_2、current_3。
- **name** : The name of the column. If used together with count, e.g., "name": "current", "count": 3, the names of the three columns are current, current_2, and current_3.
- **min** : 数据类型的 列/标签 的最小值。
- **min**: The minimum value of the column/tag of the data type.
- **max** : 数据类型的 列/标签 的最大值。
- **max**: The maximum value of the column/tag of the data type.
- **values** : nchar/binary 列/标签的值域,将从值中随机选择。
- **values**: The value domain of the nchar/binary column/tag; the value is chosen randomly from this list.
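Below is a hypothetical fragment illustrating these parameters inside a `super_tables` definition. Note that the sample configuration files shipped with taosBenchmark spell the tag section `tags`; all names and values here are illustrative.

```bash
# Print a hypothetical columns/tags fragment of a super_tables definition;
# names and values are placeholders only.
cat <<'EOF'
"columns": [
    { "type": "FLOAT",  "name": "current", "min": 0, "max": 100 },
    { "type": "INT",    "name": "voltage", "min": 0, "max": 500 },
    { "type": "BINARY", "name": "remark",  "len": 16 }
],
"tags": [
    { "type": "TINYINT", "name": "groupid",  "min": 1, "max": 10 },
    { "type": "BINARY",  "name": "location", "len": 16,
      "values": ["beijing", "shanghai"] }
]
EOF
```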
#### 插入行为配置参数
#### Insertion behavior configuration parameters
- **thread_count** : 插入数据的线程数量,默认为 8。
- **thread_count**: specify the number of threads to insert data. Default is 8.
- **create_table_thread_count** : 建表的线程数量,默认为 8。
- **create_table_thread_count** : The number of threads used to create tables. The default is 8.
- **connection_pool_size** : 预先建立的与 TDengine 服务端之间的连接的数量。若不配置,则与所指定的线程数相同。
- **connection_pool_size** : The number of pre-established connections to the TDengine server. If not configured, it equals the specified number of threads.
- **result_file** : 结果输出文件的路径,默认值为 ./output.txt。
- **result_file** : The path of the result output file. The default value is `./output.txt`.
- **confirm_parameter_prompt** : 开关参数,要求用户在提示后确认才能继续。默认值为 false 。
- **confirm_parameter_prompt**: The switch parameter requires the user to confirm after the prompt to continue. The default value is false.
- **interlace_rows** : 启用交错插入模式并同时指定向每个子表每次插入的数据行数。交错插入模式是指依次向每张子表插入由本参数所指定的行数并重复这个过程,直到所有子表的数据都插入完成。默认值为 0, 即向一张子表完成数据插入后才会向下一张子表进行数据插入。
`super_tables` 中也可以配置该参数,若配置则以 `super_tables` 中的配置为高优先级,覆盖全局设置。
- **interlace_rows**: Enables interleaved insertion mode and specifies the number of rows of data to be inserted into each child table at a time. Interleaved insertion mode means inserting the number of rows specified by this parameter into each sub-table and repeating the process until all sub-tables are inserted. The default value is 0, which means that data will be inserted into the following child table only after data is inserted into one child table.
This parameter can also be configured in `super_tables`, and if so, the configuration in `super_tables` takes precedence and overrides the global setting.
- **insert_interval** :
指定交错插入模式的插入间隔,单位为 ms,默认值为 0。 只有当 `-B/--interlace-rows` 大于 0 时才起作用。意味着数据插入线程在为每个子表插入隔行扫描记录后,会等待该值指定的时间间隔后再进行下一轮写入。
`super_tables` 中也可以配置该参数,若配置则以 `super_tables` 中的配置为高优先级,覆盖全局设置。
Specifies the insertion interval in ms for interleaved insertion mode. The default value is 0. Only works if `-B/--interlace-rows` is greater than 0. It means that after inserting interlace rows for each child table, the data insertion thread will wait for the interval specified by this value before proceeding to the next round of writes.
This parameter can also be configured in `super_tables`, and if configured, the configuration in `super_tables` takes high priority, overriding the global setting.
- **num_of_records_per_req** :
每次向 TDengine 请求写入的数据行数,默认值为 30000 。当其设置过大时,TDengine 客户端驱动会返回相应的错误信息,此时需要调低这个参数的设置以满足写入要求。
The number of rows of data to be written per request to TDengine, the default value is 30000. When it is set too large, the TDengine client driver will return the corresponding error message, so you need to lower the setting of this parameter to meet the writing requirements.
- **prepare_rand** : 生成的随机数据中唯一值的数量。若为 1 则表示所有数据都相同。默认值为 10000 。
- **prepare_rand**: The number of unique values in the generated random data. A value of 1 means that all data are the same. The default value is 10000.
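Tying the preceding sections together, here is a minimal, hypothetical insert-scenario configuration. The key names follow the sample files under `<install_directory>/examples/taosbenchmark-json`; verify them against your taosBenchmark version and treat all values as placeholders.

```bash
# Write a minimal insert-scenario configuration and run it (a sketch, not a benchmark recipe)
cat > insert.json <<'EOF'
{
    "filetype": "insert",
    "cfgdir": "/etc/taos",
    "host": "localhost",
    "port": 6030,
    "user": "root",
    "password": "taosdata",
    "thread_count": 8,
    "result_file": "./output.txt",
    "num_of_records_per_req": 30000,
    "databases": [{
        "dbinfo": { "name": "test", "drop": "yes" },
        "super_tables": [{
            "name": "meters",
            "child_table_exists": "no",
            "childtable_count": 100,
            "childtable_prefix": "d",
            "insert_mode": "taosc",
            "insert_rows": 1000,
            "timestamp_step": 10,
            "start_timestamp": "2022-01-01 00:00:00.000",
            "columns": [
                { "type": "FLOAT", "name": "current" },
                { "type": "INT", "name": "voltage" }
            ],
            "tags": [
                { "type": "TINYINT", "name": "groupid", "min": 1, "max": 10 }
            ]
        }]
    }]
}
EOF
taosBenchmark -f insert.json
```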
### 查询场景配置参数
### Query scenario configuration parameters
查询场景下 `filetype` 必须设置为 `query`,该参数及其它通用参数详见[通用配置参数](#通用配置参数)
`filetype` must be set to `query` in the query scenario. See [General configuration parameters](#general-configuration-parameters) for details of this parameter and other general parameters.
#### 执行指定查询语句的配置参数
#### Configuration parameters for executing the specified query statement
查询子表或者普通表的配置参数在 `specified_table_query` 中设置。
The configuration parameters for querying the sub-tables or the normal tables are set in `specified_table_query`.
- **query_interval** : 查询时间间隔,单位是秒,默认值为 0。
- **query_interval** : The query interval in seconds, the default value is 0.
- **threads** : 执行查询 SQL 的线程数,默认值为 1。
- **threads**: The number of threads to execute the query SQL, the default value is 1.
- **sqls**
- **sql**: 执行的 SQL 命令,必填。
- **result**: 保存查询结果的文件,未指定则不保存。
- **sqls** :
- **sql**: The SQL command to be executed; required.
- **result**: The file in which to save the query results. If not specified, taosBenchmark does not save the results.
#### 查询超级表的配置参数
#### Configuration parameters of query super table
查询超级表的配置参数在 `super_table_query` 中设置。
The configuration parameters of the super table query are set in `super_table_query`.
- **stblname** : 指定要查询的超级表的名称,必填。
- **stblname**: Specify the name of the super table to be queried, required.
- **query_interval** : 查询时间间隔,单位是秒,默认值为 0。
- **query_interval** : The query interval in seconds, the default value is 0.
- **threads** : 执行查询 SQL 的线程数,默认值为 1。
- **threads**: The number of threads to execute the query SQL, the default value is 1.
- **sqls**
- **sql** : 执行的 SQL 命令,必填;对于超级表的查询 SQL,在 SQL 命令中保留 "xxxx",程序会自动将其替换为超级表的所有子表名。
- **result** : 保存查询结果的文件,未指定则不保存。
- **sqls** :
- **sql**: The SQL command to be executed; required. For a super table query, keep "xxxx" in the SQL command; the program automatically replaces it with each of the super table's sub-table names.
- **result**: The file in which to save the query results. If not specified, taosBenchmark does not save the results.
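As a sketch, a minimal query-scenario configuration might look as follows; the key names follow the shipped `query.json` sample, and all values are placeholders.

```bash
# Write a minimal query-scenario configuration and run it (a sketch)
cat > query.json <<'EOF'
{
    "filetype": "query",
    "cfgdir": "/etc/taos",
    "host": "localhost",
    "port": 6030,
    "user": "root",
    "password": "taosdata",
    "databases": "test",
    "specified_table_query": {
        "query_interval": 1,
        "threads": 3,
        "sqls": [
            { "sql": "select count(*) from test.meters", "result": "./query_res0.txt" }
        ]
    },
    "super_table_query": {
        "stblname": "meters",
        "query_interval": 1,
        "threads": 3,
        "sqls": [
            { "sql": "select last_row(ts) from xxxx", "result": "./query_res1.txt" }
        ]
    }
}
EOF
taosBenchmark -f query.json
```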
### 订阅场景配置参数
### Subscription scenario configuration parameters
订阅场景下 `filetype` 必须设置为 `subscribe`,该参数及其它通用参数详见[通用配置参数](#通用配置参数)
`filetype` must be set to `subscribe` in the subscription scenario. See [General configuration parameters](#general-configuration-parameters) for details of this parameter and other general parameters.
#### 执行指定订阅语句的配置参数
#### Configuration parameters for executing the specified subscription statement
订阅子表或者普通表的配置参数在 `specified_table_query` 中设置。
The configuration parameters for subscribing to sub-tables or normal tables are set in `specified_table_query`.
- **threads** : 执行 SQL 的线程数,默认为 1。
- **threads**: The number of threads to execute SQL, default is 1.
- **interval** : 执行订阅的时间间隔,单位为秒,默认为 0。
- **interval**: The time interval to execute the subscription, in seconds, default is 0.
- **restart** : "yes" 表示开始新的订阅,"no" 表示继续之前的订阅,默认值为 "no"。
- **restart** : "yes" means start a new subscription, "no" means continue the previous subscription, the default value is "no".
- **keepProgress** : "yes" 表示保留订阅进度,"no" 表示不保留,默认值为 "no"。
- **keepProgress**: "yes" means keep the progress of the subscription, "no" means don't keep it, and the default value is "no".
- **resubAfterConsume** : "yes" 表示取消之前的订阅然后再次订阅, "no" 表示继续之前的订阅,默认值为 "no"。
- **resubAfterConsume**: "yes" means cancel the previous subscription and then subscribe again, "no" means continue the previous subscription, and the default value is "no".
- **sqls**
- **sql** : 执行的 SQL 命令,必填。
- **result** : 保存查询结果的文件,未指定则不保存。
- **sqls** :
- **sql** : The SQL command to be executed; required.
- **result** : The file in which to save the query results. If not specified, the results are not saved.
#### 订阅超级表的配置参数
#### Configuration parameters for subscribing to supertables
订阅超级表的配置参数在 `super_table_query` 中设置。
The configuration parameters for subscribing to a super table are set in `super_table_query`.
- **stblname** : 要订阅的超级表名称,必填。
- **stblname**: The name of the super table to subscribe to; required.
- **threads** : 执行 SQL 的线程数,默认为 1。
- **threads**: The number of threads to execute SQL, default is 1.
- **interval** : 执行订阅的时间间隔,单位为秒,默认为 0。
- **interval**: The time interval to execute the subscription, in seconds, default is 0.
- **restart** : "yes" 表示开始新的订阅,"no" 表示继续之前的订阅,默认值为 "no"。
- **restart** : "yes" means start a new subscription, "no" means continue the previous subscription, the default value is "no".
- **keepProgress** : "yes" 表示保留订阅进度,"no" 表示不保留,默认值为 "no"。
- **keepProgress**: "yes" means keep the progress of the subscription, "no" means don't keep it, and the default value is "no".
- **resubAfterConsume** : "yes" 表示取消之前的订阅然后再次订阅, "no" 表示继续之前的订阅,默认值为 "no"。
- **resubAfterConsume**: "yes" means cancel the previous subscription and then subscribe again, "no" means continue the previous subscription, and the default value is "no".
- **sqls**
- **sql** : 执行的 SQL 命令,必填;对于超级表的查询 SQL,在 SQL 命令中保留 "xxxx",程序会自动将其替换为超级表的所有子表名。
- **result** : 保存查询结果的文件,未指定则不保存。
- **sqls** :
- **sql**: The SQL command to be executed; required. For a super table query, keep "xxxx" in the SQL command; the program automatically replaces it with each of the super table's sub-table names.
- **result**: The file in which to save the query results. If not specified, the results are not saved.
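As a sketch, a minimal subscription-scenario configuration might look as follows; the key names follow the shipped `subscribe.json` sample, and all values are placeholders.

```bash
# Write a minimal subscribe-scenario configuration and run it (a sketch)
cat > subscribe.json <<'EOF'
{
    "filetype": "subscribe",
    "cfgdir": "/etc/taos",
    "host": "localhost",
    "port": 6030,
    "user": "root",
    "password": "taosdata",
    "databases": "test",
    "specified_table_query": {
        "threads": 1,
        "interval": 0,
        "restart": "yes",
        "keepProgress": "yes",
        "sqls": [
            { "sql": "select * from test.meters", "result": "./subscribe_res0.txt" }
        ]
    }
}
EOF
taosBenchmark -f subscribe.json
```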
---
title: taosdump
description: "taosdump 是一个支持从运行中的 TDengine 集群备份数据并将备份的数据恢复到相同或另一个运行中的 TDengine 集群中的工具应用程序"
description: "taosdump is a tool application that supports backing up data from a running TDengine cluster and restoring the backed up data to the same or another running TDengine cluster."
---
## 简介
## Introduction
taosdump 是一个支持从运行中的 TDengine 集群备份数据并将备份的数据恢复到相同或另一个运行中的 TDengine 集群中的工具应用程序。
taosdump is a tool application that supports backing up data from a running TDengine cluster and restoring the backed up data to the same or another running TDengine cluster.
taosdump 可以用数据库、超级表或普通表作为逻辑数据单元进行备份,也可以对数据库、超级
表和普通表中指定时间段内的数据记录进行备份。使用时可以指定数据备份的目录路径,如果
不指定位置,taosdump 默认会将数据备份到当前目录。
taosdump can back up a database, a super table, or a normal table as a logical data unit, and can also back up data records within a specified time range in databases, super tables, and normal tables. You can specify the directory path for the backup; if no directory is specified, taosdump backs up the data to the current directory by default.
如果指定的位置已经有数据文件,taosdump 会提示用户并立即退出,避免数据被覆盖。这意味着同一路径只能被用于一次备份。
如果看到相关提示,请小心操作。
If the specified location already contains data files, taosdump prompts the user and exits immediately to avoid overwriting data, which means the same path can be used for only one backup.
Please proceed carefully if you see such a prompt.
taosdump 是一个逻辑备份工具,它不应被用于备份任何原始数据、环境设置、
硬件信息、服务端配置或集群的拓扑结构。taosdump 使用
[ Apache AVRO ](https://avro.apache.org/)作为数据文件格式来存储备份数据。
taosdump is a logical backup tool; it should not be used to back up raw data, environment settings, hardware information, server configuration, or cluster topology. taosdump uses [Apache AVRO](https://avro.apache.org/) as the data file format to store the backup data.
## 安装
## Installation
taosdump 有两种安装方式:
There are two ways to install taosdump:
- 安装 taosTools 官方安装包, 请从[所有下载链接](https://www.taosdata.com/all-downloads)页面找到 taosTools 并下载安装。
- Install the official taosTools package: find taosTools on the [All Downloads](https://www.taosdata.com/all-downloads) page, then download and install it.
- 单独编译 taos-tools 并安装, 详情请参考 [taos-tools](https://github.com/taosdata/taos-tools) 仓库。
- Compile taos-tools separately and install it. Please refer to the [taos-tools](https://github.com/taosdata/taos-tools) repository for details.
## 常用使用场景
## Common usage scenarios
### taosdump 备份数据
### taosdump backup data
1. 备份所有数据库:指定 `-A``--all-databases` 参数;
2. 备份多个指定数据库:使用 `-D db1,db2,...` 参数;
3. 备份指定数据库中的某些超级表或普通表:使用 `dbname stbname1 stbname2 tbname1 tbname2 ...` 参数,注意这种输入序列第一个参数为数据库名称,且只支持一个数据库,第二个和之后的参数为该数据库中的超级表或普通表名称,中间以空格分隔;
4. 备份系统 log 库:TDengine 集群通常会包含一个系统数据库,名为 `log`,这个数据库内的数据为 TDengine 自我运行的数据,taosdump 默认不会对 log 库进行备份。如果有特定需求对 log 库进行备份,可以使用 `-a``--allow-sys` 命令行参数。
5. “宽容”模式备份:taosdump 1.4.1 之后的版本提供 `-n` 参数和 `-L` 参数,用于备份数据时不使用转义字符和“宽容”模式,可以在表名、列名、标签名没使用转义字符的情况下减少备份数据时间和备份数据占用空间。如果不确定符合使用 `-n``-L` 条件时请使用默认参数进行“严格”模式进行备份。转义字符的说明请参考[官方文档](/taos-sql/escape)
1. Back up all databases: specify the `-A` or `--all-databases` parameter (see the sketch after this list).
2. Back up multiple specified databases: use the `-D db1,db2,...` parameter.
3. Back up certain super tables or normal tables in a specified database: use the `dbname stbname1 stbname2 tbname1 tbname2 ...` arguments. Note that the first argument in this sequence is the database name (only one database is supported), and the second and subsequent arguments are the names of super tables or normal tables in that database, separated by spaces.
4. Back up the system log database: a TDengine cluster usually contains a system database named `log`, which holds data generated by TDengine's own operation. taosdump does not back up the log database by default; if you specifically need to back it up, use the `-a` or `--allow-sys` command-line parameter.
5. "Loose" mode backup: taosdump 1.4.1 and later provide the `-n` and `-L` parameters for backing up data without escape characters ("loose" mode), which can reduce the backup time and the backup footprint when table names, column names, and tag names do not use escape characters. If you are unsure whether the `-n` and `-L` conditions are met, use the default parameters for a "strict" mode backup. See the [official documentation](/taos-sql/escape) for a description of escape characters.
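The following sketch illustrates scenarios 1 to 3; the `-o` output paths are placeholders, and each must point to a location without existing backup files:

```bash
taosdump -o /data/backup/all -A               # 1. back up all databases
taosdump -o /data/backup/dbs -D db1,db2       # 2. back up the specified databases
taosdump -o /data/backup/tables db1 stb1 tb1  # 3. back up tables stb1 and tb1 in database db1
```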
:::tip
- taosdump 1.4.1 之后的版本提供 `-I` 参数,用于解析 avro 文件 schema 和数据,如果指定 `-s` 参数将只解析 schema。
- taosdump 1.4.2 之后的备份使用 `-B` 参数指定的批次数,默认值为 16384,如果在某些环境下由于网络速度或磁盘性能不足导致 "Error actual dump .. batch .." 可以通过 `-B` 参数调整为更小的值进行尝试。
- taosdump 1.4.1 and later provide the `-I` parameter for parsing Avro file schemas and data; if the `-s` parameter is also specified, only the schema is parsed.
- Backups with taosdump 1.4.2 and later use the batch size specified by the `-B` parameter; the default value is 16384. If, in some environments, limited network speed or disk performance leads to errors such as "Error actual dump ... batch ...", you can try setting the `-B` parameter to a smaller value.
:::
### taosdump 恢复数据
### taosdump recover data
恢复指定路径下的数据文件:使用 `-i` 参数加上数据文件所在路径。如前面提及,不应该使用同一个目录备份不同数据集合,也不应该在同一路径多次备份同一数据集,否则备份数据会造成覆盖或多次备份。
Restore the data files in the specified path: use the `-i` parameter plus the path to the data files. As mentioned earlier, you should not use the same directory to back up different data sets, nor back up the same data set multiple times in the same path; otherwise the backup data would be overwritten or backed up multiple times.
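For example (the path is a placeholder pointing at a directory produced by a previous backup):

```bash
taosdump -i /data/backup/all
```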
:::tip
taosdump 内部使用 TDengine stmt binding API 进行恢复数据的写入,为提高数据恢复性能,目前使用 16384 为一次写入批次。如果备份数据中有比较多列数据,可能会导致产生 "WAL size exceeds limit" 错误,此时可以通过使用 `-B` 参数调整为一个更小的值进行尝试。
taosdump internally uses the TDengine stmt binding API to write the restored data and currently uses 16384 rows as one write batch for better restore performance. If the backup data contains many columns, a "WAL size exceeds limit" error may occur; in that case, try setting the `-B` parameter to a smaller value.
:::
## 详细命令行参数列表
## Detailed command-line parameter list
以下为 taosdump 详细命令行参数列表:
The following is a detailed list of taosdump command-line arguments.
```
Usage: taosdump [OPTION...] dbname [tbname ...]
......
---
title: TDengine 命令行(CLI)
title: TDengine Command Line (CLI)
sidebar_label: TDengine CLI
description: TDengine CLI 的使用说明和技巧
description: Instructions and tips for using the TDengine CLI
---
TDengine 命令行程序(以下简称 TDengine CLI)是用户操作 TDengine 实例并与之交互的最简洁最常用的方式。
The TDengine command-line application (hereafter referred to as the TDengine CLI) is the simplest and most common way for users to operate and interact with TDengine instances.
## 安装
## Installation
如果在 TDengine 服务器端执行,无需任何安装,已经自动安装好。如果要在非 TDengine 服务器端运行,需要安装 TDengine 客户端驱动,具体安装,请参考 [连接器](/reference/connector/)
If executed on the TDengine server side, no additional installation is needed, as the TDengine CLI is already included and installed automatically. To run the TDengine CLI in an environment where no TDengine server is running, the TDengine client driver needs to be installed first; for details, please refer to [connector](/reference/connector/).
## 执行
## Execution
要进入 TDengine CLI,您只要在 Linux 终端或 Windows 终端执行 `taos` 即可。
To access the TDengine CLI, you can simply execute the `taos` command-line utility from a Linux or Windows terminal.
```bash
taos
```
如果连接服务成功,将会打印出欢迎消息和版本信息。如果失败,则会打印错误消息出来(请参考 [FAQ](/train-faq/faq) 来解决终端连接服务端失败的问题)。TDengine CLI 的提示符号如下:
The TDengine CLI displays a welcome message and version information if it successfully connects to the TDengine service. If it fails, it prints an error message; see the [FAQ](/train-faq/faq) to troubleshoot connection failures. The TDengine CLI prompt is as follows:
```cmd
taos>
```
进入 CLI 后,你可执行各种 SQL 语句,包括插入、查询以及各种管理命令。
After entering the TDengine CLI, you can execute various SQL commands, including inserts, queries, or administrative commands.
## 执行 SQL 脚本
## Execute SQL script file
在 TDengine CLI 里可以通过 `source` 命令来运行 SQL 命令脚本。
Run SQL command script file in the TDengine CLI via the `source` command.
```sql
taos> source <filename>;
```
## 在线修改显示字符宽度
## Adjust display width to show more characters
可以在 TDengine CLI 里使用如下命令调整字符显示宽度
Users can adjust the display width in TDengine CLI to show more characters with the following command:
```sql
taos> SET MAX_BINARY_DISPLAY_WIDTH <nn>;
```
如显示的内容后面以...结尾时,表示该内容已被截断,可通过本命令修改显示字符宽度以显示完整的内容。
If the displayed content ends with `...`, it has been truncated; you can use this command to change the display width and show the full content.
## 命令行参数
## Command Line Parameters
您可通过配置命令行参数来改变 TDengine CLI 的行为。以下为常用的几个命令行参数:
You can change the behavior of TDengine CLI by specifying command-line parameters. The following parameters are commonly used.
- -h, --host=HOST: 要连接的 TDengine 服务端所在服务器的 FQDN, 默认为连接本地服务
- -P, --port=PORT: 指定服务端所用端口号
- -u, --user=USER: 连接时使用的用户名
- -p, --password=PASSWORD: 连接服务端时使用的密码
- -?, --help: 打印出所有命令行参数
- -h, --host=HOST: The FQDN of the server running the TDengine service to connect to. The default is to connect to the local service
- -P, --port=PORT: Specify the port number to be used by the server. Default is `6030`
- -u, --user=USER: the user name to use when connecting. Default is `root`
- -p, --password=PASSWORD: the password to use when connecting to the server. Default is `taosdata`
- -?, --help: print out all command-line arguments
还有更多其他参数:
There are many other parameters:
- -c, --config-dir: 指定配置文件目录,默认为 `/etc/taos`,该目录下的配置文件默认名称为 taos.cfg
- -C, --dump-config: 打印 -c 指定的目录中 taos.cfg 的配置参数
- -d, --database=DATABASE: 指定连接到服务端时使用的数据库
- -D, --directory=DIRECTORY: 导入指定路径中的 SQL 脚本文件
- -f, --file=FILE: 以非交互模式执行 SQL 脚本文件
- -k, --check=CHECK: 指定要检查的表
- -l, --pktlen=PKTLEN: 网络测试时使用的测试包大小
- -n, --netrole=NETROLE: 网络连接测试时的测试范围,默认为 startup, 可选值为 client, server, rpc, startup, sync, speed, fqdn
- -r, --raw-time: 将时间输出为 uint64_t 类型
- -s, --commands=COMMAND: 以非交互模式执行的 SQL 命令
- -S, --pkttype=PKTTYPE: 指定网络测试所用的包类型,默认为 TCP。只有 netrole 为 speed 时既可以指定为 TCP 也可以指定为 UDP
- -T, --thread=THREADNUM: 以多线程模式导入数据时的线程数
- -z, --timezone=TIMEZONE: 指定时区,默认为本地
- -V, --version: 打印出当前版本号
- -c, --config-dir: Specify the directory where configuration file exists. The default is `/etc/taos`, and the default name of the configuration file in this directory is `taos.cfg`
- -C, --dump-config: Print the configuration parameters of `taos.cfg` in the default directory or specified by -c
- -d, --database=DATABASE: Specify the database to use when connecting to the server
- -D, --directory=DIRECTORY: Import the SQL script file in the specified path
- -f, --file=FILE: Execute the SQL script file in non-interactive mode
- -k, --check=CHECK: Specify the table to be checked
- -l, --pktlen=PKTLEN: Test package size to be used for network testing
- -n, --netrole=NETROLE: The test scope for the network connection test. The default is `startup`; the value can be `client`, `server`, `rpc`, `startup`, `sync`, `speed`, or `fqdn`
- -r, --raw-time: Output timestamps as unsigned 64-bit integers (uint64_t in C)
- -s, --commands=COMMAND: execute SQL commands in non-interactive mode
- -S, --pkttype=PKTTYPE: Specify the packet type used for network testing. The default is TCP. It can be set to either TCP or UDP only when the netrole parameter is `speed`
- -T, --thread=THREADNUM: The number of threads to import data in multi-threaded mode
- -z, --timezone=TIMEZONE: Specify the time zone. The default is the local time zone
- -V, --version: Print out the current version number
示例:
Example:
```bash
taos -h h1.taos.com -s "use db; show tables;"
```
## TDengine CLI 小技巧
- 可以使用上下光标键查看历史输入的指令
- 修改用户密码:在 shell 中使用 `alter user` 命令,缺省密码为 taosdata
- ctrl+c 中止正在进行中的查询
- 执行 `RESET QUERY CACHE` 可清除本地缓存的表 schema
- 批量执行 SQL 语句。可以将一系列的 shell 命令(以英文 ; 结尾,每个 SQL 语句为一行)按行存放在文件里,在 shell 里执行命令 `source <file-name>` 自动执行该文件里所有的 SQL 语句
- 输入 q 回车,退出 taos shell
## TDengine CLI tips
- You can use the up and down arrow keys to browse the history of commands entered
- Change the user password: use the `alter user` command in the TDengine CLI. The default password is `taosdata`
- Use Ctrl+C to stop a query in progress
- Execute `RESET QUERY CACHE` to clear the locally cached table schemas
- Execute SQL statements in batches: you can store a series of SQL statements (each ending with `;`, one statement per line) in a file and run the command `source <file-name>` in the TDengine CLI to execute all of them automatically
- Enter `q` followed by Enter to exit the TDengine CLI
---
title: 支持平台列表
description: "TDengine 服务端、客户端和连接器支持的平台列表"
title: List of supported platforms
description: "List of platforms supported by TDengine server, client, and connector"
---
## TDengine 服务端支持的平台列表
| | **CentOS 7/8** | **Ubuntu 16/18/20** | **Other Linux** | **统信 UOS** | **银河/中标麒麟** | **凝思 V60/V80** | **华为 EulerOS** |
| ------------ | -------------- | ------------------- | --------------- | ------------ | ----------------- | ---------------- | ---------------- |
| X64 | ● | ● | | ○ | ● | ● | ● |
| 龙芯 MIPS64 | | | ● | | | | |
| 鲲鹏 ARM64 | | ○ | ○ | | ● | | |
| 申威 Alpha64 | | | ○ | ● | | | |
| 飞腾 ARM64 | | ○ 优麒麟 | | | | | |
| 海光 X64 | ● | ● | ● | ○ | ● | ● | |
| 瑞芯微 ARM64 | | | ○ | | | | |
| 全志 ARM64 | | | ○ | | | | |
| 炬力 ARM64 | | | ○ | | | | |
| 华为云 ARM64 | | | | | | | ● |
注: ● 表示经过官方测试验证, ○ 表示非官方测试验证。
## TDengine 客户端和连接器支持的平台列表
目前 TDengine 的连接器可支持的平台广泛,目前包括:X64/X86/ARM64/ARM32/MIPS/Alpha 等硬件平台,以及 Linux/Win64/Win32 等开发环境。
对照矩阵如下:
| **CPU** | **X64 64bit** | | | **X86 32bit** | **ARM64** | **ARM32** | **MIPS 龙芯** | **Alpha 申威** | **X64 海光** |
| ----------- | ------------- | --------- | --------- | ------------- | --------- | --------- | ------------- | -------------- | ------------ |
| **OS** | **Linux** | **Win64** | **Win32** | **Win32** | **Linux** | **Linux** | **Linux** | **Linux** | **Linux** |
| **C/C++** | ● | ● | ● | ○ | ● | ● | ● | ● | ● |
| **JDBC** | ● | ● | ● | ○ | ● | ● | ● | ● | ● |
| **Python** | ● | ● | ● | ○ | ● | ● | ● | -- | ● |
| **Go** | ● | ● | ● | ○ | ● | ● | ○ | -- | -- |
| **NodeJs** | ● | ● | ○ | ○ | ● | ● | ○ | -- | -- |
| **C#** | ● | ● | ○ | ○ | ○ | ○ | ○ | -- | -- |
| **RESTful** | ● | ● | ● | ● | ● | ● | ● | ● | ● |
注:● 表示官方测试验证通过,○ 表示非官方测试验证通过,-- 表示未经验证。
## List of supported platforms for TDengine server
| | **CentOS 7/8** | **Ubuntu 16/18/20** | **Other Linux** |
| ------------ | -------------- | ------------------- | --------------- |
| X64 | ● | ● | |
| MIPS64 | | | ● |
| ARM64 | | ○ | ○ |
| Alpha64 | | | ○ |
Note: ● means officially tested and verified, ○ means unofficially tested and verified.
## List of supported platforms for TDengine clients and connectors
TDengine's connector can support a wide range of platforms, including X64/X86/ARM64/ARM32/MIPS/Alpha hardware platforms and Linux/Win64/Win32 development environments.
The comparison matrix is as follows.
| **CPU** | **X64 64bit** | | | **X86 32bit** | **ARM64** | **ARM32** | **MIPS** | **Alpha** |
| ----------- | ------------- | --------- | --------- | ------------- | --------- | --------- | --------- | --------- |
| **OS** | **Linux** | **Win64** | **Win32** | **Win32** | **Linux** | **Linux** | **Linux** | **Linux** |
| **C/C++** | ● | ● | ● | ○ | ● | ● | ● | ● |
| **JDBC** | ● | ● | ● | ○ | ● | ● | ● | ● |
| **Python** | ● | ● | ● | ○ | ● | ● | ● | -- |
| **Go** | ● | ● | ● | ○ | ● | ● | ○ | -- |
| **NodeJs** | ● | ● | ○ | ○ | ● | ● | ○ | -- |
| **C#** | ● | ● | ○ | ○ | ○ | ○ | ○ | -- |
| **RESTful** | ● | ● | ● | ● | ● | ● | ● | ● |
Note: ● means officially tested and verified, ○ means unofficially tested and verified, -- means not verified.
label: TDengine Docker 镜像
\ No newline at end of file
label: TDengine Docker images
\ No newline at end of file
---
title: 用 Docker 部署 TDengine
description: "本章主要介绍如何在容器中启动 TDengine 服务并访问它"
title: Deploying TDengine with Docker
Description: "This chapter focuses on starting the TDengine service in a container and accessing it."
---
本章主要介绍如何在容器中启动 TDengine 服务并访问它。可以在 docker run 命令行中或者 docker-compose 文件中使用环境变量来控制容器中服务的行为。
This chapter describes how to start the TDengine service in a container and access it. Users can control the behavior of the service in the container by using environment variables on the docker run command-line or in the docker-compose file.
## 启动 TDengine
## Starting TDengine
TDengine 镜像启动时默认激活 HTTP 服务,使用下列命令
The TDengine image starts with the HTTP service activated by default, using the following command:
```shell
docker run -d --name tdengine -p 6041:6041 tdengine/tdengine
```
以上命令启动了一个名为“tdengine”的容器,并把其中的 HTTP 服务的端口 6041 映射到了主机端口 6041。使用如下命令可以验证该容器中提供的 HTTP 服务是否可用:
The above command starts a container named "tdengine" and maps its HTTP service port 6041 to the host port 6041. You can verify that the HTTP service provided by this container is available using the following command:
```shell
curl -u root:taosdata -d "show databases" localhost:6041/rest/sql
```
使用如下命令可以在该容器中执行 TDengine 的客户端 taos 对 TDengine 进行访问:
The TDengine client taos can be executed in this container to access TDengine using the following command.
```shell
$ docker exec -it tdengine taos
Welcome to the TDengine shell from Linux, Client Version:2.4.0.0
Copyright (c) 2020 by TAOS Data, Inc. All rights reserved.
taos> show databases;
name | created_time | ntables | vgroups | replica | quorum | days | keep | cache(MB) | blocks | minrows | maxrows | wallevel | fsync | comp | cachelast | precision | update | status |
====================================================================================================================================================================================================================================================================================
log | 2022-01-17 13:57:22.270 | 10 | 1 | 1 | 1 | 10 | 30 | 1 | 3 | 100 | 4096 | 1 | 3000 | 2 | 0 | us | 0 | ready |
Query OK, 1 row(s) in set (0.002843s)
```
因为运行在容器中的 TDengine 服务端使用容器的 hostname 建立连接,使用 taos shell 或者各种连接器(例如 JDBC-JNI)从容器外访问容器内的 TDengine 比较复杂,所以上述方式是访问容器中 TDengine 服务的最简单的方法,适用于一些简单场景。如果在一些复杂场景下想要从容器化使用 taos shell 或者各种连接器访问容器中的 TDengine 服务,请参考下一节。
The TDengine server running in the container uses the container's hostname to establish connections. Accessing TDengine inside the container from outside using the TDengine CLI or various connectors (such as JDBC-JNI) is more complicated, so the approach above is the simplest way to access the TDengine service in the container and is suitable for simple scenarios. If, in more complex scenarios, you want to access the TDengine service inside the container using the TDengine CLI or various connectors from outside the container, please refer to the next section.
## 在 host 网络上启动 TDengine
## Start TDengine on the host network
```shell
docker run -d --name tdengine --network host tdengine/tdengine
```
上面的命令在 host 网络上启动 TDengine,并使用主机的 FQDN 建立连接而不是使用容器的 hostname 。这种方式和在主机上使用 `systemctl` 启动 TDengine 效果相同。在主机已安装 TDengine 客户端情况下,可以直接使用下面的命令访问它。
The above command starts TDengine on the host network and establishes connections using the host's FQDN instead of the container's hostname. This is equivalent to starting TDengine on the host with `systemctl`. If the TDengine client is already installed on the host, you can access it directly with the following command.
```shell
$ taos
Welcome to the TDengine shell from Linux, Client Version:2.4.0.0
Copyright (c) 2020 by TAOS Data, Inc. All rights reserved.
taos> show dnodes;
id | end_point | vnodes | cores | status | role | create_time | offline reason |
======================================================================================================================================
1 | myhost:6030 | 1 | 8 | ready | any | 2022-01-17 22:10:32.619 | |
Query OK, 1 row(s) in set (0.003233s)
```
## 以指定的 hostname 和 port 启动 TDengine
## Start TDengine with the specified hostname and port
利用 `TAOS_FQDN` 环境变量或者 `taos.cfg` 中的 `fqdn` 配置项可以使 TDengine 在指定的 hostname 上建立连接。这种方式可以为部署提供更大的灵活性。
The `TAOS_FQDN` environment variable or the `fqdn` configuration item in `taos.cfg` allows TDengine to establish a connection at the specified hostname. This approach provides greater flexibility for deployment.
```shell
docker run -d \
......@@ -70,35 +70,35 @@ docker run -d \
tdengine/tdengine
```
上面的命令在容器中启动一个 TDengine 服务,其所监听的 hostname 为 tdengine ,并将容器的 6030 到 6049 端口段映射到主机的 6030 到 6049 端口段 (tcp 和 udp 都需要映射)。如果主机上该端口段已经被占用,可以修改上述命令指定一个主机上空闲的端口段。如果 `rpcForceTcp` 被设置为 `1` ,可以只映射 tcp 协议。
The above command starts a TDengine service in the container, which listens to the hostname tdengine, and maps the container's port segment 6030 to 6049 to the host's port segment 6030 to 6049 (both TCP and UDP ports need to be mapped). If the port segment is already occupied on the host, you can modify the above command to specify a free port segment on the host. If `rpcForceTcp` is set to `1`, you can map only the TCP protocol.
接下来,要确保 "tdengine" 这个 hostname 在 `/etc/hosts` 中可解析。
Next, ensure the hostname "tdengine" is resolvable in `/etc/hosts`.
```shell
echo 127.0.0.1 tdengine |sudo tee -a /etc/hosts
```
最后,可以从 taos shell 或者任意连接器以 "tdengine" 为服务端地址访问 TDengine 服务。
Finally, the TDengine service can be accessed from the taos shell or any connector with "tdengine" as the server address.
```shell
taos -h tdengine -P 6030
```
如果 `TAOS_FQDN` 被设置为与所在主机名相同,则效果与 “在 host 网络上启动 TDengine” 相同。
If `TAOS_FQDN` is set to the same value as the host's hostname, the effect is the same as in "Start TDengine on the host network".
## 在指定网络上启动 TDengine
## Start TDengine on the specified network
也可以在指定的特定网络上启动 TDengine。下面是详细步骤:
You can also start TDengine on a specific network. The detailed steps are as follows:
1. 首先,创建一个 docker 网络,命名为 td-net
1. First, create a docker network named `td-net`
```shell
docker network create td-net
```
2. 启动 TDengine
2. Start TDengine
以下命令在 td-net 网络上启动 TDengine 服务
Start the TDengine service on the `td-net` network with the following command:
```shell
docker run -d --name tdengine --network td-net \
......@@ -106,17 +106,17 @@ taos -h tdengine -P 6030
tdengine/tdengine
```
3. 在同一网络上的另一容器中启动 TDengine 客户端
3. Start the TDengine client in another container on the same network
```shell
docker run --rm -it --network td-net -e TAOS_FIRST_EP=tdengine tdengine/tdengine taos
# or
# docker run --rm -it --network td-net tdengine/tdengine taos -h tdengine
```
## 在容器中启动客户端应用
## Launching a client application in a container
如果想在容器中启动自己的应用的话,需要将相应的对 TDengine 的依赖也要加入到镜像中,例如:
If you want to start your application in a container, you need to add the corresponding dependencies on TDengine to the image as well, e.g.
```docker
FROM ubuntu:20.04
......@@ -133,7 +133,7 @@ RUN wget -c https://www.taosdata.com/assets-download/TDengine-client-${TDENGINE_
#CMD ["app"]
```
Here is an example Go program:
```go
/*
......@@ -218,7 +218,7 @@ func checkErr(err error, prompt string) {
}
```
Here is the full Dockerfile:
```docker
FROM golang:1.17.6-buster as builder
......@@ -251,7 +251,7 @@ COPY --from=builder /usr/src/app/app /usr/bin/
CMD ["app"]
```
Now that we have `main.go`, `go.mod`, `go.sum`, and `app.dockerfile`, we can build the application and start it on the `td-net` network.
```shell
$ docker build -t app -f app.dockerfile .
......@@ -276,9 +276,9 @@ password: taosdata
2022-01-18 01:43:51.029 +0000 UTC 3
```
## Start the TDengine cluster with docker-compose
1. The following docker-compose file starts a TDengine cluster with two replicas, two management nodes, two data nodes, and one arbitrator.
```docker
version: "3"
......@@ -316,14 +316,14 @@ password: taosdata
```
:::note
- The `VERSION` environment variable is used to set the tdengine image tag
- `TAOS_FIRST_EP` must be set on a newly created instance so that it can join the TDengine cluster; if high availability is required, `TAOS_SECOND_EP` needs to be set as well
- `TAOS_REPLICA` sets the default number of database replicas; its value range is [1,3]. In a dual-replica environment, we recommend using an arbitrator, configured via `TAOS_ARBITRATOR`
:::
2. Start the cluster
```shell
$ VERSION=2.4.0.0 docker-compose up -d
......@@ -337,7 +337,7 @@ password: taosdata
Creating test_td-2_1 ... done
```
3. Check the status of each node
```shell
$ docker-compose ps
......@@ -348,7 +348,7 @@ password: taosdata
test_td-2_1 /usr/bin/entrypoint.sh taosd Up 6030/tcp, 6031/tcp, 6032/tcp, 6033/tcp, 6034/tcp, 6035/tcp, 6036/tcp, 6037/tcp, 6038/tcp, 6039/tcp, 6040/tcp, 6041/tcp, 6042/tcp
```
4. Show dnodes via TDengine CLI
```shell
$ docker-compose exec td-1 taos -s "show dnodes"
......@@ -367,19 +367,19 @@ password: taosdata
## taosAdapter
1. taosAdapter is enabled by default in the TDengine container. If you want to disable it, specify the environment variable `TAOS_DISABLE_ADAPTER=true` at startup (see the sketch after the compose snippet below)
2. At the same time, for more flexible deployment, taosAdapter can be started in a separate container:
```docker
services:
# ...
adapter:
image: tdengine/tdengine:$VERSION
command: taosadapter
```
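As mentioned in item 1, a minimal sketch of disabling the embedded taosAdapter (the container name is hypothetical):

```shell
# Hypothetical example: start TDengine with the embedded taosAdapter disabled.
docker run -d --name tdengine-no-adapter \
  -e TAOS_DISABLE_ADAPTER=true \
  tdengine/tdengine
```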
Suppose you want to deploy multiple taosAdapters to improve throughput and provide high availability. In that case, the recommended configuration method uses a reverse proxy such as Nginx to offer a unified access entry. For specific configuration methods, please refer to the official documentation of Nginx. Here is an example:
```docker
version: "3"
......@@ -459,11 +459,11 @@ password: taosdata
taoslog-td2:
```
## Deploy with docker swarm
If you want to deploy a container-based TDengine cluster on multiple hosts, you can use docker swarm. First, to establish a docker swarm cluster on these hosts, please refer to the official docker documentation.
The docker-compose file is the same as in the previous section. Here is the command to start TDengine with docker swarm:
```shell
$ VERSION=2.4.0 docker stack deploy -c docker-compose.yml taos
......@@ -476,7 +476,7 @@ Creating service taos_adapter
Creating service taos_nginx
```
Checking status:
```shell
$ docker stack ps taos
......@@ -498,9 +498,9 @@ d8qr52envqzu taos_nginx replicated 1/1
9pzw7u02ichv taos_td-2 replicated 1/1 tdengine/tdengine:2.4.0
```
From the above output, you can see two dnodes, two taosAdapters, and one Nginx reverse proxy service.
Next, we can reduce the number of taosAdapter services:
```shell
$ docker service scale taos_adapter=1
......
label: Configuration
\ No newline at end of file
---
sidebar_label: Configuration
title: Configuration Parameters
description: "Configuration parameters for client and server in TDengine"
---
In this chapter, all the configuration parameters on both server and client side are described thoroughly.

## Configuration File on Server Side

On the server side, the actual service of TDengine is provided by an executable `taosd`, whose parameters can be configured in the file `taos.cfg` to meet the requirements of different use cases. The default location of `taos.cfg` is `/etc/taos`, but it can be changed by using the `-c` parameter on the command line of `taosd`. For example, to use a configuration file located in `/home/user`:
```bash
taosd -c /home/user
```
The `-C` parameter can be used on the command line of `taosd` to show its current configuration:
```
taosd -C
```
## Configuration File on Client Side
TDengine CLI `taos` is the tool for users to interact with TDengine. It can share the same configuration file as `taosd` or use a separate one. When launching `taos`, the `-c` parameter can be used to specify the directory of its configuration file; for example, `taos -c /home/cfg` means `/home/cfg/taos.cfg` will be used. If `-c` is not used, the default location of the configuration file is `/etc/taos`. For more details, please run `taos --help`.
Since version 2.0.10.0, the following commands can be used to show the configuration parameters of the client side:
```bash
taos -C
......@@ -31,1096 +34,1087 @@ taos -C
taos --dump-config
```
# Configuration Parameters

:::note
This chapter covers all the configuration parameters of the product. Parameters applicable to the server side are categorized by how they affect product behavior; some of them also apply to the client side, while a small number of client-only parameters are grouped separately.

`taosd`, or the client application, needs to be restarted for parameters changed in the configuration file to take effect.
:::
## Connection Parameters
### firstEp
| Attribute | Description |
| ------------- | ---------------------------------------------------------------------------------------------------- |
| Applicable | Server and Client |
| Meaning | The end point of the first dnode in the cluster to be connected to when `taosd` or `taos` is started |
| Default Value | localhost:6030 |
### secondEp
| Attribute | Description |
| ------------- | ---------------------------------------------------------------------------------------------------------------------- |
| Applicable | Server and Client |
| Meaning | The end point of the second dnode to be connected to if the firstEp is not available when `taosd` or `taos` is started |
| Default Value | None |
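For example, a client or dnode pointed at a two-dnode cluster could set both end points together in `taos.cfg` (the host names here are hypothetical):

```
firstEp  h1.taosdata.com:6030
secondEp h2.taosdata.com:6030
```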
### fqdn
| Attribute     | Description                                                                                                                      |
| ------------- | -------------------------------------------------------------------------------------------------------------------------------- |
| Applicable    | Server Only                                                                                                                        |
| Meaning       | The FQDN of the host where `taosd` will be started. It can be set to the IP address of the host if IP-based access is preferred   |
| Default Value | The first hostname configured for the host                                                                                         |
| Note          | It should be within 96 characters                                                                                                   |
### serverPort
| Attribute     | Description                                                                                                                              |
| ------------- | ------------------------------------------------------------------------------------------------------------------------------------------ |
| Applicable    | Server Only                                                                                                                                  |
| Meaning       | The port for external access after `taosd` is started                                                                                        |
| Default Value | 6030                                                                                                                                         |
| Note          | The REST service was provided by `taosd` prior to 2.4.0.0 and is provided by `taosAdapter` since 2.4.0.0; the default REST port is 6041      |
:::note
TDengine uses 13 continuous ports, both TCP and UDP, starting from the port specified by `serverPort`. These ports need to be kept open if a firewall is enabled; with the default configuration, that means opening ports 6030 to 6042 for both TCP and UDP. The table below describes the ports used by TDengine in detail.
:::
| Protocol | Default Port | Description                                      | How to configure                                                                               |
| :------- | :----------- | :----------------------------------------------- | :---------------------------------------------------------------------------------------------- |
| TCP      | 6030         | Communication between client and server          | serverPort                                                                                       |
| TCP      | 6035         | Communication among server nodes in cluster      | serverPort+5                                                                                     |
| TCP      | 6040         | Data syncup among server nodes in cluster        | serverPort+10                                                                                    |
| TCP      | 6041         | REST connection between client and server        | Prior to 2.4.0.0: serverPort+11; since 2.4.0.0 refer to [taosAdapter](/reference/taosadapter/)   |
| TCP      | 6042         | Service port of Arbitrator                       | The parameter of Arbitrator                                                                      |
| TCP      | 6043         | Service port of TaosKeeper                       | The parameter of TaosKeeper                                                                      |
| TCP      | 6044         | Data access port for StatsD                      | Refer to [taosAdapter](/reference/taosadapter/) (version 2.3.0.1+)                               |
| UDP      | 6045         | Data access port for collectd                    | Refer to [taosAdapter](/reference/taosadapter/) (version 2.3.0.1+)                               |
| TCP      | 6060         | Port of Monitoring Service in Enterprise version |                                                                                                  |
| UDP      | 6030-6034    | Communication between client and server          | Changes with serverPort                                                                          |
| UDP      | 6035-6039    | Communication among server nodes in cluster      | Changes with serverPort                                                                          |
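For instance, with the default `serverPort` of 6030, a minimal sketch of opening the required port range might look as follows (assuming a host managed with `ufw`; adapt to your own firewall tooling):

```shell
# Hypothetical example: open the 13 consecutive TCP and UDP ports used by TDengine.
sudo ufw allow 6030:6042/tcp
sudo ufw allow 6030:6042/udp
sudo ufw reload
```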
### maxShellConns
| Attribute | Description |
| ------------- | ---------------------------------------------------- |
| Applicable | Server Only |
| Meaning | The maximum number of connections a dnode can accept |
| Value Range | 10-50000000 |
| Default Value | 5000 |
### maxConnections
| Attribute | Description |
| ------------- | ----------------------------------------------------------------------------- |
| Applicable | Server Only |
| Meaning | The maximum number of connections allowed by a database |
| Value Range | 1-100000 |
| Default Value | 5000 |
| Note | The maximum number of worker threads on the client side is maxConnections/100 |
### rpcForceTcp
| Attribute     | Description                                                                                           |
| ------------- | ------------------------------------------------------------------------------------------------------ |
| Applicable    | Server and Client                                                                                        |
| Meaning       | Whether to force TCP for data transfer                                                                   |
| Value Range   | 0: disabled 1: enabled                                                                                   |
| Default Value | 0                                                                                                        |
| Note          | It's suggested to enable this parameter if the network is not good enough. Introduced in version 2.0     |
## Monitoring Parameters
### monitor
| Attribute     | Description                                                                                                                                                                       |
| ------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Applicable    | Server Only                                                                                                                                                                           |
| Meaning       | The switch for monitoring inside the server. The workload of the hosts, including CPU, memory, disk, network bandwidth and HTTP requests, is collected and stored in a builtin database named `LOG` |
| Value Range   | 0: monitoring disabled, 1: monitoring enabled                                                                                                                                         |
| Default Value | 0                                                                                                                                                                                     |
### monitorInterval
| Attribute | Description |
| ------------- | ------------------------------------------ |
| Applicable | Server Only |
| Meaning | The interval of collecting system workload |
| Unit | second |
| Value Range | 1-600 |
| Default Value | 30 |
### telemetryReporting
| Attribute | Description |
| ------------- | ---------------------------------------------------------------------------- |
| Applicable | Server Only |
| Meaning | Switch for allowing TDengine to collect and report service usage information |
| Value Range | 0: Not allowed; 1: Allowed |
| Default Value | 1 |
## Query Parameters
### queryBufferSize
| Attribute     | Description                                                                                                                                              |
| ------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| Applicable    | Server Only                                                                                                                                                    |
| Meaning       | The total memory size reserved for all queries                                                                                                                 |
| Unit          | MB                                                                                                                                                             |
| Default Value | None                                                                                                                                                           |
| Note          | It can be estimated as "maximum number of concurrent queries" \* "number of tables" \* 170. (Prior to version 2.0.15, the unit of this parameter was bytes)   |
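A sketch of setting it in `taos.cfg` (the value is an illustrative placeholder; size it with the estimation rule in the note above):

```
queryBufferSize 512
```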
### ratioOfQueryCores
| Attribute     | Description                                                                                                                                                                      |
| ------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Applicable    | Server Only                                                                                                                                                                          |
| Meaning       | Maximum number of query threads                                                                                                                                                      |
| Default Value | 1                                                                                                                                                                                    |
| Note          | Value range: a float in [0, 2]. 0 means only 1 query thread; a positive value is multiplied by the number of CPU cores, so the default 1 allows at most as many query threads as CPU cores, while 0.5 allows at most half that number |
### maxNumOfDistinctRes
| Attribute     | Description                                               |
| ------------- | ------------------------------------------------------------ |
| Applicable    | Server Only                                                   |
| Meaning       | The maximum number of distinct rows that can be returned      |
| Value Range   | [100,000 - 100,000,000]                                       |
| Default Value | 100,000                                                       |
| Note          | Available since version 2.3.0.0                               |
## Locale Parameters
### timezone
| Attribute | Description |
| ------------- | ------------------------------- |
| Applicable | Server and Client |
| Meaning | TimeZone |
| Default Value | TimeZone configured in the host |
:::info
To handle data insertion and queries from multiple timezones, TDengine uses Unix Timestamps to record and store timestamps. A Unix Timestamp generated at any given moment is identical regardless of timezone. Note that the conversion to Unix Timestamp is performed on the client side, so the timezone must be set properly to make sure time in other formats is converted to the correct Unix Timestamp.
On Linux system, TDengine clients automatically obtain timezone from the host. Alternatively, the timezone can be configured explicitly in configuration file `taos.cfg` like below.
```
timezone UTC-8
timezone GMT-8
timezone Asia/Shanghai
```
The above examples are all proper configurations for the UTC+8 timezone. On Windows, however, `timezone Asia/Shanghai` is not supported; it must be set as `timezone UTC-8`.
The timezone setting affects how non-Unix-timestamp content in SQL statements is interpreted when writing and querying, e.g. timestamp strings and the parsing of the keyword `now`. For example:
```sql
SELECT count(*) FROM table_name WHERE TS<'2019-04-11 12:01:08';
```
If the timezone is UTC+8, the above SQL statement is equivalent to:
```sql
SELECT count(*) FROM table_name WHERE TS<1554955268000;
```
If the timezone is UTC, it's equivalent to:
```sql
SELECT count(*) FROM table_name WHERE TS<1554984068000;
```
To avoid the uncertainty of using time strings, Unix Timestamps can be used directly. Furthermore, time strings with timezone information can be used in SQL statements, for example "2013-04-12T15:52:01.123+08:00" in RFC3339 format or "2013-04-12T15:52:01.123+0800" in ISO-8601 format; their conversion to Unix Timestamps is not affected by the system timezone.
:::
### locale
| Attribute | Description |
| ------------- | ------------------------- |
| Applicable | Server and Client |
| Meaning | Location code |
| Default Value | Locale configured in host |
:::info
A specific data type `nchar` is provided in TDengine to store non-ASCII characters such as Chinese, Japanese, and Korean. The characters to be stored in `nchar` type are encoded in UCS4-LE before being sent to the server. Note that the correctness of the encoding is guaranteed by the client, so the client-side encoding must be set properly to store non-ASCII characters correctly.
The characters input on the client side use the default encoding of the client's operating system, which is UTF-8 on most Linux systems, GB18030 or GBK on some Chinese systems, POSIX in docker, and CP936 on Windows in Chinese. The encoding of the client's operating system must be set correctly so that characters of `nchar` type can be converted to UCS4-LE.
The locale naming rule on Linux is: <Language\>\_<Region\>.<charset\>; for example, in "zh_CN.UTF-8", "zh" means Chinese, "CN" means China mainland, and "UTF-8" is the charset. On Linux and Mac OSX, the charset can be set via the system locale. Because the locale used on Windows is not in the POSIX standard format, another configuration parameter `charset` must be used to configure the charset on Windows. `charset` can also be used on Linux to specify the charset.
:::
### charset
| Attribute     | Description                                                                                 |
| ------------- | ----------------------------------------------------------------------------------------------- |
| Applicable    | Server and Client                                                                                 |
| Meaning       | Character set encoding                                                                            |
| Default Value | Obtained from the system; if that fails, it must be set in the configuration file or via the API  |
:::info
On Linux, if `charset` is not set in `taos.cfg`, when `taos` is started, the charset is obtained from the system locale. If obtaining the charset from the system locale fails, `taos` tries to read the `charset` configuration; if that also fails, the startup process is aborted. So on Linux, if the system locale is set properly, it's not necessary to set `charset` in `taos.cfg`. For example:

```
locale zh_CN.UTF-8
```

On Windows, it's not possible to obtain the charset from the system locale. If `charset` is not set in the configuration file `taos.cfg`, it defaults to CP936, which is equivalent to adding the line below in `taos.cfg`. If you need another character encoding, check the encoding used by the current operating system and set the parameter accordingly.

```
charset CP936
```

On Linux, if both `locale` and `charset` are set but are inconsistent with each other, the one that comes later in the configuration file overrides the one set earlier.

```title="Effective charset is GBK"
locale zh_CN.UTF-8
charset GBK
```

```title="Effective charset is UTF-8"
charset GBK
locale zh_CN.UTF-8
```
:::
## Storage Parameters
### dataDir
| Attribute | Description |
| ------------- | ------------------------------------------- |
| Applicable | Server Only |
| Meaning | All data files are stored in this directory |
| Default Value | /var/lib/taos |
### cache
| Attribute | Description |
| ------------- | ----------------------------- |
| Applicable | Server Only |
| Meaning | The size of each memory block |
| Unit | MB |
| Default Value | 16 |
### blocks
| Attribute | Description |
| ------------- | -------------------------------------------------------------- |
| Applicable | Server Only |
| Meaning | The number of memory blocks of size `cache` used by each vnode |
| Default Value | 6 |
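Since a vnode uses roughly `cache * blocks` MB of memory, both values can also be set per database at creation time. A minimal sketch through the CLI (assuming a hypothetical database `demo` and TDengine 2.x SQL):

```shell
# Hypothetical example: each vnode of this database uses roughly 16 MB * 6 = 96 MB.
taos -s "CREATE DATABASE demo CACHE 16 BLOCKS 6;"
```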
### days
| Attribute     | Description                                             |
| ------------- | ---------------------------------------------------------- |
| Applicable    | Server Only                                                 |
| Meaning       | The time range of the data stored in a single data file     |
| Unit          | day                                                         |
| Default Value | 10                                                          |
### keep
| Attribute | Description |
| ------------- | -------------------------------------- |
| Applicable | Server Only |
| Meaning | The number of days for data to be kept |
| Unit | day |
| Default Value | 3650 |
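Both `days` and `keep` have per-database counterparts; a sketch (hypothetical database `demo`):

```shell
# Hypothetical example: retain data for one year, with 10 days of data per file.
taos -s "CREATE DATABASE demo KEEP 365 DAYS 10;"
```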
### minRows
| Attribute     | Description                              |
| ------------- | ------------------------------------------ |
| Applicable    | Server Only                                 |
| Meaning       | Minimum number of rows in a file block      |
| Default Value | 100                                         |
### maxRows
| Attribute     | Description                              |
| ------------- | ------------------------------------------ |
| Applicable    | Server Only                                 |
| Meaning       | Maximum number of rows in a file block      |
| Default Value | 4096                                        |
### walLevel
| Attribute | Description |
| ------------- | ------------------------------------------------------------ |
| Applicable | Server Only |
| Meaning | WAL level |
| Value Range | 1: wal enabled without fsync <br/> 2: wal enabled with fsync |
| Default Value | 1 |
### fsync
| Attribute | Description |
| ------------- | --------------------------------------------------------------------------------------------------------------------- |
| Applicable | Server Only |
| Meaning | The waiting time for invoking fsync when walLevel is 2 |
| Unit | millisecond |
| Value Range | 0: no waiting time, fsync is performed immediately once WAL is written; <br/> maximum value is 180000, i.e. 3 minutes |
| Default Value | 3000 |
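As a sketch, a durability-focused deployment might combine the two parameters in `taos.cfg` (the values are illustrative only):

```
walLevel 2
fsync 1000
```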
### update
| Attribute     | Description                                                                                                                                                   |
| ------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Applicable    | Server Only                                                                                                                                                         |
| Meaning       | Whether updating existing data is allowed                                                                                                                           |
| Value Range   | 0: not allowed <br/> 1: a row can only be updated as a whole <br/> 2: a subset of columns can be updated (supported since version 2.1.7.0; before that only [0, 1]) |
| Default Value | 0                                                                                                                                                                   |
| Note          | Not available prior to version 2.0.8.0                                                                                                                              |
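`update` can also be specified per database; a sketch (hypothetical database `demo`, requiring version 2.1.7.0+ for the value 2):

```shell
# Hypothetical example: allow updating a subset of columns in existing rows.
taos -s "CREATE DATABASE demo UPDATE 2;"
```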
### cacheLast
| Attribute     | Description                                                                                                                                                        |
| ------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Applicable    | Server Only                                                                                                                                                              |
| Meaning       | Whether to cache the latest rows of each sub table in memory                                                                                                             |
| Value Range   | 0: not cached <br/> 1: the last row of each sub table is cached <br/> 2: the last non-NULL value of each column is cached <br/> 3: both 1 and 2 are enabled              |
| Default Value | 0                                                                                                                                                                        |
| Note          | The value range 0 ~ 3 is supported since version 2.1.2.0 (before that only [0, 1]); the parameter is not supported in taos.cfg prior to versions 2.1.2.0 and 2.0.20.7   |
### minimalTmpDirGB
| Attribute | Description |
| ------------- | ----------------------------------------------------------------------------------------------- |
| Applicable | Server and Client |
| Meaning | When the available disk space in tmpDir is below this threshold, writing to tmpDir is suspended |
| Unit | GB |
| Default Value | 1.0 |
### minimalDataDirGB
| Attribute     | Description                                                                                        |
| ------------- | ----------------------------------------------------------------------------------------------------- |
| Applicable    | Server Only                                                                                             |
| Meaning       | When the available disk space in dataDir is below this threshold, writing to dataDir is suspended       |
| Unit          | GB                                                                                                      |
| Default Value | 2.0                                                                                                     |
### vnodeBak
| Attribute | Description |
| ------------- | --------------------------------------------------------------------------- |
| Applicable | Server Only |
| Meaning | Whether to backup the corresponding vnode directory when a vnode is deleted |
| Value Range | 0: not backed up, 1: backup |
| Default Value | 1 |
## Cluster Parameters
### numOfMnodes
| Attribute | Description |
| ------------- | ------------------------------ |
| Applicable | Server Only |
| Meaning | The number of management nodes |
| Default Value | 3 |
### replica
| Attribute | Description |
| ------------- | -------------------------- |
| Applicable | Server Only |
| Meaning | The number of replications |
| Value Range | 1-3 |
| Default Value | 1 |
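The replica count can also be set or changed per database; a sketch (hypothetical database `demo`, assuming a cluster with at least three dnodes):

```shell
# Hypothetical example: raise the number of replicas of an existing database to 3.
taos -s "ALTER DATABASE demo REPLICA 3;"
```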
### quorum
| Attribute     | Description                                                                               |
| ------------- | ----------------------------------------------------------------------------------------------- |
| Applicable    | Server Only                                                                                       |
| Meaning       | The number of confirmations required for a write to succeed in case of multiple replicas          |
| Value Range   | 1,2                                                                                               |
| Default Value | 1                                                                                                 |
### role
| Attribute     | Description                                                                                                                                     |
| ------------- | ----------------------------------------------------------------------------------------------------------------------------------------------- |
| Applicable    | Server Only                                                                                                                                       |
| Meaning       | The role of the dnode                                                                                                                             |
| Value Range   | 0: any (can serve as an mnode and be allocated vnodes) <br/> 1: mgmt (can only serve as an mnode, no vnodes) <br/> 2: dnode (vnodes only, cannot serve as an mnode) |
| Default Value | 0                                                                                                                                                 |
### balance
| Attribute | Description |
| ------------- | ------------------------ |
| Applicable | Server Only |
| Meaning | Automatic load balancing |
| Value Range | 0: disabled, 1: enabled |
| Default Value | 1 |
### balanceInterval
| Attribute | Description |
| ------------- | ----------------------------------------------- |
| Applicable | Server Only |
| Meaning | The interval for checking load balance by mnode |
| Unit | second |
| Value Range | 1-30000 |
| Default Value | 300 |
### arbitrator
| Attribute | Description |
| ------------- | -------------------------------------------------- |
| Applicable | Server Only |
| Meaning | End point of arbitrator, format is same as firstEp |
| Default Value | None |
## Time Parameters
### precision
| Attribute | Description |
| ------------- | ------------------------------------------------- |
| Applicable | Server only |
| Meaning | Time precision used for each database |
| Value Range | ms: millisecond; us: microsecond ; ns: nanosecond |
| Default Value | ms |
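The server-side default can be overridden per database at creation time; a minimal sketch (hypothetical database `demo`):

```shell
# Hypothetical example: store timestamps of this database with microsecond precision.
taos -s "CREATE DATABASE demo PRECISION 'us';"
```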
### rpcTimer
| Attribute | Description |
| ------------- | ------------------ |
| Applicable | Server and Client |
| Meaning | rpc retry interval |
| Unit | milliseconds |
| Value Range | 100-3000 |
| Default Value | 300 |
### rpcMaxTime
| Attribute | Description |
| ------------- | ---------------------------------- |
| Applicable | Server and Client |
| Meaning | maximum wait time for rpc response |
| Unit | second |
| Value Range | 100-7200 |
| Default Value | 600 |
### statusInterval
| Attribute | Description |
| ------------- | ----------------------------------------------- |
| Applicable | Server Only |
| Meaning | the interval of dnode reporting status to mnode |
| Unit | second |
| Value Range | 1-10 |
| Default Value | 1 |
### shellActivityTimer
| Attribute | Description |
| ------------- | ------------------------------------------------------ |
| Applicable | Server and Client |
| Meaning | The interval for taos shell to send heartbeat to mnode |
| Unit | second |
| Value Range | 1-120 |
| Default Value | 3 |
### tableMetaKeepTimer
| Attribute | Description |
| ------------- | -------------------------------------------------------------------------------------------------- |
| Applicable | Server Only |
| Meaning | The expiration time for metadata in cache, once it's reached the client would refresh the metadata |
| Unit | second |
| Value Range | 1-8640000 |
| Default Value | 7200 |
### maxTmrCtrl
| Attribute | Description |
| ------------- | ------------------------ |
| Applicable | Server and Client |
| Meaning | Maximum number of timers |
| Unit | None |
| Value Range | 8-2048 |
| Default Value | 512 |
### offlineThreshold
| Attribute     | Description                                                                                                                        |
| ------------- | -------------------------------------------------------------------------------------------------------------------------------------- |
| Applicable    | Server Only                                                                                                                              |
| Meaning       | The expiration time for dnode online status; if no status is received from a dnode within this period, the dnode becomes offline        |
| Unit          | second                                                                                                                                   |
| Value Range   | 5-7200000                                                                                                                                |
| Default Value | 86400\*10 (i.e. 10 days)                                                                                                                 |
## Performance Optimization Parameters
### numOfThreadsPerCore
| Attribute | Description |
| ------------- | ------------------------------------------- |
| Applicable | Server and Client |
| Meaning | The number of consumer threads per CPU core |
| Default Value | 1.0 |
### ratioOfQueryThreads
| Attribute     | Description                                                                                     |
| ------------- | ----------------------------------------------------------------------------------------------------- |
| Applicable    | Server Only                                                                                             |
| Meaning       | Maximum number of query threads                                                                         |
| Value Range   | 0: only one query thread <br/> 1: same as the number of CPU cores <br/> 2: two times the number of CPU cores |
| Default Value | 1                                                                                                       |
| Note          | This value can be a float; e.g. 0.5 means at most half as many query threads as CPU cores               |
### maxVgroupsPerDb
| Attribute | Description |
| ------------- | ------------------------------------ |
| Applicable | Server Only |
| Meaning | Maximum number of vnodes for each DB |
| Value Range | 0-8192 |
| Default Value | |
### maxTablesPerVnode
| Attribute | Description |
| ------------- | -------------------------------------- |
| Applicable | Server Only |
| Meaning | Maximum number of tables in each vnode |
| Default Value | 1000000 |
### minTablesPerVnode
| Attribute | Description |
| ------------- | -------------------------------------- |
| Applicable | Server Only |
| Meaning | Minimum number of tables in each vnode |
| Default Value | 1000 |
### tableIncStepPerVnode
| Attribute     | Description                                                                                                   |
| ------------- | ----------------------------------------------------------------------------------------------------------------- |
| Applicable    | Server Only                                                                                                         |
| Meaning       | The increment step for the number of tables allocated to a vnode each time once minTablesPerVnode has been reached  |
| Default Value | 1000                                                                                                                |
### maxNumOfOrderedRes
| Attribute | Description |
| ------------- | ------------------------------------------- |
| Applicable | Server and Client |
| Meaning | Maximum number of rows ordered for a STable |
| Default Value | 100,000 |
### mnodeEqualVnodeNum
| Attribute | Description |
| ------------- | ----------------------------------------------------------------------------------------------- |
| Applicable | Server Only |
| Meaning | The number of vnodes whose system resources consumption are considered as equal to single mnode |
| Default Value | 4 |
### numOfCommitThreads
| Attribute     | Description                                              |
| ------------- | ------------------------------------------------------------ |
| Applicable    | Server Only                                                   |
| Meaning       | Maximum number of threads for committing data to disk         |
| Default Value |                                                               |
## Compression Parameters
### comp
| Attribute | Description |
| ------------- | ------------------------------------------------------------------- |
| Applicable | Server Only |
| Meaning | Whether data is compressed |
| Value Range | 0: uncompressed, 1: One phase compression, 2: Two phase compression |
| Default Value | 2 |
### tsdbMetaCompactRatio
| Attribute     | Description                                                                                                            |
| ------------- | ---------------------------------------------------------------------------------------------------------------------------- |
| Meaning       | The threshold percentage of redundant data in the tsdb meta file above which compression of the meta file is triggered        |
| Value Range   | 0: no compression; [1-100]: the threshold percentage of redundant data                                                         |
| Default Value | 0                                                                                                                              |
### compressMsgSize
| Attribute     | Description                                                                                                                            |
| ------------- | -------------------------------------------------------------------------------------------------------------------------------------------- |
| Applicable    | Server Only                                                                                                                                    |
| Meaning       | The threshold of message size above which messages between client and server are compressed. If compression is desired, 64330 bytes is the suggested threshold |
| Unit          | bytes                                                                                                                                          |
| Value Range   | 0: all messages are compressed; >0: only messages exceeding this size are compressed; -1: no compression                                       |
| Default Value | -1                                                                                                                                             |
### compressColData
| Attribute     | Description                                                                                                                 |
| ------------- | ----------------------------------------------------------------------------------------------------------------------------- |
| Applicable    | Server Only                                                                                                                     |
| Meaning       | The threshold of column data size above which column-wise compression is applied to the query result sent to the client         |
| Unit          | bytes                                                                                                                           |
| Value Range   | 0: always compress; >0: only compress when the size of any column data exceeds this threshold; -1: no compression               |
| Default Value | -1                                                                                                                              |
| Note          | Available since version 2.3.0.0                                                                                                 |
### lossyColumns
| Attribute | Description |
| ------------- | ------------------------------------------------------------------------------------------------------------------------------------------- |
| Applicable | Server Only |
| Meaning | The floating number types for lossy compression |
| Value Range | "": lossy compression is disabled <br/> float: only for float <br/>double: only for double <br/> float \| double: for both float and double |
| Default Value | "" , i.e. disabled |
### fPrecision
| Attribute | Description |
| ------------- | ----------------------------------------------------------- |
| Applicable | Server Only |
| Meaning | Compression precision for float type |
| Value Range | 0.1 ~ 0.00000001 |
| Default Value | 0.00000001 |
| Note | The fractional part lower than this value will be discarded |
### dPrecision
| Attribute | Description |
| ------------- | ----------------------------------------------------------- |
| Applicable | Server Only |
| Meaning | Compression precision for double type |
| Value Range | 0.1 ~ 0.0000000000000001 |
| Default Value | 0.0000000000000001 |
| Note | The fractional part lower than this value will be discarded |
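Lossy compression stays disabled until configured. A sketch of enabling it in `taos.cfg` (the values are illustrative; tune the precision to your data):

```
lossyColumns float|double
fPrecision 0.000001
dPrecision 0.0000000000001
```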
## Continuous Query Parameters
### stream
| Attribute | Description |
| ------------- | ---------------------------------- |
| Applicable | Server Only |
| Meaning | Whether to enable continuous query |
| Value Range | 0: disabled <br/> 1: enabled |
| Default Value | 1 |
### minSlidingTime
| Attribute     | Description                                                  |
| ------------- | ---------------------------------------------------------------- |
| Applicable    | Server Only                                                       |
| Meaning       | Minimum sliding time of a time window                             |
| Unit          | millisecond or microsecond, depending on time precision           |
| Value Range   | 10-1000000                                                        |
| Default Value | 10                                                                |
### minIntervalTime
| Attribute | Description |
| ------------- | --------------------------- |
| Applicable | Server Only |
| Meaning | Minimum size of time window |
| Unit | millisecond |
| Value Range | 1-1000000 |
| Default Value | 10 |
### maxStreamCompDelay
| Attribute | Description |
| ------------- | ------------------------------------------------ |
| Applicable | Server Only |
| Meaning | Maximum delay before starting a continuous query |
| Unit | millisecond |
| Value Range | 10-1000000000 |
| Default Value | 20000 |
### maxFirstStreamCompDelay
| Attribute | Description |
| ------------- | -------------------------------------------------------------------- |
| Applicable | Server Only |
| Meaning | Maximum delay time before starting a continuous query the first time |
| Unit | millisecond |
| Value Range | 10-1000000000 |
| Default Value | 10000 |
### retryStreamCompDelay
| Attribute | Description |
| ------------- | --------------------------------------------- |
| Applicable | Server Only |
| Meaning | Delay time before retrying a continuous query |
| Unit | millisecond |
| Value Range | 10-1000000000 |
| Default Value | 10 |
### streamCompDelayRatio
| Attribute | Description |
| ------------- | ------------------------------------------------------------------------ |
| Applicable | Server Only |
| Meaning | The delay ratio, with time window size as the base, for continuous query |
| Value Range | 0.1-0.9 |
| Default Value | 0.1 |
:::info
To prevent system resources from being exhausted by multiple concurrent streams, a random delay is applied to each stream automatically. `maxFirstStreamCompDelay` is the maximum delay before a continuous query is started for the first time. `streamCompDelayRatio` is the ratio for calculating the delay base, with the query interval as the multiplier; `maxStreamCompDelay` is the upper limit of this delay base. The actual delay is a random value not bigger than the delay base. If a continuous query fails, `retryStreamCompDelay` is the base for the waiting time before retrying; the actual retry delay is a random value not bigger than it.
:::
## HTTP Parameters
:::note
HTTP service was provided by `taosd` prior to version 2.4.0.0 and is provided by `taosAdapter` since version 2.4.0.0.
The parameters described in this section are only applicable to versions prior to 2.4.0.0. If you are using version 2.4.0.0 or later, please refer to [taosAdapter](/reference/taosadapter/).
:::
### http
| Attribute | Description |
| ------------- | ------------------------------ |
| Applicable | Server Only |
| Meaning | Whether to enable http service |
| Value Range | 0: disabled, 1: enabled |
| Default Value | 1 |
### httpEnableRecordSql
| Attribute | Description |
| ------------- | ------------------------------------------------------------------------- |
| Applicable | Server Only |
| Meaning | Whether to record the SQL invocation through REST interface |
| Default Value | 0: false; 1: true |
| Note | The resulting files, i.e. httpnote.0/httpnote.1, are located under logDir |
### httpMaxThreads
| Attribute     | Description                                                                                                                  |
| ------------- | -------------------------------------------------------------------------------------------------------------------------------- |
| Applicable    | Server Only                                                                                                                        |
| Meaning       | The number of threads of the RESTful interface. taosAdapter is configured differently; refer to [taosAdapter](/reference/taosadapter/) |
| Default Value | 2                                                                                                                                  |
### restfulRowLimit
| Attribute     | Description                                                                                                                                       |
| ------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ |
| Applicable    | Server Only                                                                                                                                              |
| Meaning       | Maximum number of rows returned in a single call of the RESTful interface. taosAdapter is configured differently; refer to [taosAdapter](/reference/taosadapter/) |
| Default Value | 10240                                                                                                                                                    |
| Note          | Maximum value is 10,000,000                                                                                                                              |
### httpDBNameMandatory
| Attribute     | Description                                          |
| ------------- | -------------------------------------------------------- |
| Applicable    | Server Only                                               |
| Meaning       | Whether the database name is required in the URL          |
| Value Range   | 0: not required, 1: required                              |
| Default Value | 0                                                         |
| Note          | Available since version 2.3.0.0                           |
## Log Parameters
### logDir
| Attribute | Description |
| ------------- | ----------------------------------- |
| Applicable | Server and Client |
| Meaning | The directory for writing log files |
| Default Value | /var/log/taos |
### minimalLogDirGB
| Attribute | Description |
| ------------- | -------------------------------------------------------------------------------------------------- |
| Applicable | Server and Client |
| Meaning | When the available disk space in logDir is below this threshold, writing to log files is suspended |
| Unit | GB |
| Default Value | 1.0 |
### numOfLogLines
| Attribute | Description |
| ------------- | ------------------------------------------ |
| Applicable | Server and Client |
| Meaning | Maximum number of lines in single log file |
| Default Value | 10,000,000 |
### asyncLog
| Attribute | Description |
| ------------- | ---------------------------- |
| Applicable | Server and Client |
| Meaning | The mode of writing log file |
| Value Range | 0: sync way; 1: async way |
| Default Value | 1 |
### logKeepDays
| Attribute | Description |
| ------------- | ------------------------------------------------------------------------------------------------------------------------------------------- |
| Applicable | Server and Client |
| Meaning | The number of days for log files to be kept |
| Unit | day |
| Default Value | 0 |
| Note | When it's bigger than 0, the log file would be renamed to "taosdlog.xxx" in which "xxx" is the timestamp when the file is changed last time |
### debugFlag
| Attribute | Description |
| ------------- | --------------------------------------------------------- |
| Applicable | Server and Client |
| Meaning | Log level |
| Value Range | 131: INFO/WARNING/ERROR; 135: plus DEBUG; 143: plus TRACE |
| Default Value | 131 or 135, depending on the module |
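Besides `taos.cfg`, the log level can be adjusted at runtime through the CLI; a sketch (assuming dnode id 1 and TDengine 2.x `ALTER DNODE` syntax):

```shell
# Hypothetical example: raise the overall log level of dnode 1 to include DEBUG output.
taos -s "ALTER DNODE 1 debugFlag 135;"
```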
### mDebugFlag
| Attribute | Description |
| ------------- | ------------------ |
| Applicable | Server Only |
| Meaning | Log level of mnode |
| Value Range | same as debugFlag |
| Default Value | 135 |
### dDebugFlag
| Attribute | Description |
| ------------- | ------------------ |
| Applicable | Server and Client |
| Meaning | Log level of dnode |
| Value Range | same as debugFlag |
| Default Value | 135 |
### sDebugFlag
| 属性 | 说明 |
| -------- | -------------------- |
| 适用范围 | 服务端和客户端均适用 |
| 含义 | sync 模块的日志开关 |
| 取值范围 | 同上 |
| 缺省值 | 135 |
| Attribute | Description |
| ------------- | ------------------------ |
| Applicable | Server and Client |
| Meaning | Log level of sync module |
| Value Range | same as debugFlag |
| Default Value | 135 |
### wDebugFlag
| 属性 | 说明 |
| -------- | -------------------- |
| 适用范围 | 服务端和客户端均适用 |
| 含义 | wal 模块的日志开关 |
| 取值范围 | 同上 |
| 缺省值 | 135 |
| Attribute | Description |
| ------------- | ----------------------- |
| Applicable | Server and Client |
| Meaning | Log level of WAL module |
| Value Range | same as debugFlag |
| Default Value | 135 |
### sdbDebugFlag
| 属性 | 说明 |
| -------- | -------------------- |
| 适用范围 | 服务端和客户端均适用 |
| 含义 | sdb 模块的日志开关 |
| 取值范围 | 同上 |
| 缺省值 | 135 |
| Attribute | Description |
| ------------- | ---------------------- |
| Applicable | Server and Client |
| Meaning | logLevel of sdb module |
| Value Range | same as debugFlag |
| Default Value | 135 |
### rpcDebugFlag
| 属性 | 说明 |
| -------- | -------------------- |
| 适用范围 | 服务端和客户端均适用 |
| 含义 | rpc 模块的日志开关 |
| 取值范围 | 同上 |
| 缺省值 | |
| Attribute | Description |
| ------------- | ----------------------- |
| Applicable | Server and Client |
| Meaning | Log level of rpc module |
| Value Range | Same as debugFlag |
| Default Value | |
### tmrDebugFlag
| 属性 | 说明 |
| -------- | -------------------- |
| 适用范围 | 服务端和客户端均适用 |
| 含义 | 定时器模块的日志开关 |
| 取值范围 | 同上 |
| 缺省值 | |
| Attribute | Description |
| ------------- | ------------------------- |
| Applicable | Server and Client |
| Meaning | Log level of timer module |
| Value Range | Same as debugFlag |
| Default Value | |
### cDebugFlag
| 属性 | 说明 |
| -------- | --------------------- |
| 适用范围 | 仅客户端适用 |
| 含义 | client 模块的日志开关 |
| 取值范围 | 同上 |
| 缺省值 | |
| Attribute | Description |
| ------------- | ------------------- |
| Applicable | Client Only |
| Meaning | Log level of Client |
| Value Range | Same as debugFlag |
| Default Value | |
### jniDebugFlag
| 属性 | 说明 |
| -------- | ------------------ |
| 适用范围 | 仅客户端适用 |
| 含义 | jni 模块的日志开关 |
| 取值范围 | 同上 |
| 缺省值 | |
| Attribute | Description |
| ------------- | ----------------------- |
| Applicable | Client Only |
| Meaning | Log level of jni module |
| Value Range | 同上 |
| Default Value | |
### odbcDebugFlag
| 属性 | 说明 |
| -------- | ------------------- |
| 适用范围 | 仅客户端适用 |
| 含义 | odbc 模块的日志开关 |
| 取值范围 | 同上 |
| 缺省值 | |
| Attribute | Description |
| ------------- | ------------------------ |
| Applicable | Client Only |
| Meaning | Log level of odbc module |
| Value Range | Same as debugFlag |
| Default Value | |
### uDebugFlag
| 属性 | 说明 |
| -------- | ---------------------- |
| 适用范围 | 服务端和客户端均适用 |
| 含义 | 共用功能模块的日志开关 |
| 取值范围 | 同上 |
| 缺省值 | |
| Attribute | Description |
| ------------- | -------------------------- |
| Applicable | Server and Client |
| Meaning | Log level of common module |
| Value Range | Same as debugFlag |
| Default Value | |
### httpDebugFlag
| 属性 | 说明 |
| -------- | ------------------- |
| 适用范围 | 仅服务端适用 |
| 含义 | http 模块的日志开关 |
| 取值范围 | 同上 |
| 缺省值 | |
| Attribute | Description |
| ------------- | ------------------------------------------- |
| Applicable | Server Only |
| Meaning | Log level of http module (prior to 2.4.0.0) |
| Value Range | Same as debugFlag |
| Default Value | |
### mqttDebugFlag
| 属性 | 说明 |
| -------- | ------------------- |
| 适用范围 | 仅服务端适用 |
| 含义 | mqtt 模块的日志开关 |
| 取值范围 | 同上 |
| 缺省值 | |
| Attribute | Description |
| ------------- | ------------------------ |
| Applicable | Server Only |
| Meaning | Log level of mqtt module |
| Value Range | Same as debugFlag |
| Default Value | |
### monitorDebugFlag
| 属性 | 说明 |
| -------- | ------------------ |
| 适用范围 | 仅服务端适用 |
| 含义 | 监控模块的日志开关 |
| 取值范围 | 同上 |
| 缺省值 | |
| Attribute | Description |
| ------------- | ------------------------------ |
| Applicable | Server Only |
| Meaning | Log level of monitoring module |
| Value Range | Same as debugFlag |
| Default Value | |
### qDebugFlag
| 属性 | 说明 |
| -------- | -------------------- |
| 适用范围 | 服务端和客户端均适用 |
| 含义 | 查询模块的日志开关 |
| 取值范围 | 同上 |
| 缺省值 | |
| Attribute | Description |
| ------------- | ------------------------- |
| Applicable | Server and Client |
| Meaning | Log level of query module |
| Value Range | Same as debugFlag |
| Default Value | |
### vDebugFlag
| 属性 | 说明 |
| -------- | -------------------- |
| 适用范围 | 服务端和客户端均适用 |
| 含义 | vnode 模块的日志开关 |
| 取值范围 | 同上 |
| 缺省值 | |
| Attribute | Description |
| ------------- | ------------------ |
| Applicable | Server and Client |
| Meaning | Log level of vnode |
| Value Range | Same as debugFlag |
| Default Value | |
### tsdbDebugFlag
| 属性 | 说明 |
| -------- | ------------------- |
| 适用范围 | 仅服务端适用 |
| 含义 | TSDB 模块的日志开关 |
| 取值范围 | 同上 |
| 缺省值 | |
| Attribute | Description |
| ------------- | ------------------------ |
| Applicable | Server Only |
| Meaning | Log level of TSDB module |
| Value Range | Same as debugFlag |
| Default Value | |
### cqDebugFlag
| 属性 | 说明 |
| -------- | ---------------------- |
| 适用范围 | 服务端和客户端均适用 |
| 含义 | 连续查询模块的日志开关 |
| 取值范围 | 同上 |
| 缺省值 | |
| Attribute | Description |
| ------------- | ------------------------------------ |
| Applicable | Server and Client |
| Meaning | Log level of continuous query module |
| Value Range | Same as debugFlag |
| Default Value | |
## 仅客户端适用
## Client Only
### maxSQLLength
| 属性 | 说明 |
| -------- | --------------------------- |
| 适用范围 | 仅客户端适用 |
| 含义 | 单条 SQL 语句允许的最长限制 |
| 单位 | bytes |
| 取值范围 | 65480-1048576 |
| 缺省值 | 1048576 |
| Attribute | Description |
| ------------- | -------------------------------------- |
| Applicable | Client Only |
| Meaning | Maximum length of single SQL statement |
| Unit | bytes |
| Value Range | 65480-1048576 |
| Default Value | 1048576 |
### tscEnableRecordSql
| 属性 | 说明 |
| -------- | ----------------------------------------------------------------------------------- |
| 含义 | 是否记录客户端 sql 语句到文件 |
| 取值范围 | 0:否,1:是 |
| 缺省值 | 0 |
| 补充说明 | 生成的文件(tscnote-xxxx.0/tscnote-xxx.1,xxxx 是 pid),与客户端日志所在目录相同。 |
| Attribute | Description |
| ------------- | ------------------------------------------------------------------------------------------------------------------------------------------------- |
| Meaning | Whether to record SQL statements in file |
| Value Range | 0: false, 1: true |
| Default Value | 0 |
| Note | The generated files are named as "tscnote-xxxx.0/tscnote-xxx.1" in which "xxxx" is the pid of the client, and located at same place as client log |
### maxBinaryDisplayWidth
| 属性 | 说明 |
| -------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 含义 | Taos shell 中 binary 和 nchar 字段的显示宽度上限,超过此限制的部分将被隐藏 |
| 取值范围 | 5 - |
| 缺省值 | 30 |
| Attribute | Description |
| ------------- | --------------------------------------------------------------------------------------------------- |
| Meaning | Maximum display width of binary and nchar in taos shell. Anything beyond this limit would be hidden |
| Value Range | 5 - |
| Default Value | 30 |
:::info
实际上限按以下规则计算:如果字段值的长度大于 maxBinaryDisplayWidth,则显示上限为 **字段名长度****maxBinaryDisplayWidth** 的较大者。<br/>否则,上限为 **字段名长度****字段值长度** 的较大者。<br/>可在 shell 中通过命令 set max_binary_display_width nn 动态修改此选项
If the length of value exceeds `maxBinaryDisplayWidth`, then the actual display width is max(column name, maxBinaryDisplayLength); otherwise the actual display width is max(length of column name, length of column value). This parameter can also be changed dynamically using `set max_binary_display_width <nn\>` in TDengine CLI `taos`.
:::
### maxWildCardsLength
| 属性 | 说明 |
| -------- | ------------------------------------------ |
| 含义 | 设定 LIKE 算子的通配符字符串允许的最大长度 |
| 单位 | bytes |
| 取值范围 | 0-16384 |
| 缺省值 | 100 |
| 补充说明 | 2.1.6.1 版本新增。 |
| Attribute | Description |
| ------------- | ----------------------------------------------------- |
| Meaning | The maximum length for wildcard string used with LIKE |
| Unit | bytes |
| Value Range | 0-16384 |
| Default Value | 100 |
| Note | From version 2.1.6.1 |
### clientMerge
| 属性 | 说明 |
| -------- | ---------------------------- |
| 含义 | 是否允许客户端对写入数据去重 |
| 取值范围 | 0:不开启,1:开启 |
| 缺省值 | 0 |
| 补充说明 | 2.3 版本新增。 |
| Attribute | Description |
| ------------- | --------------------------------------------------- |
| Meaning | Whether to filter out duplicate data on client side |
| Value Range | 0: false; 1: true |
| Default Value | 0 |
| Note | From version 2.3.0.0 |
### maxRegexStringLen
| 属性 | 说明 |
| -------- | -------------------------- |
| 含义 | 正则表达式最大允许长度 |
| 取值范围 | 默认值 128,最大长度 16384 |
| 缺省值 | 128 |
| 补充说明 | 2.3 版本新增。 |
| Attribute | Description |
| ------------- | ----------------------------------------------------------- |
| Meaning | Maximum length of regular expression 正则表达式最大允许长度 |
| Value Range | [128, 16384] |
| Default Value | 128 |
| Note | From version 2.3.0.0 |
## 其他
## Other Parameters
### enableCoreFile
| 属性 | 说明 |
| -------- | ------------------------------------------------------------------------------------------------------------------------------------------ |
| 适用范围 | 服务端和客户端均适用 |
| 含义 | 是否开启服务 crash 时生成 core 文件 |
| 取值范围 | 0:否,1:是 |
| 缺省值 | 1 |
| 补充说明 | 不同的启动方式,生成 core 文件的目录如下:1、systemctl start taosd 启动:生成的 core 在根目录下 <br/> 2、手动启动,就在 taosd 执行目录下。 |
| Attribute | Description |
| ------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Applicable | Server and Client |
| Meaning | Whether to generate core file when server crashes |
| Value Range | 0: false, 1: true |
| Default Value | 1 |
| Note | The core file is generated under root directory `systemctl start taosd` is used to start, or under the working directory if `taosd` is started directly on Linux Shell. |
---
title: 文件目录结构
description: "TDengine 安装目录说明"
title: File directory structure
description: "TDengine installation directory description"
---
安装 TDengine 后,默认会在操作系统中生成下列目录或文件:
After TDengine is installed, the following directories or files will be created in the system by default.
| 目录/文件 | 说明 |
| directory/file | description |
| ------------------------- | -------------------------------------------------------------------- |
| /usr/local/taos/bin | TDengine 可执行文件目录。其中的执行文件都会软链接到/usr/bin 目录下。 |
| /usr/local/taos/driver | TDengine 动态链接库目录。会软链接到/usr/lib 目录下。 |
| /usr/local/taos/examples | TDengine 各种语言应用示例目录。 |
| /usr/local/taos/include | TDengine 对外提供的 C 语言接口的头文件。 |
| /etc/taos/taos.cfg | TDengine 默认[配置文件] |
| /var/lib/taos | TDengine 默认数据文件目录。可通过[配置文件]修改位置。 |
| /var/log/taos | TDengine 默认日志文件目录。可通过[配置文件]修改位置。 |
## 可执行文件
TDengine 的所有可执行文件默认存放在 _/usr/local/taos/bin_ 目录下。其中包括:
- _taosd_:TDengine 服务端可执行文件
- _taos_:TDengine Shell 可执行文件
- _taosdump_:数据导入导出工具
- _taosBenchmark_:TDengine 测试工具
- _remove.sh_:卸载 TDengine 的脚本,请谨慎执行,链接到/usr/bin 目录下的**rmtaos**命令。会删除 TDengine 的安装目录/usr/local/taos,但会保留/etc/taos、/var/lib/taos、/var/log/taos
- _taosadapter_: 提供 RESTful 服务和接受其他多种软件写入请求的服务端可执行文件
- _tarbitrator_: 提供双节点集群部署的仲裁功能
- _run_taosd_and_taosadapter.sh_:同时启动 taosd 和 taosAdapter 的脚本
- _TDinsight.sh_:用于下载 TDinsight 并安装的脚本
- _set_core.sh_:用于方便调试设置系统生成 core dump 文件的脚本
- _taosd-dump-cfg.gdb_:用于方便调试 taosd 的 gdb 执行脚本。
| /usr/local/taos/bin | The TDengine executable directory. The executable files are soft-linked to the /usr/bin directory. |
| /usr/local/taos/driver | The TDengine dynamic link library directory. It is soft-linked to the /usr/lib directory. |
| /usr/local/taos/examples | The TDengine various language application examples directory. |
| /usr/local/taos/include | The header files for TDengine's external C interface. |
| /etc/taos/taos.cfg | TDengine default [configuration file] |
| /var/lib/taos | TDengine's default data file directory. The location can be changed via [configuration file]. |
| /var/log/taos | TDengine default log file directory. The location can be changed via [configure file]. |
## Executable files
All executable files of TDengine are in the _/usr/local/taos/bin_ directory by default. These include.
- _taosd_: TDengine server-side executable files
- _taos_: TDengine CLI executable
- _taosdump_: data import and export tool
- _taosBenchmark_: TDengine testing tool
- _remove.sh_: script to uninstall TDengine, please execute it carefully, link to the **rmtaos** command in the /usr/bin directory. Will remove the TDengine installation directory `/usr/local/taos`, but will keep `/etc/taos`, `/var/lib/taos`, `/var/log/taos`
- _taosadapter_: server-side executable that provides RESTful services and accepts writing requests from a variety of other softwares
- _tarbitrator_: provides arbitration for two-node cluster deployments
- _run_taosd_and_taosadapter.sh_: script to start both taosd and taosAdapter
- _TDinsight.sh_: script to download TDinsight and install it
- _set_core.sh_: script for setting up the system to generate core dump files for easy debugging
- _taosd-dump-cfg.gdb_: script to facilitate debugging of taosd's gdb execution.
:::note
2.4.0.0 版本之后的 taosBenchmark 和 taosdump 需要安装独立安装包 taosTools。
taosdump after version 2.4.0.0 require taosTools as a standalone installation. A few version taosBenchmark is include in taosTools too.
:::
:::tip
您可以通过修改系统配置文件 taos.cfg 来配置不同的数据目录和日志目录。
You can configure different data directories and log directories by modifying the system configuration file `taos.cfg`.
:::
---
title: Schemaless 写入
description: "Schemaless 写入方式,可以免于预先创建超级表/子表的步骤,随着数据写入接口能够自动创建与数据对应的存储结构"
title: Schemaless Writing
description: "The Schemaless write method eliminates the need to create super tables/sub tables in advance and automatically creates the storage structure corresponding to the data as it is written to the interface."
---
在物联网应用中,常会采集比较多的数据项,用于实现智能控制、业务分析、设备监控等。由于应用逻辑的版本升级,或者设备自身的硬件调整等原因,数据采集项就有可能比较频繁地出现变动。为了在这种情况下方便地完成数据记录工作,TDengine
从 2.2.0.0 版本开始,提供调用 Schemaless 写入方式,可以免于预先创建超级表/子表的步骤,随着数据写入接口能够自动创建与数据对应的存储结构。并且在必要时,Schemaless
将自动增加必要的数据列,保证用户写入的数据可以被正确存储。
In IoT applications, many data items are often collected for intelligent control, business analysis, device monitoring, etc. Due to the version upgrade of the application logic, or the hardware adjustment of the device itself, the data collection items may change more frequently. To facilitate the data logging work in such cases, TDengine starting from version 2.2.0.0, it provides a series of interfaces to the schemaless writing method, which eliminates the need to create super tables/sub tables in advance and automatically creates the storage structure corresponding to the data as the data is written to the interface. And when necessary, Schemaless writing will automatically add the required columns to ensure that the data written by the user is stored correctly.
无模式写入方式建立的超级表及其对应的子表与通过 SQL 直接建立的超级表和子表完全没有区别,你也可以通过,SQL 语句直接向其中写入数据。需要注意的是,通过无模式写入方式建立的表,其表名是基于标签值按照固定的映射规则生成,所以无法明确地进行表意,缺乏可读性。
The schemaless writing method creates super tables and their corresponding sub-tables completely indistinguishable from the super tables and sub-tables created directly via SQL. You can write data directly to them via SQL statements. Note that the names of tables created by schemaless writing are based on fixed mapping rules for tag values, so they are not explicitly ideographic and lack readability.
## 无模式写入行协议
## Schemaless Writing Line Protocol
TDengine 的无模式写入的行协议兼容 InfluxDB 的 行协议(Line Protocol)、OpenTSDB 的 telnet 行协议、OpenTSDB 的 JSON 格式协议。但是使用这三种协议的时候,需要在 API 中指定输入内容使用解析协议的标准。
TDengine's schemaless writing line protocol supports to be compatible with InfluxDB's Line Protocol, OpenTSDB's telnet line protocol, and OpenTSDB's JSON format protocol. However, when using these three protocols, you need to specify in the API the standard of the parsing protocol to be used for the input content.
对于 InfluxDB、OpenTSDB 的标准写入协议请参考各自的文档。下面首先以 InfluxDB 的行协议为基础,介绍 TDengine 扩展的协议内容,允许用户采用更加精细的方式控制(超级表)模式。
For the standard writing protocols of InfluxDB and OpenTSDB, please refer to the documentation of each protocol. The following is a description of TDengine's extended protocol, based on InfluxDB's line protocol first. They allow users to control the (super table) schema more granularly.
Schemaless 采用一个字符串来表达一个数据行(可以向写入 API 中一次传入多行字符串来实现多个数据行的批量写入),其格式约定如下:
With the following formatting conventions, Schemaless writing uses a single string to express a data row (multiple rows can be passed into the writing API at once to enable bulk writing).
```json
measurement,tag_set field_set timestamp
```
其中:
where :
- measurement 将作为数据表名。它与 tag_set 之间使用一个英文逗号来分隔。
- tag_set 将作为标签数据,其格式形如 `<tag_key>=<tag_value>,<tag_key>=<tag_value>`,也即可以使用英文逗号来分隔多个标签数据。它与 field_set 之间使用一个半角空格来分隔。
- field_set 将作为普通列数据,其格式形如 `<field_key>=<field_value>,<field_key>=<field_value>`,同样是使用英文逗号来分隔多个普通列的数据。它与 timestamp 之间使用一个半角空格来分隔。
- timestamp 即本行数据对应的主键时间戳。
- measurement will be used as the data table name. It will be separated from tag_set by a comma.
- tag_set will be used as tag data in the format `<tag_key>=<tag_value>,<tag_key>=<tag_value>`, i.e. multiple tags' data can be separated by a comma. It is separated from field_set by space.
- field_set will be used as normal column data in the format of `<field_key>=<field_value>,<field_key>=<field_value>`, again using a comma to separate multiple normal columns of data. It is separated from the timestamp by space.
- The timestamp is the primary key corresponding to the data in this row.
tag_set 中的所有的数据自动转化为 nchar 数据类型,并不需要使用双引号(")。
All data in tag_set is automatically converted to the NCHAR data type and does not require double quotes (").
在无模式写入数据行协议中,field_set 中的每个数据项都需要对自身的数据类型进行描述。具体来说:
In the schemaless writing data line protocol, each data item in the field_set needs to be described with its data type. Let's explain in detail:
- 如果两边有英文双引号,表示 BIANRY(32) 类型。例如 `"abc"`
- 如果两边有英文双引号而且带有 L 前缀,表示 NCHAR(32) 类型。例如 `L"报错信息"`
- 对空格、等号(=)、逗号(,)、双引号("),前面需要使用反斜杠(\)进行转义。(都指的是英文半角符号)
- 数值类型将通过后缀来区分数据类型:
- If there are English double quotes on both sides, it indicates the BINARY(32) type. For example, `"abc"`.
- If there are double quotes on both sides and an L prefix, it means NCHAR(32) type. For example, `L"error message"`.
- Spaces, equal signs (=), commas (,), and double quotes (") need to be escaped with a backslash (\) in front. (All refer to the ASCII character)
- Numeric types will be distinguished from data types by the suffix.
| **序号** | **后缀** | **映射类型** | **大小(字节)** |
| **Serial number** | **Postfix** | **Mapping type** | **Size (bytes)** |
| -------- | -------- | ------------ | -------------- |
| 1 | 无或 f64 | double | 8 |
| 2 | f32 | float | 4 |
| 3 | i8 | TinyInt | 1 |
| 4 | i16 | SmallInt | 2 |
| 5 | i32 | Int | 4 |
| 6 | i64 或 i | Bigint | 8 |
| 1 | none or f64 | double | 8 |
| 2 | f32 | float | 4 |
| 3 | i8 | TinyInt | 1 |
| 4 | i16 | SmallInt | 2 |
| 5 | i32 | Int | 4 |
| 6 | i64 or i | Bigint | 8 |
- t, T, true, True, TRUE, f, F, false, False 将直接作为 BOOL 型来处理。
- `t`, `T`, `true`, `True`, `TRUE`, `f`, `F`, `false`, and `False` will be handled directly as BOOL types.
例如如下数据行表示:向名为 st 的超级表下的 t1 标签为 "3"(NCHAR)、t2 标签为 "4"(NCHAR)、t3
标签为 "t3"(NCHAR)的数据子表,写入 c1 列为 3(BIGINT)、c2 列为 false(BOOL)、c3
列为 "passit"(BINARY)、c4 列为 4(DOUBLE)、主键时间戳为 1626006833639000000 的一行数据。
For example, the following data rows indicate that the t1 label is "3" (NCHAR), the t2 label is "4" (NCHAR), and the t3 label is "t3" to the super table named `st` labeled "t3" (NCHAR), write c1 column as 3 (BIGINT), c2 column as false (BOOL), c3 column is "passit" (BINARY), c4 column is 4 (DOUBLE), and the primary key timestamp is 1626006833639000000 in one row.
```json
st,t1=3,t2=4,t3=t3 c1=3i64,c3="passit",c2=false,c4=4f64 1626006833639000000
```
需要注意的是,如果描述数据类型后缀时使用了错误的大小写,或者为数据指定的数据类型有误,均可能引发报错提示而导致数据写入失败。
Note that if the wrong case is used when describing the data type suffix, or if the wrong data type is specified for the data, it may cause an error message and cause the data to fail to be written.
## 无模式写入的主要处理逻辑
## Main processing logic for schemaless writing
无模式写入按照如下原则来处理行数据:
Schemaless writes process row data according to the following principles.
1. 将使用如下规则来生成子表名:首先将 measurement 的名称和标签的 key 和 value 组合成为如下的字符串
1. You can use the following rules to generate the sub-table names: first, combine the measurement name and the key and value of the label into the next string:
```json
"measurement,tag_key1=tag_value1,tag_key2=tag_value2"
```
需要注意的是,这里的 tag_key1, tag_key2 并不是用户输入的标签的原始顺序,而是使用了标签名称按照字符串升序排列后的结果。所以,tag_key1 并不是在行协议中输入的第一个标签。
排列完成以后计算该字符串的 MD5 散列值 "md5_val"。然后将计算的结果与字符串组合生成表名:“t_md5_val”。其中的 “t*” 是固定的前缀,每个通过该映射关系自动生成的表都具有该前缀。
Note that tag_key1, tag_key2 are not the original order of the tags entered by the user but the result of using the tag names in ascending order of the strings. Therefore, tag_key1 is not the first tag entered in the line protocol.
The string's MD5 hash value "md5_val" is calculated after the ranking is completed. The calculation result is then combined with the string to generate the table name: "t_md5_val". "t*" is a fixed prefix that every table generated by this mapping relationship has. 2.
2. 如果解析行协议获得的超级表不存在,则会创建这个超级表。
3. 如果解析行协议获得子表不存在,则 Schemaless 会按照步骤 1 或 2 确定的子表名来创建子表。
4. 如果数据行中指定的标签列或普通列不存在,则在超级表中增加对应的标签列或普通列(只增不减)。
5. 如果超级表中存在一些标签列或普通列未在一个数据行中被指定取值,那么这些列的值在这一行中会被置为
NULL。
6. 对 BINARY 或 NCHAR 列,如果数据行中所提供值的长度超出了列类型的限制,自动增加该列允许存储的字符长度上限(只增不减),以保证数据的完整保存。
7. 如果指定的数据子表已经存在,而且本次指定的标签列取值跟已保存的值不一样,那么最新的数据行中的值会覆盖旧的标签列取值。
8. 整个处理过程中遇到的错误会中断写入过程,并返回错误代码。
2. If the super table obtained by parsing the line protocol does not exist, this super table is created.
If the sub-table obtained by the parse line protocol does not exist, Schemaless creates the sub-table according to the sub-table name determined in steps 1 or 2. 4.
4. If the specified tag or regular column in the data row does not exist, the corresponding tag or regular column is added to the super table (only incremental).
5. If there are some tag columns or regular columns in the super table that are not specified to take values in a data row, then the values of these columns are set to NULL.
6. For BINARY or NCHAR columns, if the length of the value provided in a data row exceeds the column type limit, the maximum length of characters allowed to be stored in the column is automatically increased (only incremented and not decremented) to ensure complete preservation of the data.
7. If the specified data sub-table already exists, and the specified tag column takes a value different from the saved value this time, the value in the latest data row overwrites the old tag column take value.
8. Errors encountered throughout the processing will interrupt the writing process and return an error code.
:::tip
无模式所有的处理逻辑,仍会遵循 TDengine 对数据结构的底层限制,例如每行数据的总长度不能超过
16k 字节。这方面的具体限制约束请参见 [TAOS SQL 边界限制](/taos-sql/limit)
All processing logic of schemaless will still follow TDengine's underlying restrictions on data structures, such as the total length of each row of data cannot exceed
16k bytes. See [TAOS SQL Boundary Limits](/taos-sql/limit) for specific constraints in this area.
:::
## 时间分辨率识别
## Time resolution recognition
无模式写入过程中支持三个指定的模式,具体如下
Three specified modes are supported in the schemaless writing process, as follows:
| **序号** | **值** | **说明** |
| **Serial** | **Value** | **Description** |
| -------- | ------------------- | ------------------------------- |
| 1 | SML_LINE_PROTOCOL | InfluxDB 行协议(Line Protocol) |
| 2 | SML_TELNET_PROTOCOL | OpenTSDB 文本行协议 |
| 3 | SML_JSON_PROTOCOL | JSON 协议格式 |
| 1 | SML_LINE_PROTOCOL | InfluxDB Line Protocol |
| 2 | SML_TELNET_PROTOCOL | OpenTSDB Text Line Protocol | | 2 | SML_TELNET_PROTOCOL | OpenTSDB Text Line Protocol
| 3 | SML_JSON_PROTOCOL | JSON protocol format |
在 SML_LINE_PROTOCOL 解析模式下,需要用户指定输入的时间戳的时间分辨率。可用的时间分辨率如下表所示:
In the SML_LINE_PROTOCOL parsing mode, the user is required to specify the time resolution of the input timestamp. The available time resolutions are shown in the following table.
| **序号** | **时间分辨率定义** | **含义** |
| **Serial Number** | **Time Resolution Definition** | **Meaning** |
| -------- | --------------------------------- | -------------- |
| 1 | TSDB_SML_TIMESTAMP_NOT_CONFIGURED | 未定义(无效) |
| 2 | TSDB_SML_TIMESTAMP_HOURS | 小时 |
| 3 | TSDB_SML_TIMESTAMP_MINUTES | 分钟 |
| 4 | TSDB_SML_TIMESTAMP_SECONDS | 秒 |
| 5 | TSDB_SML_TIMESTAMP_MILLI_SECONDS | 毫秒 |
| 6 | TSDB_SML_TIMESTAMP_MICRO_SECONDS | 微秒 |
| 7 | TSDB_SML_TIMESTAMP_NANO_SECONDS | 纳秒 |
| 1 | TSDB_SML_TIMESTAMP_NOT_CONFIGURED | Not defined (invalid) |
| 2 | TSDB_SML_TIMESTAMP_HOURS | hour |
| 3 | TSDB_SML_TIMESTAMP_MINUTES | MINUTES
| 4 | TSDB_SML_TIMESTAMP_SECONDS | SECONDS
| 5 | TSDB_SML_TIMESTAMP_MILLI_SECONDS | milliseconds
| 6 | TSDB_SML_TIMESTAMP_MICRO_SECONDS | microseconds
| 7 | TSDB_SML_TIMESTAMP_NANO_SECONDS | nanoseconds |
在 SML_TELNET_PROTOCOL 和 SML_JSON_PROTOCOL 模式下,根据时间戳的长度来确定时间精度(与 OpenTSDB 标准操作方式相同),此时会忽略用户指定的时间分辨率。
In SML_TELNET_PROTOCOL and SML_JSON_PROTOCOL modes, the time precision is determined based on the length of the timestamp (in the same way as the OpenTSDB standard operation), and the user-specified time resolution is ignored at this point.
## 数据模式映射规则
## Data schema mapping rules
本节将说明行协议的数据如何映射成为具有模式的数据。每个行协议中数据 measurement 映射为
超级表名称。tag_set 中的 标签名称为 数据模式中的标签名,field_set 中的名称为列名称。以如下数据为例,说明映射规则:
This section describes how data for line protocols are mapped to data with a schema. The data measurement in each line protocol is mapped to
The tag name in tag_set is the name of the tag in the data schema, and the name in field_set is the column's name. The following data is used as an example to illustrate the mapping rules.
```json
st,t1=3,t2=4,t3=t3 c1=3i64,c3="passit",c2=false,c4=4f64 1626006833639000000
```
该行数据映射生成一个超级表: st, 其包含了 3 个类型为 nchar 的标签,分别是:t1, t2, t3。五个数据列,分别是 ts(timestamp),c1 (bigint),c3(binary),c2 (bool), c4 (bigint)。映射成为如下 SQL 语句:
The row data mapping generates a super table: `st`, which contains three labels of type NCHAR: t1, t2, t3. Five data columns are ts (timestamp), c1 (bigint), c3 (binary), c2 (bool), c4 (bigint). The mapping becomes the following SQL statement.
```json
create stable st (_ts timestamp, c1 bigint, c2 bool, c3 binary(6), c4 bigint) tags(t1 nchar(1), t2 nchar(1), t3 nchar(2))
```
## 数据模式变更处理
## Data schema change handling
本节将说明不同行数据写入情况下,对于数据模式的影响。
This section describes the impact on the data schema for different line protocol data writing cases.
在使用行协议写入一个明确的标识的字段类型的时候,后续更改该字段的类型定义,会出现明确的数据模式错误,即会触发写入 API 报告错误。如下所示,
When writing to an explicitly identified field type using the line protocol, subsequent changes to the field's type definition will result in an explicit data schema error, i.e., will trigger a write API report error. As shown below, the
```json
st,t1=3,t2=4,t3=t3 c1=3i64,c3="passit",c2=false,c4=4 1626006833639000000
st,t1=3,t2=4,t3=t3 c1=3i64,c3="passit",c2=false,c4=4i 1626006833640000000
st,t1=3,t2=4,t3=t3 c1=3i64,c3="passit",c2=false,c4=4 1626006833639000000
st,t1=3,t2=4,t3=t3 c1=3i64,c3="passit",c2=false,c4=4i 1626006833640000000
```
第一行的数据类型映射将 c4 列定义为 Double, 但是第二行的数据又通过数值后缀方式声明该列为 BigInt, 由此会触发无模式写入的解析错误。
The data type mapping in the first row defines column c4 as DOUBLE, but the data in the second row is declared as BIGINT by the numeric suffix, which triggers a parsing error with schemaless writing.
如果列前面的行协议将数据列声明为了 binary, 后续的要求长度更长的 binary 长度,此时会触发超级表模式的变更。
If the line protocol before the column declares the data column as BINARY, the subsequent one requires a longer binary length, which triggers a super table schema change.
```json
st,t1=3,t2=4,t3=t3 c1=3i64,c5="pass" 1626006833639000000
st,t1=3,t2=4,t3=t3 c1=3i64,c5="passit" 1626006833640000000
st,t1=3,t2=4,t3=t3 c1=3i64,c5="pass" 1626006833639000000
st,t1=3,t2=4,t3=t3 c1=3i64,c5="passit" 1626006833640000000
```
第一行中行协议解析会声明 c5 列是一个 binary(4)的字段,第二次行数据写入会提取列 c5 仍然是 binary 列,但是其宽度为 6,此时需要将 binary 的宽度增加到能够容纳 新字符串的宽度。
The first line of the line protocol parsing will declare column c5 is a BINARY(4) field, the second line data write will extract column c5 is still a BINARY column. Still, its width is 6, then you need to increase the width of the BINARY field to be able to accommodate the new string.
```json
st,t1=3,t2=4,t3=t3 c1=3i64 1626006833639000000
st,t1=3,t2=4,t3=t3 c1=3i64,c6="passit" 1626006833640000000
st,t1=3,t2=4,t3=t3 c1=3i64 1626006833639000000
st,t1=3,t2=4,t3=t3 c1=3i64,c6="passit" 1626006833640000000
```
第二行数据相对于第一行来说增加了一个列 c6,类型为 binary(6)。那么此时会自动增加一个列 c6, 类型为 binary(6)。
The second line of data has an additional column c6 of type BINARY(6) compared to the first row. Then a column c6 of type BINARY(6) is automatically added at this point.
## 写入完整性
## Write integrity
TDengine 提供数据写入的幂等性保证,即您可以反复调用 API 进行出错数据的写入操作。但是不提供多行数据写入的原子性保证。即在多行数据一批次写入过程中,会出现部分数据写入成功,部分数据写入失败的情况。
TDengine provides idempotency guarantees for data writing, i.e., you can repeatedly call the API to write data with errors. However, it does not give atomicity guarantees for writing multiple rows of data. During the process of writing numerous rows of data in one batch, some data will be written successfully, and some data will fail.
## 错误码
## Error code
如果是无模式写入过程中的数据本身错误,应用会得到 TSDB_CODE_TSC_LINE_SYNTAX_ERROR
错误信息,该错误信息表明错误发生在写入文本中。其他的错误码与原系统一致,可以通过
taos_errstr 获取具体的错误原因。
If it is an error in the data itself during the schemaless writing process, the application will get `TSDB_CODE_TSC_LINE_SYNTAX_ERROR` error message, which indicates that the error occurred in writing. The other error codes are consistent with the TDengine and can be obtained via the `taos_errstr()` to get the specific cause of the error.
label: Schemaless 写入
label: Schemaless writing
label: Reference
link:
slug: /reference/
type: generated-index
description: "参考指南是对 TDengine 本身、 TDengine 各语言连接器及自带的工具最详细的介绍。"
......@@ -19,7 +19,7 @@ password = "taosdata"
The default database name written by taosAdapter is `collectd`. You can also modify the taosAdapter configuration file dbs entry to specify a different name. user and password are the values configured by the actual TDengine. After changing the configuration file, you need to restart the taosAdapter.
- You can also enable the taosAdapter to receive collectd data by using the taosAdapter command line parameters or by setting environment variables.
- You can also enable the taosAdapter to receive collectd data by using the taosAdapter command-line parameters or by setting environment variables.
### Configure collectd
#collectd
......
......@@ -21,7 +21,7 @@ password = "taosdata"
The default database name written by the taosAdapter is `icinga2`. You can also modify the taosAdapter configuration file dbs entry to specify a different name. user and password are the values configured by the actual TDengine. You need to restart the taosAdapter after modification.
- You can also enable taosAdapter to receive icinga2 data by using the taosAdapter command line parameters or setting environment variables.
- You can also enable taosAdapter to receive icinga2 data by using the taosAdapter command-line parameters or setting environment variables.
### Configure icinga3
......
Configuring Prometheus is done by editing the Prometheus configuration file prometheus.yml (default location /etc/prometheus/prometheus.yml).
Configuring Prometheus is done by editing the Prometheus configuration file prometheus.yml (default location `/etc/prometheus/prometheus.yml`).
### Configuring third-party database addresses
......
......@@ -27,7 +27,7 @@ deleteTimings = true
The default database name written by taosAdapter is `statsd`. To specify a different name, you can also modify the taosAdapter configuration file db entry. user and password fill in the actual TDengine configuration values. After changing the configuration file, you need to restart the taosAdapter.
- You can also enable taosAdapter to receive StatsD data by using the taosAdapter command line parameters or setting environment variables.
- You can also enable taosAdapter to receive StatsD data by using the taosAdapter command-line parameters or setting environment variables.
### Configuring StatsD
......
......@@ -19,7 +19,7 @@ password = "taosdata"
The taosAdapter writes to the database with the default name `tcollector`. You can also modify the taosAdapter configuration file dbs entry to specify a different name. user and password fill in the actual TDengine configuration values. After changing the configuration file, you need to restart the taosAdapter.
- You can also enable taosAdapter to receive tcollector data by using the taosAdapter command line parameters or setting environment variables.
- You can also enable taosAdapter to receive tcollector data by using the taosAdapter command-line parameters or setting environment variables.
### Configuring TCollector
......
In the Telegraf configuration file (default location /etc/telegraf/telegraf.conf) add the outputs.http output module configuration.
In the Telegraf configuration file (default location `/etc/telegraf/telegraf.conf`) add an `outputs.http` section.
```
[[outputs.http]]
......@@ -24,4 +24,3 @@ An example is as follows.
data_format = "influx"
influx_max_line_bytes = 250
```
---
title: Reference
---
The reference guide is the detailed introduction to TDengine, various TDengine's connectors in different languages, and the tools that come with it.
```mdx-code-block
import DocCardList from '@theme/DocCardList';
import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
<DocCardList items={useCurrentSidebarCategory().items}/>
```
\ No newline at end of file
......@@ -7,6 +7,12 @@ TDengine can be quickly integrated with the open-source data visualization syste
You can learn more about using the TDengine plugin on [GitHub](https://github.com/taosdata/grafanaplugin/blob/master/README.md).
## Prerequisites
In order for Grafana to add the TDengine data source successfully, the following preparations are required:
1. The TDengine cluster is deployed and functioning properly
2. taosAdapter is installed and running properly. Please refer to the taosAdapter manual for details.
## Installing Grafana
TDengine currently supports Grafana versions 7.0 and above. Users can go to the Grafana official website to download the installation package and execute the installation according to the current operating system. The download address is as follows: <https://grafana.com/grafana/download>.
......
......@@ -3,7 +3,7 @@ sidebar_label: TCollector
title: TCollector writing
---
import Tcollector from "../14-reference/_tcollector.mdx"
import TCollector from "../14-reference/_tcollector.mdx"
TCollector is part of openTSDB and collects client computer's logs to send to the database.
......@@ -17,7 +17,7 @@ To write data to the TDengine via TCollector requires the following preparations
- TCollector has been installed. Please refer to [official documentation](http://opentsdb.net/docs/build/html/user_guide/utilities/tcollector.html#installation-of-tcollector) for TCollector installation
## Configuration steps
<Tcollector />
<TCollector />
## Verification method
......
---
sidebar_label: EMQ Broker
title: EMQ Broker writing
sidebar_label: EMQX Broker
title: EMQX Broker writing
---
MQTT is a popular IoT data transfer protocol, [EMQ](https://github.com/emqx/emqx) is an open-source MQTT Broker software, without any code, only need to use "rules" in EMQ Dashboard to do simple configuration. You can write MQTT data directly to TDengine. EMQ X supports saving data to TDengine by sending it to web services and provides a native TDengine driver for direct saving in the Enterprise Edition. Please refer to the [EMQ official documentation](https://www.emqx.io/docs/en/v4.4/rule/rule-engine.html) for details on how to use it. tdengine).
MQTT is a popular IoT data transfer protocol, [EMQX](https://github.com/emqx/emqx) is an open-source MQTT Broker software, without any code, only need to use "rules" in EMQX Dashboard to do simple configuration. You can write MQTT data directly to TDengine. EMQX supports saving data to TDengine by sending it to web services and provides a native TDengine driver for direct saving in the Enterprise Edition. Please refer to the [EMQX official documentation](https://www.emqx.io/docs/en/v4.4/rule/rule-engine.html) for details on how to use it. tdengine).
## Prerequisites
......@@ -34,7 +34,7 @@ Depending on the current operating system, users can download the installation p
CREATE TABLE sensor_data (ts timestamp, temperature float, humidity float, volume float, PM10 float, pm25 float, SO2 float, NO2 float, CO float, sensor_id NCHAR(255), area TINYINT, coll_time timestamp);
```
Note: The table schema is based on the blog [(In Chinese) Data Transfer, Storage, Presentation, EMQ X + TDengine Build MQTT IoT Data Visualization Platform](https://www.taosdata.com/blog/2020/08/04/1722.html) as an example. Subsequent operations are carried out with this blog scenario too. Please modify it according to your actual application scenario.
Note: The table schema is based on the blog [(In Chinese) Data Transfer, Storage, Presentation, EMQX + TDengine Build MQTT IoT Data Visualization Platform](https://www.taosdata.com/blog/2020/08/04/1722.html) as an example. Subsequent operations are carried out with this blog scenario too. Please modify it according to your actual application scenario.
## Configuring EMQX Rules
......@@ -187,4 +187,4 @@ Use the TDengine CLI program to log in and query the appropriate databases and t
![img](./emqx/check-result-in-taos.png)
Please refer to the [TDengine official documentation](https://docs.taosdata.com/) for more details on how to use TDengine.
EMQX Please refer to the [EMQ official documentation](https://www.emqx.io/docs/en/v4.4/rule/rule-engine.html) for details on how to use EMQX.
EMQX Please refer to the [EMQX official documentation](https://www.emqx.io/docs/en/v4.4/rule/rule-engine.html) for details on how to use EMQX.
label: Third Party Tools
link:
type: generated-index
slug: /third-party/
description: TDengine's support for standard SQL commands, common database connector standards (e.g., JDBC), ORM, and other popular time-series database writing protocols (e.g., InfluxDB Line Protocol, OpenTSDB JSON, OpenTSDB Telnet, etc.) makes TDengine very easy to use with third-party tools.
---
title: Third Party Tools
---
TDengine's support for standard SQL commands, common database connector standards (e.g., JDBC), ORM, and other popular time-series database writing protocols (e.g., InfluxDB Line Protocol, OpenTSDB JSON, OpenTSDB Telnet, etc.) makes TDengine very easy to use with third-party tools.
```mdx-code-block
import DocCardList from '@theme/DocCardList';
import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
<DocCardList items={useCurrentSidebarCategory().items}/>
```
\ No newline at end of file
......@@ -14,7 +14,7 @@ Logical structure diagram of TDengine distributed architecture as following:
![TDengine architecture diagram](structure.png)
<center> Figure 1: TDengine architecture diagram </center>
A complete TDengine system runs on one or more physical nodes. Logically, it includes data node (dnode), TDengine application driver (TAOSC) and application (app). There are one or more data nodes in the system, which form a cluster. The application interacts with the TDengine cluster through TAOSC's API. The following is a brief introduction to each logical unit.
A complete TDengine system runs on one or more physical nodes. Logically, it includes data node (dnode), TDengine client driver (TAOSC) and application (app). There are one or more data nodes in the system, which form a cluster. The application interacts with the TDengine cluster through TAOSC's API. The following is a brief introduction to each logical unit.
**Physical node (pnode)**: A pnode is a computer that runs independently and has its own computing, storage and network capabilities. It can be a physical machine, virtual machine, or Docker container installed with OS. The physical node is identified by its configured FQDN (Fully Qualified Domain Name). TDengine relies entirely on FQDN for network communication. If you don't know about FQDN, please check [wikipedia](https://en.wikipedia.org/wiki/Fully_qualified_domain_name).
......@@ -30,7 +30,7 @@ A complete TDengine system runs on one or more physical nodes. Logically, it inc
### Node Communication
**Communication mode**: The communication among each data node of TDengine system, and among the application driver and each data node is carried out through TCP/UDP. Considering an IoT scenario, the data writing packets are generally not large, so TDengine uses UDP in addition to TCP for transmission, because UDP is more efficient and is not limited by the number of connections. TDengine implements its own timeout, retransmission, confirmation and other mechanisms to ensure reliable transmission of UDP. For packets with a data volume of less than 15K, UDP is adopted for transmission, and TCP is automatically adopted for transmission of packets with a data volume of more than 15K or query operations. At the same time, TDengine will automatically compress/decompress the data, digital sign/authenticate the data according to the configuration and data packet. For data replication among data nodes, only TCP is used for data transportation.
**Communication mode**: The communication among each data node of TDengine system, and among the client driver and each data node is carried out through TCP/UDP. Considering an IoT scenario, the data writing packets are generally not large, so TDengine uses UDP in addition to TCP for transmission, because UDP is more efficient and is not limited by the number of connections. TDengine implements its own timeout, retransmission, confirmation and other mechanisms to ensure reliable transmission of UDP. For packets with a data volume of less than 15K, UDP is adopted for transmission, and TCP is automatically adopted for transmission of packets with a data volume of more than 15K or query operations. At the same time, TDengine will automatically compress/decompress the data, digital sign/authenticate the data according to the configuration and data packet. For data replication among data nodes, only TCP is used for data transportation.
**FQDN configuration:** A data node has one or more FQDNs, which can be specified in the system configuration file taos.cfg with the parameter “fqdn”. If it is not specified, the system will automatically use the hostname of the computer as its FQDN. If the node is not configured with FQDN, you can directly set the configuration parameter “fqdn” of the node to its IP address. However, IP is not recommended because IP address may be changed, and once it changes, the cluster will not work properly. The EP (End Point) of a data node consists of FQDN + Port. With FQDN, it is necessary to ensure the DNS service is running, or hosts files on nodes are configured properly.
......
label: TDengine Inside
link:
slug: /tdinternal/
type: generated-index
\ No newline at end of file
label: TDengine Inside
\ No newline at end of file
---
title: TDengine Inside
---
```mdx-code-block
import DocCardList from '@theme/DocCardList';
import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
<DocCardList items={useCurrentSidebarCategory().items}/>
```
\ No newline at end of file
......@@ -34,7 +34,7 @@ Please refer to the [official documentation](https://grafana.com/grafana/downloa
### TDengine
Download the latest TDengine-server 2.4.0.x or above from the [Downloads](http://taosdata.com/cn/all-downloads/) page on the Taos Data website and install it.
Download the latest TDengine-server 2.4.0.x or above from the [Downloads](http://taosdata.com/cn/all-downloads/) page on the TAOSData website and install it.
## Data Connection Setup
......
......@@ -99,7 +99,7 @@ This chapter describes the differences between OpenTSDB and TDengine at the syst
TDengine currently only supports Grafana for visual kanban rendering, so if your application uses front-end kanban boards other than Grafana (e.g., [TSDash](https://github.com/facebook/tsdash), [Status Wolf](https://github) .com/box/StatusWolf), etc.). You cannot directly migrate those front-end kanbans to TDengine, and the front-end kanban will need to be ported to Grafana to work correctly.
TDengine version 2.3.0.x only supports collectd and StatsD as data collection aggregation software but will provide more data collection aggregation software in the future. If you use other data aggregators on the collection side, your application needs to be ported to these two data aggregation systems to write data correctly.
In addition to the two data aggregator software protocols mentioned above, TDengine also supports writing data directly via InfluxDB's row protocol and OpenTSDB's data writing protocol, JSON format. You can rewrite the logic on the data push side to write data using the row protocols supported by TDengine.
In addition to the two data aggregator software protocols mentioned above, TDengine also supports writing data directly via InfluxDB's line protocol and OpenTSDB's data writing protocol, JSON format. You can rewrite the logic on the data push side to write data using the line protocols supported by TDengine.
In addition, if your application uses the following features of OpenTSDB, you need to understand the following considerations before migrating your application to TDengine.
......
label: Application Practice
link:
slug: /application/
type: generated-index
---
title: Application Practice
---
```mdx-code-block
import DocCardList from '@theme/DocCardList';
import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
<DocCardList items={useCurrentSidebarCategory().items}/>
```
\ No newline at end of file
---
title: 常见问题及反馈
sidebar_label: FAQ
title: Frequently Asked Questions
---
## 问题反馈
## Submit an Issue
如果 FAQ 中的信息不能够帮到您,需要 TDengine 技术团队的技术支持与协助,请将以下两个目录中内容打包:
If the tips in FAQ don't help much, please submit an issue on [GitHub](https://github.com/taosdata/TDengine) to describe your problem description, including TDengine version, hardware and OS information, the steps to reproduce the problem, etc. It would be very helpful if you package the contents in `/var/log/taos` and `/etc/taos` and upload. These two are the default directories used by TDengine, if they have been changed in your configuration, please use according to the actual configuration. It's recommended to firstly set `debugFlag` to 135 in `taos.cfg`, restart `taosd`, then reproduce the problem and collect logs. If you don't want to restart, an alternative way of setting `debugFlag` is executing `alter dnode <dnode_id> debugFlag 135` command in TDengine CLI `taos`. During normal running, however, please make sure `debugFlag` is set to 131.
1. /var/log/taos (如果没有修改过默认路径)
2. /etc/taos
## Frequently Asked Questions
附上必要的问题描述,包括使用的 TDengine 版本信息、平台环境信息、发生该问题的执行操作、出现问题的表征及大概的时间,在 [GitHub](https://github.com/taosdata/TDengine) 提交 issue。
**1. How to upgrade to TDengine 2.0 from older version? ☆☆☆**
为了保证有足够的 debug 信息,如果问题能够重复,请修改/etc/taos/taos.cfg 文件,最后面添加一行“debugFlag 135"(不带引号本身),然后重启 taosd, 重复问题,然后再递交。也可以通过如下 SQL 语句,临时设置 taosd 的日志级别。
version 2.x is not compatible with version 1.x regarding configuration file and data file, please do following before upgrading:
```
alter dnode <dnode_id> debugFlag 135;
```
1. Delete configuration files: `sudo rm -rf /etc/taos/taos.cfg`
2. Delete log files: `sudo rm -rf /var/log/taos/`
3. Delete data files if the data doesn't need to be kept: `sudo rm -rf /var/lib/taos/`
4. Install latests 2.x version
5. If the data needs to be kept and migrated to newer version, please contact professional service of TDengine for assistance
但系统正常运行时,请一定将 debugFlag 设置为 131,否则会产生大量的日志信息,降低系统效率。
**2. How to handle "Unable to establish connection"**
## 常见问题列表
When the client is unable to connect to the server, you can try following ways to find out why.
**1. TDengine2.0 之前的版本升级到 2.0 及以上的版本应该注意什么?☆☆☆**
1. Check the network
2.0 版在之前版本的基础上,进行了完全的重构,配置文件和数据文件是不兼容的。在升级之前务必进行如下操作:
- Check if the hosts where the client and server are running can be accessible to each other, for example by `ping` command.
- Check if the TCP/UDP on port 6030-6042 are open for access if firewall is enabled. It's better to firstly disable firewall for diagnostics.
- Check if the FQDN and serverPort are configured correctly in `taos.cfg` used by the server side
- Check if the `firstEp` is set properly in the `taos.cfg` used by the client side
1. 删除配置文件,执行 `sudo rm -rf /etc/taos/taos.cfg`
2. 删除日志文件,执行 `sudo rm -rf /var/log/taos/`
3. 确保数据已经不再需要的前提下,删除数据文件,执行 `sudo rm -rf /var/lib/taos/`
4. 安装最新稳定版本的 TDengine
5. 如果需要迁移数据或者数据文件损坏,请联系涛思数据官方技术支持团队,进行协助解决
2. Make sure the client version and server version are same.
**2. Windows 平台下 JDBCDriver 找不到动态链接库,怎么办?**
3. On server side, check the running status of `taosd` by executing `systemctl status taosd` . If your server is started using another way instead of `systemctl`, use the proper method to check whether the server process is running normally.
请看为此问题撰写的[技术博客](https://www.taosdata.com/blog/2019/12/03/950.html)。
4. If using connector of Python, Java, Go, Rust, C#, node.JS on Linux to connect toe the server, please make sure `libtaos.so` is in directory `/usr/local/taos/driver` and `/usr/local/taos/driver` is in system lib search environment variable `LD_LIBRARY_PATH`.
**3. 创建数据表时提示 more dnodes are needed**
5. If using connector on Windows, please make sure `C:\TDengine\driver\taos.dll` is in your system lib search path, it's suggested to put `taos.dll` under `C:\Windows\System32`.
请看为此问题撰写的[技术博客](https://www.taosdata.com/blog/2019/12/03/965.html)。
6. Some advanced network diagnostics tools
**4. 如何让 TDengine crash 时生成 core 文件?**
- On Linux system tool `nc` can be used to check whether the TCP/UDP can be accessible on a specified port
Check whether a UDP port is open: `nc -vuz {hostIP} {port} `
Check whether a TCP port on server side is open: `nc -l {port}`
Check whether a TCP port on client side is open: `nc {hostIP} {port}`
请看为此问题撰写的[技术博客](https://www.taosdata.com/blog/2019/12/06/974.html)。
- On Windows system `Net-TestConnection -ComputerName {fqdn} -Port {port}` on PowerShell can be used to check whether the port on serer side is open for access.
**5. 遇到错误“Unable to establish connection”, 我怎么办?**
7. TDengine CLI `taos` can also be used to check network, please refer to [TDengine CLI](/reference/taos-shell).
客户端遇到连接故障,请按照下面的步骤进行检查:
**3. How to handle "Unexpected generic error in RPC" or "Unable to resolve FQDN" ?**
1. 检查网络环境
This error is caused because the FQDN can't be resolved. Please try following ways:
- 云服务器:检查云服务器的安全组是否打开 TCP/UDP 端口 6030-6042 的访问权限
- 本地虚拟机:检查网络能否 ping 通,尽量避免使用`localhost` 作为 hostname
- 公司服务器:如果为 NAT 网络环境,请务必检查服务器能否将消息返回值客户端
1. Check whether the FQDN is configured properly on the server side
2. If DSN server is configured in the network, please check whether it works; otherwise, check `/etc/hosts` to see whether the FQDN is configured with correct IP
3. If the network configuration on the server side is OK, try to ping the server from the client side.
4. If TDengine has been used before with an old hostname then the hostname has been changed, please check `/var/lib/taos/taos/dnode/dnodeEps.json`. Before setting up a new TDengine cluster, it's better to cleanup the directories configured.
2. 确保客户端与服务端版本号是完全一致的,开源社区版和企业版也不能混用
**4. "Invalid SQL" is returned even though the Syntax is correct**
3. 在服务器,执行 `systemctl status taosd` 检查*taosd*运行状态。如果没有运行,启动*taosd*
"Invalid SQL" is returned when the length of SQL statement exceeds maximum allowed length or the syntax is not correct.
4. 确认客户端连接时指定了正确的服务器 FQDN (Fully Qualified Domain Name —— 可在服务器上执行 Linux 命令 hostname -f 获得),FQDN 配置参考:[一篇文章说清楚 TDengine 的 FQDN](https://www.taosdata.com/blog/2020/09/11/1824.html)。
**5. Whether validation queries are supported?**
5. ping 服务器 FQDN,如果没有反应,请检查你的网络,DNS 设置,或客户端所在计算机的系统 hosts 文件。如果部署的是 TDengine 集群,客户端需要能 ping 通所有集群节点的 FQDN。
6. 检查防火墙设置(Ubuntu 使用 ufw status,CentOS 使用 firewall-cmd --list-port),确认 TCP/UDP 端口 6030-6042 是打开的
7. 对于 Linux 上的 JDBC(ODBC, Python, Go 等接口类似)连接, 确保*libtaos.so*在目录*/usr/local/taos/driver*里, 并且*/usr/local/taos/driver*在系统库函数搜索路径*LD_LIBRARY_PATH*里
8. 对于 Windows 上的 JDBC, ODBC, Python, Go 等连接,确保*C:\TDengine\driver\taos.dll*在你的系统库函数搜索目录里 (建议*taos.dll*放在目录 _C:\Windows\System32_)
9. 如果仍不能排除连接故障
- Linux 系统请使用命令行工具 nc 来分别判断指定端口的 TCP 和 UDP 连接是否通畅
检查 UDP 端口连接是否工作:`nc -vuz {hostIP} {port} `
检查服务器侧 TCP 端口连接是否工作:`nc -l {port}`
检查客户端侧 TCP 端口连接是否工作:`nc {hostIP} {port}`
- Windows 系统请使用 PowerShell 命令 Net-TestConnection -ComputerName {fqdn} -Port {port} 检测服务段端口是否访问
10. 也可以使用 taos 程序内嵌的网络连通检测功能,来验证服务器和客户端之间指定的端口连接是否通畅(包括 TCP 和 UDP):[TDengine 内嵌网络检测工具使用指南](https://www.taosdata.com/blog/2020/09/08/1816.html)。
**6. 遇到错误“Unexpected generic error in RPC”或者“Unable to resolve FQDN”,我怎么办?**
产生这个错误,是由于客户端或数据节点无法解析 FQDN(Fully Qualified Domain Name)导致。对于 TAOS Shell 或客户端应用,请做如下检查:
1. 请检查连接的服务器的 FQDN 是否正确,FQDN 配置参考:[一篇文章说清楚 TDengine 的 FQDN](https://www.taosdata.com/blog/2020/09/11/1824.html)
2. 如果网络配置有 DNS server,请检查是否正常工作
3. 如果网络没有配置 DNS server,请检查客户端所在机器的 hosts 文件,查看该 FQDN 是否配置,并是否有正确的 IP 地址
4. 如果网络配置 OK,从客户端所在机器,你需要能 Ping 该连接的 FQDN,否则客户端是无法连接服务器的
5. 如果服务器曾经使用过 TDengine,且更改过 hostname,建议检查 data 目录的 dnodeEps.json 是否符合当前配置的 EP,路径默认为/var/lib/taos/dnode。正常情况下,建议更换新的数据目录或者备份后删除以前的数据目录,这样可以避免该问题。
6. 检查/etc/hosts 和/etc/hostname 是否是预配置的 FQDN
**7. 虽然语法正确,为什么我还是得到 "Invalid SQL" 错误**
如果你确认语法正确,2.0 之前版本,请检查 SQL 语句长度是否超过 64K。如果超过,也会返回这个错误。
**8. 是否支持 validation queries?**
TDengine 还没有一组专用的 validation queries。然而建议你使用系统监测的数据库”log"来做。
It's suggested to use a builtin database named as `log` to monitor.
<a class="anchor" id="update"></a>
**9. 我可以删除或更新一条记录吗?**
TDengine 目前尚不支持删除功能,未来根据用户需求可能会支持。
从 2.0.8.0 开始,TDengine 支持更新已经写入数据的功能。使用更新功能需要在创建数据库时使用 UPDATE 1 参数,之后可以使用 INSERT INTO 命令更新已经写入的相同时间戳数据。UPDATE 参数不支持 ALTER DATABASE 命令修改。没有使用 UPDATE 1 参数创建的数据库,写入相同时间戳的数据不会修改之前的数据,也不会报错。
**6. Can I delete a record?**
另需注意,在 UPDATE 设置为 0 时,后发送的相同时间戳的数据会被直接丢弃,但并不会报错,而且仍然会被计入 affected rows (所以不能利用 INSERT 指令的返回信息进行时间戳查重)。这样设计的主要原因是,TDengine 把写入的数据看做一个数据流,无论时间戳是否出现冲突,TDengine 都认为产生数据的原始设备真实地产生了这样的数据。UPDATE 参数只是控制这样的流数据在进行持久化时要怎样处理——UPDATE 为 0 时,表示先写入的数据覆盖后写入的数据;而 UPDATE 为 1 时,表示后写入的数据覆盖先写入的数据。这种覆盖关系如何选择,取决于对数据的后续使用和统计中,希望以先还是后生成的数据为准。
From version 2.6.0.0 Enterprise version, deleting data can be supported.
此外,从 2.1.7.0 版本开始,支持将 UPDATE 参数设为 2,表示“支持部分列更新”。也即,当 UPDATE 设为 1 时,如果更新一个数据行,其中某些列没有提供取值,那么这些列会被设为 NULL;而当 UPDATE 设为 2 时,如果更新一个数据行,其中某些列没有提供取值,那么这些列会保持原有数据行中的对应值。
**7. How to create a table of over 1024 columns?**
**10. 我怎么创建超过 1024 列的表?**
From version 2.1.7.0, at most 4096 columns can be defined for a table.
使用 2.0 及其以上版本,默认支持 1024 列;2.0 之前的版本,TDengine 最大允许创建 250 列的表。但是如果确实超过限值,建议按照数据特性,逻辑地将这个宽表分解成几个小表。(从 2.1.7.0 版本开始,表的最大列数增加到了 4096 列。)
**8. How to improve the efficiency of inserting data?**
**11. 最有效的写入数据的方法是什么?**
Inserting data in batch is a good practice. Single SQL statement can insert data for one or multiple tables in batch.
批量插入。每条写入语句可以一张表同时插入多条记录,也可以同时插入多张表的多条记录。
**9. JDBC Error: the excuted SQL is not a DML or a DDL?**
**12. Windows 系统下插入的 nchar 类数据中的汉字被解析成了乱码如何解决?**
Please upgrade to latest JDBC driver, for details please refer to [Java Connector](/reference/connector/java)
Windows 下插入 nchar 类的数据中如果有中文,请先确认系统的地区设置成了中国(在 Control Panel 里可以设置),这时 cmd 中的`taos`客户端应该已经可以正常工作了;如果是在 IDE 里开发 Java 应用,比如 Eclipse, Intellij,请确认 IDE 里的文件编码为 GBK(这是 Java 默认的编码类型),然后在生成 Connection 时,初始化客户端的配置,具体语句如下:
**10. Failed to connect with error "invalid timestamp"**
```JAVA
Class.forName("com.taosdata.jdbc.TSDBDriver");
Properties properties = new Properties();
properties.setProperty(TSDBDriver.LOCALE_KEY, "UTF-8");
Connection = DriverManager.getConnection(url, properties);
```
**13.JDBC 报错: the excuted SQL is not a DML or a DDL?**
请更新至最新的 JDBC 驱动
```xml
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>2.0.27</version>
</dependency>
```
The most common reason is that the time setting is not aligned on the client side and the server side. On Linux system, please use `ntpdate` command. On Windows system, please enable automatic sync in system time setting.
**14. taos connect failed, reason&#58; invalid timestamp**
**11. Table name is not shown in full**
常见原因是服务器和客户端时间没有校准,可以通过和时间服务器同步的方式(Linux 下使用 ntpdate 命令,Windows 在系统时间设置中选择自动同步)校准。
**15. Table names are not shown in full**

Because the display width of the taos shell in the terminal is limited, long table names may be shown incompletely, and operating on such a truncated name causes a "Table does not exist" error. This can be solved by changing the setting maxBinaryDisplayWidth in taos.cfg, by entering the command `set max_binary_display_width 100` directly, or by appending `\G` at the end of the command to change how the result is displayed.
**16. How do I migrate data?**

TDengine uniquely identifies a machine by its hostname. When moving data files from machine A to machine B, note the following:

- For versions 2.0.0.0 through 2.0.6.x, reconfigure the hostname of machine B to be that of machine A.
- For version 2.0.7.0 and later, go to /var/lib/taos/dnode, fix the FQDN corresponding to the dnodeId in dnodeEps.json, and restart. Make sure this file is identical on all machines of the cluster.
- The storage structures of versions 1.x and 2.x are incompatible; you need a migration tool, or an application you develop yourself, to export and re-import the data.
**17. How do I temporarily adjust the log level in the command-line program taos?**

For easier debugging, from version 2.0.16 the command-line program taos provides two new log-related commands. The following SQL command adjusts the log level temporarily:
```sql
ALTER LOCAL flag_name flag_value;
```
This changes the log level of a specific module in the current command-line program only (if the taos command-line program is restarted, the setting has to be applied again):

- flag_name can be: debugFlag, cDebugFlag, tmrDebugFlag, uDebugFlag, rpcDebugFlag
- flag_value can be: 131 (log errors and warnings), 135 (also log debug messages), 143 (also log trace messages)
```sql
ALTER LOCAL RESETLOG;
```
This clears all log files generated by the client on this machine.
<a class="anchor" id="timezone"></a>
**18. How are time zones of timestamps handled?**

In TDengine, the time zone of a timestamp is always handled by the client and has nothing to do with the server. Specifically, the client converts timestamps in SQL statements to UTC (that is, Unix timestamps) before handing them to the server for writing and querying; when reading data, the server likewise returns the raw data in UTC, and the client then converts the timestamps to the time zone required by the local system for display.

When processing timestamp strings, the client applies the following logic:

1. Without any special setting, the client defaults to the time zone of the operating system it runs on.
2. If the timezone parameter is set in taos.cfg, the client uses the setting from that configuration file.
3. If a timezone is explicitly specified when establishing the connection in the connector driver of C/C++/Java/Python and other programming languages, that specified time zone takes precedence. For example, the JDBC URL of the Java connector has a timezone parameter.
4. When writing SQL statements, you can also use Unix timestamps directly (e.g. `1554984068000`) or timestamp strings with a time zone, i.e. in RFC 3339 format (e.g. `2013-04-12T15:52:01.123+08:00`) or ISO 8601 format (e.g. `2013-04-12T15:52:01.123+0800`); the values of such timestamps are then no longer affected by any other time-zone setting.
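As an illustration (the table `d1001` is hypothetical), the following three statements all specify the same instant, 2019-04-11 12:01:08 UTC:

```sql
-- A Unix timestamp in milliseconds, an RFC 3339 string, and an ISO 8601 string;
-- all three denote the same instant, so with UPDATE 0 the later duplicates are discarded.
INSERT INTO d1001 VALUES (1554984068000, 10.2);
INSERT INTO d1001 VALUES ('2019-04-11T20:01:08.000+08:00', 10.2);
INSERT INTO d1001 VALUES ('2019-04-11T20:01:08.000+0800', 10.2);
```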
<a class="anchor" id="port"></a>
**19. Which network ports does TDengine use?**

TDengine 2.0 uses the network ports listed below (described assuming the default port 6030; if the settings in the configuration file are changed, the ports listed here change accordingly). Administrators can adjust firewall settings based on this information:
| Protocol | Default Port | Description | How to change |
| :--- | :-------- | :---------------------------------- | :--------------------------------------------------------------------------------------------------------------------------------- |
| TCP | 6030 | Communication between client and server. | Determined by the serverPort setting in the configuration file. |
| TCP | 6035 | Communication between nodes in a multi-node cluster. | Follows the serverPort setting. |
| TCP | 6040 | Data synchronization between nodes in a multi-node cluster. | Follows the serverPort setting. |
| TCP | 6041 | RESTful communication between client and server. | Follows the serverPort setting. Note that the taosAdapter configuration may differ; see the corresponding [documentation](/reference/taosadapter/). |
| TCP | 6042 | Service port of the Arbitrator. | Follows the Arbitrator startup parameters. |
| TCP | 6043 | Service port of TaosKeeper monitoring. | Follows the TaosKeeper startup parameters. |
| TCP | 6044 | Data ingestion port for StatsD. | Follows the taosAdapter startup parameters (version 2.3.0.1 and above). |
| TCP | 6045 | Data ingestion port for collectd. | Follows the taosAdapter startup parameters (version 2.3.0.1 and above). |
| TCP | 6060 | Network port of the Monitor service in the Enterprise Edition. | |
| UDP | 6030-6034 | Communication between client and server. | Follows the serverPort setting. |
| UDP | 6035-6039 | Communication between nodes in a multi-node cluster. | Follows the serverPort setting. |
**20. How do I fix a failed compilation of the component written in Go?**

TDengine 2.3.0.0 includes taosAdapter, an independent component written in Go that must run as a separate process. It replaces the httpd previously built into taosd, providing the original httpd functionality plus data ingestion for a number of other products (Prometheus, Telegraf, collectd, StatsD, and so on).

To compile the latest develop branch, first run `git submodule update --init --recursive` to download the taosAdapter repository code.

taosAdapter is compiled automatically by default and requires Go 1.14 or above. Go compilation errors are often caused by problems accessing go mod from within China and can be resolved by setting the Go environment variables:
```sh
go env -w GO111MODULE=on
go env -w GOPROXY=https://goproxy.cn,direct
```
If you want to keep using the previously built-in httpd, disable the compilation of taosAdapter and build the original httpd with `cmake .. -DBUILD_HTTP=true`.
---
title: Video Tutorials
---
## Open Courses

- [Open Course: An Open-Source, Efficient IoT Big Data Platform, Dissecting the TDengine Kernel](https://www.taosdata.com/blog/2020/12/25/2126.html)
## Video Tutorials

- [TDengine Video Tutorial - Quick Start](https://www.taosdata.com/blog/2020/11/11/1941.html)
- [TDengine Video Tutorial - Data Modeling](https://www.taosdata.com/blog/2020/11/11/1945.html)
- [TDengine Video Tutorial - Cluster Deployment](https://www.taosdata.com/blog/2020/11/11/1961.html)
- [TDengine Video Tutorial - Go Connector](https://www.taosdata.com/blog/2020/11/11/1951.html)
- [TDengine Video Tutorial - JDBC Connector](https://www.taosdata.com/blog/2020/11/11/1955.html)
- [TDengine Video Tutorial - Node.js Connector](https://www.taosdata.com/blog/2020/11/11/1957.html)
- [TDengine Video Tutorial - Python Connector](https://www.taosdata.com/blog/2020/11/11/1963.html)
- [TDengine Video Tutorial - RESTful Connector](https://www.taosdata.com/blog/2020/11/11/1965.html)
- [TDengine Video Tutorial - "Zero-Code" Operation Monitoring](https://www.taosdata.com/blog/2020/11/11/1959.html)
## Micro Courses

Follow the TDengine channel on WeChat Channels for carefully produced micro courses.
<img src="/img/shi-pin-hao.png" width={350} />
---
sidebar_label: TDengine in Docker
title: Deploy TDengine in Docker
---
Even though it is not recommended to deploy TDengine with Docker in a production environment, Docker nicely hides the differences between underlying operating systems, which makes it a good fit for installing and running TDengine during development, testing, or a first try. In particular, Docker makes it easy to try TDengine on macOS and Windows without installing a virtual machine or renting a Linux server. In addition, from version 2.0.14.0 the official TDengine images support the X86-64, X86, arm64, and arm32 platforms, so non-mainstream computers that can run Docker, such as NAS devices, Raspberry Pi, and embedded development boards, can also experience TDengine by following this document.
The following is a step-by-step guide to quickly setting up a single-node TDengine runtime environment with Docker to support development and testing.
## Install Docker

To install Docker, please refer to the [official Docker documentation](https://docs.docker.com/get-docker/).
After Docker is installed, you can check the Docker version from the command-line terminal. If the version number is printed, Docker has been installed successfully.
```bash
$ docker -v
Docker version 20.10.3, build 48d30b5
```
## Launch TDengine in Docker

### Launch TDengine Server
```bash
$ docker run -d -p 6030-6049:6030-6049 -p 6030-6049:6030-6049/udp tdengine/tdengine
526aa188da767ae94b244226a2b2eec2b5f17dd8eff592893d9ec0cd0f3a1ccd
```
This command starts a Docker container running TDengine server and maps the container's port range 6030-6049 to the same range on the host. If that range is already occupied on the host (for example by a TDengine server running on the host), map the container ports to a different, unused port range (for details see [TDengine 2.0 Port Description](/train-faq/faq#port)). Both TCP and UDP ports need to be open so that TDengine clients can reach the TDengine server.
- **docker run**: run a container with Docker
- **-d**: run the container in the background
- **-p**: specify port mappings. Note: even without port mapping you can still enter the container to use TDengine or develop applications; the container just cannot serve clients outside of it
- **tdengine/tdengine**: the official TDengine image to pull
- **526aa188da767ae94b244226a2b2eec2b5f17dd8eff592893d9ec0cd0f3a1ccd**: the long string returned is the container ID, which can also be used to refer to the container
Furthermore, `--name` can be used with `docker run` to name the container `tdengine`, `--hostname` to set its hostname to `tdengine-server`, and `-v` to mount local directories into the container, keeping data in sync between host and container and preventing data loss after the container is removed.
```bash
docker run -d --name tdengine --hostname="tdengine-server" -v ~/work/taos/log:/var/log/taos -v ~/work/taos/data:/var/lib/taos -p 6030-6049:6030-6049 -p 6030-6049:6030-6049/udp tdengine/tdengine
```
- **--name tdengine**: set the container name; the container can then be addressed by this name
- **--hostname=tdengine-server**: set the hostname of the Linux system inside the container; mapping the hostname to an IP avoids problems caused by the container IP changing
- **-v**: mount host directories into the container, so that data is not lost when the container is removed
### Check the container with docker ps
```bash
docker ps
```
Example output:
```
CONTAINER ID IMAGE COMMAND CREATED STATUS ···
c452519b0f9b tdengine/tdengine "taosd" 14 minutes ago Up 14 minutes ···
```
- **docker ps**: list all containers in the running state
- **CONTAINER ID**: container ID
- **IMAGE**: the image used
- **COMMAND**: the command run when the container was launched
- **CREATED**: when the container was created
- **STATUS**: container status; Up means running
### Enter the container with docker exec for development
```bash
$ docker exec -it tdengine /bin/bash
root@tdengine-server:~/TDengine-server-2.4.0.4#
```
- **docker exec**: enter the container with docker exec; the container does not stop when you exit
- **-i**: interactive mode
- **-t**: allocate a terminal
- **tdengine**: the container name; adjust it according to the output of docker ps
- **/bin/bash**: run bash to interact once inside the container
Inside the container, start the TDengine CLI `taos`:
```bash
root@tdengine-server:~/TDengine-server-2.4.0.4# taos
Welcome to the TDengine shell from Linux, Client Version:2.4.0.4
Copyright (c) 2020 by TAOS Data, Inc. All rights reserved.

taos>
```
When the TDengine CLI connects to the server successfully, it prints a welcome message and version information; otherwise, it prints an error message.
In the TDengine CLI, SQL commands can be executed to create/drop databases, tables, and STables, and to insert and query data. For details please refer to [TAOS SQL](/taos-sql/).
### Access TDengine from the host
If the `-p` command-line parameter was used to map the correct ports when launching the container, the TDengine CLI `taos` on the host can access the TDengine server running inside the Docker container, as long as `firstEp` is configured correctly for the client on the host.
```
$ taos
Welcome to the TDengine shell from Linux, Client Version:2.4.0.4
Copyright (c) 2020 by TAOS Data, Inc. All rights reserved.

taos>
```
You can also use curl on the host to access the TDengine server inside the container through its RESTful port:
```
curl -u root:taosdata -d 'show databases' 127.0.0.1:6041/rest/sql
```
Example output:
```
{"status":"succ","head":["name","created_time","ntables","vgroups","replica","quorum","days","keep0,keep1,keep(D)","cache(MB)","blocks","minrows","maxrows","wallevel","fsync","comp","cachelast","precision","update","status"],"column_meta":[["name",8,32],["created_time",9,8],["ntables",4,4],["vgroups",4,4],["replica",3,2],["quorum",3,2],["days",3,2],["keep0,keep1,keep(D)",8,24],["cache(MB)",4,4],["blocks",4,4],["minrows",4,4],["maxrows",4,4],["wallevel",2,1],["fsync",4,4],["comp",2,1],["cachelast",2,1],["precision",8,3],["update",2,1],["status",8,10]],"data":[["test","2021-08-18 06:01:11.021",10000,4,1,1,10,"3650,3650,3650",16,6,100,4096,1,3000,2,0,"ms",0,"ready"],["log","2021-08-18 05:51:51.065",4,1,1,1,10,"30,30,30",1,3,100,4096,1,3000,2,0,"us",0,"ready"]],"rows":2}
```
This command accesses the TDengine server through the REST API, connecting to port 6041 on the local machine; a response like the above confirms that the connection succeeded. For details of the REST API please refer to the [REST API documentation](/reference/rest-api/).
### Run TDengine server and taosAdapter inside a container
From version 2.4.0.0, the TDengine Docker image provides taosAdapter, an independently running component that replaces the HTTP server previously built into the taosd process. taosAdapter supports writing to and querying the TDengine server through a RESTful interface, and provides data ingestion interfaces compatible with InfluxDB/OpenTSDB, allowing InfluxDB/OpenTSDB applications to be ported to TDengine seamlessly. In the new images taosAdapter is enabled by default; it can be disabled by setting the environment variable TAOS_DISABLE_ADAPTER=true in the docker run command, and taosAdapter can also be run alone in a container without taosd.

Note: if taosAdapter runs in the container, additional ports may need to be mapped as required; for the default port configuration and how to change it, please refer to the [taosAdapter documentation](/reference/taosadapter/).
- Run both taosd and taosAdapter (the default) with the TDengine 2.4.0.4 image:
```bash
docker run -d --name tdengine-all -p 6030-6049:6030-6049 -p 6030-6049:6030-6049/udp tdengine/tdengine:2.4.0.4
```
- Run only taosAdapter with the TDengine 2.4.0.4 image (the firstEp configuration item or the TAOS_FIRST_EP environment variable must point to a running taosd, here the name of the container in which taosd is running):
```bash
docker run -d --name tdengine-taosa -p 6041-6049:6041-6049 -p 6041-6049:6041-6049/udp -e TAOS_FIRST_EP=tdengine-all tdengine/tdengine:2.4.0.4 taosadapter
```
- Run only taosd with the TDengine 2.4.0.4 image:
```bash
docker run -d --name tdengine-taosd -p 6030-6042:6030-6042 -p 6030-6042:6030-6042/udp -e TAOS_DISABLE_ADAPTER=true tdengine/tdengine:2.4.0.4
```
- Verify that the RESTful interface works, using curl:
```bash
curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'show databases;' 127.0.0.1:6041/rest/sql
```
Example output:
```
{"status":"succ","head":["name","created_time","ntables","vgroups","replica","quorum","days","keep","cache(MB)","blocks","minrows","maxrows","wallevel","fsync","comp","cachelast","precision","update","status"],"column_meta":[["name",8,32],["created_time",9,8],["ntables",4,4],["vgroups",4,4],["replica",3,2],["quorum",3,2],["days",3,2],["keep",8,24],["cache(MB)",4,4],["blocks",4,4],["minrows",4,4],["maxrows",4,4],["wallevel",2,1],["fsync",4,4],["comp",2,1],["cachelast",2,1],["precision",8,3],["update",2,1],["status",8,10]],"data":[["log","2021-12-28 09:18:55.765",10,1,1,1,10,"30",1,3,100,4096,1,3000,2,0,"us",0,"ready"]],"rows":1}
```
### Example: use taosBenchmark on the host to write data into TDengine server in the container

1. Run taosBenchmark (formerly named taosdemo) on the host to write data into the TDengine server in the Docker container:
```bash
$ taosBenchmark
......@@ -209,24 +208,11 @@ curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'show databases;' 127.0.0
Press enter key to continue or Ctrl-C to stop
```
After you press Enter, the command automatically creates a STable meters in database test, with 10,000 subtables named "d0" to "d9999" under it; each table has 10,000 rows, and each row has the four fields (ts, current, voltage, phase), with timestamps ranging from "2017-07-14 10:40:00 000" to "2017-07-14 10:40:09 999". Each table carries the tags location and groupId, with groupId set to 1 through 10 and location set to "beijing" or "shanghai". In total, 100,000,000 rows are inserted.
2. Enter the TDengine CLI and check the data generated by taosBenchmark.

- **Enter the CLI.**
```bash
root@c452519b0f9b:~/TDengine-server-2.4.0.4# taos
Welcome to the TDengine shell from Linux, Client Version:2.4.0.4
Copyright (c) 2020 by TAOS Data, Inc. All rights reserved.
taos>
```
- **Check the databases.**
```bash
$ taos> show databases;
......@@ -236,7 +222,7 @@ curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'show databases;' 127.0.0
```
- **Check the STable.**
```bash
$ taos> use test;
......@@ -250,7 +236,7 @@ curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'show databases;' 127.0.0
```
- **Check the tables, limiting the output to ten rows.**
```bash
$ taos> select * from test.t0 limit 10;
......@@ -273,7 +259,7 @@ curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'show databases;' 127.0.0
```
- **Check the tag values of table d0.**
```bash
$ taos> select groupid, location from test.d0;
......@@ -283,48 +269,17 @@ curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'show databases;' 127.0.0
Query OK, 1 row(s) in set (0.003490s)
```
### Example: use a data collection agent to write data into TDengine

taosAdapter supports several data collection agents (such as Telegraf, StatsD, and collectd). Here we simulate StatsD writing data by running the following command on the host:
```
echo "foo:1|c" | nc -u -w0 127.0.0.1 6044
```
Then the TDengine CLI can be used to query the database statsd and the STable foo that taosAdapter created automatically:
```
taos> show databases;
name | created_time | ntables | vgroups | replica | quorum | days | keep | cache(MB) | blocks | minrows | maxrows | wallevel | fsync | comp | cachelast | precision | update | status |
====================================================================================================================================================================================================================================================================================
log | 2021-12-28 09:18:55.765 | 12 | 1 | 1 | 1 | 10 | 30 | 1 | 3 | 100 | 4096 | 1 | 3000 | 2 | 0 | us | 0 | ready |
statsd | 2021-12-28 09:21:48.841 | 1 | 1 | 1 | 1 | 10 | 3650 | 16 | 6 | 100 | 4096 | 1 | 3000 | 2 | 0 | ns | 2 | ready |
Query OK, 2 row(s) in set (0.002112s)
taos> use statsd;
Database changed.
taos> show stables;
name | created_time | columns | tags | tables |
============================================================================================
foo | 2021-12-28 09:21:48.894 | 2 | 1 | 1 |
Query OK, 1 row(s) in set (0.001160s)
taos> select * from foo;
ts | value | metric_type |
=======================================================================================
2021-12-28 09:21:48.840820836 | 1 | counter |
Query OK, 1 row(s) in set (0.001639s)
taos>
```
You can see that the simulated data has been written into TDengine.

Many third-party tools can write data into TDengine through taosAdapter; for details please refer to [Third-Party Tools](/third-party/). From the third-party side, accessing a TDengine server inside a container is no different, as long as the endpoint, i.e. the FQDN of the host and the mapped port, is specified correctly.
## Stop the TDengine service running in Docker
```bash
docker stop tdengine
```
- **docker stop**: stop the specified running container
- **tdengine**: the container name
label: FAQ & Others
link:
  slug: /train-faq/
  type: generated-index
---
title: FAQ & Others
---
```mdx-code-block
import DocCardList from '@theme/DocCardList';
import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
<DocCardList items={useCurrentSidebarCategory().items}/>
```
\ No newline at end of file
# Efficient Data Writing

TDengine supports writing data through multiple interfaces, including SQL, Prometheus, Telegraf, collectd, StatsD, EMQX MQTT Broker, HiveMQ Broker, CSV files, etc.; Kafka, OPC, and other interfaces will be provided later. Data can be inserted one record at a time or in batches, for one data collection point or for multiple data collection points at the same time. Multi-threaded insertion, insertion of out-of-order data, and insertion of historical data are all supported.
## <a class="anchor" id="sql"></a>SQL 写入
......@@ -312,9 +312,9 @@ TCollector is a client-side process that gathers data from local collectors and sends the data to OpenTSDB
For taosAdapter configuration parameters, please refer to the output of `taosadapter --help` and the related documentation.
## <a class="anchor" id="emq"></a>EMQ Broker 直接写入
## <a class="anchor" id="emq"></a>EMQX Broker 直接写入
MQTT 是流行的物联网数据传输协议,[EMQX](https://github.com/emqx/emqx) 是一开源的 MQTT Broker 软件,无需任何代码,只需要在 EMQ Dashboard 里使用“规则”做简单配置,即可将 MQTT 的数据直接写入 TDengine。EMQX 支持通过 发送到 Web 服务的方式保存数据到 TDengine,也在企业版上提供原生的 TDengine 驱动实现直接保存。详细使用方法请参考 [EMQ 官方文档](https://docs.emqx.com/zh/enterprise/v4.4/rule/backend_tdengine.html#%E4%BF%9D%E5%AD%98%E6%95%B0%E6%8D%AE%E5%88%B0-tdengine)
MQTT 是流行的物联网数据传输协议,[EMQX](https://github.com/emqx/emqx) 是一开源的 MQTT Broker 软件,无需任何代码,只需要在 EMQX Dashboard 里使用“规则”做简单配置,即可将 MQTT 的数据直接写入 TDengine。EMQX 支持通过 发送到 Web 服务的方式保存数据到 TDengine,也在企业版上提供原生的 TDengine 驱动实现直接保存。详细使用方法请参考 [EMQX 官方文档](https://docs.emqx.com/zh/enterprise/v4.4/rule/backend_tdengine.html#%E4%BF%9D%E5%AD%98%E6%95%B0%E6%8D%AE%E5%88%B0-tdengine)
## <a class="anchor" id="hivemq"></a>HiveMQ Broker 直接写入
......
......@@ -864,9 +864,9 @@ Query OK, 1 row(s) in set (0.000141s)
Please refer to: [JDBC example](https://github.com/taosdata/TDengine/tree/develop/examples/JDBC)

## FAQ
* Why doesn't using Statement's addBatch() and executeBatch() for "batch writing/updating" bring a performance improvement?

  **Reason**: In TDengine's JDBC implementation, SQL statements submitted via the addBatch() method are executed one by one in the order they were added; this does not reduce the number of round trips to the server and therefore brings no performance gain.

  **Solution**: 1. concatenate multiple values clauses in a single insert statement; 2. insert concurrently with multiple threads; 3. write with parameter binding
* java.lang.UnsatisfiedLinkError: no taos in java.library.path
  **Reason**: the program cannot find the dependent native library taos.
......
# Efficient Data Writing
TDengine supports multiple ways to write data, including SQL, Prometheus, Telegraf, collectd, StatsD, EMQX MQTT Broker, HiveMQ Broker, CSV file, etc. Kafka, OPC and other interfaces will be provided in the future. Data can be inserted in one single record or in batches, data from one or multiple data collection points can be inserted at the same time. TDengine supports multi-thread insertion, out-of-order data insertion, and also historical data insertion.
## <a class="anchor" id="sql"></a> Data Writing via SQL
......@@ -303,9 +303,9 @@ TCollector is a client-side process that gathers data from local collectors and
Please find taosAdapter configuration and usage from `taosadapter --help` output.
## <a class="anchor" id="emq"></a> Data Writing via EMQ Broker
## <a class="anchor" id="emq"></a> Data Writing via EMQX Broker
[EMQX](https://github.com/emqx/emqx) is an open-source MQTT broker. Without any coding, a simple "rules" configuration in the EMQX Dashboard is enough to write MQTT data directly into TDengine. EMQX supports storing data to TDengine by sending it to a web service, and the Enterprise Edition also provides a native TDengine driver for direct storage. Please refer to the [EMQX official documents](https://docs.emqx.io/broker/latest/cn/rule/rule-example.html#%E4%BF%9D%E5%AD%98%E6%95%B0%E6%8D%AE%E5%88%B0-tdengine) for more details.
## <a class="anchor" id="hivemq"></a> Data Writing via HiveMQ Broker
......
......@@ -49,7 +49,7 @@ namespace insertCn
String table = stable + "_subtable_1";
var colData = new List<Object>{1637064040000,1,"涛思数据","保利广场","Beijing","China",
1637064041000,2,"涛思数据taosdata","保利广场baoli","Beijing","China",
1637064042000,3,"TDegnine涛思数据","time广场","NewYork","US",
1637064042000,3,"TDengine涛思数据","time广场","NewYork","US",
1637064043000,4,"4涛思数据","4广场南部","London","UK",
1637064044000,5,"涛思数据5","!广场路中部123","Tokyo","JP",
1637064045000,6,"taos涛思数据6","青年广场123号!","Washin","DC",
......@@ -99,7 +99,7 @@ namespace insertCn
{
var colData = new List<Object>{1637064040000,1,"涛思数据","保利广场","Beijing","China",
1637064041000,2,"涛思数据taosdata","保利广场baoli","Beijing","China",
1637064042000,3,"TDegnine涛思数据","time广场","NewYork","US",
1637064042000,3,"TDengine涛思数据","time广场","NewYork","US",
1637064043000,4,"4涛思数据","4广场南部","London","UK",
1637064044000,5,"涛思数据5","!广场路中部123","Tokyo","JP",
1637064045000,6,"taos涛思数据6","青年广场123号!","Washin","DC",
......
......@@ -59,7 +59,7 @@ cp ${compile_dir}/../packaging/tools/set_core.sh ${pkg_dir}${install_home_pat
cp ${compile_dir}/../packaging/tools/taosd-dump-cfg.gdb ${pkg_dir}${install_home_path}/bin
cp ${compile_dir}/build/bin/taosd ${pkg_dir}${install_home_path}/bin
cp ${compile_dir}/build/bin/taosBenchmark ${pkg_dir}${install_home_path}/bin
if [ -f "${compile_dir}/build/bin/taosadapter" ]; then
cp ${compile_dir}/build/bin/taosadapter ${pkg_dir}${install_home_path}/bin ||:
......
......@@ -68,7 +68,7 @@ cp %{_compiledir}/../packaging/tools/set_core.sh %{buildroot}%{homepath}/bin
cp %{_compiledir}/../packaging/tools/taosd-dump-cfg.gdb %{buildroot}%{homepath}/bin
cp %{_compiledir}/build/bin/taos %{buildroot}%{homepath}/bin
cp %{_compiledir}/build/bin/taosd %{buildroot}%{homepath}/bin
cp %{_compiledir}/build/bin/taosBenchmark %{buildroot}%{homepath}/bin
if [ -f %{_compiledir}/build/bin/taosadapter ]; then
cp %{_compiledir}/build/bin/taosadapter %{buildroot}%{homepath}/bin ||:
......
......@@ -10248,6 +10248,9 @@ static int32_t doValidateSubquery(SSqlNode* pSqlNode, int32_t index, SSqlObj* pS
tstrncpy(pTableMetaInfo1->aliasName, subInfo->aliasName.z, subInfo->aliasName.n + 1);
}
if (TPARSER_HAS_TOKEN(pSqlNode->interval.interval) && pSub->order.orderColId == INT32_MIN) {
pSub->order.orderColId = PRIMARYKEY_TIMESTAMP_COL_INDEX;
}
// NOTE: order mix up in subquery not support yet.
pQueryInfo->order = pSub->order;
......
Subproject commit 0aad27d725f4ee6b18daf1db0c07d933aed16eea
Subproject commit 3b6bf41d16de351668fc02589f931da383d8a9fe
......@@ -815,15 +815,15 @@ class TDTestCase:
query_sql = f'select count(*), avg(c6), sum(c3) from (select * from {tb_name} where c1 >1 or c2 = 2 and c7 like "binar_" and c4 in (3, 5)) where c1 != 2 or c3 = 1 or t1=2 or t1=3 or c8 like "ncha_" and c9 in (true) interval(8d)'
res = tdSql.query(query_sql, True)
tdSql.checkRows(3)
tdSql.checkEqual(int(res[0][1]), 15)
tdSql.checkEqual(int(res[0][2]), 1)
tdSql.checkEqual(int(res[0][3]), 50)
tdSql.checkEqual(int(res[1][1]), 15)
tdSql.checkEqual(int(res[1][2]), 3)
tdSql.checkEqual(int(res[1][3]), 15)
tdSql.checkEqual(int(res[2][1]), 5)
tdSql.checkEqual(int(res[2][2]), 1)
tdSql.checkEqual(int(res[2][3]), 5)
## select count avg sum from (condition_A and condition_B and line and in and ts and condition_tag_A and condition_tag_B and between) where condition_C or condition_D or condition_tag_C or condition_tag_D or like and in interval
query_sql = f'select count(*), avg(c6), sum(c3) from (select * from {tb_name} where c1 >= 1 and c2 = 2 and c7 like "binar_" and c4 in (3, 5) and ts > "2021-01-11 12:00:00" and t1 < 2 and t1 > 0 and c6 between 0 and 7) where c1 != 2 or c3 = 1 or t1=2 or t1=3 or c8 like "ncha_" and c9 in (true) interval(8d)'
......