未验证 提交 43ad67a6 编写于 作者: arielyangpan 提交者: GitHub

Merge branch 'develop' into docs/typo-collection

......@@ -11,7 +11,7 @@
# TDengine 简介
TDengine 是一款高性能、分布式、支持 SQL 的时序数据库。而且除时序数据库功能外,它还提供缓存、数据订阅、流式计算等功能,最大程度减少研发和运维的复杂度,且核心代码,包括集群功能全部开源(开源协议,AGPL v3.0)。与其他时序数据数据库相比,TDengine 有以下特点:
TDengine 是一款高性能、分布式、支持 SQL 的时序数据库(Time-Series Database)。而且除时序数据库功能外,它还提供缓存、数据订阅、流式计算等功能,最大程度减少研发和运维的复杂度,且核心代码,包括集群功能全部开源(开源协议,AGPL v3.0)。与其他时序数据库相比,TDengine 有以下特点:
- **高性能**:通过创新的存储引擎设计,无论是数据写入还是查询,TDengine 的性能比通用数据库快 10 倍以上,也远超其他时序数据库,而且存储空间也大为节省。
......
......@@ -19,25 +19,6 @@ MESSAGE(STATUS "Project binary files output path: " ${PROJECT_BINARY_DIR})
MESSAGE(STATUS "Project executable files output path: " ${EXECUTABLE_OUTPUT_PATH})
MESSAGE(STATUS "Project library files output path: " ${LIBRARY_OUTPUT_PATH})
find_package(Git QUIET)
if(GIT_FOUND AND EXISTS "${TD_COMMUNITY_DIR}/.git")
# Update submodules as needed
option(GIT_SUBMODULE "Check submodules during build" ON)
if(GIT_SUBMODULE)
message(STATUS "Submodule update")
execute_process(COMMAND ${GIT_EXECUTABLE} submodule update --init --recursive
WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}
RESULT_VARIABLE GIT_SUBMOD_RESULT)
if(NOT GIT_SUBMOD_RESULT EQUAL "0")
message(WARNING "git submodule update --init --recursive failed with ${GIT_SUBMOD_RESULT}, please checkout submodules")
endif()
endif()
endif()
if(NOT EXISTS "${TD_COMMUNITY_DIR}/deps/jemalloc/Makefile.in")
message(WARNING "The submodules were not downloaded! GIT_SUBMODULE was turned off or failed. Please update submodules manually if you need build them.")
endif()
IF (TD_BUILD_JDBC)
FIND_PROGRAM(TD_MVN_INSTALLED mvn)
IF (TD_MVN_INSTALLED)
......
......@@ -4,9 +4,13 @@ title: 产品简介
toc_max_heading_level: 2
---
## TDengine 主要功能
TDengine 是一款高性能、分布式、支持 SQL 的时序数据库,其核心代码,包括集群功能全部开源(开源协议,AGPL v3.0)。TDengine 能被广泛运用于物联网、工业互联网、车联网、IT 运维、金融等领域。除核心的时序数据库功能外,TDengine 还提供[缓存](/develop/cache/)[数据订阅](/develop/subscribe)[流式计算](/develop/continuous-query)等大数据平台所需要的系列功能,最大程度减少研发和运维的复杂度。
TDengine 是一款高性能、分布式、支持 SQL 的时序数据库,其核心代码,包括集群功能全部开源(开源协议,AGPL v3.0)。TDengine 能被广泛运用于物联网、工业互联网、车联网、IT 运维、金融等领域。除核心的时序数据库功能外,TDengine 还提供[缓存](/develop/cache/)、[数据订阅](/develop/subscribe)、[流式计算](/develop/continuous-query)等大数据平台所需要的系列功能,最大程度减少研发和运维的复杂度。主要功能如下:
本章节介绍 TDengine 的主要功能、竞争优势、适用场景、与其他数据库的对比测试等等,让大家对 TDengine 有个整体的了解。
## 主要功能
TDengine 的主要功能如下:
1. 高速数据写入,除 [SQL 写入](/develop/insert-data/sql-writing)外,还支持 [Schemaless 写入](/reference/schemaless/),支持 [InfluxDB LINE 协议](/develop/insert-data/influxdb-line)、[OpenTSDB Telnet](/develop/insert-data/opentsdb-telnet)、[OpenTSDB JSON](/develop/insert-data/opentsdb-json) 等协议写入;
2. 第三方数据采集工具 [Telegraf](/third-party/telegraf)、[Prometheus](/third-party/prometheus)、[StatsD](/third-party/statsd)、[collectd](/third-party/collectd)、[icinga2](/third-party/icinga2)、[TCollector](/third-party/tcollector)、[EMQ](/third-party/emq-broker)、[HiveMQ](/third-party/hive-mq-broker) 等都可以进行配置后,不用任何代码,即可将数据写入;
......@@ -26,7 +30,7 @@ TDengine 是一款高性能、分布式、支持 SQL 的时序数据库,其核
更多细小的功能,请阅读整个文档。
## TDengine 主要亮点
## 竞争优势
由于 TDengine 充分利用了[时序数据特点](https://www.taosdata.com/blog/2019/07/09/105.html),比如结构化、无需事务、很少删除或更新、写多读少等等,设计了全新的针对时序数据的存储引擎和计算引擎,因此与其他时序数据库相比,TDengine 有以下特点:
......@@ -53,7 +57,7 @@ TDengine 是一款高性能、分布式、支持 SQL 的时序数据库,其核
3. 因为其 All In One 的特性,系统复杂度降低,能降低研发成本
4. 因为运维维护简单,运营维护成本能大幅降低
## TDengine 技术生态
## 技术生态
在整个时序大数据平台中,TDengine 在其中扮演的角色如下:
......@@ -111,7 +115,7 @@ TDengine 是一款高性能、分布式、支持 SQL 的时序数据库,其核
| 要求运维学习成本可控 | | | √ | 同上。 |
| 要求市场有大量人才储备 | √ | | | TDengine 作为新一代产品,目前人才市场里面有经验的人员还有限。但是学习成本低,我们作为厂家也提供运维的培训和辅助服务。 |
## TDengine 与其他数据库的对比测试
## 与其他数据库的对比测试
- [用 InfluxDB 开源的性能测试工具对比 InfluxDB 和 TDengine](https://www.taosdata.com/blog/2020/01/13/1105.html)
- [TDengine 与 OpenTSDB 对比测试](https://www.taosdata.com/blog/2019/08/21/621.html)
......
......@@ -137,7 +137,18 @@ TDengine 建议用数据采集点的名字(如上表中的 D1001)来做表
超级表是指某一特定类型的数据采集点的集合。同一类型的数据采集点,其表的结构是完全一样的,但每个表(数据采集点)的静态属性(标签)是不一样的。描述一个超级表(某一特定类型的数据采集点的集合),除需要定义采集量的表结构之外,还需要定义其标签的 schema,标签的数据类型可以是整数、浮点数、字符串,标签可以有多个,可以事后增加、删除或修改。如果整个系统有 N 个不同类型的数据采集点,就需要建立 N 个超级表。
在 TDengine 的设计里,**表用来代表一个具体的数据采集点,超级表用来代表一组相同类型的数据采集点集合**。当为某个具体数据采集点创建表时,用户使用超级表的定义做模板,同时指定该具体采集点(表)的标签值。与传统的关系型数据库相比,表(一个数据采集点)是带有静态标签的,而且这些标签可以事后增加、删除、修改。超级表与与基于超级表建立的子表之间的关系表现在:
在 TDengine 的设计里,**表用来代表一个具体的数据采集点,超级表用来代表一组相同类型的数据采集点集合**
## 子表 (Subtable)
当为某个具体数据采集点创建表时,用户可以使用超级表的定义做模板,同时指定该采集点(表)的标签值来创建该表。**通过超级表创建的表称之为子表**。普通表与子表的差异在于:
1. 子表就是表,因此所有普通表的 SQL 操作都可以在子表上执行。
2. 子表在普通表的基础上有扩展,它是带有静态标签的,而且这些标签可以事后增加、删除、修改,而普通表没有。
3. 子表一定属于一张超级表,但普通表不属于任何超级表。
4. 普通表无法转为子表,子表也无法转为普通表。
超级表与基于超级表建立的子表之间的关系表现在:
1. 一张超级表包含多张子表,这些子表具有相同的采集量 schema,但带有不同的标签值。
2. 不能通过子表调整数据或标签的模式,对超级表的数据模式修改将立即对所有的子表生效。
......@@ -145,6 +156,8 @@ TDengine 建议用数据采集点的名字(如上表中的 D1001)来做表
查询既可以在表上进行,也可以在超级表上进行。针对超级表的查询,TDengine 将把所有子表中的数据视为一个整体数据集进行处理,会先把满足标签过滤条件的表从超级表中找出来,然后再扫描这些表的时序数据,进行聚合操作,这样需要扫描的数据集会大幅减少,从而显著提高查询的性能。本质上,TDengine 通过对超级表查询的支持,实现了多个同类数据采集点的高效聚合。
TDengine 系统建议给每个数据采集点建表时,都通过超级表创建子表,而不是创建普通表。
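下面用一小段 SQL 示意上述概念(沿用本文智能电表的例子,列名与取值仅作示意):

```sql
-- 创建超级表:定义采集量的表结构和标签的 schema
CREATE STABLE meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT)
  TAGS (location BINARY(64), groupid INT);

-- 以超级表为模板创建子表,并指定该数据采集点的标签值
CREATE TABLE d1001 USING meters TAGS ("Beijing.Chaoyang", 2);

-- 对超级表查询:先按标签过滤出子表,再对其时序数据做聚合
SELECT AVG(voltage), MAX(current) FROM meters WHERE groupid = 2 INTERVAL(1m);
```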
## 库 (database)
库是指一组表的集合。TDengine 容许一个运行实例有多个库,而且每个库可以配置不同的存储策略。不同类型的数据采集点往往具有不同的数据特征,包括数据采集频率的高低,数据保留时间的长短,副本的数目,数据块的大小,是否允许更新数据等等。为了让 TDengine 在各种场景下都能最大效率地工作,TDengine 建议将不同数据特征的超级表创建在不同的库里。
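例如,可以为数据保留时长要求不同的两类采集点分别建库(参数取值仅为示意):

```sql
-- 每 10 天一个数据文件,数据保留 1 年
CREATE DATABASE power KEEP 365 DAYS 10;
-- 数据保留 10 年,且允许更新已写入的数据
CREATE DATABASE env KEEP 3650 DAYS 10 UPDATE 1;
USE power;
```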
......
label: 写入数据
link:
type: generated-index
slug: /insert-data/
description: "TDengine 支持多种写入协议,包括 SQL,InfluxDB Line 协议, OpenTSDB Telnet 协议,OpenTSDB JSON 格式协议。数据可以单条插入,也可以批量插入,可以插入一个数据采集点的数据,也可以同时插入多个数据采集点的数据。同时,TDengine 支持多线程插入,支持时间乱序数据插入,也支持历史数据插入。InfluxDB Line 协议、OpenTSDB Telnet 协议和 OpenTSDB JSON 格式协议是 TDengine 支持的三种无模式写入协议。使用无模式方式写入无需提前创建超级表和子表,并且引擎能自适用数据对表结构做调整。"
label: 写入数据
\ No newline at end of file
---
title: 写入数据
---
TDengine 支持多种写入协议,包括 SQL、InfluxDB Line 协议、OpenTSDB Telnet 协议、OpenTSDB JSON 格式协议。数据可以单条插入,也可以批量插入,可以插入一个数据采集点的数据,也可以同时插入多个数据采集点的数据。同时,TDengine 支持多线程插入,支持时间乱序数据插入,也支持历史数据插入。InfluxDB Line 协议、OpenTSDB Telnet 协议和 OpenTSDB JSON 格式协议是 TDengine 支持的三种无模式写入协议。使用无模式方式写入无需提前创建超级表和子表,并且引擎能根据写入的数据自动调整表结构。
```mdx-code-block
import DocCardList from '@theme/DocCardList';
import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
<DocCardList items={useCurrentSidebarCategory().items}/>
```
\ No newline at end of file
label: 开发指南
link:
type: generated-index
slug: /develop
description: "开始指南是对开发者友好的使用教程,既包括数据建模、写入、查询等基础功能的使用,也包括数据订阅、连续查询等高级功能的使用。对于每个主题,都配有各编程语言的连接器的示例代码,方便开发者快速上手。如果想更深入地了解各连接器的使用,请阅读连接器参考指南。"
label: 开发指南
\ No newline at end of file
---
title: 开发指南
---
开发指南是对开发者友好的使用教程,既包括数据建模、写入、查询等基础功能的使用,也包括数据订阅、连续查询等高级功能的使用。对于每个主题,都配有各编程语言的连接器的示例代码,方便开发者快速上手。如果想更深入地了解各连接器的使用,请阅读连接器参考指南。
```mdx-code-block
import DocCardList from '@theme/DocCardList';
import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
<DocCardList items={useCurrentSidebarCategory().items}/>
```
\ No newline at end of file
label: 集群管理
link:
type: generated-index
slug: /cluster/
description: "TDengine支持以集群方式部署,以提升系统的处理能力和高可用性。TDengine集群支持任意数据的多副本从而提升高可用性,并自动实现负载均衡。同时TDengine集群具有很好的横向扩展能力以处理更多的数据采集点和更大的数据量。"
keywords:
[
集群,
高可用,
负载均衡,
横向扩展
]
---
title: 集群管理
---
TDengine 支持以集群方式部署,以提升系统的处理能力和高可用性。TDengine 集群支持任意数据的多副本从而提升高可用性,并自动实现负载均衡。同时 TDengine 集群具有很好的横向扩展能力以处理更多的数据采集点和更大的数据量。
```mdx-code-block
import DocCardList from '@theme/DocCardList';
import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
<DocCardList items={useCurrentSidebarCategory().items}/>
```
\ No newline at end of file
......@@ -19,7 +19,23 @@ CREATE DATABASE [IF NOT EXISTS] db_name [KEEP keep] [DAYS days] [UPDATE 1];
4. 更多关于 UPDATE 参数的用法,请参考[FAQ](/train-faq/faq)
3. 数据库名最大长度为 33;
4. 一条 SQL 语句的最大长度为 65480 个字符;
5. 数据库还有更多与数据库相关的配置参数,如 cache, blocks, days, keep, minRows, maxRows, wal, fsync, update, cacheLast, replica, quorum, maxVgroupsPerDb, ctime, comp, prec, 具体细节请参见 [配置参数](/reference/config/) 章节。
5. 创建数据库时可用的参数有:
- cache: [Description](/reference/config/#cache)
- blocks: [Description](/reference/config/#blocks)
- days: [Description](/reference/config/#days)
- keep: [Description](/reference/config/#keep)
- minRows: [Description](/reference/config/#minrows)
- maxRows: [Description](/reference/config/#maxrows)
- wal: [Description](/reference/config/#wallevel)
- fsync: [Description](/reference/config/#fsync)
- update: [Description](/reference/config/#update)
- cacheLast: [Description](/reference/config/#cachelast)
- replica: [Description](/reference/config/#replica)
- quorum: [Description](/reference/config/#quorum)
- maxVgroupsPerDb: [Description](/reference/config/#maxvgroupsperdb)
- comp: [Description](/reference/config/#comp)
- precision: [Description](/reference/config/#precision)
6. 请注意上面列出的所有参数都可以配置在配置文件 `taos.cfg` 中作为创建数据库时使用的默认配置, `create database` 的参数中明确指定的会覆盖配置文件中的设置(用法示例见下)。
:::
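例如,下面的建库语句显式指定了部分参数,未指定的参数将使用 `taos.cfg` 中的默认配置(取值仅为示意):

```sql
-- 示意:显式指定数据保留时长、单个数据文件的时间跨度、内存块数,并允许更新
CREATE DATABASE IF NOT EXISTS demo KEEP 3650 DAYS 10 BLOCKS 6 UPDATE 1;
```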
......
......@@ -30,4 +30,11 @@ taos> DESCRIBE meters;
groupid | INT | 4 | TAG |
```
数据集包含 4 个智能电表的数据,按照 TDengine 的建模规则,对应 4 个子表,其名称分别是 d1001, d1002, d1003, d1004。
\ No newline at end of file
数据集包含 4 个智能电表的数据,按照 TDengine 的建模规则,对应 4 个子表,其名称分别是 d1001, d1002, d1003, d1004。
```mdx-code-block
import DocCardList from '@theme/DocCardList';
import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
<DocCardList items={useCurrentSidebarCategory().items}/>
```
\ No newline at end of file
label: 运维指南
link:
slug: /operation/
type: generated-index
---
title: 运维指南
---
```mdx-code-block
import DocCardList from '@theme/DocCardList';
import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
<DocCardList items={useCurrentSidebarCategory().items}/>
```
\ No newline at end of file
......@@ -4,7 +4,9 @@ title: REST API
为支持各种不同类型平台的开发,TDengine 提供符合 REST 设计标准的 API,即 REST API。为最大程度降低学习成本,不同于其他数据库 REST API 的设计方法,TDengine 直接通过 HTTP POST 请求 BODY 中包含的 SQL 语句来操作数据库,仅需要一个 URL。REST 连接器的使用参见[视频教程](https://www.taosdata.com/blog/2020/11/11/1965.html)。
注意:与原生连接器的一个区别是,RESTful 接口是无状态的,因此 `USE db_name` 指令没有效果,所有对表名、超级表名的引用都需要指定数据库名前缀。(从 2.2.0.0 版本开始,支持在 RESTful url 中指定 db_name,这时如果 SQL 语句中没有指定数据库名前缀的话,会使用 url 中指定的这个 db_name。从 2.4.0.0 版本开始,RESTful 默认由 taosAdapter 提供,要求必须在 url 中指定 db_name。)
:::note
与原生连接器的一个区别是,RESTful 接口是无状态的,因此 `USE db_name` 指令没有效果,所有对表名、超级表名的引用都需要指定数据库名前缀。从 2.2.0.0 版本开始,支持在 RESTful URL 中指定 db_name,这时如果 SQL 语句中没有指定数据库名前缀的话,会使用 URL 中指定的这个 db_name。从 2.4.0.0 版本开始,RESTful 默认由 taosAdapter 提供,要求必须在 URL 中指定 db_name。
:::
## 安装
......@@ -16,11 +18,10 @@ RESTful 接口不依赖于任何 TDengine 的库,因此客户端不需要安
下面以 Ubuntu 环境中使用 curl 工具(确认已经安装)来验证 RESTful 接口的正常。
下面示例是列出所有的数据库,请把 h1.taosdata.com 和 6041(缺省值)替换为实际运行的 TDengine 服务 fqdn 和端口号:
下面示例是列出所有的数据库,请把 h1.taosdata.com 和 6041(缺省值)替换为实际运行的 TDengine 服务 FQDN 和端口号:
```bash
curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'show databases;'
h1.taosdata.com:6041/rest/sql
curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'show databases;' h1.taosdata.com:6041/rest/sql
```
返回值结果如下表示验证通过:
......@@ -84,7 +85,7 @@ http://<fqdn>:<port>/rest/sql/[db_name]
- port: 配置文件中 httpPort 配置项,缺省为 6041
- db_name: 可选参数,指定本次所执行的 SQL 语句的默认数据库库名。(从 2.2.0.0 版本开始支持)
例如:http://h1.taos.com:6041/rest/sql/test 是指向地址为 h1.taos.com:6041 的 url,并将默认使用的数据库库名设置为 test
例如:`http://h1.taos.com:6041/rest/sql/test` 是指向地址为 `h1.taos.com:6041` 的 URL,并将默认使用的数据库库名设置为 `test`
HTTP 请求的 Header 里需带有身份认证信息,TDengine 支持 Basic 认证与自定义认证两种机制,后续版本将提供标准安全的数字签名机制来做身份验证。
......@@ -100,9 +101,9 @@ HTTP 请求的 Header 里需带有身份认证信息,TDengine 支持 Basic 认
Authorization: Basic <TOKEN>
```
HTTP 请求的 BODY 里就是一个完整的 SQL 语句,SQL 语句中的数据表应提供数据库前缀,例如 \<db_name>.\<tb_name>。如果表名不带数据库前缀,又没有在 url 中指定数据库名的话,系统会返回错误。因为 HTTP 模块只是一个简单的转发,没有当前 DB 的概念。
HTTP 请求的 BODY 里就是一个完整的 SQL 语句,SQL 语句中的数据表应提供数据库前缀,例如 db_name.tb_name。如果表名不带数据库前缀,又没有在 URL 中指定数据库名的话,系统会返回错误。因为 HTTP 模块只是一个简单的转发,没有当前 DB 的概念。
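例如,下面的语句可以直接作为请求 BODY 提交(库名、表名仅为示意,请按实际情况替换):

```sql
-- 表名需带数据库前缀,或改为在 URL 中指定 db_name
INSERT INTO demo.d1001 VALUES (NOW, 10.3, 219, 0.31);
SELECT COUNT(*) FROM demo.meters;
```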
使用 curl 通过自定义身份认证方式来发起一个 HTTP Request,语法如下:
使用 `curl` 通过自定义身份认证方式来发起一个 HTTP Request,语法如下:
```bash
curl -H 'Authorization: Basic <TOKEN>' -d '<SQL>' <ip>:<PORT>/rest/sql/[db_name]
......@@ -136,7 +137,7 @@ curl -u username:password -d '<SQL>' <ip>:<PORT>/rest/sql/[db_name]
说明:
- status: 告知操作结果是成功还是失败。
- head: 表的定义,如果不返回结果集,则仅有一列 “affected_rows”。(从 2.0.17.0 版本开始,建议不要依赖 head 返回值来判断数据列类型,而推荐使用 column_meta。在未来版本中,有可能会从返回值中去掉 head 这一项。)
- head: 表的定义,如果不返回结果集,则仅有一列 “affected_rows”。(从 2.0.17.0 版本开始,建议不要依赖 head 返回值来判断数据列类型,而推荐使用 column_meta。在后续版本中,有可能会从返回值中去掉 head 这一项。)
- column_meta: 从 2.0.17.0 版本开始,返回值中增加这一项来说明 data 里每一列的数据类型。具体每个列会用三个值来说明,分别为:列名、列类型、类型长度。例如`["current",6,4]`表示列名为“current”;列类型为 6,也即 float 类型;类型长度为 4,也即对应 4 个字节表示的 float。如果列类型为 binary 或 nchar,则类型长度表示该列最多可以保存的内容长度,而不是本次返回值中的具体数据长度。当列类型是 nchar 的时候,其类型长度表示可以保存的 unicode 字符数量,而不是 bytes。
- data: 具体返回的数据,一行一行的呈现,如果不返回结果集,那么就仅有 [[affected_rows]]。data 中每一行的数据列顺序,与 column_meta 中描述数据列的顺序完全一致。
- rows: 表明总共多少行数据。
......@@ -162,7 +163,7 @@ HTTP 请求中需要带有授权码 `<TOKEN>`,用于身份识别。授权码
curl http://<fqdn>:<port>/rest/login/<username>/<password>
```
其中,`fqdn` 是 TDengine 数据库的 fqdn 或 ip 地址,port 是 TDengine 服务的端口号,`username` 为数据库用户名,`password` 为数据库密码,返回值为 `JSON` 格式,各字段含义如下:
其中,`fqdn` 是 TDengine 数据库的 FQDN 或 IP 地址,`port` 是 TDengine 服务的端口号,`username` 为数据库用户名,`password` 为数据库密码,返回值为 JSON 格式,各字段含义如下:
- status:请求结果的标志位
......@@ -236,13 +237,13 @@ curl http://192.168.0.1:6041/rest/login/root/taosdata
### 结果集采用 Unix 时间戳
HTTP 请求 URL 采用 `sqlt` 时,返回结果集的时间戳将采用 Unix 时间戳格式表示,例如
HTTP 请求 URL 采用 `/rest/sqlt` 时,返回结果集的时间戳将采用 Unix 时间戳格式表示,例如
```bash
curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'select * from demo.d1001' 192.168.0.1:6041/rest/sqlt
```
返回
返回结果
```json
{
......@@ -264,7 +265,7 @@ curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'select * from demo.d1001
### 结果集采用 UTC 时间字符串
HTTP 请求 URL 采用 `sqlutc` 时,返回结果集的时间戳将采用 UTC 时间字符串表示,例如
HTTP 请求 URL 采用 `/rest/sqlutc` 时,返回结果集的时间戳将采用 UTC 时间字符串表示,例如
```bash
curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'select * from demo.t1' 192.168.0.1:6041/rest/sqlutc
......@@ -298,10 +299,9 @@ HTTP 请求 URL 采用 `sqlutc` 时,返回结果集的时间戳将采用 UTC
- httpMaxThreads: 启动的线程数量,默认为 2(2.0.17.0 版本开始,默认值改为 CPU 核数的一半向下取整)。
- restfulRowLimit: 返回结果集(JSON 格式)的最大条数,默认值为 10240。
- httpEnableCompress: 是否支持压缩,默认不支持,目前 TDengine 仅支持 gzip 压缩格式。
- httpDebugFlag: 日志开关,默认 131。131:仅错误和报警信息,135:调试信息,143:非常详细的调试信息,默认 131。
- httpDbNameMandatory: 是否必须在 RESTful url 中指定默认的数据库名。默认为 0,即关闭此检查。如果设置为 1,那么每个 RESTful url 中都必须设置一个默认数据库名,否则无论此时执行的 SQL 语句是否需要指定数据库,都会返回一个执行错误,拒绝执行此 SQL 语句。
:::note
如果使用 taosd 提供的 REST API, 那么以上配置需要写在 taosd 的配置文件 taos.cfg 中。如果使用 taosAdaper 提供的 REST API, 那么需要参考 taosAdaper [对应的配置方法](/reference/taosadapter/)。
- httpDebugFlag: 日志开关,默认 131。131:仅错误和报警信息,135:调试信息,143:非常详细的调试信息。
- httpDbNameMandatory: 是否必须在 RESTful URL 中指定默认的数据库名。默认为 0,即关闭此检查。如果设置为 1,那么每个 RESTful URL 中都必须设置一个默认数据库名,否则无论此时执行的 SQL 语句是否需要指定数据库,都会返回一个执行错误,拒绝执行此 SQL 语句。
:::note
如果使用 taosd 提供的 REST API, 那么以上配置需要写在 taosd 的配置文件 taos.cfg 中。如果使用 taosAdapter 提供的 REST API, 那么需要参考 taosAdapter [对应的配置方法](/reference/taosadapter/)。
:::
......@@ -51,12 +51,12 @@ TDengine 客户端驱动的安装请参考 [安装指南](/reference/connector#
taos_cleanup();
```
在上面的示例代码中, `taos_connect` 建立到客户端程序所在主机的 6030 端口的连接,`taos_close`关闭当前连接,`taos_cleanup`清除客户端驱动所申请和使用的资源。
在上面的示例代码中, `taos_connect()` 建立到客户端程序所在主机的 6030 端口的连接,`taos_close()`关闭当前连接,`taos_cleanup()`清除客户端驱动所申请和使用的资源。
:::note
- 如未特别说明,当 API 的返回值是整数时,_0_ 代表成功,其它是代表失败原因的错误码,当返回值是指针时, _NULL_ 表示失败。
- 所有的错误码以及对应的原因描述在 taoserror.h 文件中。
- 所有的错误码以及对应的原因描述在 `taoserror.h` 文件中。
:::
......@@ -120,8 +120,8 @@ TDengine 客户端驱动的安装请参考 [安装指南](/reference/connector#
</details>
:::info
更多示例代码及下载请见 [github](https://github.com/taosdata/TDengine/tree/develop/examples/c)
也可以在安装目录下的 examples/c 路径下找到。 该目录下有 makefile,在 Linux 环境下,直接执行 make 就可以编译得到执行文件。
更多示例代码及下载请见 [GitHub](https://github.com/taosdata/TDengine/tree/develop/examples/c)。
也可以在安装目录下的 `examples/c` 路径下找到。 该目录下有 makefile,在 Linux 环境下,直接执行 make 就可以编译得到执行文件。
**提示:**在 ARM 环境下编译时,请将 makefile 中的 `-msse4.2` 去掉,这个选项只有在 x64/x86 硬件平台上才能支持。
:::
......@@ -362,7 +362,7 @@ TDengine 的异步 API 均采用非阻塞调用模式。应用程序可以用多
(2.1.3.0 版本新增)
用于在其他 STMT API 返回错误(返回错误码或空指针)时获取错误信息。
### 无模式写入 API
### 无模式(schemaless)写入 API
除了使用 SQL 方式或者使用参数绑定 API 写入数据外,还可以使用 Schemaless 的方式完成写入。Schemaless 可以免于预先创建超级表/数据子表的数据结构,而是可以直接写入数据,TDengine 系统会根据写入的数据内容自动创建和维护所需要的表结构。Schemaless 的使用方式详见 [Schemaless 写入](/reference/schemaless/) 章节,这里介绍与之配套使用的 C/C++ API。
......@@ -390,7 +390,7 @@ TDengine 的异步 API 均采用非阻塞调用模式。应用程序可以用多
- TSDB_SML_TELNET_PROTOCOL: OpenTSDB Telnet 文本行协议
- TSDB_SML_JSON_PROTOCOL: OpenTSDB Json 协议格式
时间戳分辨率的定义,定义在 taos.h 文件中,具体内容如下:
时间戳分辨率的定义,定义在 `taos.h` 文件中,具体内容如下:
- TSDB_SML_TIMESTAMP_NOT_CONFIGURED = 0,
- TSDB_SML_TIMESTAMP_HOURS,
......@@ -448,3 +448,4 @@ TDengine 的异步 API 均采用非阻塞调用模式。应用程序可以用多
- `void taos_unsubscribe(TAOS_SUB *tsub, int keepProgress)`
取消订阅。 如参数 `keepProgress` 不为 0,API 会保留订阅的进度信息,后续调用 `taos_subscribe()` 时可以基于此进度继续;否则将删除进度信息,后续只能重新开始读取数据。
......@@ -9,7 +9,7 @@ description: TDengine Java 连接器基于标准 JDBC API 实现, 并提供原
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
`taos-jdbcdriver` 是 TDengine 的官方 Java 语言连接器,Java 开发人员可以通过它开发存取 TDengine 数据库的应用软件。`taos-jdbcdriver` 实现了 JDBC driver 标准的接口,并提供两种形式的连接器。一种是通过 TDengine 客户端驱动程序(taosc)原生连接 TDengine 实例,支持数据写入、查询、订阅、schemaless 接口和参数绑定接口等功能,一种是通过 taosAdapter 提供的 REST 接口连接 TDengine 实例(2.0.18 及更高版本)。REST 连接实现的功能集合和原生连接有少量不同。
`taos-jdbcdriver` 是 TDengine 的官方 Java 语言连接器,Java 开发人员可以通过它开发存取 TDengine 数据库的应用软件。`taos-jdbcdriver` 实现了 JDBC driver 标准的接口,并提供两种形式的连接器。一种是通过 TDengine 客户端驱动程序(taosc)原生连接 TDengine 实例,支持数据写入、查询、订阅、schemaless 接口和参数绑定接口等功能,一种是通过 taosAdapter 提供的 REST 接口连接 TDengine 实例(2.4.0.0 及更高版本)。REST 连接实现的功能集合和原生连接有少量不同。
![tdengine-connector](tdengine-jdbc-connector.png)
......@@ -804,7 +804,7 @@ Query OK, 1 row(s) in set (0.000141s)
请参考:[JDBC example](https://github.com/taosdata/TDengine/tree/develop/examples/JDBC)
## 重要更新记录
## 最近更新记录
| taos-jdbcdriver 版本 | 主要变化 |
| :------------------: | :----------------------------: |
......@@ -814,7 +814,7 @@ Query OK, 1 row(s) in set (0.000141s)
## 常见问题
1. 使用 Statement 的 `addBatch` 和 `executeBatch` 来执行“批量写入/更行”,为什么没有带来性能上的提升?
1. 使用 Statement 的 `addBatch()` 和 `executeBatch()` 来执行“批量写入/更新”,为什么没有带来性能上的提升?
**原因**:TDengine 的 JDBC 实现中,通过 `addBatch` 方法提交的 SQL 语句,会按照添加的顺序,依次执行,这种方式没有减少与服务端的交互次数,不会带来性能上的提升。
......
......@@ -122,7 +122,7 @@ Requirement already satisfied: taospy in c:\users\username\appdata\local\program
<Tabs>
<TabItem value="native" label="原生连接">
请确保 TDengine 集群已经启动, 且集群中机器的 FQDN (如果启动的是单机版,FQDN 默认为 hostname)在本机能够解析, 可用 ping 命令进行测试:
请确保 TDengine 集群已经启动, 且集群中机器的 FQDN (如果启动的是单机版,FQDN 默认为 hostname)在本机能够解析, 可用 `ping` 命令进行测试:
```
ping <FQDN>
......@@ -197,7 +197,7 @@ curl -u root:taosdata http://<FQDN>:<PORT>/rest/sql -d "select server_version()"
{{#include docs-examples/python/connect_rest_examples.py:connect}}
```
`connect` 函数的所有参数都是可选的关键字参数。下面是连接参数的具体说明:
`connect()` 函数的所有参数都是可选的关键字参数。下面是连接参数的具体说明:
- `host`: 要连接的主机。默认是 localhost。
- `user`: TDengine 用户名。默认是 root。
......@@ -205,10 +205,6 @@ curl -u root:taosdata http://<FQDN>:<PORT>/rest/sql -d "select server_version()"
- `port`: taosAdapter REST 服务监听端口。默认是 6041.
- `timeout`: HTTP 请求超时时间。单位为秒。默认为 `socket._GLOBAL_DEFAULT_TIMEOUT`。 一般无需配置。
:::note
:::
</TabItem>
</Tabs>
......@@ -232,12 +228,12 @@ curl -u root:taosdata http://<FQDN>:<PORT>/rest/sql -d "select server_version()"
```
:::tip
查询结果只能获取一次。比如上面的示例中 `featch_all` 和 `fetch_all_into_dict` 只能用一个。重复获取得到的结果为空列表。
查询结果只能获取一次。比如上面的示例中 `fetch_all()` 和 `fetch_all_into_dict()` 只能用一个。重复获取得到的结果为空列表。
:::
##### TaosResult 类的使用
上面 `TaosConnection` 类的使用示例中,我们已经展示了两种获取查询结果的方法: `featch_all` 和 `fetch_all_into_dict`。除此之外 `TaosResult` 还提供了按行迭代(`rows_iter`)或按数据块迭代(`blocks_iter`)结果集的方法。在查询数据量较大的场景,使用这两个方法会更高效。
上面 `TaosConnection` 类的使用示例中,我们已经展示了两种获取查询结果的方法: `fetch_all()` 和 `fetch_all_into_dict()`。除此之外 `TaosResult` 还提供了按行迭代(`rows_iter`)或按数据块迭代(`blocks_iter`)结果集的方法。在查询数据量较大的场景,使用这两个方法会更高效。
```python title="blocks_iter 方法"
{{#include docs-examples/python/result_set_examples.py}}
......
......@@ -8,7 +8,7 @@ import Prometheus from "./_prometheus.mdx"
import CollectD from "./_collectd.mdx"
import StatsD from "./_statsd.mdx"
import Icinga2 from "./_icinga2.mdx"
import Tcollector from "./_tcollector.mdx"
import TCollector from "./_tcollector.mdx"
taosAdapter 是一个 TDengine 的配套工具,是 TDengine 集群和应用程序之间的桥梁和适配器。它提供了一种易于使用和高效的方式来直接从数据收集代理软件(如 Telegraf、StatsD、collectd 等)摄取数据。它还提供了 InfluxDB/OpenTSDB 兼容的数据摄取接口,允许 InfluxDB/OpenTSDB 应用程序无缝移植到 TDengine。
......@@ -225,7 +225,7 @@ AllowWebSockets
### TCollector
<Tcollector />
<TCollector />
### node_exporter
......
......@@ -400,7 +400,7 @@ taosBenchmark -A INT,DOUBLE,NCHAR,BINARY\(16\)
- **threads** : 执行 SQL 的线程数,默认为 1。
- **interva** : 执行订阅的时间间隔,单位为秒,默认为 0。
- **interval** : 执行订阅的时间间隔,单位为秒,默认为 0。
- **restart** : "yes" 表示开始新的订阅,"no" 表示继续之前的订阅,默认值为 "no"。
......@@ -420,7 +420,7 @@ taosBenchmark -A INT,DOUBLE,NCHAR,BINARY\(16\)
- **threads** : 执行 SQL 的线程数,默认为 1。
- **interva** : 执行订阅的时间间隔,单位为秒,默认为 0。
- **interval** : 执行订阅的时间间隔,单位为秒,默认为 0。
- **restart** : "yes" 表示开始新的订阅,"no" 表示继续之前的订阅,默认值为 "no"。
......
......@@ -8,7 +8,7 @@ TDengine 命令行程序(以下简称 TDengine CLI)是用户操作 TDengine
## 安装
如果在 TDengine 服务器端执行,无需任何安装,已经自动安装好。如果要在非 TDengine 服务器端运行,需要安装 TDengine 客户端驱动,具体安装,请参考 [连接器](/reference/connector/)
如果在 TDengine 服务器端执行,无需任何安装,已经自动安装好 TDengine CLI。如果要在非 TDengine 服务器端运行,需要安装 TDengine 客户端驱动安装包,具体安装,请参考 [连接器](/reference/connector/)
## 执行
......@@ -18,17 +18,17 @@ TDengine 命令行程序(以下简称 TDengine CLI)是用户操作 TDengine
taos
```
如果连接服务成功,将会打印出欢迎消息和版本信息。如果失败,则会打印错误消息出来(请参考 [FAQ](/train-faq/faq) 来解决终端连接服务端失败的问题)。TDengine CLI 的提示符号如下:
如果连接服务成功,将会打印出欢迎消息和版本信息。如果失败,则会打印错误消息(请参考 [FAQ](/train-faq/faq) 来解决终端连接服务端失败的问题)。TDengine CLI 的提示符号如下:
```cmd
taos>
```
进入 CLI 后,你可执行各种 SQL 语句,包括插入、查询以及各种管理命令。
进入 TDengine CLI 后,你可执行各种 SQL 语句,包括插入、查询以及各种管理命令。
## 执行 SQL 脚本
在 TDengine CLI 里可以通过 `source` 命令来运行 SQL 命令脚本
在 TDengine CLI 里可以通过 `source` 命令来运行脚本文件中的多条 SQL 命令
```sql
taos> source <filename>;
......@@ -42,7 +42,7 @@ taos> source <filename>;
taos> SET MAX_BINARY_DISPLAY_WIDTH <nn>;
```
如显示的内容后面以...结尾时,表示该内容已被截断,可通过本命令修改显示字符宽度以显示完整的内容。
如显示的内容后面以 ... 结尾时,表示该内容已被截断,可通过本命令修改显示字符宽度以显示完整的内容。
## 命令行参数
......@@ -56,20 +56,20 @@ taos> SET MAX_BINARY_DISPLAY_WIDTH <nn>;
还有更多其他参数:
- -c, --config-dir: 指定配置文件目录,默认为 `/etc/taos`,该目录下的配置文件默认名称为 taos.cfg
- -C, --dump-config: 打印 -c 指定的目录中 taos.cfg 的配置参数
- -c, --config-dir: 指定配置文件目录,Linux 环境下默认为 `/etc/taos`,该目录下的配置文件默认名称为 `taos.cfg`
- -C, --dump-config: 打印 -c 指定的目录中 `taos.cfg` 的配置参数
- -d, --database=DATABASE: 指定连接到服务端时使用的数据库
- -D, --directory=DIRECTORY: 导入指定路径中的 SQL 脚本文件
- -f, --file=FILE: 以非交互模式执行 SQL 脚本文件
- -f, --file=FILE: 以非交互模式执行 SQL 脚本文件。文件中一个 SQL 语句只能占一行
- -k, --check=CHECK: 指定要检查的表
- -l, --pktlen=PKTLEN: 网络测试时使用的测试包大小
- -n, --netrole=NETROLE: 网络连接测试时的测试范围,默认为 startup, 可选值为 client, server, rpc, startup, sync, speed, fqdn
- -r, --raw-time: 将时间输出出 uint64_t
- -n, --netrole=NETROLE: 网络连接测试时的测试范围,默认为 `startup`, 可选值为 `client`、`server`、`rpc`、`startup`、`sync`、`speed`、`fqdn` 之一
- -r, --raw-time: 将时间输出为无符号 64 位整数类型(即 C 语言中的 uint64_t)
- -s, --commands=COMMAND: 以非交互模式执行的 SQL 命令
- -S, --pkttype=PKTTYPE: 指定网络测试所用的包类型,默认为 TCP。只有 netrole 为 speed 时既可以指定为 TCP 也可以指定为 UDP
- -S, --pkttype=PKTTYPE: 指定网络测试所用的包类型,默认为 TCP。只有 netrole 为 `speed` 时既可以指定为 TCP 也可以指定为 UDP
- -T, --thread=THREADNUM: 以多线程模式导入数据时的线程数
- -s, --commands: 在不进入终端的情况下运行 TDengine 命令
- -z, --timezone=TIMEZONE: 指定时区,默认为本地
- -z, --timezone=TIMEZONE: 指定时区,默认为本地时区
- -V, --version: 打印出当前版本号
示例:
......@@ -81,8 +81,8 @@ taos -h h1.taos.com -s "use db; show tables;"
## TDengine CLI 小技巧
- 可以使用上下光标键查看历史输入的指令
- 修改用户密码:在 shell 中使用 `alter user` 命令,缺省密码为 taosdata
- ctrl+c 中止正在进行中的查询
- 执行 `RESET QUERY CACHE` 可清除本地缓存的表 schema
- 批量执行 SQL 语句。可以将一系列的 shell 命令(以英文 ; 结尾,每个 SQL 语句为一行)按行存放在文件里,在 shell 里执行命令 `source <file-name>` 自动执行该文件里所有的 SQL 语句
- 输入 q 回车,退出 taos shell
- 在 TDengine CLI 中使用 `alter user` 命令可以修改用户密码,缺省密码为 `taosdata`
- Ctrl+C 中止正在进行中的查询
- 执行 `RESET QUERY CACHE` 可清除本地表 Schema 的缓存
- 批量执行 SQL 语句。可以将一系列的 TDengine CLI 命令(以英文 ; 结尾,每个 SQL 语句为一行)按行存放在文件里,在 TDengine CLI 里执行命令 `source <file-name>` 自动执行该文件里所有的 SQL 语句
- 输入 `q` 或 `quit` 或 `exit` 回车,可以退出 TDengine CLI
......@@ -47,19 +47,19 @@ taos --dump-config
### firstEp
| 属性 | 说明 |
| -------- | -------------------------------------------------------------- |
| 适用范围 | 服务端和客户端均适用 |
| 属性 | 说明 |
| -------- | --------------------------------------------------------------- |
| 适用范围 | 服务端和客户端均适用 |
| 含义 | taosd 或者 taos 启动时,主动连接的集群中首个 dnode 的 endpoint |
| 缺省值 | localhost:6030 |
| 缺省值 | localhost:6030 |
### secondEp
| 属性 | 说明 |
| -------- | ------------------------------------------------------------------------------------- |
| 适用范围 | 服务端和客户端均适用 |
| 属性 | 说明 |
| -------- | -------------------------------------------------------------------------------------- |
| 适用范围 | 服务端和客户端均适用 |
| 含义 | taosd 或者 taos 启动时,如果 firstEp 连接不上,尝试连接集群中第二个 dnode 的 endpoint |
| 缺省值 | 无 |
| 缺省值 | 无 |
### fqdn
......@@ -476,14 +476,23 @@ charset 的有效值是 UTF-8。
### arbitrator
| 属性 | 说明 |
| -------- | ----------------------------------------- |
| 适用范围 | 仅服务端适用 |
| 属性 | 说明 |
| -------- | ------------------------------------------ |
| 适用范围 | 仅服务端适用 |
| 含义 | 系统中裁决器的 endpoint,其格式如 firstEp |
| 缺省值 | 空 |
| 缺省值 | 空 |
## 时间相关
### precision
| 属性 | 说明 |
| -------- | ------------------------------------------------- |
| 适用范围 | 仅服务端适用 |
| 含义 | 创建数据库时使用的时间精度 |
| 取值范围 | ms: millisecond; us: microsecond; ns: nanosecond |
| 缺省值 | ms |
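该参数决定建库时缺省使用的时间精度;也可以在建库语句中为单个数据库显式指定时间精度(以下仅为示意):

```sql
-- 为该数据库单独指定微秒精度,覆盖 taos.cfg 中的缺省设置
CREATE DATABASE demo PRECISION 'us';
```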
### rpcTimer
| 属性 | 说明 |
......
label: 参考指南
link:
slug: /reference/
type: generated-index
description: "参考指南是对 TDengine 本身、 TDengine 各语言连接器及自带的工具最详细的介绍。"
label: 参考指南
\ No newline at end of file
---
title: 参考指南
---
参考指南是对 TDengine 本身、 TDengine 各语言连接器及自带的工具最详细的介绍。
```mdx-code-block
import DocCardList from '@theme/DocCardList';
import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
<DocCardList items={useCurrentSidebarCategory().items}/>
```
\ No newline at end of file
......@@ -3,7 +3,7 @@ sidebar_label: TCollector
title: TCollector 写入
---
import Tcollector from "../14-reference/_tcollector.mdx"
import TCollector from "../14-reference/_tcollector.mdx"
TCollector 是 OpenTSDB 的一部分,它用来采集客户端日志发送给数据库。
......@@ -17,7 +17,7 @@ TCollector 是 openTSDB 的一部分,它用来采集客户端日志发送给
- TCollector 已经安装。安装 TCollector 请参考[官方文档](http://opentsdb.net/docs/build/html/user_guide/utilities/tcollector.html#installation-of-tcollector)
## 配置步骤
<Tcollector />
<TCollector />
## 验证方法
......
......@@ -35,7 +35,7 @@ MQTT 是流行的物联网数据传输协议,[EMQX](https://github.com/emqx/em
CREATE TABLE sensor_data (ts timestamp, temperature float, humidity float, volume float, PM10 float, pm25 float, SO2 float, NO2 float, CO float, sensor_id NCHAR(255), area TINYINT, coll_time timestamp);
```
注:表结构以博客[数据传输、存储、展现,EMQ X + TDengine 搭建 MQTT 物联网数据可视化平台](https://www.taosdata.com/blog/2020/08/04/1722.html)为例。后续操作均以此博客场景为例进行,请你根据实际应用场景进行修改。
注:表结构以博客[数据传输、存储、展现,EMQX + TDengine 搭建 MQTT 物联网数据可视化平台](https://www.taosdata.com/blog/2020/08/04/1722.html)为例。后续操作均以此博客场景为例进行,请你根据实际应用场景进行修改。
## 配置 EMQX 规则
......@@ -188,5 +188,5 @@ node mock.js
![img](./emqx/check-result-in-taos.png)
TDengine 详细使用方法请参考 [TDengine 官方文档](https://docs.taosdata.com/)
EMQX 详细使用方法请参考 [EMQ 官方文档](https://www.emqx.io/docs/zh/v4.4/rule/rule-engine.html)
EMQX 详细使用方法请参考 [EMQX 官方文档](https://www.emqx.io/docs/zh/v4.4/rule/rule-engine.html)
label: 第三方工具
link:
type: generated-index
slug: /third-party/
description: TDengine 通过对标准 SQL 命令、常用数据库连接器标准(例如 JDBC)、ORM 以及其他流行时序数据库写入协议(例如 InfluxDB Line Protocol、OpenTSDB JSON、OpenTSDB Telnet 等)的支持可以使 TDengine 非常容易和第三方工具共同使用。
label: 第三方工具
\ No newline at end of file
---
title: 第三方工具
---
TDengine 通过对标准 SQL 命令、常用数据库连接器标准(例如 JDBC)、ORM 以及其他流行时序数据库写入协议(例如 InfluxDB Line Protocol、OpenTSDB JSON、OpenTSDB Telnet 等)的支持可以使 TDengine 非常容易和第三方工具共同使用。
```mdx-code-block
import DocCardList from '@theme/DocCardList';
import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
<DocCardList items={useCurrentSidebarCategory().items}/>
```
\ No newline at end of file
label: 技术内幕
link:
slug: /tdinternal/
type: generated-index
\ No newline at end of file
label: 技术内幕
\ No newline at end of file
---
title: 技术内幕
---
```mdx-code-block
import DocCardList from '@theme/DocCardList';
import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
<DocCardList items={useCurrentSidebarCategory().items}/>
```
\ No newline at end of file
label: 应用实践
link:
slug: /application/
type: generated-index
---
title: 应用实践
---
```mdx-code-block
import DocCardList from '@theme/DocCardList';
import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
<DocCardList items={useCurrentSidebarCategory().items}/>
```
\ No newline at end of file
label: FAQ、教程及其它
link:
slug: /train-faq/
type: generated-index
label: FAQ 及其他
---
title: FAQ 及其他
---
```mdx-code-block
import DocCardList from '@theme/DocCardList';
import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
<DocCardList items={useCurrentSidebarCategory().items}/>
```
\ No newline at end of file
......@@ -3,19 +3,23 @@ title: Introduction
toc_max_heading_level: 2
---
## TDengine Major Features
TDengine is a high-performance, scalable time-series database with SQL support. Its code, including its cluster feature is open source under GNU AGPL v3.0. Besides the database engine, it provides [caching](/develop/cache), [stream processing](/develop/continuous-query), [data subscription](/develop/subscribe) and other functionalities to reduce the complexity and cost of development and operation.
TDengine is a high-performance, scalable time-series database with SQL support. Its code, including its cluster feature, is open source under GNU AGPL v3.0. Besides the database engine, it provides [caching](/develop/cache), [stream processing](/develop/continuous-query), [data subscription](/develop/subscribe) and other functionalities to reduce the complexity and cost of development and operation. The major features are listed below:
This section introduces the major features, competitive advantages, suited scenarios and benchmarks to help you get a high-level picture of TDengine.
## Major Features
The major features are listed below:
1. Besides [using SQL to insert](/develop/insert-data/sql-writing), it supports [Schemaless writing](/reference/schemaless/), and supports [InfluxDB LINE](/develop/insert-data/influxdb-line), [OpenTSDB Telnet](/develop/insert-data/opentsdb-telnet), [OpenTSDB JSON](/develop/insert-data/opentsdb-json) and other protocols.
2. Support seamless integration with third-party data collection agent like [Telegraf](/third-party/telegraf)[Prometheus](/third-party/prometheus)[StatsD](/third-party/statsd)[collectd](/third-party/collectd)[icinga2](/third-party/icinga2), [Tcollector](/third-party/tcollector), [EMQ](/third-party/emq-broker), [HiveMQ](/third-party/hive-mq-broker). Without a line of code, those agents can write data points into TDengine just by configuration.
2. Support seamless integration with third-party data collection agents like [Telegraf](/third-party/telegraf), [Prometheus](/third-party/prometheus), [StatsD](/third-party/statsd), [collectd](/third-party/collectd), [icinga2](/third-party/icinga2), [TCollector](/third-party/tcollector), [EMQX](/third-party/emq-broker), [HiveMQ](/third-party/hive-mq-broker). Without a line of code, those agents can write data points into TDengine just by configuration.
3. Support [all kinds of queries](/query-data), including aggregation, nested query, downsampling, interpolation, etc.
4. Support [user defined functions](/develop/udf)
5. Support [caching](/develop/cache). TDengine always saves the last data point in cache, so Redis is not needed in some scenarios.
6. Support [continuous query](/develop/continuous-query).
7. Support [data subscription](/develop/subscribe), and the filter condition can be specified.
8. Support [cluster](/cluster/), so it can gain more processing power by adding more nodes. High availability is supported by replication.
9. Provide interactive [command line intrerface](/reference/taos-shell) for management, maintainence and ad-hoc query.
9. Provide an interactive [command-line interface](/reference/taos-shell) for management, maintenance and ad-hoc queries.
10. Provide many ways to [import](/operation/import) and [export](/operation/export) data.
11. Provide [monitoring](/operation/monitor) on TDengine running instances.
12. Provide [connectors](/reference/connector/) for [C/C++](/reference/connector/cpp), [Java](/reference/connector/java), [Python](/reference/connector/python), [Go](/reference/connector/go), [Rust](/reference/connector/rust), [Node.js](/reference/connector/node) and other programming languages.
......@@ -25,7 +29,7 @@ TDengine is a high-performance, scalable time-series database with SQL support.
For more detailed features, please read through the whole document.
## TDenginge Highlights
## Competitive Advantages
TDengine makes full use of [the characteristics of time series data](https://tdengine.com/2019/07/09/86.html), such as structured data, no transaction requirements, and rare delete or update operations, and builds its own innovative storage engine and computing engine to differentiate itself from other TSDBs with the following advantages.
......@@ -54,7 +58,7 @@ In the time-series data processing platform, TDengine stands in a role like this
<center>Figure 1. TDengine Technical Ecosystem</center>
On the left side, there are data collection agents like OPC-UA, MQTT, Telegraf and Kafka. On the right side, visualization/BI tools, HMI, Python/R, IoT App can be connected. TDengine itself provides interactive command line interface and web interface for management and maintainence.
On the left side, there are data collection agents like OPC-UA, MQTT, Telegraf and Kafka. On the right side, visualization/BI tools, HMI, Python/R, IoT App can be connected. TDengine itself provides an interactive command-line interface and a web interface for management and maintenance.
## Suited Scenarios for TDengine
......@@ -101,7 +105,7 @@ From the perspective of data sources, designers can analyze the applicability of
| Minimize learning and maintenance costs | | | √ | In addition to being easily configurable, standard SQL support and the Taos shell for ad hoc queries make maintenance simpler, allow reuse and reduce learning costs.|
| Abundant talent supply | √ | | | Given the above, and given the extensive training and professional services provided by TDengine, it is easy to migrate from existing solutions or create a new and lasting solution based on TDengine.|
## Benchmark comparision between TDengine and other databases
## Comparison with other databases
- [Writing Performance Comparison of TDengine and InfluxDB ](https://tdengine.com/2022/02/23/4975.html)
- [Query Performance Comparison of TDengine and InfluxDB](https://tdengine.com/2022/02/24/5120.html)
......
......@@ -135,13 +135,25 @@ The design of one table for one data collection point will require a huge number
STable is a set for a type of data collection point. A STable contains a set of data collection points (tables) that have the same schema or data structure, but with different static attributes (tags). To describe a STable, in addition to defining the table structure of the metrics, it is also necessary to define the schema of its tags. The data type of tags can be int, float, string, and there can be multiple tags, which can be added, deleted, or modified afterward. If the whole system has N different types of data collection points, N STables need to be established.
In the design of TDengine, **a table is used to represent a specific data collection point, and STable is used to represent a set of data collection points of the same type**. When creating a table for a specific data collection point, the user uses a STable as a template and specifies the tag value of the specific DCP (table). Compared with the traditional relational database, the table (a DCP) has static tags, and these tags can be added, deleted, and updated afterward. The relationship between the STable and the tables created based on the STable is as follows:
In the design of TDengine, **a table is used to represent a specific data collection point, and STable is used to represent a set of data collection points of the same type**.
1. A STable contains multiple tables with the same metric schema but with different tag values.
2. The schema of metrics or labels cannot be adjusted through tables, and it can only be changed via STable. Changes to the schema of a STable takes effect immediately for all belonged tables.
3. STable defines only one template and does not store any data or label information by itself. Therefore, data cannot be written to a STable, only to tables.
## Subtable
Query can be executed on both table and STable. For a query on a STable, TDengine will treat the data in all belonged tables as a whole data set for processing. TDengine will first find out the tables that meet the tag filter conditions, then scan the time-series data of these tables to perform aggregation operation, which can greatly reduce the data sets to be scanned, thus greatly improving the performance of data aggregation across multiple DCPs.
When creating a table for a specific data collection point, the user can use a STable as a template and specify the tag values of this specific DCP to create it. **The table created by using a STable as the template is called a subtable** in the TDengine system. The differences between a regular table and a subtable are:
1. A subtable is a table; all SQL commands that can be applied to a regular table can be applied to a subtable.
2. A subtable is a table with extensions: it has static tags (labels), and these tags can be added, deleted, and updated afterward, while a regular table does not have tags.
3. A subtable always belongs to exactly one STable, while a STable may have many subtables. A regular table does not belong to any STable.
4. A regular table cannot be converted into a subtable, and vice versa.
The relationship between a STable and the subtables created based on this STable is as follows:
1. A STable contains multiple subtables with the same metric schema but with different tag values.
2. The schema of metrics or labels cannot be adjusted through subtables; it can only be changed via the STable. Changes to the schema of a STable take effect immediately for all its subtables.
3. STable defines only one template and does not store any data or label information by itself. Therefore, data cannot be written to a STable, only to subtables.
Queries can be executed on both a table (subtable) and a STable. For a query on a STable, TDengine will treat the data in all its subtables as a whole data set for processing. TDengine will first find out the subtables that meet the tag filter conditions, then scan the time-series data of these subtables to perform aggregation operations, which can greatly reduce the data sets to be scanned, thus greatly improving the performance of data aggregation across multiple DCPs.
In the TDengine system, it is recommended to use a subtable instead of a regular table for a DCP.
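A rough sketch of how this works in SQL (d1001, meters and phase2 are illustrative names):

```sql
-- Update the tag value of a subtable afterward
ALTER TABLE d1001 SET TAG location = 'Beijing.Haidian';

-- Schema changes go through the STable and take effect for all its subtables
-- (phase2 is a hypothetical new metric column)
ALTER STABLE meters ADD COLUMN phase2 FLOAT;
```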
## Database
......
......@@ -10,7 +10,7 @@ import AptGetInstall from "./\_apt_get_install.mdx";
## Quick Install
The full package of TDengine includes server(taosd), taosAdapter for connecting with third-party systems and providing RESTful interface, application driver(taosc), command line program(CLI, taos) and some tools. For current version, the server taosd and taosAdapter can only be installed and run on Linux systems, and will support Windows, macOS and other systems in the future. The application driver taosc and TDengine CLI can be installed and run on Windows or Linux. In addition to the RESTful interface, TDengine also provides connectors for a number of programming languages. In versions before 2.4, there is no taosAdapter, and the RESTful interface is provided by the built-in http service of taosd.
The full package of TDengine includes the server (taosd), taosAdapter for connecting with third-party systems and providing a RESTful interface, client driver (taosc), command-line program (CLI, taos) and some tools. For the current version, the server taosd and taosAdapter can only be installed and run on Linux systems. In the future taosd and taosAdapter will also be supported on Windows, macOS and other systems. The client driver taosc and TDengine CLI can be installed and run on Windows or Linux. In addition to the RESTful interface, TDengine also provides connectors for a number of programming languages. In versions before 2.4, there is no taosAdapter, and the RESTful interface is provided by the built-in HTTP service of taosd.
TDengine supports X64/ARM64/MIPS64/Alpha64 hardware platforms, and will support ARM32, RISC-V and other CPU architectures in the future.
......@@ -71,13 +71,13 @@ Check if taosd is running:
systemctl status taosd
```
If everything is fine,you can run TDengine command line interface `taos` to access TDengine and play around it.
If everything is fine, you can run the TDengine command-line interface `taos` to access TDengine and test it out yourself.
:::info
- systemctl requires _root_ privileges. If you are not _root_, please add sudo before the command.
- To get feedback and keep polishing the product, TDengine is collecting some basic usage information, but you can turn it off by setting telemetryReporting to 0 in configuration file taos.cfg.
- TDengine uses FQDN (usually hostname)as the ID for a node. To make system work, you need to configure the FQDN for the server running taosd, and configure the DNS service or hosts file on the the machine where the application or TDengine CLI runs to ensure that the FQDN can be resolved.
- To get feedback and keep improving the product, TDengine is collecting some basic usage information, but you can turn it off by setting telemetryReporting to 0 in configuration file taos.cfg.
- TDengine uses FQDN (usually hostname) as the ID for a node. To make the system work, you need to configure the FQDN for the server running taosd, and configure the DNS service or hosts file on the machine where the application or TDengine CLI runs to ensure that the FQDN can be resolved.
- `systemctl stop taosd` won't stop the server right away; it will wait until all the data in memory are flushed to disk. It may take time depending on the cache size.
TDengine supports the installation on systems which run [`systemd`](https://en.wikipedia.org/wiki/Systemd) for process management. Use `which systemctl` to check if the system has `systemd` installed:
......@@ -92,7 +92,7 @@ If the system does not have `systemd`,you can start TDengine manually by execu
## Command Line Interface
To manage the TDengine running instance,or execute ad-hoc queries, TDengine provides a Command Line Interface(hereinafter referred to as TDengine CLI) taos. To enter into the interactive CLI,execute `taos` on a Linux terminal where TDengine is installed.
To manage the TDengine running instance, or execute ad-hoc queries, TDengine provides a Command Line Interface (hereinafter referred to as TDengine CLI) taos. To enter into the interactive CLI, execute `taos` on a Linux terminal where TDengine is installed.
```bash
taos
......@@ -120,25 +120,25 @@ select * from t;
Query OK, 2 row(s) in set (0.003128s)
```
Besides executing SQL commands, system administrator can check running status, add/drop user accounts and manage the running instances. TAOS CLI with application driver can be installed and run on either Linux or Windows machine. For more details on CLI, please [check here](../reference/taos-shell/).
Besides executing SQL commands, system administrators can check running status, add/drop user accounts and manage the running instances. The TDengine CLI together with the client driver can be installed and run on either Linux or Windows machines. For more details on CLI, please [check here](../reference/taos-shell/).
## Experience the blazing fast speed
After TDengine server is running,execute `taosBenchmark`(named as taosdemo before) from a Linux terminal:
After the TDengine server is running, execute `taosBenchmark` (previously named taosdemo) from a Linux terminal:
```bash
taosBenchmark
```
This command will create a super table "meters" under database "test". Under "meters", 10000 tables are created with name from "d0" to "d9999". Each table has 10000 rows and each row has four columns (ts, current, voltage, phase). Time stamp is starting from "2017-07-14 10:40:00 000" to "2017-07-14 10:40:09 999". Each table has tags "location" and "groupId". groupId is set 1 to 10 randomly, and location is set to "beijing" or "shanghai".
This command will create a super table "meters" under database "test". Under "meters", 10000 tables are created with names from "d0" to "d9999". Each table has 10000 rows and each row has four columns (ts, current, voltage, phase). The timestamp ranges from "2017-07-14 10:40:00 000" to "2017-07-14 10:40:09 999". Each table has tags "location" and "groupId". groupId is set randomly from 1 to 10, and location is set to "beijing" or "shanghai".
This command will insert 100 million rows into database quickly. Depends on the hardware configuration, it only takes a dozen seconds for a regular PC server.
This command will insert 100 million rows into the database quickly. Time to insert depends on the hardware configuration; it only takes a dozen seconds for a regular PC server.
taosBenchmark provides you command line options and configuration file to customize the scenarios, like number of tables, number of rows per table, number of columns and more. Please execute `taosBenchmark --help` to list them. For details on running taosBenchmark, please check [reference for taosBenchmark](/reference/taosbenchmark)
taosBenchmark provides command-line options and a configuration file to customize the scenarios, like number of tables, number of rows per table, number of columns and more. Please execute `taosBenchmark --help` to list them. For details on running taosBenchmark, please check [reference for taosBenchmark](/reference/taosbenchmark)
## Experience query speed
After using taosBenchmark to insert a number of rows data, you can execute queries from TDengine CLI to experience the lightning query speed.
After using taosBenchmark to insert a number of rows of data, you can execute queries from TDengine CLI to experience the lightning fast query speed.
Query the total number of rows under super table "meters":
......
---
sidebar_label: Connect
sidebar_label: Connection
title: Connect to TDengine
description: "This document explains how to establish connection to TDengine, and briefly introduce how to install and use TDengine connectors."
---
......@@ -19,25 +19,25 @@ import InstallOnLinux from "../../14-reference/03-connector/\_windows_install.md
import VerifyLinux from "../../14-reference/03-connector/\_verify_linux.mdx";
import VerifyWindows from "../../14-reference/03-connector/\_verify_windows.mdx";
Any application programs running on any kind of platforms can access TDengine through the REST API provided by TDengine. For the details please refer to [REST API](/reference/rest-api/). Besides, application programs can use the connectors of multiple languages to access TDengine, including C/C++, Java, Python, Go, Node.js, C#, and Rust. This chapter describes how to establish connection to TDengine and briefly introduce how to install and use connectors. For details about the connectors please refer to [Connectors](/reference/connector/)
Any application program running on any kind of platform can access TDengine through the REST API provided by TDengine. For the details, please refer to [REST API](/reference/rest-api/). Besides, application programs can use the connectors of multiple programming languages to access TDengine, including C/C++, Java, Python, Go, Node.js, C#, and Rust. This chapter describes how to establish a connection to TDengine and briefly introduces how to install and use connectors. For details about the connectors, please refer to [Connectors](/reference/connector/)
## Establish Connection
There are two ways to establish connections to TDengine:
There are two ways for a connector to establish connections to TDengine:
1. Connection to taosd can be established through the REST API provided by taosAdapter component, this way is called "REST connection" hereinafter.
2. Connection to taosd can be established through the client side driver taosc, this way is called "Native connection" hereinafter.
1. Connection through the REST API provided by taosAdapter component, this way is called "REST connection" hereinafter.
2. Connection through the TDengine client driver (taosc), this way is called "Native connection" hereinafter.
Either way, same or similar APIs are provided by connectors to access database or execute SQL statements, no obvious difference can be observed.
Key differences:
1. With REST connection, it's not necessary to install the client side driver taosc, it's more friendly for cross-platform with the cost of 30% performance downgrade.
2. With native connection, full compatibility of TDengine can be utilized, like [Parameter Binding](/reference/connector/cpp#Parameter Binding-api), [Subscription](reference/connector/cpp#Subscription), etc.
1. With REST connection, it's not necessary to install the TDengine client driver (taosc); it's more friendly for cross-platform use, at the cost of about a 30% performance downgrade. When taosc has an upgrade, the application does not need to make changes.
2. With native connection, the full functionality of TDengine can be utilized, like [Parameter Binding](/reference/connector/cpp#Parameter Binding-api), [Subscription](reference/connector/cpp#Subscription), etc. But taosc has to be installed, and some platforms may not be supported.
## Install Client Driver taosc
If choosing to use native connection and the client program is not on the same host as TDengine server, TDengine client driver needs to be installed on the host where the client program is. If choosing to use REST connection or the client is on the same host as server side, this step can be skipped. It's better to use same version of client as the server.
If choosing to use the native connection and the application is not on the same host as the TDengine server, the TDengine client driver taosc needs to be installed on the host where the application is. If choosing to use the REST connection or the application is on the same host as the server side, this step can be skipped. It's better to use the same version of taosc as the server.
### Install
......@@ -52,7 +52,7 @@ If choosing to use native connection and the client program is not on the same h
### Verify
After the above installation and configuration are done and making sure TDengine service is already started and in service, the Shell command `taos` can be launched to access TDengine.以
After the above installation and configuration are done and making sure TDengine service is already started and in service, the TDengine command-line interface `taos` can be launched to access TDengine.
<Tabs defaultValue="linux" groupId="os">
<TabItem value="linux" label="Linux">
......@@ -198,7 +198,7 @@ install.packages("RJDBC")
</TabItem>
<TabItem label="C" value="c">
If the client driver taosc is already installed, then the C connector is already available.
If the client driver (taosc) is already installed, then the C connector is already available.
<br/>
</TabItem>
......
---
sidebar_label: Data Model
slug: /model
title: Data Model
---
......@@ -43,13 +42,12 @@ If you are using versions prior to 2.0.15, the `STable` keyword needs to be repl
:::
Similar to creating a normal table, when creating a STable, name and schema need to be provided too. In the STable schema, the first column must be timestamp (like ts in the example), and other columns (like current, voltage and phase in the example) are the data collected. The type of a column can be integer, floating point number, string ,etc. Besides, the schema for tags need to be provided, like location and groupId in the example. The type of a tag can be integer, floating point number, string, etc. The static properties of a data collection point can be defined as tags, like the location, device type, device group ID, manager ID, etc. Tags in the schema can be added, removed or updated. Please refer to [STable](/taos-sql/stable) for more details.
Similar to creating a regular table, when creating a STable, name and schema need to be provided too. In the STable schema, the first column must be timestamp (like ts in the example), and other columns (like current, voltage and phase in the example) are the data collected. The type of a column can be integer, float, double, string, etc. Besides, the schema for tags needs to be provided, like location and groupId in the example. The type of a tag can be integer, float, string, etc. The static properties of a data collection point can be defined as tags, like the location, device type, device group ID, manager ID, etc. Tags in the schema can be added, removed or updated. Please refer to [STable](/taos-sql/stable) for more details.
Each kind of data collection points needs a corresponding STable to be created, so there may be many STables in an application. For electrical power system, we need to create a STable respectively for meters, transformers, busbars, switches. There may be multiple kinds of data collection points on a single device, for example there may be one data collection point for electrical data like current and voltage and another point for environmental data like temperature, humidity and wind direction, multiple STables are required for such kind of device.
For each kind of data collection point, a corresponding STable must be created. There may be many STables in an application. For electrical power systems, we need to create a STable respectively for meters, transformers, busbars and switches. There may be multiple kinds of data collection points on a single device; for example, there may be one data collection point for electrical data like current and voltage and another point for environmental data like temperature, humidity and wind direction, so multiple STables are required for such kind of device.
At most 4096 (or 1024 prior to version 2.1.7.0) columns are allowed in a STable. If there are more than 4096 metrics to be collected for a data collection point, multiple STables are required for such kind of data collection point. There can be multiple databases in a system, while one or more STables can exist in a database.
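For the smart meters example described above, the STable definition might look like the following sketch (see [STable](/taos-sql/stable) for the exact syntax):

```sql
-- Metric columns first (the timestamp must be the first column), tag columns in TAGS()
CREATE STABLE meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT)
  TAGS (location BINARY(64), groupId INT);
```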
## Create Table
A specific table needs to be created for each data collection point. Similar to RDBMS, table name and schema are required to create a table. Besides, one or more tags can be created for each table. To create a table, a STable needs to be used as a template and the values need to be specified for the tags. For example, for the meters in [Table 1](/tdinternal/arch#model_table1), the table can be created using the SQL statement below.
......@@ -60,6 +58,7 @@ CREATE TABLE d1001 USING meters TAGS ("Beijing.Chaoyang", 2);
In the above SQL statement, "d1001" is the table name, "meters" is the STable name, followed by the value of tag "Location" and the value of tag "groupId", which are "Beijing.Chaoyang" and "2" respectively in the example. The tag values can be updated after the table is created. Please refer to [Tables](/taos-sql/table) for details.
In the TDengine system, it's recommended to create a table for a data collection point via a STable. A table created via a STable is called a subtable in some parts of the TDengine documentation. All SQL commands that can be applied to a regular table can also be applied to a subtable.
:::warning
It's not recommended to create a table in a database while using a STable from another database as template.
......
......@@ -22,7 +22,7 @@ import CStmt from "./_c_stmt.mdx";
## Introduction
Application program can execute `INSERT` statement through connectors to insert rows. TAOS Shell can be launched manually to insert data too.
Application programs can execute `INSERT` statements through connectors to insert rows. The TDengine CLI can also be launched manually to insert data.
### Insert Single Row
......@@ -53,7 +53,7 @@ For more details about `INSERT` please refer to [INSERT](/taos-sql/insert).
:::info
- Inserting in batches can gain better performance. Normally, the higher the batch size, the better the performance. Please note that a single row can't exceed 16K bytes and a single SQL statement can't exceed 1M bytes. A batch insert is sketched right after this note.
- Inserting with multiple threads can gain better performance too. However, depending on the system resources on the client side and the server side, with the number of inserting threads grows to a specific point, the performance may drop instead of growing. The proper number of threads need to be tested in a specific environment to find the best number.
- Inserting with multiple threads can gain better performance too. However, depending on the system resources on the application side and the server side, when the number of inserting threads grows beyond a specific point, the performance may drop instead of growing. The proper number of threads needs to be tested in a specific environment to find the best number.
:::
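A sketch of such a batch insert (timestamps and values are made up for illustration):

```sql
-- Multiple rows for d1001 and one row for d1002 in a single statement
INSERT INTO d1001 VALUES ('2017-07-14 10:40:00.000', 10.2, 220, 0.23)
                         ('2017-07-14 10:40:00.001', 10.3, 218, 0.25)
            d1002 VALUES ('2017-07-14 10:40:00.000', 11.5, 221, 0.30);
```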
......
......@@ -15,7 +15,7 @@ import CJson from "./_c_opts_json.mdx";
## Introduction
A JSON string is sued in OpenTSDB JSON to represent one or more rows of data, for exmaple:
A JSON string is used in OpenTSDB JSON to represent one or more rows of data, for example:
```json
[
......
label: Insert
link:
type: generated-index
slug: /insert-data/
description: "TDengine supports multiple protocols of inserting data, including SQL, InfluxDB Line protocol, OpenTSDB Telnet protocol, OpenTSDB JSON protocol. Data can be inserted row by row, or in batch. Data from one or more collecting points can be inserted simultaneously. In the meantime, data can be inserted with multiple threads, out of order data and historical data can be inserted too. InfluxDB Line protocol, OpenTSDB Telnet protocol and OpenTSDB JSON protocol are the 3 kinds of schemaless insert protocols supported by TDengine. It's not necessary to create stable and table in advance if using schemaless protocols, and the schemas can be adjusted automatically according to the data to be inserted."
label: Insert Data
---
title: Insert
---
TDengine supports multiple protocols of inserting data, including SQL, InfluxDB Line protocol, OpenTSDB Telnet protocol, and OpenTSDB JSON protocol. Data can be inserted row by row or in batch. Data from one or more collection points can be inserted simultaneously. In the meantime, data can be inserted with multiple threads, and out-of-order data and historical data can be inserted too. InfluxDB Line protocol, OpenTSDB Telnet protocol and OpenTSDB JSON protocol are the 3 kinds of schemaless insert protocols supported by TDengine. It's not necessary to create a STable or table in advance if using schemaless protocols, and the schemas can be adjusted automatically according to the data to be inserted.
```mdx-code-block
import DocCardList from '@theme/DocCardList';
import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
<DocCardList items={useCurrentSidebarCategory().items}/>
```
\ No newline at end of file
......@@ -21,7 +21,7 @@ import CAsync from "./_c_async.mdx";
## Introduction
SQL is used by TDengine as the language for query. Application programs can send SQL statements to TDengine through REST API or connectors. TDengine CLI `taos` can also be used to execute SQL Ad-Hoc query. Here is the list of major query functionalities supported by TDengine:
SQL is used by TDengine as the query language. Application programs can send SQL statements to TDengine through the REST API or connectors. The TDengine CLI `taos` can also be used to execute ad-hoc SQL queries. Here is the list of major query functionalities supported by TDengine:
- Query on single column or multiple columns
- Filter on tags or data columns: >, <, =, <\>, like
......@@ -47,13 +47,15 @@ taos> select * from d1001 where voltage > 215 order by ts desc limit 2;
Query OK, 2 row(s) in set (0.001100s)
```
To meet the requirements in IoT use cases, some special functions have been added in TDengine, for example `twa` (Time Weighted Average), `spared` (The difference between the maximum and the minimum), `last_row` (the last row), more and more functions will be added to better perform in IoT use cases. Furthermore, continuous query is also supported in TDengine.
To meet the requirements in many use cases, some special functions have been added in TDengine, for example `twa` (time weighted average), `spread` (the difference between the maximum and the minimum), and `last_row` (the last row). More functions will be added to better serve various use cases. Furthermore, continuous query is also supported in TDengine.
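A hedged sketch of how these functions can be used, assuming the subtable d1001 from the `meters` examples elsewhere in this documentation (the time range is illustrative):

```sql
-- Time weighted average of current and max-min spread of voltage over a time range
SELECT TWA(current), SPREAD(voltage) FROM d1001
  WHERE ts >= '2018-10-03 14:38:00.000' AND ts <= '2018-10-03 14:39:00.000';

-- The latest row of the subtable
SELECT LAST_ROW(*) FROM d1001;
```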
For detailed query syntax please refer to [Select](/taos-sql/select).
## Join Query
## Aggregation among Tables
In IoT use cases, there are always multiple data collection points of same kind. A new concept, called STable (abbreviated for super table), is used in TDengine to represent a kind of data collection points, and a table is used to represent a specific data collection point. Tags are used by TDengine to represent the static properties of data collection points. A specific data collection point has its own values for static properties. By specifying filter conditions on tags, join query can be performed efficiently between all the tables belonging to same STable, i.e. same kind of data collection points, can be. Aggregate functions applicable for tables can be used directly on STables, syntax is exactly same.
In many use cases, there are multiple data collection points of the same kind. A new concept, called STable (short for super table), is used in TDengine to represent one kind of data collection point, and a table is used to represent a specific data collection point. Tags are used by TDengine to represent the static properties of data collection points. A specific data collection point has its own values for static properties. By specifying filter conditions on tags, aggregation can be performed efficiently among all the subtables created via the same STable, i.e. the same kind of data collection points. Aggregate functions applicable to tables can be used directly on STables; the syntax is exactly the same.
In summary, for a STable, its subtables can be aggregated by a simple query on the STable; it's a kind of join operation. However, tables belonging to different STables can not be aggregated.
### Example 1
......@@ -109,7 +111,7 @@ taos> SELECT SUM(current) FROM meters where location like "Beijing%" INTERVAL(1s
Query OK, 5 row(s) in set (0.001538s)
```
Down sample also supports time offset. For example, below SQL statement can be used to get the sum of current from all meters but each time window must start at the boundary of 500 milliseconds.
Down sampling also supports time offset. For example, the SQL statement below can be used to get the sum of current from all meters, with each time window starting at the boundary of 500 milliseconds.
```
taos> SELECT SUM(current) FROM meters INTERVAL(1s, 500a);
......@@ -123,7 +125,7 @@ taos> SELECT SUM(current) FROM meters INTERVAL(1s, 500a);
Query OK, 5 row(s) in set (0.001521s)
```
In IoT use cases, it's hard to align the timestamp of the data collected by each collection point. However, a lot of algorithms like FFT require the data to be aligned with same time interval and application programs have to handle by themselves in many systems. In TDengine, it's easy to achieve the alignment using down sampling.
In many use cases, it's hard to align the timestamps of the data collected by different collection points. However, a lot of algorithms, like FFT, require the data to be aligned with the same time interval, and in many systems application programs have to handle this by themselves. In TDengine, it's easy to achieve the alignment using down sampling.
Interpolation can be performed in TDengine if there is no data in a time range.
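A minimal sketch of interpolation via the `FILL` clause, assuming the `meters` STable used elsewhere in this documentation (the time range and fill mode are illustrative):

```sql
-- Fill empty 1-minute windows with the previous value
SELECT AVG(current) FROM meters
  WHERE ts >= '2018-10-03 14:38:00.000' AND ts <= '2018-10-03 14:50:00.000'
  INTERVAL(1m) FILL(PREV);
```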
......
label: Develop
link:
type: generated-index
slug: /develop
description: "The guide is for developers to quickly learn about the functionalities of TDengine, including fundamentals like data model, inserting data, query and advanced features like data subscription, continuous query. For each functionality, sample code of multiple programming languages are provided for developers to get started quickly."
label: Develop
\ No newline at end of file
---
title: Develop
---
This guide helps developers quickly learn about the functionalities of TDengine, including fundamentals like the data model, inserting data and querying, as well as advanced features like data subscription and continuous query. For each functionality, sample code in multiple programming languages is provided for developers to get started quickly.
```mdx-code-block
import DocCardList from '@theme/DocCardList';
import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
<DocCardList items={useCurrentSidebarCategory().items}/>
```
\ No newline at end of file
---
title: Deploy
title: Deployment
---
## Prerequisites
......
......@@ -6,7 +6,7 @@ title: Manage DNODEs
The previous section introduced how to deploy and start a cluster from scratch. Once a cluster is ready, the status of dnodes in the cluster can be shown at any time, a new dnode can be added to scale out the cluster, an existing dnode can be removed, and even load balancing can be performed manually.
:::note
All the commands to be introduced in this chapter need to be run after login TDengine, sometimes it's necessary to use root privilege.
All the commands to be introduced in this chapter need to be run through the TDengine CLI; sometimes root privilege is required.
:::
......@@ -67,7 +67,7 @@ Query OK, 8 row(s) in set (0.001154s)
## Add DNODE
Launch TDengine CLI `taos` and execute to add the end point of a new dnode into the EPI (end point) list of the cluster. "fqdn:port" must be quoted using double quotes.
Launch TDengine CLI `taos` and execute the command below to add the end point of a new dnode into the EP (end point) list of the cluster. "fqdn:port" must be quoted using double quotes.
```sql
CREATE DNODE "fqdn:port";
......@@ -100,7 +100,7 @@ Query OK, 2 row(s) in set (0.001316s)
## Drop DNODE
Launch TDengine CLI `taos` and execute command below to drop or remove a dndoe from the cluster. In the command, `dnodeId` can be gotten from `show dnodes`.
Launch TDengine CLI `taos` and execute the command below to drop or remove a dnode from the cluster. In the command, `dnodeId` can be obtained from `show dnodes`.
```sql
DROP DNODE "fqdn:port";
......
......@@ -7,7 +7,7 @@ title: High Availability and Load Balancing
High availability of vnode and mnode can be achieved through replicas in TDengine.
The number of vnodes is associated with each DB, there can be multiple DBs in a TDengine cluster. For the purpose of operation, different number of replicas can be configured properly for each DB. When creating a database, the parameter `replica` is used to specify the number of replicas, the default value is 1. With single replica, the reliability of the system can't be guaranteed. Whenever one node is down, data service would be unavailable. The number of dnodes in the cluster must NOT be lower than the number of replicas set for any DB, otherwise the `create table` operation would fail with error "more dnodes are needed". Below SQL statement is used to create a database named as "demo" with 3 replicas.
The number of vnodes is associated with each DB; there can be multiple DBs in a TDengine cluster. For the purpose of operation, a different number of replicas can be configured properly for each DB. When creating a database, the parameter `replica` is used to specify the number of replicas, and the default value is 1. With a single replica, the high availability of the system can't be guaranteed: whenever one node is down, the data service would be unavailable. The number of dnodes in the cluster must NOT be lower than the number of replicas set for any DB, otherwise the `create table` operation would fail with the error "more dnodes are needed". The SQL statement below creates a database named "demo" with 3 replicas.
```sql
CREATE DATABASE demo replica 3;
......@@ -40,7 +40,7 @@ If high availability is important for your system, both vnode and mnode must be
Load balancing will be triggered in 3 cases without manual intervention.
- When a new dnode is joined in the cluster, automatic load balancing may be triggered, some data from some dnodes may be transferred to the new dnode automatically.
- When a new dnode is joined in the cluster, automatic load balancing may be triggered, some data from some dnodes may be transferred to the new dnode automatically.
- When a dnode is removed from the cluster, the data from this dnode will be transferred to other dnodes automatically.
- When a dnode is too hot, i.e. too much data has been stored in it, automatic load balancing may be triggered to migrate some vnodes from this dnode to other dnodes.
- :::tip
......
label: Cluster
link:
type: generated-index
slug: /cluster/
description: "TDengine can be deployed in cluster mode to increase the processing capability and high availability. In cluster mode, any data can have multiple replications for the purpose of high availability and load balance. TDengine cluster can be scaled out easily to support more data collecting points and more data."
keywords:
[
cluster,
high availability,
load balance,
scale out
]
---
title: Cluster
keywords: ["cluster", "high availability", "load balance", "scale out"]
---
TDengine can be deployed in cluster mode to increase the processing capability and high availability. In cluster mode, any data can have multiple replications for the purpose of high availability and load balance. TDengine cluster can be scaled out easily to support more data collecting points and more data.
```mdx-code-block
import DocCardList from '@theme/DocCardList';
import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
<DocCardList items={useCurrentSidebarCategory().items}/>
```
\ No newline at end of file
---
sidebar_label: Data Types
title: Data Types
description: "The data types supported by TDengine include timestamp, float, JSON, etc"
---
......
......@@ -19,8 +19,24 @@ CREATE DATABASE [IF NOT EXISTS] db_name [KEEP keep] [DAYS days] [UPDATE 1];
3. UPDATE set to 2 means updating a part of columns for a row is allowed, the columns for which no value is specified will be kept as no change
3. The maximum length of database name is 33 bytes.
4. The maximum length of a SQL statement is 65,480 bytes.
5. For more parameters that can be used when creating a database, like cache, blocks, days, keep, minRows, maxRows, wal, fsync, update, cacheLast, replica, quorum, maxVgroupsPerDb, ctime, comp, prec, Please refer to [Configuration Parameters](/reference/config/).
5. Below are the parameters that can be used when creating a database
- cache: [Description](/reference/config/#cache)
- blocks: [Description](/reference/config/#blocks)
- days: [Description](/reference/config/#days)
- keep: [Description](/reference/config/#keep)
- minRows: [Description](/reference/config/#minrows)
- maxRows: [Description](/reference/config/#maxrows)
- wal: [Description](/reference/config/#wallevel)
- fsync: [Description](/reference/config/#fsync)
- update: [Description](/reference/config/#update)
- cacheLast: [Description](/reference/config/#cachelast)
- replica: [Description](/reference/config/#replica)
- quorum: [Description](/reference/config/#quorum)
- maxVgroupsPerDb: [Description](/reference/config/#maxvgroupsperdb)
- comp: [Description](/reference/config/#comp)
   - precision: [Description](/reference/config/#precision)
6. Please note that all of the parameters mentioned in this section can be configured in the configuration file `taosd.cfg` on the server side and take effect by default; they can be overridden if specified explicitly in the `create database` statement. A hedged example combining a few of these parameters follows this note.
:::
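A minimal sketch of such a statement; the database name and parameter values below are purely illustrative and should be tuned for the actual workload:

```sql
-- Keep data for 365 days, store 10 days of data per file, use 6 cache blocks per vnode, allow updates
CREATE DATABASE IF NOT EXISTS power KEEP 365 DAYS 10 BLOCKS 6 UPDATE 1;
```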
## Show Current Configuration
......
......@@ -22,15 +22,15 @@ CREATE TABLE [IF NOT EXISTS] tb_name (timestamp_field_name TIMESTAMP, field1_nam
:::
### Create Table Using STable As Template
### Create Subtable Using STable As Template
```
CREATE TABLE [IF NOT EXISTS] tb_name USING stb_name TAGS (tag_value1, ...);
```
The above command creates a sub table using the specified super table as template and the specified tab values.
The above command creates a subtable using the specified super table as template and the specified tag values.
### Create Table Using STable As Template With A Part of Tags
### Create Subtable Using STable As Template With A Part of Tags
```
CREATE TABLE [IF NOT EXISTS] tb_name USING stb_name (tag_name1, ...) TAGS (tag_value1, ...);
......
---
sidebar_label: Insert
title: Insert
---
......@@ -106,7 +105,7 @@ Then data in this file can be inserted by below SQL statement:
INSERT INTO d1001 FILE '/tmp/csvfile.csv';
```
## CreateTables Automatically and Insert Rows From File
## Create Tables Automatically and Insert Rows From File
From version 2.1.5.0, tables can be automatically created using a super table as template when inserting data from a CSV file, like below:
......
---
sidebar_label: Select
title: Select
---
......
---
sidebar_label: Functions
title: Functions
---
......
---
sidebar_label: Window
title: Aggregate by Window
sidebar_label: Interval
title: Aggregate by Time Window
---
Aggregation by time window is supported in TDengine. For example, if each temperature sensor reports the temperature every second, the average temperature every 10 minutes can be retrieved by a query with a time window.
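A hedged sketch of such a query, assuming a hypothetical STable `sensors` with a `temperature` column:

```sql
-- Average temperature per 10-minute window (STable and column names are illustrative)
SELECT AVG(temperature) FROM sensors INTERVAL(10m);
```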
......
---
sidebar_label: Limits
title: Limits and Restrictions
title: Limits & Restrictions
---
## Naming Rules
......
---
sidebar_label: JSON
title: JSON Type
---
......
---
sidebar-label: Escape
title: Escape
title: Escape Characters
---
## Escape Characters
The table below lists the escape characters used in TDengine.
| Escape Character | **Actual Meaning** |
| :--------------: | ------------------------ |
......
---
sidebar_label: Keywords
title: Reserved Keywords
title: Keywords
---
## Reserved Keywords
There are about 200 keywords reserved by TDengine; they can't be used as the name of a database, STable or table, whether in upper case, lower case or mixed case.
**Keywords List**
......
---
title: TAOS SQL
description: "The syntax, select, functions and tips supported by TAOS SQL "
description: "The syntax supported by TAOS SQL "
---
This document explains the syntax, select, functions and some tips that can be used in TAOS SQL. It would be easier to understand with some fundamental knowledge of SQL.
This document explains the syntax for operating databases, tables and STables, inserting data, selecting data, and using functions, as well as some tips that can be used in TAOS SQL. It would be easier to understand with some fundamental knowledge of SQL.
TAOS SQL is the major interface for users to write data into or query from TDengine. For ease of use, syntax similar to standard SQL is provided. However, please note that TAOS SQL is not standard SQL. Besides, because TDengine doesn't provide the functionality of deleting time series data, the corresponding statements are not provided in TAOS SQL.
......
---
sidebar_label: Planning
title: Capacity Planning
title: Resource Planning
---
Computing and storage resources need to be planned when using TDengine to build an IoT platform. This chapter describes how to plan the CPU, memory and disk resources required.
......
---
sidebar_label: Tolerance
title: Tolerance and Disaster Recovery
sidebar_label: Fault Tolerance
title: Fault Tolerance & Disaster Recovery
---
## Tolerance
## Fault Tolerance
TDengine uses **WAL**, i.e. Write Ahead Log, to achieve fault tolerance and make sure high availability.
TDengine uses **WAL**, i.e. Write Ahead Log, to achieve fault tolerance and high reliability.
When a data block is received by TDengine, the original data block is first written into WAL. The log in WAL will be deleted only after the data has been written into the data files in the database. Data can be recovered from WAL in case the server is stopped abnormally for any reason and then restarted.
......@@ -14,7 +14,7 @@ There are 2 configuration parameters related to WAL:
- walLevel: 0: wal is disabled; 1: wal is enabled without fsync; 2: wal is enabled with fsync.
- fsync: only valid when walLevel is set to 2, it specifies the interval of invoking fsync. If set to 0, fsync is invoked immediately once WAL is written.
To achieve absolutely no data loss, walLevel needs to be set to 2 and fsync needs to be set to 1. The penalty is the speed of data insertion downgrades. However, if the concurrent threads of data insertion on the client side can reach a big enough number, for example 50, the data insertion performance would be still good enough, our verification shows that the drop is only 30% compared to fsync is set to 3,000 milliseconds.
To achieve absolutely no data loss, walLevel needs to be set to 2 and fsync needs to be set to 1. The penalty is that data ingestion performance degrades. However, if the number of concurrent data insertion threads on the client side is big enough, for example 50, the data ingestion performance would still be good enough; our verification shows that the drop is only 30% compared to setting fsync to 3,000 milliseconds.
## Disaster Recovery
......@@ -24,6 +24,6 @@ TDengine cluster is managed by mnode. To make sure the high availability of mnod
The number of replicas for time series data in TDengine is associated with each database. There can be a lot of databases in a cluster, and each database can be configured with a different number of replicas. When creating a database, the parameter `replica` is used to configure the number of replicas. To achieve high availability, `replica` needs to be higher than 1.
The number of dnodes in a TDengine cluster must NOT be lower than the number of replicas for any database, otherwise it would fail when trying to create table.
The number of dnodes in a TDengine cluster must NOT be lower than the number of replicas for any database, otherwise it would fail when trying to create table.
As long as the dnodes of a TDengine cluster are deployed on different physical machines and replica number is set to bigger than 1, high availability can be achieved without any other assistance. If dnodes of TDengine cluster are deployed in geographically different data centers, disaster recovery can be achieved too.
---
sidebar_label: Import
title: Import Data
title: Data Import
---
There are multiple ways of importing data provided by TDengine: import with script, import from data file, and import using `taosdump`.
......
---
sidebar_label: Export
title: Export Data
title: Data Export
---
There are two ways of exporting data from a TDengine cluster: one is SQL statements in TDengine CLI, the other one is `taosdump`.
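A minimal sketch of the SQL-based export from TDengine CLI; the table and file names are illustrative:

```sql
-- Run inside TDengine CLI: dump all rows of table d1001 into a CSV file in the current directory
SELECT * FROM d1001 >> data.csv;
```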
......
---
sidebar_label: Monitor
title: Monitor TDengine
title: TDengine Monitoring
---
After TDengine is started, a database named `log` for monitoring is created automatically. Information about CPU, memory, disk, bandwidth, number of requests, disk I/O speed and slow queries is written into the `log` database at a predefined interval. Besides, some important system operations, like logon, create user and drop database, as well as alerts and warnings generated in TDengine, are written into the `log` database too. A system operator can view the data in the `log` database from the TDengine CLI or from a web console.
......
---
sidebar_label: Optimize
title: Optimize Performance
title: Performance Optimization
---
After a TDengine cluster has been running for a long enough time, data files may become fragmented because of updating data, deleting tables and deleting expired data, and query performance may be impacted. To resolve the problem of fragmentation, from version 2.1.3.0 a new SQL command `COMPACT` can be used to defragment the data files.
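A hedged sketch of using `COMPACT`, assuming the target vnode group ids have been looked up with `SHOW VGROUPS` (the ids below are illustrative):

```sql
-- Defragment the data files of vnode groups 2 and 3
COMPACT VNODES IN (2, 3);
```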
......
---
sidebar_label: Diagnose
title: Diagnose Problems
title: Problem Diagnostics
---
## Diagnose Network Connection
## Network Connection Diagnostics
When the client is unable to access the server, the network connection between the client side and the server side needs to be checked to find out the root cause and resolve problems.
......
label: Administration
link:
slug: /operation/
type: generated-index
---
title: Administration
---
```mdx-code-block
import DocCardList from '@theme/DocCardList';
import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
<DocCardList items={useCurrentSidebarCategory().items}/>
```
\ No newline at end of file
......@@ -2,28 +2,29 @@
title: REST API
---
为支持各种不同类型平台的开发,TDengine 提供符合 REST 设计标准的 API,即 REST API。为最大程度降低学习成本,不同于其他数据库 REST API 的设计方法,TDengine 直接通过 HTTP POST 请求 BODY 中包含的 SQL 语句来操作数据库,仅需要一个 URL。REST 连接器的使用参见[视频教程](https://www.taosdata.com/blog/2020/11/11/1965.html)。
To support the development of various types of platforms, TDengine provides an API that conforms to REST principles, namely the REST API. To minimize the learning cost, and different from other database REST API designs, TDengine operates the database directly through the SQL statement contained in the BODY of an HTTP POST request, requiring only a URL.
注意:与原生连接器的一个区别是,RESTful 接口是无状态的,因此 `USE db_name` 指令没有效果,所有对表名、超级表名的引用都需要指定数据库名前缀。(从 2.2.0.0 版本开始,支持在 RESTful url 中指定 db_name,这时如果 SQL 语句中没有指定数据库名前缀的话,会使用 url 中指定的这个 db_name。从 2.4.0.0 版本开始,RESTful 默认由 taosAdapter 提供,要求必须在 url 中指定 db_name。)
:::note
One difference from the native connector is that the REST interface is stateless, so the `USE db_name` command has no effect. All references to table names and super table names need to specify the database name prefix. (Since version 2.2.0.0, specifying the db_name in the RESTful URL is supported. If the database name prefix is not specified in the SQL command, the `db_name` specified in the URL will be used. Since version 2.4.0.0, the REST service is provided by taosAdapter by default, and it requires that the `db_name` must be specified in the URL.)
:::
## 安装
## Installation
RESTful 接口不依赖于任何 TDengine 的库,因此客户端不需要安装任何 TDengine 的库,只要客户端的开发语言支持 HTTP 协议即可。
The REST interface does not rely on any TDengine native library, so the client application does not need to install any TDengine libraries. It is enough that the client application's development language supports the HTTP protocol.
## 验证
## Verification
在已经安装 TDengine 服务器端的情况下,可以按照如下方式进行验证。
If the TDengine server is already installed, it can be verified as follows:
下面以 Ubuntu 环境中使用 curl 工具(确认已经安装)来验证 RESTful 接口的正常。
The following example uses the `curl` tool (confirm that it is installed) in an Ubuntu environment to verify that the REST interface is working.
下面示例是列出所有的数据库,请把 h1.taosdata.com 和 6041(缺省值)替换为实际运行的 TDengine 服务 fqdn 和端口号:
The following example lists all databases, replacing `h1.taosdata.com` and `6041` (the default port) with the actual running TDengine service FQDN and port number.
```html
curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'show databases;'
h1.taosdata.com:6041/rest/sql
curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'show databases;' h1.taosdata.com:6041/rest/sql
```
返回值结果如下表示验证通过:
A return value like the following indicates that the verification passed.
```json
{
......@@ -72,111 +73,111 @@ h1.taosdata.com:6041/rest/sql
}
```
## HTTP 请求格式
## HTTP request URL format
```
http://<fqdn>:<port>/rest/sql/[db_name]
```
参数说明:
Parameter Description:
- fqnd: 集群中的任一台主机 FQDN 或 IP 地址
- port: 配置文件中 httpPort 配置项,缺省为 6041
- db_name: 可选参数,指定本次所执行的 SQL 语句的默认数据库库名。(从 2.2.0.0 版本开始支持)
- fqdn: FQDN or IP address of any host in the cluster
- port: httpPort configuration item in the configuration file, default is 6041
- db_name: Optional parameter that specifies the default database name for the executed SQL command. (supported since version 2.2.0.0)
例如:http://h1.taos.com:6041/rest/sql/test 是指向地址为 h1.taos.com:6041 的 url,并将默认使用的数据库库名设置为 test。
For example, `http://h1.taos.com:6041/rest/sql/test` is a URL to `h1.taos.com:6041` and sets the default database name to `test`.
HTTP 请求的 Header 里需带有身份认证信息,TDengine 支持 Basic 认证与自定义认证两种机制,后续版本将提供标准安全的数字签名机制来做身份验证。
The HTTP request header must contain authentication information. TDengine supports both Basic authentication and custom authentication mechanisms, and subsequent versions will provide a standard secure digital signature mechanism for authentication.
- 自定义身份认证信息如下所示(token 稍后介绍)
- The custom authentication information is as follows (the token is introduced later)
```
Authorization: Taosd <TOKEN>
```
- Basic 身份认证信息如下所示
- Basic authentication information is shown below
```
Authorization: Basic <TOKEN>
```
HTTP 请求的 BODY 里就是一个完整的 SQL 语句,SQL 语句中的数据表应提供数据库前缀,例如 \<db_name>.\<tb_name>。如果表名不带数据库前缀,又没有在 url 中指定数据库名的话,系统会返回错误。因为 HTTP 模块只是一个简单的转发,没有当前 DB 的概念。
The HTTP request's BODY is a complete SQL command, and the data table in the SQL statement should be provided with a database prefix, e.g., `db_name.tb_name`. If the table name does not have a database prefix and the database name is not specified in the URL, the system will return an error, because the HTTP module is a simple forwarder and has no awareness of the current DB.
使用 curl 通过自定义身份认证方式来发起一个 HTTP Request,语法如下:
Use `curl` to initiate an HTTP request with a custom authentication method, with the following syntax.
```bash
curl -H 'Authorization: Basic <TOKEN>' -d '<SQL>' <ip>:<PORT>/rest/sql/[db_name]
```
或者
Or
```bash
curl -u username:password -d '<SQL>' <ip>:<PORT>/rest/sql/[db_name]
```
其中,`TOKEN` 为 `{username}:{password}` 经过 Base64 编码之后的字符串,例如 `root:taosdata` 编码后为 `cm9vdDp0YW9zZGF0YQ==`
where `TOKEN` is the string after Base64 encoding of `{username}:{password}`, e.g. `root:taosdata` is encoded as `cm9vdDp0YW9zZGF0YQ==`.
## HTTP 返回格式
## HTTP Return Format
返回值为 JSON 格式,如下:
The return result is in JSON format, as follows:
```json
{
"status": "succ",
"head": ["ts","current", …],
"column_meta": [["ts",9,8],["current",6,4], ],
"head": ["ts", "current", ...],
"column_meta": [["ts",9,8],["current",6,4], ...],
"data": [
["2018-10-03 14:38:05.000", 10.3, ],
["2018-10-03 14:38:15.000", 12.6, ]
["2018-10-03 14:38:05.000", 10.3, ...],
["2018-10-03 14:38:15.000", 12.6, ...]
],
"rows": 2
}
```
说明:
Description:
- status: 告知操作结果是成功还是失败。
- head: 表的定义,如果不返回结果集,则仅有一列 “affected_rows”。(从 2.0.17.0 版本开始,建议不要依赖 head 返回值来判断数据列类型,而推荐使用 column_meta。在未来版本中,有可能会从返回值中去掉 head 这一项。)
- column_meta: 从 2.0.17.0 版本开始,返回值中增加这一项来说明 data 里每一列的数据类型。具体每个列会用三个值来说明,分别为:列名、列类型、类型长度。例如`["current",6,4]`表示列名为“current”;列类型为 6,也即 float 类型;类型长度为 4,也即对应 4 个字节表示的 float。如果列类型为 binary 或 nchar,则类型长度表示该列最多可以保存的内容长度,而不是本次返回值中的具体数据长度。当列类型是 nchar 的时候,其类型长度表示可以保存的 unicode 字符数量,而不是 bytes。
- data: 具体返回的数据,一行一行的呈现,如果不返回结果集,那么就仅有 [[affected_rows]]。data 中每一行的数据列顺序,与 column_meta 中描述数据列的顺序完全一致。
- rows: 表明总共多少行数据。
- status: tells whether the operation result is a success or a failure.
- head: the definition of the table, or just one column "affected_rows" if no result set is returned. (As of version 2.0.17.0, it is recommended not to rely on the head return value to determine the data column type but rather use column_meta. In later versions, the head item may be removed from the return value.)
- column_meta: this item is added to the return value to indicate the data type of each column in the data with version 2.0.17.0 and later versions. Each column is described by three values: column name, column type, and type length. For example, `["current",6,4]` means that the column name is "current", the column type is 6, which is the float type, and the type length is 4, which is the float type with 4 bytes. If the column type is binary or nchar, the type length indicates the maximum length of content stored in the column, not the length of the specific data in this return value. When the column type is nchar, the type length indicates the number of Unicode characters that can be saved, not bytes.
- data: The exact data returned, presented row by row, or just [[affected_rows]] if no result set is returned. The order of the data columns in each row of data is the same as that of the data columns described in column_meta.
- rows: Indicates how many rows of data there are.
column_meta 中的列类型说明:
The column types in column_meta are described as follows:
- 1BOOL
- 2TINYINT
- 3SMALLINT
- 4INT
- 5BIGINT
- 6FLOAT
- 7DOUBLE
- 8BINARY
- 9TIMESTAMP
- 10NCHAR
- 1:BOOL
- 2:TINYINT
- 3:SMALLINT
- 4:INT
- 5:BIGINT
- 6:FLOAT
- 7:DOUBLE
- 8:BINARY
- 9:TIMESTAMP
- 10:NCHAR
## 自定义授权码
## Custom Authorization Code
HTTP 请求中需要带有授权码 `<TOKEN>`,用于身份识别。授权码通常由管理员提供,可简单的通过发送 `HTTP GET` 请求来获取授权码,操作如下:
HTTP requests require an authorization code `<TOKEN>` for identification purposes. The administrator usually provides the authorization code, and it can be obtained simply by sending an `HTTP GET` request as follows:
```bash
curl http://<fqnd>:<port>/rest/login/<username>/<password>
```
其中,`fqdn` 是 TDengine 数据库的 fqdn 或 ip 地址,port 是 TDengine 服务的端口号,`username` 为数据库用户名,`password` 为数据库密码,返回值为 `JSON` 格式,各字段含义如下:
Where `fqdn` is the FQDN or IP address of the TDengine database. `port` is the port number of the TDengine service. `username` is the database username. `password` is the database password. The return value is in `JSON` format, and the meaning of each field is as follows.
- status:请求结果的标志位
- status: flag bit of the request result
- code:返回值代码
- code: return value code
- desc:授权码
- desc: authorization code
获取授权码示例:
Example of getting authorization code.
```bash
curl http://192.168.0.1:6041/rest/login/root/taosdata
```
返回值:
Response body:
```json
{
......@@ -186,15 +187,15 @@ curl http://192.168.0.1:6041/rest/login/root/taosdata
}
```
## 使用示例
## Usage Examples
- 在 demo 库里查询表 d1001 的所有记录:
- Query all records from table d1001 of database demo:
```bash
curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'select * from demo.d1001' 192.168.0.1:6041/rest/sql
```
返回值:
Response body:
```json
{
......@@ -214,13 +215,13 @@ curl http://192.168.0.1:6041/rest/login/root/taosdata
}
```
- 创建库 demo:
- Create database demo:
```bash
curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'create database demo' 192.168.0.1:6041/rest/sql
```
返回值:
Response body:
```json
{
......@@ -232,17 +233,17 @@ curl http://192.168.0.1:6041/rest/login/root/taosdata
}
```
## 其他用法
## Other Uses
### 结果集采用 Unix 时间戳
### Unix timestamps for result sets
HTTP 请求 URL 采用 `sqlt` 时,返回结果集的时间戳将采用 Unix 时间戳格式表示,例如
When the HTTP request URL uses `/rest/sqlt`, the returned result set's timestamp value will be in Unix timestamp format, for example:
```bash
curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'select * from demo.d1001' 192.168.0.1:6041/rest/sqlt
```
返回值:
Response body:
```json
{
......@@ -262,15 +263,15 @@ curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'select * from demo.d1001
}
```
### 结果集采用 UTC 时间字符串
### UTC format for the result set
HTTP 请求 URL 采用 `sqlutc` 时,返回结果集的时间戳将采用 UTC 时间字符串表示,例如
When the HTTP request URL uses `/rest/sqlutc`, the timestamp of the returned result set will be expressed as a UTC format, for example:
```bash
curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'select * from demo.t1' 192.168.0.1:6041/rest/sqlutc
```
返回值:
Response body:
```json
{
......@@ -290,18 +291,17 @@ HTTP 请求 URL 采用 `sqlutc` 时,返回结果集的时间戳将采用 UTC
}
```
## 重要配置项
## Important configuration items
下面仅列出一些与 RESTful 接口有关的配置参数,其他系统参数请看配置文件里的说明。
Only some configuration parameters related to the RESTful interface are listed below. Please see the description in the configuration file for other system parameters.
- 对外提供 RESTful 服务的端口号,默认绑定到 6041(实际取值是 serverPort + 11,因此可以通过修改 serverPort 参数的设置来修改)。
- httpMaxThreads: 启动的线程数量,默认为 2(2.0.17.0 版本开始,默认值改为 CPU 核数的一半向下取整)。
- restfulRowLimit: 返回结果集(JSON 格式)的最大条数,默认值为 10240。
- httpEnableCompress: 是否支持压缩,默认不支持,目前 TDengine 仅支持 gzip 压缩格式。
- httpDebugFlag: 日志开关,默认 131。131:仅错误和报警信息,135:调试信息,143:非常详细的调试信息,默认 131。
- httpDbNameMandatory: 是否必须在 RESTful url 中指定默认的数据库名。默认为 0,即关闭此检查。如果设置为 1,那么每个 RESTful url 中都必须设置一个默认数据库名,否则无论此时执行的 SQL 语句是否需要指定数据库,都会返回一个执行错误,拒绝执行此 SQL 语句。
- The port number of the external RESTful service is bound to 6041 by default (the actual value is serverPort + 11, so it can be changed by modifying the setting of the serverPort parameter).
- httpMaxThreads: the number of threads to start, default is 2 (the default value is rounded down to half of the CPU cores with version 2.0.17.0 and later versions).
- restfulRowLimit: the maximum number of result sets (in JSON format) to return. The default value is 10240.
- httpEnableCompress: whether to support compression, the default is not supported. Currently, TDengine only supports the gzip compression format.
- httpDebugFlag: logging switch, default is 131. 131: error and alarm messages only, 135: debug messages, 143: very detailed debug messages.
- httpDbNameMandatory: users must specify the default database name in the RESTful URL. The default is 0, which turns off this check. If set to 1, users must put a default database name in every RESTful URL. Otherwise, it will return an execution error and reject this SQL statement, regardless of whether the SQL statement executed at this time requires a specified database.
:::note
如果使用 taosd 提供的 REST API, 那么以上配置需要写在 taosd 的配置文件 taos.cfg 中。如果使用 taosAdaper 提供的 REST API, 那么需要参考 taosAdaper [对应的配置方法](/reference/taosadapter/)。
If you are using the REST API provided by taosd, you should write the above configuration in taosd's configuration file taos.cfg. If you use the REST API of taosAdapter, you need to refer to taosAdapter [corresponding configuration method](/reference/taosadapter/).
:::
--
---
title: Connector
---
TDengine provides a rich application development interface. To facilitate users to develop their own applications quickly, TDengine supports connectors for multiple programming languages, including official connectors for C/C++, Java, Python, Go, Node.js, C#, and Rust. These connectors support connecting to TDengine clusters using both native interfaces (taosc) and REST interfaces (not supported in some languages yet). Community developers have also contributed several unofficial connectors, such as the ADO.NET connector, the Lua connector, and the PHP connector.
TDengine provides a rich set of APIs (application development interface). To facilitate users to develop their applications quickly, TDengine supports connectors for multiple programming languages, including official connectors for C/C++, Java, Python, Go, Node.js, C#, and Rust. These connectors support connecting to TDengine clusters using both native interfaces (taosc) and REST interfaces (not supported in a few languages yet). Community developers have also contributed several unofficial connectors, such as the ADO.NET connector, the Lua connector, and the PHP connector.
! [image-connector](/img/connector.png)
![image-connector](/img/connector.png)
## Supported platforms
Currently, TDengine's native interface connectors support hardware platforms such as X64/X86/ARM64/ARM32/MIPS/Alpha and development environments such as Linux/Win64/Win32. The comparison matrix is as follows.
| **CPU** | **OS** | **JDBC** | **Python** | **Go** | **Node.js** | **C#** | **Rust** | C/C++ |
| -------------- | --------- | -------- | ---------- | ------ | ----------- | ------ | -------- | ----- |
| | **X86 64bit** | **Linux** | ● | ● | ● | ● | ● | ● | ● | ●
| **X86 64bit** | **Win64** | ● | ● | ● | ● | ● | ● | ● | ●
| **X86 64bit** | **Win32** | ● | ● ● | ○ | ○ | ●
| **X86 32bit** | **Win32** | ○ | ○ | ○ | ○ | ○ | ●
| **ARM64** | **Linux** | ● | ● ● | ○ | ○ | ●
| **ARM32** | **Linux** | ● ● ● ● ● ● | ○ | ○ | ●
| **MIPS Longcore** | **Linux** | ○ | ○ | ○ | ○ | ○ | ○ | ○
| **Alpha Shenwei** | **Linux** | ○ | ○ ○ | -- | -- | --- | --- | ○ |
| **X86 Haiguang** | **Linux** | ○ | ○ | ○ | -- | -- | --- | ○ |
| ------- | ------ | -------- | ---------- | ------ | ----------- | ------ | -------- | ----- |
| **X86 64bit** | **Linux** | ● | ● | ● | ● | ● | ● | ● |
| **X86 64bit** | **Win64** | ● | ● | ● | ● | ● | ● | ● |
| **X86 64bit** | **Win32** | ● | ● | ● | ● | ○ | ○ | ● |
| **X86 32bit** | **Win32** | ○ | ○ | ○ | ○ | ○ | ○ | ● |
| **ARM64** | **Linux** | ● | ● | ● | ● | ○ | ○ | ● |
| **ARM32** | **Linux** | ● | ● | ● | ● | ○ | ○ | ● |
| **MIPS** | **Linux** | ○ | ○ | ○ | ○ | ○ | ○ | ○ |
Where ● means the official test verification passed, ○ means the unofficial test verification passed, -- means no assurance.
......@@ -46,13 +44,13 @@ Comparing the connector support for TDengine functional features as follows.
| **Functional Features** | **Java** | **Python** | **Go** | **C#** | **Node.js** | **Rust** |
| -------------- | -------- | ---------- | ------ | ------ | ----------- | -------- |
| **Connection Management** | Support | Support | Support | Support | Support | Support | Support
| Support | Support | Support | Support | Support | Support | Support | Support | Support
| Support | Support | Support | Support | Support | Support | Support | Support | Support
| **Parameter Binding** | Support | Support | Support | Support | Support | Support | Support
| Support | Support | Support | Support | Support | Support | Support | Not Supported
| **Schemaless** | Support | Support | Support | Support | Support | Support | Support
| **DataFrame** | Not Supported | Support | Not Supported | Not Supported | Not Supported | Not Supported
| **Connection Management** | Support | Support | Support | Support | Support | Support |
| **Regular Query** | Support | Support | Support | Support | Support | Support |
| **Continous Query** | Support | Support | Support | Support | Support | Support |
| **Parameter Binding** | Support | Support | Support | Support | Support | Support |
| **Subscription** | Support | Support | Support | Support | Support | Not Supported |
| **Schemaless** | Support | Support | Support | Support | Support | Support |
| **DataFrame** | Not Supported | Support | Not Supported | Not Supported | Not Supported | Not Supported |
:::info
The different database framework specifications for various programming languages do not mean that all C/C++ interfaces need a wrapper.
......@@ -62,14 +60,14 @@ The different database framework specifications for various programming language
| **Functional Features** | **Java** | **Python** | **Go** | **C# (not supported yet)** | **Node.js** | **Rust** |
| ------------------------------ | -------- | ---------- | -------- | ------------------ | ----------- | -------- |
| **Connection Management** | Support | Support | Support | N/A | Support | Support | Support
| Support | Support | N/A | Support | Support | Support
| Support | Support | N/A | Support | Support | Support
| Support | N/A | Support | N/A | Support | N/A
| | N/A | Support | N/A | Support | N/A
| **Schemaless** | Not supported | N/A | Not supported | Not supported | N/A
| N/A | Not Supported | Not Supported | N/A
| **DataFrame** | Not supported | Support | Not supported | N/A | Not supported | Not supported
| **Connection Management** | Support | Support | Support | N/A | Support | Support |
| **Regular Query** | Support | Support | Support | N/A | Support | Support |
| **Continous Query ** | Support | Support | Support | N/A | Support | Support |
| **Parameter Binding** | Not Supported | Not Supported | Not Supported | N/A | Not Supported | Not Supported |
| **Subscription** | Not Supported | Not Supported | Not Supported | N/A | Not Supported | Not Supported |
| **Schemaless** | Not supported | Not Supported | Not Supported | N/A | Not Supported | Not supported |
| **Bulk Pulling (based on WebSocket) **| Support | Not Supported | Not Supported | N/A | Not Supported | Not Supported |
| **DataFrame** | Not supported | Support | Not supported | N/A | Not supported | Not supported |
:::warning
......@@ -79,12 +77,12 @@ The different database framework specifications for various programming language
import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";
import InstallOnWindows from ". /_linux_install.mdx";
import InstallOnLinux from ". /_windows_install.mdx";
import VerifyWindows from ". /_verify_windows.mdx";
import VerifyLinux from ". /_verify_linux.mdx";
import InstallOnWindows from "./_linux_install.mdx";
import InstallOnLinux from "./_windows_install.mdx";
import VerifyWindows from "./_verify_windows.mdx";
import VerifyLinux from "./_verify_linux.mdx";
## Install client driver
## Install Client Driver
:::info
The client driver needs to be installed if you use the native interface connector on a system that does not have the TDengine server software installed.
......
......@@ -15,17 +15,19 @@ import PkgList from "/components/PkgList";
Once the package is unzipped, you will see the following files in the directory:
- _ install_client.sh_: install script
- _ taos.tar.gz_: application driver package
- _ driver_: TDengine application driver
- _ taos.tar.gz_: client driver package
- _ driver_: TDengine client driver
- _examples_: some example programs of different programming languages (C/C#/go/JDBC/MATLAB/python/R)
You can run `install_client.sh` to install it.
4. Edit taos.cfg
Edit the `taos.cfg` file (the full path is `/etc/taos/taos.cfg` by default), and modify `firstEP` to the actual TDengine server's End Point, for example `h1.tdengine.com:6030`
:::tip
1. If the computer does not run the TDengine service but installs the TDengine application driver, then you need to config `firstEP` in `taos.cfg` only, and there is no need to configure `FQDN`;
1. If the computer does not run the TDengine service but installs the TDengine client driver, then you need to config `firstEP` in `taos.cfg` only, and there is no need to configure `FQDN`;
2. If you encounter the "Unable to resolve FQDN" error, please make sure the FQDN in the `/etc/hosts` file of the current computer is correctly configured, or the DNS service is correctly configured.
......
......@@ -4,7 +4,7 @@
Since the TDengine client driver is written in C, using the native connection requires loading the client driver shared library file, which is usually included in the TDengine installer. You can install either the standard TDengine server installation package or the [TDengine client installation package](/get-started/). For Windows development, you need to install the corresponding [Windows client](https://www.taosdata.com/cn/all-downloads/#TDengine-Windows-Client) for TDengine.
- libtaos.so: After successful installation of TDengine on a Linux system, the dependent Linux version of the client driver libtaos.so file will be automatically copied to /usr/lib/libtaos.so, which is included in the Linux scanable path and does not need to be specified separately.
- libtaos.so: After successful installation of TDengine on a Linux system, the dependent Linux version of the client driver `libtaos.so` file will be automatically linked to `/usr/lib/libtaos.so`, which is included in the Linux scannable path and does not need to be specified separately.
- taos.dll: After installing the client on Windows, the dependent Windows version of the client driver taos.dll file will be automatically copied to the system default search path C:/Windows/System32, again without the need to specify it separately.
:::
Execute `taos` directly under the Linux shell to connect to the TDengine service and enter the TDengine CLI interface, as shown in the following example.
Execute TDengine CLI program `taos` directly from the Linux shell to connect to the TDengine service and enter the TDengine CLI interface, as shown in the following example.
```text
$ taos
......
Go to the C:\TDengine directory under cmd and execute `taos.exe` directly to connect to the TDengine service and enter the TDengine CLI interface, for example, as follows:
Go to the `C:\TDengine` directory from `cmd` and execute TDengine CLI program `taos.exe` directly to connect to the TDengine service and enter the TDengine CLI interface, for example, as follows:
```text
C:\TDengine>taos
......
......@@ -4,16 +4,16 @@ import PkgList from "/components/PkgList";
<PkgList type={1} sys="Windows" />
[all downloads](https://www.taosdata.com/cn/all-downloads/)
[All downloads](https://www.taosdata.com/cn/all-downloads/)
2. Execute the installer, select the default value as prompted, and complete the installation
3. Installation path
The default installation path is C:\TDengine, including the following files (directories).
- _taos.exe_ : TDengine CLI command line program
- _taos.exe_ : TDengine CLI command-line program
- _cfg_ : configuration file directory
- _driver_: application driver dynamic link library
- _driver_: client driver dynamic link library
- _examples_: sample programs bash/C/C#/go/JDBC/Python/Node.js
- _include_: header files
- _log_ : log file
......@@ -25,7 +25,7 @@ import PkgList from "/components/PkgList";
:::tip
1. If you use FQDN to connect to the server, you must ensure the local network environment DNS is configured, or add FQDN addressing records in the `hosts` file, e.g., edit C:\Windows\system32\drivers\etc\hosts and add a record like the following: `192.168.1.99 h1.tados.com`. 2.
2. Uninstall: Run unins000.exe to uninstall the TDengine application driver.
1. If you use FQDN to connect to the server, you must ensure the local network environment DNS is configured, or add FQDN addressing records in the `hosts` file, e.g., edit C:\Windows\system32\drivers\etc\hosts and add a record like the following: `192.168.1.99 h1.taos.com`.
2. Uninstall: Run unins000.exe to uninstall the TDengine client driver.
:::
......@@ -17,9 +17,9 @@ import CSQuery from "../../04-develop/04-query-data/_cs.mdx"
import CSAsyncQuery from "../../04-develop/04-query-data/_cs_async.mdx"
TDengine.Connector` is a C# language connector provided by TDengine that allows C# developers to develop C# applications that access TDengine cluster data.
`TDengine.Connector` is a C# language connector provided by TDengine that allows C# developers to develop C# applications that access TDengine cluster data.
The `TDengine.Connector` connector supports connection to TDengine runtime instances via the TDengine client driver (taosc), providing data writing, querying, subscription, schemaless data writing, bind interface data writing, etc. The `TDengine.Connector` currently does not provide a REST connection. REST connection is not yet available. Users can write their RESTful APIs by referring to the [RESTful APIs](https://docs.taosdata.com//reference/restful-api/) documentation.
The `TDengine.Connector` connector supports connecting to TDengine instances via the TDengine client driver (taosc), providing data writing, querying, subscription, schemaless writing, bind interface, etc. The `TDengine.Connector` currently does not provide a REST connection interface. Developers can write their RESTful application by referring to the [RESTful APIs](https://docs.taosdata.com//reference/restful-api/) documentation.
This article describes how to install `TDengine.Connector` in a Linux or Windows environment and connect to TDengine clusters via `TDengine.Connector` to perform basic operations such as data writing and querying.
......@@ -31,15 +31,15 @@ The supported platforms are the same as those supported by the TDengine client d
## Version support
Please refer to [version support list](/reference/connector#version support)
Please refer to [version support list](/reference/connector#version-support)
## Supported features
1. connection management
2. general query
3. continuous query
4. parameter binding
5. subscription function
1. Connection Management
2. General Query
3. Continuous Query
4. Parameter Binding
5. Subscription
6. Schemaless
## Installation Steps
......@@ -50,7 +50,7 @@ Please refer to [version support list](/reference/connector#version support)
* [Nuget Client](https://docs.microsoft.com/en-us/nuget/install-nuget-client-tools) (optional installation)
* Install TDengine client driver, please refer to [Install client driver](/reference/connector#install-client-driver) for details
### Install using dotnet CLI
### Install via dotnet CLI
<Tabs defaultValue="CLI">
<TabItem value="CLI" label="Get C# driver using dotnet CLI">
......@@ -61,7 +61,7 @@ You can reference the `TDengine.Connector` published in Nuget to the current pro
dotnet add package TDengine.Connector
```
</TabItem
</TabItem>
<TabItem value="source" label="Use source code to get C# driver">
You can download TDengine's source code and directly reference the latest version of the TDengine.Connector library
......@@ -179,7 +179,7 @@ namespace TDengineExample
1. "Unable to establish connection", "Unable to resolve FQDN"
Usually, because the FQDN configuration is incorrect, you can refer to [How to understand TDengine's FQDN thoroughly](https://www.taosdata.com/blog/2021/07/29/2741.html) to solve it. 2.
Usually, it is caused by an incorrect FQDN configuration. You can refer to [How to understand TDengine's FQDN (Chinese)](https://www.taosdata.com/blog/2021/07/29/2741.html) to solve it.
2. Unhandled exception. System.DllNotFoundException: Unable to load DLL 'taos' or one of its dependencies: The specified module cannot be found.
......
......@@ -15,9 +15,9 @@ import GoOpenTSDBTelnet from "../../04-develop/03-insert-data/_go_opts_telnet.md
import GoOpenTSDBJson from "../../04-develop/03-insert-data/_go_opts_json.mdx"
import GoQuery from "../../04-develop/04-query-data/_go.mdx"
`driver-go` is the official Go language connector for TDengine, which implements the interface to the Go language [ database/sql ](https://golang.org/pkg/database/sql/) package. Go developers can use it to develop applications that access TDengine cluster data.
`driver-go` is the official Go language connector for TDengine, which implements the interface to the Go language [database/sql](https://golang.org/pkg/database/sql/) package. Go developers can use it to develop applications that access TDengine cluster data.
`driver-go` provides two ways to establish connections. One is **native connection**, which connects to TDengine runtime instances natively through the TDengine client driver (taosc), supporting data writing, querying, subscriptions, schemaless interface, and parameter binding interface. The other is the **REST connection**, which connects to TDengine runtime instances via the REST interface provided by taosAdapter. The set of features implemented by the REST connection differs slightly from the native connection.
`driver-go` provides two ways to establish connections. One is **native connection**, which connects to TDengine instances natively through the TDengine client driver (taosc), supporting data writing, querying, subscriptions, schemaless writing, and bind interface. The other is the **REST connection**, which connects to TDengine instances via the REST interface provided by taosAdapter. The set of features implemented by the REST connection differs slightly from the native connection.
This article describes how to install `driver-go` and connect to TDengine clusters and perform basic operations such as data query and data writing through `driver-go`.
......@@ -30,13 +30,13 @@ REST connections are supported on all platforms that can run Go.
## Version support
Please refer to [version support list](/reference/connector#version support)
Please refer to [version support list](/reference/connector#version-support)
## Supported features
### Native connections
A "native connection" is established by the connector directly to the TDengine runtime instance via the TDengine client driver (taosc). The supported functional features are
A "native connection" is established by the connector directly to the TDengine instance via the TDengine client driver (taosc). The supported functional features are:
* Normal queries
* Continuous queries
......@@ -46,7 +46,7 @@ A "native connection" is established by the connector directly to the TDengine r
### REST connection
A "REST connection" is a connection between a connector and a TDengine runtime instance via the REST API provided by the taosAdapter component. The following features are supported.
A "REST connection" is a connection between the application and the TDengine instance via the REST API provided by the taosAdapter component. The following features are supported:
* General queries
* Continuous queries
......@@ -75,7 +75,7 @@ Configure the environment variables and check the command.
go mod init taos-demo
``` text
2. Introduce taosSql: ``text
2. Introduce taosSql
```go
import (
......@@ -101,7 +101,7 @@ Configure the environment variables and check the command.
### Data source name (DSN)
Data source names have a standard format, e.g. [PEAR DB](http://pear.php.net/manual/en/package.database.db.intro-dsn.php), but no type prefix (square brackets indicate optionally): the
Data source names have a standard format, e.g. [PEAR DB](http://pear.php.net/manual/en/package.database.db.intro-dsn.php), but no type prefix (square brackets indicate optionally):
``` text
[username[:password]@][protocol[(address)]]/[dbname][?param1=value1&... &paramN=valueN]
......@@ -112,7 +112,8 @@ DSN in full form.
```text
username:password@protocol(address)/dbname?param=value
```
### Connecting using connectors
### Connecting via connector
<Tabs defaultValue="native">
<TabItem value="native" label="native connection">
......@@ -121,7 +122,7 @@ _taosSql_ implements Go's `database/sql/driver` interface via cgo. You can use t
Use `taosSql` as `driverName` and use a correct [DSN](#DSN) as `dataSourceName`, DSN supports the following parameters.
* configPath specifies the taos.cfg directory
* configPath specifies the `taos.cfg` directory
Example.
......@@ -145,7 +146,7 @@ func main() {
}
```
</TabItem
</TabItem>
<TabItem value="rest" label="REST connection">
_taosRestful_ implements Go's `database/sql/driver` interface via `http client`. You can use the [`database/sql`](https://golang.org/pkg/database/sql/) interface by simply introducing the driver.
......@@ -210,7 +211,7 @@ func main() {
## Usage limitations
Since the REST interface is stateless, the `use db` syntax will not work. You need to put the db name into the SQL statement, e.g. `create table if not exists tb1 (ts timestamp, a int)` to `create table if not exists test.tb1 (ts timestamp, a int)` otherwise it will report the error `[0x217] Database not specified or available`.
Since the REST interface is stateless, the `use db` syntax will not work. You need to put the db name into the SQL command, e.g. `create table if not exists tb1 (ts timestamp, a int)` to `create table if not exists test.tb1 (ts timestamp, a int)` otherwise it will report the error `[0x217] Database not specified or available`.
You can also put the db name in the DSN by changing `root:taosdata@http(localhost:6041)/` to `root:taosdata@http(localhost:6041)/test`. This method is supported by taosAdapter since TDengine 2.4.0.5. Executing the `create database` statement when the specified db does not exist will not report an error, while executing other queries or writing against that db will report an error.
......@@ -268,27 +269,27 @@ func main() {
1. Cannot find the package `github.com/taosdata/driver-go/v2/taosRestful`
Change the reference to `github.com/taosdata/driver-go/v2` in the require block in `go.mod` to `github.com/taosdata/driver-go/v2 develop`, then execute `go mod tidy`.
Change the `github.com/taosdata/driver-go/v2` line in the require block of the `go.mod` file to `github.com/taosdata/driver-go/v2 develop`, then execute `go mod tidy`.
2. stmt (parameter binding) related interface in database/sql crashes
2. bind interface in database/sql crashes
The REST connection does not support the parameter binding interface. It is recommended to use `db.Exec` and `db.Query` instead.
3. error `[0x217] Database not specified or available` after executing other statements with `use db` statement
The execution of SQL statements in the REST interface is not contextual, so using `use db` statement will not work, see the usage restrictions section above.
The execution of SQL commands via the REST interface is not contextual, so the `use db` statement will not work; see the usage restrictions section above.
4. use taosSql without error use taosRestful with error `[0x217] Database not specified or available`
4. Using `taosSql` reports no error but using `taosRestful` reports the error `[0x217] Database not specified or available`
Because the REST interface is stateless, using the `use db` statement will not take effect. See the usage restrictions section above.
5. Upgrade `github.com/taosdata/driver-go/v2/taosRestful`
Change the reference to `github.com/taosdata/driver-go/v2` in the `go.mod` file to `github.com/taosdata/driver-go/v2 develop`, then execute `go mod tidy`.
Change the `github.com/taosdata/driver-go/v2` line in the `go.mod` file to `github.com/taosdata/driver-go/v2 develop`, then execute `go mod tidy`.
6. `readBufferSize` parameter has no significant effect after being increased
If you increase `readBufferSize` will reduce the number of `syscall` calls when fetching results. If the query result is more petite, modifying this parameter will not improve significantly. If you increase the parameter too much, the bottleneck will be parsing JSON data. If you need to optimize the query speed, you must adjust the value according to the actual situation to achieve the best query result.
Increasing `readBufferSize` reduces the number of `syscall` calls when fetching results. If the query result is small, modifying this parameter will not bring a noticeable improvement. If you increase the value too much, the bottleneck becomes parsing the JSON data. To optimize query speed, adjust the value according to the actual situation to achieve the best result.
7. Query efficiency is reduced when the `disableCompression` parameter is set to `false`
......@@ -408,4 +409,4 @@ The `af` package encapsulates TDengine advanced functions such as connection man
## API Reference
Full API see [driver-go documentation](https://pkg.go.dev/github.com/taosdata/driver-go/v2)
\ No newline at end of file
For the full API, see the [driver-go documentation](https://pkg.go.dev/github.com/taosdata/driver-go/v2)
......@@ -16,86 +16,86 @@ import NodeOpenTSDBJson from "../../04-develop/03-insert-data/_js_opts_json.mdx"
import NodeQuery from "../../04-develop/04-query-data/_js.mdx";
import NodeAsyncQuery from "../../04-develop/04-query-data/_js_async.mdx";
`td2.0-connector` 和 `td2.0-rest-connector` 是 TDengine 的官方 Node.js 语言连接器。Node.js 开发人员可以通过它开发可以存取 TDengine 集群数据的应用软件。
`td2.0-connector` and `td2.0-rest-connector` are the official Node.js language connectors for TDengine. Node.js developers can develop applications to access TDengine instance data.
`td2.0-connector` 是**原生连接器**,它通过 TDengine 客户端驱动程序(taosc)连接 TDengine 运行实例,支持数据写入、查询、订阅、schemaless 接口和参数绑定接口等功能。`td2.0-rest-connector` 是 **REST 连接器**,它通过 taosAdapter 提供的 REST 接口连接 TDengine 的运行实例。REST 连接器可以在任何平台运行,但性能略为下降,接口实现的功能特性集合和原生接口有少量不同。
`td2.0-connector` is a **native connector** that connects to TDengine instances via the TDengine client driver (taosc) and supports data writing, querying, subscriptions, schemaless writing, and bind interface. The `td2.0-rest-connector` is a **REST connector** that connects to TDengine instances via the REST interface provided by taosAdapter. The REST connector can run on any platform, but performance is slightly degraded, and the interface implements a somewhat different set of functional features than the native interface.
Node.js 连接器源码托管在 [GitHub](https://github.com/taosdata/taos-connector-node)。
The Node.js connector source code is hosted on [GitHub](https://github.com/taosdata/taos-connector-node).
## 支持的平台
## Supported Platforms
原生连接器支持的平台和 TDengine 客户端驱动支持的平台一致。
REST 连接器支持所有能运行 Node.js 的平台。
The platforms supported by the native connector are the same as those supported by the TDengine client driver.
The REST connector supports all platforms that can run Node.js.
## 版本支持
## Version support
请参考[版本支持列表](/reference/connector#版本支持)
Please refer to [version support list](/reference/connector#version-support)
## 支持的功能特性
## Supported features
### 原生连接器
### Native connectors
1. 连接管理
2. 普通查询
3. 连续查询
4. 参数绑定
5. 订阅功能
1. connection management
2. general query
3. continuous query
4. parameter binding
5. subscription function
6. Schemaless
### REST 连接器
### REST Connector
1. 连接管理
2. 普通查询
3. 连续查询
1. connection management
2. general queries
3. continuous queries
## 安装步骤
## Installation steps
### 安装前准备
### Pre-installation
- 安装 Node.js 开发环境
- 如果使用 REST 连接器,跳过此步。但如果使用原生连接器,请安装 TDengine 客户端驱动,具体步骤请参考[安装客户端驱动](/reference/connector#安装客户端驱动)。我们使用 [node-gyp](https://github.com/nodejs/node-gyp) 和 TDengine 实例进行交互,还需要根据具体操作系统来安装下文提到的一些依赖工具。
- Install the Node.js development environment
- If you are using the REST connector, skip this step. However, if you use the native connector, please install the TDengine client driver. Please refer to [Install Client Driver](/reference/connector#Install-Client-Driver) for more details. We use [node-gyp](https://github.com/nodejs/node-gyp) to interact with TDengine instances and also need to install some dependencies mentioned below depending on the specific OS.
<Tabs defaultValue="Linux">
<TabItem value="Linux" label="Linux 系统安装依赖工具">
<TabItem value="Linux" label="Linux system installation dependencies">
- `python` (建议`v2.7` , `v3.x.x` 目前还不支持)
- `td2.0-connector` 2.0.6 支持 Node.js LTS v10.9.0 或更高版本, Node.js LTS v12.8.0 或更高版本;2.0.5 及更早版本支持 Node.js LTS v10.x 版本。其他版本可能存在包兼容性的问题
- `python` (`v2.7` is recommended; `v3.x.x` is currently not supported)
- `td2.0-connector` 2.0.6 supports Node.js LTS v10.9.0 or later and Node.js LTS v12.8.0 or later; 2.0.5 and earlier support Node.js LTS v10.x versions. Other versions may have package compatibility issues.
- `make`
- C 语言编译器,[GCC](https://gcc.gnu.org) v4.8.5 或更高版本
- C compiler, [GCC](https://gcc.gnu.org) v4.8.5 or higher
</TabItem>
<TabItem value="Windows" label="Windows 系统安装依赖工具">
<TabItem value="Windows" label="Windows system installation dependencies">
- 安装方法 1
- Installation method 1
使用微软的[ windows-build-tools ](https://github.com/felixrieseberg/windows-build-tools)在`cmd` 命令行界面执行`npm install --global --production windows-build-tools` 即可安装所有的必备工具。
Use Microsoft's [windows-build-tools](https://github.com/felixrieseberg/windows-build-tools): executing `npm install --global --production windows-build-tools` from the `cmd` command-line interface installs all the necessary tools.
- 安装方法 2
- Installation method 2
手动安装以下工具:
Manually install the following tools:
- 安装 Visual Studio 相关:[Visual Studio Build 工具](https://visualstudio.microsoft.com/thank-you-downloading-visual-studio/?sku=BuildTools) 或者 [Visual Studio 2017 Community](https://visualstudio.microsoft.com/pl/thank-you-downloading-visual-studio/?sku=Community)
- 安装 [Python](https://www.python.org/downloads/) 2.7(`v3.x.x` 暂不支持) 并执行 `npm config set python python2.7`
- 进入`cmd`命令行界面,`npm config set msvs_version 2017`
- Install Visual Studio related: [Visual Studio Build Tools](https://visualstudio.microsoft.com/thank-you-downloading-visual-studio/?sku=BuildTools) or [Visual Studio 2017 Community](https://visualstudio.microsoft.com/pl/thank-you-downloading-visual-studio/?sku=Community)
- Install [Python](https://www.python.org/downloads/) 2.7 (`v3.x.x` is not supported) and execute `npm config set python python2.7`.
- Go to the `cmd` command-line interface, `npm config set msvs_version 2017`
参考微软的 Node.js 用户手册[ Microsoft's Node.js Guidelines for Windows](https://github.com/Microsoft/nodejs-guidelines/blob/master/windows-environment.md#compiling-native-addon-modules)。
Refer to [Microsoft's Node.js Guidelines for Windows](https://github.com/Microsoft/nodejs-guidelines/blob/master/windows-environment.md#compiling-native-addon-modules).
如果在 Windows 10 ARM 上使用 ARM64 Node.js,还需添加 "Visual C++ compilers and libraries for ARM64" 和 "Visual C++ ATL for ARM64"。
If using ARM64 Node.js on Windows 10 ARM, you must add "Visual C++ compilers and libraries for ARM64" and "Visual C++ ATL for ARM64".
</TabItem>
</Tabs>
### 使用 npm 安装
### Install via npm
<Tabs defaultValue="install_native">
<TabItem value="install_native" label="安装原生连接器">
<TabItem value="install_native" label="Install native connector">
```bash
npm install td2.0-connector
```
</TabItem>
<TabItem value="install_rest" label="安装 REST 连接器">
<TabItem value="install_rest" label="Install REST connector">
```bash
npm i td2.0-rest-connector
......@@ -104,15 +104,15 @@ npm i td2.0-rest-connector
</TabItem>
</Tabs>
### 安装验证
### Installation verification
在安装好 TDengine 客户端后,使用 nodejsChecker.js 程序能够验证当前环境是否支持 Node.js 方式访问 TDengine。
After installing the TDengine client, use the `nodejsChecker.js` program to verify that the current environment supports Node.js access to TDengine.
验证方法:
Verification steps:
- 新建安装验证目录,例如:`~/tdengine-test`,下载 GitHub 上 [nodejsChecker.js 源代码](https://github.com/taosdata/TDengine/tree/develop/examples/nodejs/nodejsChecker.js)到本地。
- Create a new installation verification directory, e.g. `~/tdengine-test`, and download the [nodejsChecker.js source code](https://github.com/taosdata/TDengine/tree/develop/examples/nodejs/nodejsChecker.js) from GitHub to that directory.
- 在命令行中执行以下命令。
- Execute the following command from the command-line.
```bash
npm init -y
......@@ -120,16 +120,16 @@ npm install td2.0-connector
node nodejsChecker.js host=localhost
```
- 执行以上步骤后,在命令行会输出 nodejsChecker.js 连接 TDengine 实例,并执行简单插入和查询的结果。
- After executing the above steps, the command-line will output the result of `nodejsChecker.js` connecting to the TDengine instance and performing a simple insert and query.
## 建立连接
## Establishing a connection
请选择使用一种连接器。
Please choose one of the connectors.
<Tabs defaultValue="native">
<TabItem value="native" label="原生连接">
<TabItem value="native" label="Native connection">
安装并引用 `td2.0-connector` 包。
Install and import the `td2.0-connector` package:
```javascript
//A cursor also needs to be initialized in order to interact with TDengine from Node.js.
......@@ -148,9 +148,9 @@ conn.close();
```
</TabItem>
<TabItem value="rest" label="REST 连接">
<TabItem value="rest" label="REST connection">
安装并引用 `td2.0-rest-connector` 包。
Install and import the `td2.0-rest-connector` package:
```javascript
//A cursor also needs to be initialized in order to interact with TDengine from Node.js.
......@@ -167,93 +167,89 @@ let cursor = conn.cursor();
</TabItem>
</Tabs>
## 使用示例
## Usage examples
### 写入数据
### Write data
#### SQL 写入
#### SQL Writing
<NodeInsert />
#### InfluxDB 行协议写入
#### InfluxDB line protocol writing
<NodeInfluxLine />
#### OpenTSDB Telnet 行协议写入
#### OpenTSDB Telnet line protocol writing
<NodeOpenTSDBTelnet />
#### OpenTSDB JSON 行协议写入
#### OpenTSDB JSON line protocol writing
<NodeOpenTSDBJson />
### 查询数据
### Query data
#### 同步查询
#### Synchronous queries
<NodeQuery />
#### 异步查询
#### Asynchronous queries
<NodeAsyncQuery />
## 更多示例程序
## More Sample Programs
| 示例程序 | 示例程序描述 |
| ------------------------------------------------------------------------------------------------------------------------------------------ | -------------------------------------- |
| [connection](https://github.com/taosdata/taos-connector-node/tree/develop/nodejs/examples/cursorClose.js) | 建立连接的示例。 |
| [stmtBindBatch](https://github.com/taosdata/taos-connector-node/tree/develop/nodejs/examples/stmtBindParamBatchSample.js) | 绑定多行参数插入的示例。 |
| [stmtBind](https://github.com/taosdata/taos-connector-node/tree/develop/nodejs/examples/stmtBindParamSample.js) | 一行一行绑定参数插入的示例。 |
| [stmtBindSingleParamBatch](https://github.com/taosdata/taos-connector-node/tree/develop/nodejs/examples/stmtBindSingleParamBatchSample.js) | 按列绑定参数插入的示例。 |
| [stmtUseResult](https://github.com/taosdata/taos-connector-node/tree/develop/nodejs/examples/stmtUseResultSample.js) | 绑定参数查询的示例。 |
| [json tag](https://github.com/taosdata/taos-connector-node/tree/develop/nodejs/examples/testJsonTag.js) | Json tag 的使用示例。 |
| [Nanosecond](https://github.com/taosdata/taos-connector-node/tree/develop/nodejs/examples/testNanoseconds.js) | 时间戳为纳秒精度的使用的示例。 |
| [Microsecond](https://github.com/taosdata/taos-connector-node/tree/develop/nodejs/examples/testMicroseconds.js) | 时间戳为微秒精度的使用的示例。 |
| [schemless insert](https://github.com/taosdata/taos-connector-node/tree/develop/nodejs/examples/testSchemalessInsert.js) | schemless 插入的示例。 |
| [subscribe](https://github.com/taosdata/taos-connector-node/tree/develop/nodejs/examples/testSubscribe.js) | 订阅的使用示例。 |
| [asyncQuery](https://github.com/taosdata/taos-connector-node/tree/develop/nodejs/examples/tset.js) | 异步查询的使用示例。 |
| [REST](https://github.com/taosdata/taos-connector-node/blob/develop/typescript-rest/example/example.ts) | 使用 REST 连接的 TypeScript 使用示例。 |
| Sample Programs | Sample Program Description |
| -------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------- |
| [connection](https://github.com/taosdata/taos-connector-node/tree/develop/nodejs/examples/cursorClose.js) | Example of establishing a connection. |
| [stmtBindBatch](https://github.com/taosdata/taos-connector-node/tree/develop/nodejs/examples/stmtBindParamBatchSample.js) | Example of binding multiple rows of parameters for insertion. |
| [stmtBind](https://github.com/taosdata/taos-connector-node/tree/develop/nodejs/examples/stmtBindParamSample.js) | Example of binding parameters row by row for insertion. |
| [stmtBindSingleParamBatch](https://github.com/taosdata/taos-connector-node/tree/develop/nodejs/examples/stmtBindSingleParamBatchSample.js) | Example of binding parameters by column for insertion. |
| [stmtUseResult](https://github.com/taosdata/taos-connector-node/tree/develop/nodejs/examples/stmtUseResultSample.js) | Example of a bound-parameter query. |
| [json tag](https://github.com/taosdata/taos-connector-node/tree/develop/nodejs/examples/testJsonTag.js) | Example of using JSON tags. |
| [Nanosecond](https://github.com/taosdata/taos-connector-node/tree/develop/nodejs/examples/testNanoseconds.js) | Example of using timestamps with nanosecond precision. |
| [Microsecond](https://github.com/taosdata/taos-connector-node/tree/develop/nodejs/examples/testMicroseconds.js) | Example of using timestamps with microsecond precision. |
| [schemless insert](https://github.com/taosdata/taos-connector-node/tree/develop/nodejs/examples/testSchemalessInsert.js) | Example of a schemaless insert. |
| [subscribe](https://github.com/taosdata/taos-connector-node/tree/develop/nodejs/examples/testSubscribe.js) | Example of using subscription. |
| [asyncQuery](https://github.com/taosdata/taos-connector-node/tree/develop/nodejs/examples/tset.js) | Example of using asynchronous queries. |
| [REST](https://github.com/taosdata/taos-connector-node/blob/develop/typescript-rest/example/example.ts) | Example of using TypeScript with a REST connection. |
## 使用限制
## Usage restrictions
Node.js 连接器 >= v2.0.6 目前支持 node 的版本为:支持 >=v12.8.0 <= v12.9.1 || >=v10.20.0 <= v10.9.0 ;2.0.5 及更早版本支持 v10.x 版本,其他版本可能存在包兼容性的问题。
Node.js Connector >= v2.0.6 currently supports node versions >=v12.8.0 <= v12.9.1 || >=v10.20.0 <= v10.9.0; v10.x versions are supported in 2.0.5 and earlier, other versions may have package compatibility issues.
## 其他说明
## Other notes
Node.js 连接器的使用参见[视频教程](https://www.taosdata.com/blog/2020/11/11/1957.html)。
See [video tutorial](https://www.taosdata.com/blog/2020/11/11/1957.html) for the Node.js connector usage.
## 常见问题
## Frequently Asked Questions
1. 使用 REST 连接需要启动 taosadapter。
1. Using REST connections requires starting taosadapter.
```bash
sudo systemctl start taosadapter
```
2. Node.js 版本
2. "Unable to establish connection", "Unable to resolve FQDN"
连接器 >v2.0.6 目前兼容的 Node.js 版本为:>=v10.20.0 <= v10.9.0 || >=v12.8.0 <= v12.9.1
Usually, the root cause is that the FQDN is not configured correctly. You can refer to [How to understand TDengine's FQDN (in Chinese)](https://www.taosdata.com/blog/2021/07/29/2741.html).
3. "Unable to establish connection","Unable to resolve FQDN"
## Important Updates
一般都是因为配置 FQDN 不正确。 可以参考[如何彻底搞懂 TDengine 的 FQDN](https://www.taosdata.com/blog/2021/07/29/2741.html) 。
### Native connectors
## 重要更新记录
### 原生连接器
| td2.0-connector 版本 | 说明 |
| td2.0-connector version | description |
| -------------------- | ---------------------------------------------------------------- |
| 2.0.12 | 修复 cursor.close() 报错的 bug。 |
| 2.0.11 | 支持绑定参数、json tag、schemaless 接口等功能。 |
| 2.0.10 | 支持连接管理,普通查询、连续查询、获取系统信息、订阅功能等功能。 |
| 2.0.12 | Fix bug with cursor.close() error. |
| 2.0.11 | Support for binding parameters, json tag, schemaless interface, etc. |
| 2.0.10 | Support connection management, general query, continuous query, get system information, subscribe function, etc. |
### REST 连接器
### REST Connector
| td2.0-rest-connector 版本 | 说明 |
| td2.0-rest-connector version | Description |
| ------------------------- | ---------------------------------------------------------------- |
| 1.0.3 | 支持连接管理、普通查询、获取系统信息、错误信息、连续查询等功能。 |
| 1.0.3 | Support connection management, general query, get system information, error message, continuous query, etc. |
## API 参考
## API Reference
[API 参考](https://docs.taosdata.com/api/td2.0-connector/)
[API Reference](https://docs.taosdata.com/api/td2.0-connector/)
......@@ -2,16 +2,16 @@
sidebar_position: 3
sidebar_label: Python
title: TDengine Python Connector
description: "taospy 是 TDengine 的官方 Python 连接器。taospy 提供了丰富的 API, 使得 Python 应用可以很方便地使用 TDengine。tasopy 对 TDengine 的原生接口和 REST 接口都进行了封装, 分别对应 tasopy 的两个子模块:tasos 和 taosrest。除了对原生接口和 REST 接口的封装,taospy 还提供了符合 Python 数据访问规范(PEP 249)的编程接口。这使得 taospy 和很多第三方工具集成变得简单,比如 SQLAlchemy 和 pandas"
description: "taospy is the official Python connector for TDengine. taospy provides a rich API that makes it easy for Python applications to use TDengine. tasopy wraps both the native and REST interfaces of TDengine, corresponding to the two submodules of tasopy: taos and taosrest. In addition to wrapping the native and REST interfaces, taospy also provides a programming interface that conforms to the Python Data Access Specification (PEP 249), making it easy to integrate taospy with many third-party tools, such as SQLAlchemy and pandas."
---
import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";
`taospy` is the official Python connector for TDengine. `taospy` provides a rich API that makes it easy for Python applications to use TDengine. `taospy` wraps both the [native interface](/reference/connector/cpp) and [REST interface](/reference/rest-api) of TDengine, which correspond to the `taos` and `taosrest` modules of the `taospy` package, respectively.
In addition to wrapping the native and REST interfaces, `taospy` also provides a programming interface that conforms to the [Python Data Access Specification (PEP 249)](https://peps.python.org/pep-0249/). It is easy to integrate `taospy` with many third-party tools, such as [SQLAlchemy](https://www.sqlalchemy.org/) and [pandas](https://pandas.pydata.org/).
`taospy` is the official Python connector for TDengine. `taospy` provides a rich set of APIs that makes it easy for Python applications to access TDengine. `taospy` wraps both the [native interface](/reference/connector/cpp) and [REST interface](/reference/rest-api) of TDengine, which correspond to the `taos` and `taosrest` modules of the `taospy` package, respectively.
In addition to wrapping the native and REST interfaces, `taospy` also provides a set of programming interfaces that conforms to the [Python Data Access Specification (PEP 249)](https://peps.python.org/pep-0249/). It is easy to integrate `taospy` with many third-party tools, such as [SQLAlchemy](https://www.sqlalchemy.org/) and [pandas](https://pandas.pydata.org/).
The connection to the server directly using the native interface provided by the client driver is referred to hereinafter as a "native connection"; the connection to the server using the REST interface provided by taosAdapter is referred to hereinafter as a "REST connection". ".
The connection to the server directly using the native interface provided by the client driver is referred to hereinafter as a "native connection"; the connection to the server using the REST interface provided by taosAdapter is referred to hereinafter as a "REST connection".
The source code for the Python connector is hosted on [GitHub](https://github.com/taosdata/taos-connector-python).
......@@ -22,33 +22,34 @@ The source code for the Python connector is hosted on [GitHub](https://github.co
## Version selection
We recommend using the latest version of `taospy`, regardless of the version of TDengine used.
We recommend using the latest version of `taospy`, regardless of the version of TDengine used.
## Supported features
- Native connections support all the core features of TDeingine, including connection management, SQL execution, parameter binding, subscriptions, and schemaless writing.
- Native connections support all the core features of TDengine, including connection management, SQL execution, bind interface, subscriptions, and schemaless writing.
- REST connections support features such as connection management and SQL execution. (SQL execution allows you to: manage databases, tables, and supertables, write data, query data, create continuous queries, etc.).
## Installation
### Preparation
1. Install Python. Python >= 3.6 is recommended. If Python is not available on your system, refer to the [Python BeginnersGuide](https://wiki.python.org/moin/BeginnersGuide/Download) to install it. 2.
Install [pip](https://pypi.org/project/pip/). In most cases, the Python installer comes with the pip utility. If not, please refer to [pip docuemntation](https://pip.pypa.io/en/stable/installation/) to install it.
If you use a native connection, you will also need to [install the client driver](...). /#install client driver). The client software contains the TDengine client dynamic link library (libtaos.so or taos.dll) and the TDengine CLI.
1. Install Python. Python >= 3.6 is recommended. If Python is not available on your system, refer to the [Python BeginnersGuide](https://wiki.python.org/moin/BeginnersGuide/Download) to install it.
2. Install [pip](https://pypi.org/project/pip/). In most cases, the Python installer comes with the pip utility. If not, please refer to the [pip documentation](https://pip.pypa.io/en/stable/installation/) to install it.
### Install using pip
If you use a native connection, you will also need to [Install Client Driver](/reference/connector#Install-Client-Driver). The client install package includes the TDengine client dynamic link library (`libtaos.so` or `taos.dll`) and the TDengine CLI.
### Install via pip
#### Uninstalling an older version
If you have previously installed an older version of the Python Connector, please uninstall it beforehand.
If you have installed an older version of the Python Connector, please uninstall it beforehand.
```
pip3 uninstall taos taospy
```
:::note
Earlier TDengine client software includes the Python connector. If the Python connector is installed from the client software's installation directory, the corresponding Python package name is `taos`. So the above uninstall command includes `taos`, and it doesn't matter if it doesn't exist.
Earlier TDengine client software includes the Python connector. If the Python connector is installed from the client package's installation directory, the corresponding Python package name is `taos`. So the above uninstall command includes `taos`, and it doesn't matter if it doesn't exist.
:::
......@@ -57,19 +58,19 @@ Earlier TDengine client software includes the Python connector. If the Python co
<Tabs>
<TabItem label="Install from PyPI" value="pypi">
Install the latest version of
Install the latest version:
```
pip3 install taospy
```
You can also specify a specific version to install.
You can also specify a specific version to install:
```
pip3 install taospy==2.3.0
```
</TabItem
</TabItem>
<TabItem label="Install from GitHub" value="github">
```
......@@ -84,7 +85,7 @@ pip3 install git+https://github.com/taosdata/taos-connector-python.git
<Tabs groupId="connect" default="native">
<TabItem value="native" label="native connection">
For native connections, you need to verify that both the client driver and the Python connector itself are installed correctly. The client driver and Python connector have been installed properly if you can successfully import the `taos` module. In the Python Interactive Shell, you can type.
For a native connection, you need to verify that both the client driver and the Python connector itself are installed correctly. They are installed properly if you can successfully import the `taos` module. In the Python Interactive Shell, you can type:
```python
import taos
......@@ -93,7 +94,7 @@ import taos
</TabItem>
<TabItem value="rest" label="REST connection">
For REST connections, verifying that the ``taosrest`'' module can be imported successfully can be done in the Python Interactive Shell by typing.
For a REST connection, verify that the `taosrest` module can be imported successfully by typing the following in the Python Interactive Shell:
```python
import taosrest
......@@ -109,7 +110,6 @@ If you have multiple versions of Python on your system, you may have various `pi
C:\> pip3 install taospy
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Requirement already satisfied: taospy in c:\users\username\appdata\local\programs\python\python310\lib\site-packages (2.3.0)
:::
......@@ -119,16 +119,16 @@ Requirement already satisfied: taospy in c:\users\username\appdata\local\program
Before establishing a connection with the connector, we recommend testing the connectivity of the local TDengine CLI to the TDengine cluster.
<Tabs
<Tabs>
<TabItem value="native" label="native connection">
Ensure that the TDengine cluster is up and that the FQDN of the machines in the cluster (the FQDN defaults to hostname if you are starting a standalone version) can be resolved locally, by testing with the ping command.
Ensure that the TDengine instance is up and that the FQDN of the machines in the cluster (the FQDN defaults to hostname if you are starting a standalone version) can be resolved locally, by testing with the `ping` command.
```
ping <FQDN>
```
Then test if the cluster can be appropriately connected with TDengine CLI: ``` ping <FQDN>```
Then test if the cluster can be appropriately connected with TDengine CLI:
```
taos -h <FQDN> -p <PORT>
......@@ -136,7 +136,7 @@ taos -h <FQDN> -p <PORT>
The FQDN above can be the FQDN of any dnode in the cluster, and the PORT is the serverPort corresponding to this dnode.
</TabItem
</TabItem>
<TabItem value="rest" label="REST connection" groupId="connect">
For a REST connection, besides making sure the cluster is up, also make sure the taosAdapter component is running. This can be tested using the following `curl` command.
......@@ -145,7 +145,7 @@ For REST connections and making sure the cluster is up, make sure the taosAdapte
curl -u root:taosdata http://<FQDN>:<PORT>/rest/sql -d "select server_version()"
```
The FQDN above is the FQDN of the machine running taosAdapter, PORT is the port taosAdapter listening, default is 6041.
The FQDN above is the FQDN of the machine running taosAdapter; PORT is the port on which taosAdapter is listening, which defaults to `6041`.
If the test is successful, it will output the server version information, e.g.
```json
......@@ -165,14 +165,14 @@ If the test is successful, it will output the server version information, e.g.
The following example code assumes that TDengine is installed locally and that the default configuration is used for both FQDN and serverPort.
<Tabs
<Tabs>
<TabItem value="native" label="native connection" groupId="connect">
```python
{{#include docs-examples/python/connect_native_reference.py}}
```
All arguments of the ``connect`` function are optional keyword arguments. The following are the connection parameters specified.
All arguments of the `connect()` function are optional keyword arguments. The connection parameters are described below:
- `host` : The FQDN of the node to connect to. There is no default value. If this parameter is not provided, the firstEP in the client configuration file will be connected.
- `user` : The TDengine user name. The default value is `root`.
......@@ -182,11 +182,11 @@ All arguments of the ``connect`` function are optional keyword arguments. The fo
- `timezone` : The timezone used to convert the TIMESTAMP data in the query results to python `datetime` objects. The default is the local timezone.
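A minimal sketch that passes a few of these keyword arguments explicitly (the values are placeholders; anything omitted falls back to its default or to the client configuration file):

```python
import taos

# Minimal sketch: explicit values are placeholders, omitted arguments use their defaults.
conn = taos.connect(
    host="localhost",      # FQDN of the node to connect to
    user="root",           # TDengine user name
    config="/etc/taos",    # directory of the client configuration file (assumed default path)
    timezone="UTC",        # timezone applied to TIMESTAMP values in query results
)
result = conn.query("show databases")  # query() is one of the extension methods described below
print(result.fetch_all())              # results can be fetched only once
conn.close()
```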
:::warning
`config` and `timezone` are both process-level configurations. we recommend that all connections made by a process use the same parameter values. Otherwise, unpredictable errors may occur.
`config` and `timezone` are both process-level configurations. We recommend that all connections made by a process use the same parameter values. Otherwise, unpredictable errors may occur.
:::
:::tip
The `connect` function returns a `taos.TaosConnection` instance. In client-side multi-threaded scenarios, we recommend that each thread request a separate connection instance rather than sharing a connection between multiple threads.
The `connect()` function returns a `taos.TaosConnection` instance. In client-side multi-threaded scenarios, we recommend that each thread request a separate connection instance rather than sharing a connection between multiple threads.
:::
......@@ -197,7 +197,7 @@ The `connect` function returns a `taos.TaosConnection` instance. In client-side
{{#include docs-examples/python/connect_rest_examples.py:connect}}
```
All arguments to the `connect` function are optional keyword arguments. The following are the connection parameters specified.
All arguments to the `connect()` function are optional keyword arguments. The connection parameters are described below:
- `host`: The host to connect to. The default is localhost.
- `user`: TDengine user name. The default is `root`.
......@@ -205,23 +205,19 @@ All arguments to the `connect` function are optional keyword arguments. The foll
- `port`: The port on which the taosAdapter REST service listens. Default is 6041.
- `timeout`: HTTP request timeout in seconds. The default is `socket._GLOBAL_DEFAULT_TIMEOUT`. Usually, no configuration is needed.
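A minimal sketch using these keyword arguments (host and port assume a locally running taosAdapter; adjust as needed):

```python
import taosrest

# Minimal sketch: the values below are placeholders for a locally running taosAdapter.
conn = taosrest.connect(host="localhost", user="root", password="taosdata", port=6041, timeout=30)
cursor = conn.cursor()                     # returns a TaosRestCursor (PEP 249 cursor, see below)
cursor.execute("select server_version()")  # any SQL statement can be executed through the cursor
print(cursor.fetchall())
conn.close()
```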
:::note
:::
</TabItem>
</Tabs>
## Sample program
### Basic use
### Basic Usage
<Tabs default="native" groupId="connect">
<TabItem value="native" label="native connection">
Use of the ##### TaosConnection class
##### TaosConnection class
The `TaosConnection` class contains both an implementation of the PEP249 Connection interface (e.g., the `cursor` method and the `close` method) and many extensions (e.g., the `execute`, `query`, `schemaless_insert`, and `subscribe` methods). .
The `TaosConnection` class contains both an implementation of the PEP249 Connection interface (e.g., the `cursor()` method and the `close()` method) and many extensions (e.g., the `execute()`, `query()`, `schemaless_insert()`, and `subscribe()` methods).
```python title="execute method"
{{#include docs-examples/python/connection_usage_native_reference.py:insert}}
......@@ -232,12 +228,12 @@ The `TaosConnection` class contains both an implementation of the PEP249 Connect
```
:::tip
The queried results can only be fetched once. For example, only one of `featch_all` and `fetch_all_into_dict` can be used in the example above. Repeated fetches will result in an empty list.
The queried results can only be fetched once. For example, only one of `fetch_all()` and `fetch_all_into_dict()` can be used in the example above. Repeated fetches will result in an empty list.
:::
##### Use of TaosResult class
In the above example of using the `TaosConnection` class, we have shown two ways to get the result of a query: `featch_all` and `fetch_all_into_dict`. In addition, `TaosResult` also provides methods to iterate through the result set by rows (`rows_iter`) or by data blocks (`blocks_iter`). Using these two methods will be more efficient in scenarios where the query has a large amount of data.
In the above example of using the `TaosConnection` class, we have shown two ways to get the result of a query: `fetch_all()` and `fetch_all_into_dict()`. In addition, `TaosResult` also provides methods to iterate through the result set by rows (`rows_iter`) or by data blocks (`blocks_iter`). Using these two methods will be more efficient in scenarios where the query has a large amount of data.
```python title="blocks_iter method"
{{#include docs-examples/python/result_set_examples.py}}
......@@ -255,7 +251,7 @@ The TaosCursor class uses native connections for write and query operations. In
:::
</TabItem
</TabItem>
<TabItem value="rest" label="REST connection">
##### Use of TaosRestCursor class
......@@ -271,7 +267,7 @@ The ``TaosRestCursor`` class is an implementation of the PEP249 Cursor interface
##### Use of the RestClient class
The `RestClient` class is a direct wrapper for the [REST API](/reference/rest-api). It contains only a ``sql()` method for executing arbitrary SQL statements and returning the result.
The `RestClient` class is a direct wrapper for the [REST API](/reference/rest-api). It contains only a `sql()` method for executing arbitrary SQL statements and returning the result.
```python title="Use of RestClient"
{{#include docs-examples/python/rest_client_example.py}}
......@@ -279,8 +275,6 @@ The `RestClient` class is a direct wrapper for the [REST API](/reference/rest-ap
For a more detailed description of the `sql()` method, please refer to [RestClient](https://docs.taosdata.com/api/taospy/taosrest/restclient.html).
</TabItem>
</Tabs>
......@@ -293,7 +287,7 @@ For a more detailed description of the `sql()` method, please refer to [RestClie
{{#include docs-examples/python/conn_native_pandas.py}}
```
</TabItem
</TabItem>
<TabItem value="rest" label="REST connection">
```python
......@@ -309,7 +303,7 @@ For a more detailed description of the `sql()` method, please refer to [RestClie
| --------------------------------------------------------------------------------------------------------------- | ---------------------------------------------- |
| [bind_multi.py](https://github.com/taosdata/taos-connector-python/blob/main/examples/bind-multi.py) | parameter binding, bind multiple rows at once |
| [bind_row.py](https://github.com/taosdata/taos-connector-python/blob/main/examples/bind-row.py) | parameter binding, bind one row at a time |
| [insert_lines.py](https://github.com/taosdata/taos-connector-python/blob/main/examples/insert-lines.py) | InfluxDB row protocol write |
| [insert_lines.py](https://github.com/taosdata/taos-connector-python/blob/main/examples/insert-lines.py) | InfluxDB line protocol writing |
| [json_tag.py](https://github.com/taosdata/taos-connector-python/blob/main/examples/json-tag.py) | Use JSON type tags |
| [subscribe-async.py](https://github.com/taosdata/taos-connector-python/blob/main/examples/subscribe-async.py) | Asynchronous subscription |
| [subscribe-sync.py](https://github.com/taosdata/taos-connector-python/blob/main/examples/subscribe-sync.py) | Synchronous subscription |
......
......@@ -15,9 +15,9 @@ import RustOpenTSDBTelnet from "../../04-develop/03-insert-data/_rust_opts_telne
import RustOpenTSDBJson from "../../04-develop/03-insert-data/_rust_opts_json.mdx"
import RustQuery from "../../04-develop/04-query-data/_rust.mdx"
`libtaos` is the official Rust language connector for TDengine, through which Rust developers can develop applications that access the TDengine database.
`libtaos` is the official Rust language connector for TDengine. Rust developers can develop applications to access the TDengine instance data.
`libtaos` provides two ways to establish connections. One is the **Native Connection**, which connects to TDengine runtime instances via the TDengine client driver (taosc). The other is **REST connection**, which connects to TDengine runtime instances via taosAdapter's REST interface. The REST connection supports any platform, but the native connection supports all platforms on which the TDengine client can run.
`libtaos` provides two ways to establish connections. One is the **Native Connection**, which connects to TDengine instances via the TDengine client driver (taosc). The other is **REST connection**, which connects to TDengine instances via taosAdapter's REST interface.
The source code for `libtaos` is hosted on [GitHub](https://github.com/taosdata/libtaos-rs).
......@@ -28,7 +28,7 @@ REST connections are supported on all platforms that can run Rust.
## Version support
Please refer to [version support list](/reference/connector#version support)
Please refer to [version support list](/reference/connector#version-support).
The Rust Connector is still under rapid development and is not guaranteed to be backward compatible before 1.0. Recommend to use TDengine version 2.4 or higher to avoid known issues.
......@@ -36,7 +36,7 @@ The Rust Connector is still under rapid development and is not guaranteed to be
### Pre-installation
* Install the Rust development toolchain
* If using the native connection, please install the TDengine client driver. Please refer to [install client driver](/reference/connector#install client driver)
* If using the native connection, please install the TDengine client driver. Please refer to [install client driver](/reference/connector#install-client-driver)
### Adding libtaos dependencies
......@@ -56,7 +56,7 @@ libtaos = "*"
</TabItem>
<TabItem value="rest" label="REST connection">
Add [libtaos][libtaos] to the ``Cargo.toml`'' file and enable the ``rest`'' feature.
Add [libtaos][libtaos] to the `Cargo.toml` file and enable the `rest` feature.
```toml
[dependencies]
......@@ -180,7 +180,7 @@ async fn demo() -> Result<(), Error> {
The [Builder Pattern](https://doc.rust-lang.org/1.0.0/style/ownership/builders.html) constructor pattern is Rust's solution for handling complex data types or optional configuration types. The [libtaos] implementation uses the connection constructor [TaosCfgBuilder] as the entry point for the TDengine Rust connector. The [TaosCfgBuilder] provides optional configuration of servers, ports, databases, usernames, passwords, etc.
Using the ``default()` method, you can construct a [TaosCfg] with default parameters for subsequent connections to the database or establishing connection pools.
Using the `default()` method, you can construct a [TaosCfg] with default parameters for subsequent connections to the database or establishing connection pools.
```rust
let cfg = TaosCfgBuilder::default().build()? ;
......@@ -206,7 +206,7 @@ let conn: Taos = cfg.connect();
### Connection pooling
In complex applications, recommand to enable connection pooling. Connection pooling for [libtaos] is implemented using [r2d2].
In complex applications, we recommend enabling connection pooling. The connection pool for [libtaos] is implemented using [r2d2].
As follows, a connection pool with default parameters can be generated.
......@@ -226,7 +226,7 @@ You can set the same connection pool parameters using the connection pool's cons
.build(cfg);
```
In the application code, use ``pool.get()? ` to get a connection object [Taos].
In the application code, use `pool.get()?` to get a connection object [Taos].
```rust
let taos = pool.get()? ;
......@@ -234,13 +234,13 @@ let taos = pool.get()? ;
The [Taos] structure is the connection manager in [libtaos] and provides two main APIs.
1. ``exec``: Execute some non-query SQL statements, such as ``CREATE`, ``ALTER`, ``INSERT`, etc.
1. `exec`: Execute some non-query SQL statements, such as `CREATE`, `ALTER`, `INSERT`, etc.
```rust
taos.exec().await?
```
2. ``query``: Execute the query statement and return the [TaosQueryData] object.
2. `query`: Execute the query statement and return the [TaosQueryData] object.
```rust
let q = taos.query("select * from log.logs").await?
......@@ -275,17 +275,17 @@ Note that Rust asynchronous functions and an asynchronous runtime are required.
- `.create_database(database: &str)`: Executes the `CREATE DATABASE` statement.
- `.use_database(database: &str)`: Executes the `USE` statement.
In addition, this structure is also the entry point for [Parameter Binding](#Parameter Binding Interface) and [Row Protocol Interface](#Row Protocol Interface). Please refer to the specific API descriptions for usage.
In addition, this structure is also the entry point for [Parameter Binding](#Parameter Binding Interface) and [Line Protocol Interface](#Line Protocol Interface). Please refer to the specific API descriptions for usage.
### Parameter Binding Interface
### Bind Interface
Similar to the C interface, Rust provides a parameter binding interface. First, create a parameter binding object [Stmt] for a SQL statement from the [Taos] object.
Similar to the C interface, Rust provides a wrapper for the bind interface. First, create a bind object [Stmt] for a SQL command from the [Taos] object.
```rust
let mut stmt: Stmt = taos.stmt("insert into ? values(? ,?)") ? ;
```
The parameter binding object provides a set of interfaces for implementing parameter binding.
The bind object provides a set of interfaces for implementing parameter binding.
##### `.set_tbname(tbname: impl ToCString)`
......@@ -325,7 +325,7 @@ stmt.execute()? ;
//stmt.execute()? ;
```
### Row protocol interface
### Line protocol interface
The line protocol interface supports multiple modes and different timestamp precisions. It requires the constants in the schemaless module to be imported for configuration.
......@@ -334,7 +334,7 @@ use libtaos::*;
use libtaos::schemaless::*;
```
- InfluxDB row protocol
- InfluxDB line protocol
```rust
let lines = [
......
---
title: taosdump
description: "taosdump 是一个支持从运行中的 TDengine 集群备份数据并将备份的数据恢复到相同或另一个运行中的 TDengine 集群中的工具应用程序"
description: "taosdump is a tool application that supports backing up data from a running TDengine cluster and restoring the backed up data to the same or another running TDengine cluster."
---
## 简介
## Introduction
taosdump 是一个支持从运行中的 TDengine 集群备份数据并将备份的数据恢复到相同或另一个运行中的 TDengine 集群中的工具应用程序。
taosdump is a tool application that supports backing up data from a running TDengine cluster and restoring the backed up data to the same or another running TDengine cluster.
taosdump 可以用数据库、超级表或普通表作为逻辑数据单元进行备份,也可以对数据库、超级
表和普通表中指定时间段内的数据记录进行备份。使用时可以指定数据备份的目录路径,如果
不指定位置,taosdump 默认会将数据备份到当前目录。
taosdump can back up a database, a super table, or a normal table as a logical data unit, or back up data records within a specified time range in databases, super tables, and normal tables. When using taosdump, you can specify the directory path for the backup; if you do not specify one, taosdump backs up the data to the current directory by default.
如果指定的位置已经有数据文件,taosdump 会提示用户并立即退出,避免数据被覆盖。这意味着同一路径只能被用于一次备份。
如果看到相关提示,请小心操作。
If the specified location already contains data files, taosdump will prompt the user and exit immediately to avoid overwriting the data, which means the same path can only be used for one backup.
Please proceed carefully if you see such a prompt.
taosdump 是一个逻辑备份工具,它不应被用于备份任何原始数据、环境设置、
硬件信息、服务端配置或集群的拓扑结构。taosdump 使用
[ Apache AVRO ](https://avro.apache.org/)作为数据文件格式来存储备份数据。
taosdump is a logical backup tool and should not be used to back up any raw data, environment settings, hardware information, server configuration, or cluster topology. taosdump uses [Apache AVRO](https://avro.apache.org/) as the data file format to store backup data.
## 安装
## Installation
taosdump 有两种安装方式:
There are two ways to install taosdump:
- 安装 taosTools 官方安装包, 请从[所有下载链接](https://www.taosdata.com/all-downloads)页面找到 taosTools 并下载安装。
- Install the official taosTools package. Find taosTools on the [All download links](https://www.taosdata.com/all-downloads) page, then download and install it.
- 单独编译 taos-tools 并安装, 详情请参考 [taos-tools](https://github.com/taosdata/taos-tools) 仓库。
- Compile taos-tools separately and install it. Please refer to the [taos-tools](https://github.com/taosdata/taos-tools) repository for details.
## 常用使用场景
## Common usage scenarios
### taosdump 备份数据
### taosdump backup data
1. 备份所有数据库:指定 `-A``--all-databases` 参数;
2. 备份多个指定数据库:使用 `-D db1,db2,...` 参数;
3. 备份指定数据库中的某些超级表或普通表:使用 `dbname stbname1 stbname2 tbname1 tbname2 ...` 参数,注意这种输入序列第一个参数为数据库名称,且只支持一个数据库,第二个和之后的参数为该数据库中的超级表或普通表名称,中间以空格分隔;
4. 备份系统 log 库:TDengine 集群通常会包含一个系统数据库,名为 `log`,这个数据库内的数据为 TDengine 自我运行的数据,taosdump 默认不会对 log 库进行备份。如果有特定需求对 log 库进行备份,可以使用 `-a``--allow-sys` 命令行参数。
5. “宽容”模式备份:taosdump 1.4.1 之后的版本提供 `-n` 参数和 `-L` 参数,用于备份数据时不使用转义字符和“宽容”模式,可以在表名、列名、标签名没使用转义字符的情况下减少备份数据时间和备份数据占用空间。如果不确定符合使用 `-n``-L` 条件时请使用默认参数进行“严格”模式进行备份。转义字符的说明请参考[官方文档](/taos-sql/escape)
1. Back up all databases: specify the `-A` or `--all-databases` parameter.
2. Back up multiple specified databases: use the `-D db1,db2,...` parameter.
3. Back up certain super tables or normal tables in a specified database: use the `dbname stbname1 stbname2 tbname1 tbname2 ...` arguments. Note that the first argument is the database name, only one database is supported, and the second and subsequent arguments are the names of super tables or normal tables in that database, separated by spaces.
4. Back up the system log database: a TDengine cluster usually contains a system database named `log`, which holds data about TDengine's own operation; taosdump does not back up the log database by default. If you need to back it up, use the `-a` or `--allow-sys` command-line parameter.
5. "Loose" mode backup: taosdump 1.4.1 and later provide the `-n` and `-L` parameters for backing up data without escape characters ("loose" mode), which can reduce backup time and backup size when table names, column names, and tag names do not use escape characters. If you are unsure whether the `-n` and `-L` conditions are met, use the default parameters for a "strict" mode backup. See the [official documentation](/taos-sql/escape) for a description of escape characters.
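As a hedged illustration of these scenarios, the invocations might look like the following; the `-o` output-directory option and all paths are assumptions to adapt to your environment, and each output directory can hold only one backup:

```bash
# 1. Back up all databases (the output directory is assumed to be new and empty).
taosdump -A -o /data/taosdump/all

# 2. Back up two specific databases.
taosdump -D db1,db2 -o /data/taosdump/dbs

# 3. Back up selected super tables / normal tables from one database.
taosdump dbname stbname1 tbname1 -o /data/taosdump/tables

# 4. Also include the system `log` database.
taosdump -A -a -o /data/taosdump/with-log
```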
:::tip
- taosdump 1.4.1 之后的版本提供 `-I` 参数,用于解析 avro 文件 schema 和数据,如果指定 `-s` 参数将只解析 schema。
- taosdump 1.4.2 之后的备份使用 `-B` 参数指定的批次数,默认值为 16384,如果在某些环境下由于网络速度或磁盘性能不足导致 "Error actual dump .. batch .." 可以通过 `-B` 参数挑战为更小的值进行尝试。
- taosdump 1.4.1 and later provide the `-I` argument for parsing Avro file schema and data. If you specify `-s`, taosdump will parse only the schema.
- taosdump 1.4.2 and later use the batch size specified by the `-B` parameter when backing up; the default value is 16384. If, in some environments, low network speed or poor disk performance causes an "Error actual dump ... batch ..." error, try setting `-B` to a smaller value.
:::
### taosdump 恢复数据
### taosdump recover data
恢复指定路径下的数据文件:使用 `-i` 参数加上数据文件所在路径。如前面提及,不应该使用同一个目录备份不同数据集合,也不应该在同一路径多次备份同一数据集,否则备份数据会造成覆盖或多次备份。
Restore the data files in the specified path: use the `-i` parameter plus the path to the data files. As mentioned above, you should not use the same directory to back up different data sets, nor back up the same data set multiple times into the same path; otherwise, the backup data will be overwritten or duplicated.
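For example, restoring the hypothetical backup directory used above might look like this:

```bash
# Restore from a previously created backup directory (path is a placeholder).
# If "WAL size exceeds limit" errors occur, lower the write batch size with -B (see the tip below).
taosdump -i /data/taosdump/all -B 8192
```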
:::tip
taosdump 内部使用 TDengine stmt binding API 进行恢复数据的写入,为提高数据恢复性能,目前使用 16384 为一次写入批次。如果备份数据中有比较多列数据,可能会导致产生 "WAL size exceeds limit" 错误,此时可以通过使用 `-B` 参数调整为一个更小的值进行尝试。
taosdump internally uses the TDengine stmt binding API to write the restored data and currently uses 16384 as one write batch for better data recovery performance. If the backup data contains many columns, a "WAL size exceeds limit" error may occur; in that case, try a smaller value via the `-B` parameter.
:::
## 详细命令行参数列表
## Detailed command-line parameter list
以下为 taosdump 详细命令行参数列表:
The following is a detailed list of taosdump command-line arguments.
```
Usage: taosdump [OPTION...] dbname [tbname ...]
......
---
title: TDengine 命令行(CLI)
title: TDengine Command Line (CLI)
sidebar_label: TDengine CLI
description: TDengine CLI 的使用说明和技巧
description: Instructions and tips for using the TDengine CLI
---
TDengine 命令行程序(以下简称 TDengine CLI)是用户操作 TDengine 实例并与之交互的最简洁最常用的方式。
The TDengine command-line application (hereafter referred to as `TDengine CLI`) is the simplest and most common way for users to operate and interact with TDengine instances.
## 安装
## Installation
如果在 TDengine 服务器端执行,无需任何安装,已经自动安装好。如果要在非 TDengine 服务器端运行,需要安装 TDengine 客户端驱动,具体安装,请参考 [连接器](/reference/connector/)
If executed on the TDengine server side, no additional installation steps are needed, as TDengine CLI is already included and installed automatically. To run TDengine CLI in an environment where no TDengine server is running, the TDengine client installation package needs to be installed first. For details, please refer to [connector](/reference/connector/).
## 执行
## Execution
要进入 TDengine CLI,您只要在 Linux 终端或Windos 终端执行 `taos` 即可。
To start the TDengine CLI, simply execute the `taos` command-line utility from a Linux or Windows terminal.
```bash
taos
```
如果连接服务成功,将会打印出欢迎消息和版本信息。如果失败,则会打印错误消息出来(请参考 [FAQ](/train-faq/faq) 来解决终端连接服务端失败的问题)。TDengine CLI 的提示符号如下:
TDengine CLI will display a welcome message and version information if it successfully connected to the TDengine service. If it fails, TDengine CLI will print an error message. See [FAQ](/train-faq/faq) to solve the problem of terminal connection failure to the server. The TDengine CLI prompts as follows:
```cmd
taos>
```
进入CLI后,你可执行各种SQL语句,包括插入、查询以及各种管理命令。
After entering the TDengine CLI, you can execute various SQL commands, including inserts, queries, or administrative commands.
## 执行 SQL 脚本
## Execute SQL script file
在 TDengine CLI 里可以通过 `source` 命令来运行 SQL 命令脚本。
You can run an SQL script file in the TDengine CLI via the `source` command.
```sql
taos> source <filename>;
```
## 在线修改显示字符宽度
## Adjust display width to show more characters
可以在 TDengine CLI 里使用如下命令调整字符显示宽度
Users can adjust the display width in TDengine CLI to show more characters with the following command:
```sql
taos> SET MAX_BINARY_DISPLAY_WIDTH <nn>;
```
如显示的内容后面以...结尾时,表示该内容已被截断,可通过本命令修改显示字符宽度以显示完整的内容。
If the displayed content is followed by `...`, the content has been truncated; you can use this command to change the display width and show the full content.
## 命令行参数
## Command Line Parameters
您可通过配置命令行参数来改变 TDengine CLI 的行为。以下为常用的几个命令行参数:
You can change the behavior of TDengine CLI by specifying command-line parameters. The following parameters are commonly used.
- -h, --host=HOST: 要连接的 TDengine 服务端所在服务器的 FQDN, 默认为连接本地服务
- -P, --port=PORT: 指定服务端所用端口号
- -u, --user=USER: 连接时使用的用户名
- -p, --password=PASSWORD: 连接服务端时使用的密码
- -?, --help: 打印出所有命令行参数
- -h, --host=HOST: FQDN of the server where the TDengine server is to be connected. Default is to connect to the local service
- -P, --port=PORT: Specify the port number to be used by the server. Default is `6030`
- -u, --user=USER: the user name to use when connecting. Default is `root`
- -p, --password=PASSWORD: the password to use when connecting to the server. Default is `taosdata`
- -?, --help: print out all command-line arguments
还有更多其他参数:
There are many other parameters:
- -c, --config-dir: 指定配置文件目录,默认为 `/etc/taos`,该目录下的配置文件默认名称为 taos.cfg
- -C, --dump-config: 打印 -c 指定的目录中 taos.cfg 的配置参数
- -d, --database=DATABASE: 指定连接到服务端时使用的数据库
- -D, --directory=DIRECTORY: 导入指定路径中的 SQL 脚本文件
- -f, --file=FILE: 以非交互模式执行 SQL 脚本文件
- -k, --check=CHECK: 指定要检查的表
- -l, --pktlen=PKTLEN: 网络测试时使用的测试包大小
- -n, --netrole=NETROLE: 网络连接测试时的测试范围,默认为 startup, 可选值为 client, server, rpc, startup, sync, speed, fqdn
- -r, --raw-time: 将时间输出出 uint64_t
- -s, --commands=COMMAND: 以非交互模式执行的 SQL 命令
- -S, --pkttype=PKTTYPE: 指定网络测试所用的包类型,默认为 TCP。只有 netrole 为 speed 时既可以指定为 TCP 也可以指定为 UDP
- -T, --thread=THREADNUM: 以多线程模式导入数据时的线程数
- -s, --commands: 在不进入终端的情况下运行 TDengine 命令
- -z, --timezone=TIMEZONE: 指定时区,默认为本地
- -V, --version: 打印出当前版本号
- -c, --config-dir: Specify the directory where configuration file exists. The default is `/etc/taos`, and the default name of the configuration file in this directory is `taos.cfg`
- -C, --dump-config: Print the configuration parameters of `taos.cfg` in the default directory or specified by -c
- -d, --database=DATABASE: Specify the database to use when connecting to the server
- -D, --directory=DIRECTORY: Import the SQL script file in the specified path
- -f, --file=FILE: Execute the SQL script file in non-interactive mode
- -k, --check=CHECK: Specify the table to be checked
- -l, --pktlen=PKTLEN: Test package size to be used for network testing
- -n, --netrole=NETROLE: Test scope for the network connection test; the default is `startup`. The value can be `client`, `server`, `rpc`, `startup`, `sync`, `speed`, or `fqdn`.
- -r, --raw-time: output the timestamp format as unsigned 64-bits integer (uint64_t in C language)
- -s, --commands=COMMAND: execute SQL commands in non-interactive mode
- -S, --pkttype=PKTTYPE: Specify the packet type used for network testing. The default is TCP. It can be specified as either TCP or UDP only when the netrole parameter is set to `speed`
- -T, --thread=THREADNUM: The number of threads to import data in multi-threaded mode
- -s, --commands: Run TDengine CLI commands without entering the terminal
- -z, --timezone=TIMEZONE: Specify the time zone. The default is the value in the current configuration file
- -V, --version: Print out the current version number
示例:
Example:
```bash
taos -h h1.taos.com -s "use db; show tables;"
```
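A couple of further invocations using the parameters listed above (file and directory names are hypothetical):

```bash
# Execute an SQL script file in non-interactive mode.
taos -f ./init_demo.sql

# Use a specific configuration directory and print the resulting configuration parameters.
taos -c /etc/taos -C
```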
## TDengine CLI 小技巧
- 可以使用上下光标键查看历史输入的指令
- 修改用户密码:在 shell 中使用 `alter user` 命令,缺省密码为 taosdata
- ctrl+c 中止正在进行中的查询
- 执行 `RESET QUERY CACHE` 可清除本地缓存的表 schema
- 批量执行 SQL 语句。可以将一系列的 shell 命令(以英文 ; 结尾,每个 SQL 语句为一行)按行存放在文件里,在 shell 里执行命令 `source <file-name>` 自动执行该文件里所有的 SQL 语句
- 输入 q 回车,退出 taos shell
## TDengine CLI tips
- You can use the up and down keys to iterate the history of commands entered
- Change user password: use `alter user` command in TDengine CLI to change user's password. The default password is `taosdata`.
- use Ctrl+C to stop a query in progress
- Execute `RESET QUERY CACHE` to clear the local cache of the table schema
- Execute SQL statements in batches. You can store a series of shell commands (ending with ;, one line for each SQL command) in a script file and execute the command `source <file-name>` in the TDengine CLI to execute all SQL commands in that file automatically
- Enter `q` to exit TDengine CLI
---
title: 支持平台列表
description: "TDengine 服务端、客户端和连接器支持的平台列表"
title: List of supported platforms
description: "List of platforms supported by TDengine server, client, and connector"
---
## List of supported platforms for TDengine server

|                    | **CentOS 7/8** | **Ubuntu 16/18/20** | **Other Linux** | **UnionTech UOS** | **Kylin / NeoKylin** | **Linx V60/V80** | **Huawei EulerOS** |
| ------------------ | -------------- | ------------------- | --------------- | ----------------- | -------------------- | ---------------- | ------------------ |
| X64                | ●              | ●                   |                 | ○                 | ●                    | ●                | ●                  |
| Loongson MIPS64    |                |                     | ●               |                   |                      |                  |                    |
| Kunpeng ARM64      |                | ○                   | ○               |                   | ●                    |                  |                    |
| SW Alpha64         |                |                     | ○               | ●                 |                      |                  |                    |
| Phytium ARM64      |                | ○ (Ubuntu Kylin)    |                 |                   |                      |                  |                    |
| Hygon X64          | ●              | ●                   | ●               | ○                 | ●                    | ●                |                    |
| Rockchip ARM64     |                |                     | ○               |                   |                      |                  |                    |
| Allwinner ARM64    |                |                     | ○               |                   |                      |                  |                    |
| Actions ARM64      |                |                     | ○               |                   |                      |                  |                    |
| Huawei Cloud ARM64 |                |                     |                 |                   |                      |                  | ●                  |

Note: ● means officially tested and verified, ○ means unofficially tested and verified.

## List of supported platforms for TDengine clients and connectors

TDengine's connectors can support a wide range of platforms, including X64/X86/ARM64/ARM32/MIPS/Alpha hardware platforms and Linux/Win64/Win32 development environments.

The compatibility matrix is as follows:

| **CPU**     | **X64 64bit** |           |           | **X86 32bit** | **ARM64** | **ARM32** | **MIPS (Loongson)** | **Alpha (SW)** | **X64 (Hygon)** |
| ----------- | ------------- | --------- | --------- | ------------- | --------- | --------- | ------------------- | -------------- | --------------- |
| **OS**      | **Linux**     | **Win64** | **Win32** | **Win32**     | **Linux** | **Linux** | **Linux**           | **Linux**      | **Linux**       |
| **C/C++**   | ●             | ●         | ●         | ○             | ●         | ●         | ●                   | ●              | ●               |
| **JDBC**    | ●             | ●         | ●         | ○             | ●         | ●         | ●                   | ●              | ●               |
| **Python**  | ●             | ●         | ●         | ○             | ●         | ●         | ●                   | --             | ●               |
| **Go**      | ●             | ●         | ●         | ○             | ●         | ●         | ○                   | --             | --              |
| **NodeJs**  | ●             | ●         | ○         | ○             | ●         | ●         | ○                   | --             | --              |
| **C#**      | ●             | ●         | ○         | ○             | ○         | ○         | ○                   | --             | --              |
| **RESTful** | ●             | ●         | ●         | ●             | ●         | ●         | ●                   | ●              | ●               |

Note: ● means officially tested and verified, ○ means unofficially tested and verified, -- means not verified.
label: TDengine Docker images
---
title: Deploying TDengine with Docker
description: "This chapter describes how to start the TDengine service in a container and access it."
---
This chapter describes how to start the TDengine service in a container and access it. You can control the behavior of the service in the container by using environment variables on the docker run command line or in the docker-compose file.
## Starting TDengine

The TDengine image starts with the HTTP service activated by default. Use the following command:
```shell
docker run -d --name tdengine -p 6041:6041 tdengine/tdengine
```
The above command starts a container named "tdengine" and maps the container's HTTP service port 6041 to host port 6041. You can verify that the HTTP service provided by this container is available with the following command:
```shell
curl -u root:taosdata -d "show databases" localhost:6041/rest/sql
```
The TDengine CLI, taos, can be run in this container to access TDengine with the following command:
```shell
$ docker exec -it tdengine taos
Welcome to the TDengine shell from Linux, Client Version:2.4.0.0
Copyright (c) 2020 by TAOS Data, Inc. All rights reserved.
Copyright (c) 2020 by TAOS Data, Inc.
taos> show databases;
name | created_time | ntables | vgroups | replica | quorum | days | keep | cache(MB) | blocks | minrows | maxrows | wallevel | fsync | comp | cachelast | precision | update | status |
====================================================================================================================================================================================================================================================================================
log | 2022-01-17 13:57:22.270 | 10 | 1 | 1 | 1 | 10 | 30 | 1 | 3 | 100 | 4096 | 1 | 3000 | 2 | 0 | us | 0 | ready |
Query OK, 1 row(s) in set (0.002843s)
```
Because the TDengine server running in the container uses the container's hostname to establish connections, accessing TDengine inside the container from outside with the TDengine CLI or various connectors (such as JDBC-JNI) is relatively complicated. So the method above is the simplest way to access the TDengine service in the container and is suitable for simple scenarios. Please refer to the next section if you want to access the TDengine service in the container with the TDengine CLI or various connectors from other containers in more complex scenarios.
## Start TDengine on the host network
```shell
docker run -d --name tdengine --network host tdengine/tdengine
```
The above command starts TDengine on the host network and uses the host's FQDN to establish connections instead of the container's hostname. This is equivalent to starting TDengine on the host with `systemctl`. If the TDengine client is already installed on the host, you can access the service directly with the following command:
```shell
$ taos
Welcome to the TDengine shell from Linux, Client Version:2.4.0.0
Copyright (c) 2020 by TAOS Data, Inc. All rights reserved.
Copyright (c) 2020 by TAOS Data, Inc.
taos> show dnodes;
id | end_point | vnodes | cores | status | role | create_time | offline reason |
======================================================================================================================================
1 | myhost:6030 | 1 | 8 | ready | any | 2022-01-17 22:10:32.619 | |
Query OK, 1 row(s) in set (0.003233s)
```
## Start TDengine with the specified hostname and port

The `TAOS_FQDN` environment variable or the `fqdn` configuration item in `taos.cfg` allows TDengine to establish connections at the specified hostname. This approach provides greater flexibility for deployment.
```shell
docker run -d \
......@@ -70,35 +70,35 @@ docker run -d \
tdengine/tdengine
```
The above command starts a TDengine service in the container that listens on the hostname "tdengine", and maps the container's port range 6030 to 6049 to the host's port range 6030 to 6049 (both TCP and UDP need to be mapped). If this port range is already occupied on the host, you can modify the command to specify a free port range on the host. If `rpcForceTcp` is set to `1`, only the TCP protocol needs to be mapped.
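
Since the command above is partly collapsed in this view, here is a minimal sketch of what such an invocation typically looks like, assuming the hostname "tdengine" and the default port range described above:

```shell
docker run -d \
   --name tdengine \
   -e TAOS_FQDN=tdengine \
   -p 6030-6049:6030-6049 \
   -p 6030-6049:6030-6049/udp \
   tdengine/tdengine
```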
Next, ensure that the hostname "tdengine" is resolvable in `/etc/hosts`:
```shell
echo 127.0.0.1 tdengine |sudo tee -a /etc/hosts
```
Finally, the TDengine service can be accessed from the TDengine CLI or any connector with "tdengine" as the server address:
```shell
taos -h tdengine -P 6030
```
If `TAOS_FQDN` is set to the same value as the hostname of the host machine, the effect is the same as in "Start TDengine on the host network".
## Start TDengine on a specified network

You can also start TDengine on a specific Docker network. The detailed steps are as follows:
1. First, create a docker network named `td-net`:
```shell
docker network create td-net
```
2. Start TDengine

Start the TDengine service on the `td-net` network with the following command (a fuller sketch follows the command below):
```shell
docker run -d --name tdengine --network td-net \
......@@ -106,17 +106,17 @@ taos -h tdengine -P 6030
tdengine/tdengine
```
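
As the command above is partly collapsed, a minimal sketch of a typical invocation is shown below; the `TAOS_FQDN` value is an assumption:

```shell
docker run -d --name tdengine --network td-net \
   -e TAOS_FQDN=tdengine \
   tdengine/tdengine
```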
3. Start the TDengine client in another container on the same network:
```shell
docker run --rm -it --network td-net -e TAOS_FIRST_EP=tdengine tdengine/tdengine taos
# or
# docker run --rm -it --network td-net tdengine/tdengine taos -h tdengine
```
## Launching a client application in a container

If you want to start your own application in a container, you need to add the corresponding dependencies on TDengine to the image as well. For example:
```docker
FROM ubuntu:20.04
......@@ -133,7 +133,7 @@ RUN wget -c https://www.taosdata.com/assets-download/TDengine-client-${TDENGINE_
#CMD ["app"]
```
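
Since the Dockerfile above is partly collapsed, the following hypothetical sketch shows the general idea; the package URL, file name, version, and install script are assumptions and must be adjusted to the actual client package you use:

```docker
FROM ubuntu:20.04
# Placeholder version; replace with the client package matching your TDengine server
ARG TDENGINE_VERSION=2.4.0.0
RUN apt-get update && apt-get install -y wget && \
    wget -c https://www.taosdata.com/assets-download/TDengine-client-${TDENGINE_VERSION}-Linux-x64.tar.gz && \
    tar xzf TDengine-client-${TDENGINE_VERSION}-Linux-x64.tar.gz && \
    cd TDengine-client-${TDENGINE_VERSION} && ./install_client.sh && \
    cd .. && rm -rf TDengine-client-${TDENGINE_VERSION}* /var/lib/apt/lists/*
# Copy in and start your own application, for example:
#COPY ./app /usr/bin/
#CMD ["app"]
```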
Here is an example Go program:
```go
/*
......@@ -218,7 +218,7 @@ func checkErr(err error, prompt string) {
}
```
Here is the full Dockerfile:
```docker
FROM golang:1.17.6-buster as builder
......@@ -251,7 +251,7 @@ COPY --from=builder /usr/src/app/app /usr/bin/
CMD ["app"]
```
Now that we have `main.go`, `go.mod`, `go.sum`, and `app.dockerfile`, we can build the application and start it on the `td-net` network:
```shell
$ docker build -t app -f app.dockerfile .
......@@ -276,9 +276,9 @@ password: taosdata
2022-01-18 01:43:51.029 +0000 UTC 3
```
## Start the TDengine cluster with docker-compose

1. The following docker-compose file starts a TDengine cluster with two replicas, two management nodes, two data nodes, and one arbitrator:
```docker
version: "3"
......@@ -316,14 +316,14 @@ password: taosdata
```
:::note
- The `VERSION` environment variable is used to set the tdengine image tag
- `TAOS_FIRST_EP` must be set on a newly created instance so that it can join the TDengine cluster; if high availability is required, `TAOS_SECOND_EP` needs to be set as well
- `TAOS_REPLICA` is used to set the default number of database replicas. Its value range is [1,3].
  In a two-replica environment, we recommend using an arbitrator, configured via `TAOS_ARBITRATOR` (see the sketch after this note).
:::
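
As a minimal sketch of how these variables typically appear in a service definition (the service name, hostnames, and arbitrator endpoint below are illustrative assumptions, not the exact contents of the compose file above):

```docker
  td-1:
    image: tdengine/tdengine:$VERSION
    environment:
      TAOS_FQDN: "td-1"
      TAOS_FIRST_EP: "td-1"
      TAOS_SECOND_EP: "td-2"
      TAOS_REPLICA: "2"
      TAOS_ARBITRATOR: arbitrator:6042
```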
2. Start the cluster:
```shell
$ VERSION=2.4.0.0 docker-compose up -d
......@@ -337,7 +337,7 @@ password: taosdata
Creating test_td-2_1 ... done
```
3. Check the status of each node:
```shell
$ docker-compose ps
......@@ -348,7 +348,7 @@ password: taosdata
test_td-2_1 /usr/bin/entrypoint.sh taosd Up 6030/tcp, 6031/tcp, 6032/tcp, 6033/tcp, 6034/tcp, 6035/tcp, 6036/tcp, 6037/tcp, 6038/tcp, 6039/tcp, 6040/tcp, 6041/tcp, 6042/tcp
```
4. Show dnodes via the TDengine CLI:
```shell
$ docker-compose exec td-1 taos -s "show dnodes"
......@@ -367,19 +367,19 @@ password: taosdata
## taosAdapter
1. taosAdapter is enabled by default in the TDengine container. If you want to disable it, specify the environment variable `TAOS_DISABLE_ADAPTER=true` at startup.

2. At the same time, for more flexible deployment, taosAdapter can be started in a separate container:
```docker
services:
  # ...
  adapter:
    image: tdengine/tdengine:$VERSION
    command: taosadapter
```
If you want to deploy multiple taosAdapter instances to improve throughput and provide high availability, the recommended configuration is to use a reverse proxy such as Nginx to provide a unified access entry. For specific configuration, please refer to the official Nginx documentation. Here is an example:
```docker
version: "3"
......@@ -459,11 +459,11 @@ password: taosdata
taoslog-td2:
```
## Deploy with docker swarm

If you want to deploy a container-based TDengine cluster on multiple hosts, you can use docker swarm. First, establish a docker swarm cluster on these hosts; please refer to the official Docker documentation.

The docker-compose file can refer to the previous section. Here is the command to start TDengine with docker swarm:
```shell
$ VERSION=2.4.0 docker stack deploy -c docker-compose.yml taos
......@@ -476,7 +476,7 @@ Creating service taos_adapter
Creating service taos_nginx
```
Check and manage the services:
```shell
$ docker stack ps taos
......@@ -498,9 +498,9 @@ d8qr52envqzu taos_nginx replicated 1/1
9pzw7u02ichv taos_td-2 replicated 1/1 tdengine/tdengine:2.4.0
```
From the above output, you can see two dnodes, two taosAdapters, and one Nginx reverse proxy service.

Next, we can reduce the number of taosAdapter services:
```shell
$ docker service scale taos_adapter=1
......
label: Configuration
---
title: File directory structure
description: "TDengine installation directory description"
---
After TDengine is installed, the following directories or files are created in the system by default:

| directory/file | description |
| ------------------------- | -------------------------------------------------------------------- |
| /usr/local/taos/bin | The TDengine executable directory. The executable files are soft-linked to the /usr/bin directory. |
| /usr/local/taos/driver | The TDengine dynamic link library directory. It is soft-linked to the /usr/lib directory. |
| /usr/local/taos/examples | The TDengine various language application examples directory. |
| /usr/local/taos/include | The header files for TDengine's external C interface. |
| /etc/taos/taos.cfg | TDengine default [configuration file] |
| /var/lib/taos | TDengine's default data file directory. The location can be changed via [configuration file]. |
| /var/log/taos | TDengine's default log file directory. The location can be changed via [configuration file]. |
## Executable files
All executable files of TDengine are in the _/usr/local/taos/bin_ directory by default. These include:
- _taosd_: TDengine server-side executable
- _taos_: TDengine CLI executable
- _taosdump_: data import and export tool
- _taosBenchmark_: TDengine testing tool
- _remove.sh_: script to uninstall TDengine. Execute it carefully; it is linked to the **rmtaos** command in the /usr/bin directory. It removes the TDengine installation directory `/usr/local/taos` but keeps `/etc/taos`, `/var/lib/taos`, and `/var/log/taos`
- _taosadapter_: server-side executable that provides RESTful services and accepts write requests from a variety of other software
- _tarbitrator_: provides arbitration for two-node cluster deployments
- _run_taosd_and_taosadapter.sh_: script to start both taosd and taosAdapter
- _TDinsight.sh_: script to download and install TDinsight
- _set_core.sh_: script for configuring the system to generate core dump files for easier debugging
- _taosd-dump-cfg.gdb_: gdb script to facilitate debugging of taosd
:::note
Since version 2.4.0.0, taosBenchmark and taosdump require the standalone taosTools package to be installed.
:::
:::tip
You can configure different data directories and log directories by modifying the system configuration file `taos.cfg` (see the sketch after this tip).
:::
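
For example, a minimal sketch of the relevant entries in `taos.cfg` (the paths shown are illustrative assumptions):

```
# data file directory (default /var/lib/taos)
dataDir   /data/taos
# log file directory (default /var/log/taos)
logDir    /data/taos/log
```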
label: Schemaless writing