未验证 提交 5ca92014 编写于 作者: H Haojun Liao 提交者: GitHub

Merge branch 'develop' into hashtable_cleanup

......@@ -10,6 +10,7 @@
[submodule "deps/TSZ"]
path = deps/TSZ
url = https://github.com/taosdata/TSZ.git
branch = master
[submodule "src/kit/taos-tools"]
path = src/kit/taos-tools
url = https://github.com/taosdata/taos-tools
......@@ -20,3 +21,6 @@
path = tests
url = https://github.com/taosdata/tests
branch = develop
[submodule "examples/rust"]
path = examples/rust
url = https://github.com/songtianyi/tdengine-rust-bindings.git
\ No newline at end of file
......@@ -277,7 +277,7 @@ If TDengine shell connects the server successfully, welcome messages and version
## Install TDengine by apt-get
If you use Debian or Ubuntu system, you can use 'apt-get' command to install TDengine from the official repository. Please use the following commands to set it up:
```
wget -qO - http://repos.taosdata.com/tdengine.key | sudo apt-key add -
......
......@@ -96,11 +96,12 @@ IF (${VERBOSE} MATCHES "true")
SET(TD_BUILD_VERBOSE TRUE)
ENDIF ()
# build TSZ by default
IF ("${TSZ_ENABLED}" MATCHES "false")
set(VAR_TSZ "" CACHE INTERNAL "global variant empty" )
ELSE()
# define add
MESSAGE(STATUS "build with TSZ enabled")
ADD_DEFINITIONS(-DTD_TSZ)
set(VAR_TSZ "TSZ" CACHE INTERNAL "global variant tsz" )
ENDIF()
\ No newline at end of file
......@@ -45,6 +45,6 @@ IF (TD_LINUX_64 AND JEMALLOC_ENABLED)
INCLUDE_DIRECTORIES(${CMAKE_BINARY_DIR}/build/include)
ENDIF ()
IF (NOT "${TSZ_ENABLED}" MATCHES "false")
ADD_SUBDIRECTORY(TSZ)
ENDIF()
Subproject commit 11c1060d4f917dd799ae628b131db5d6a5ef6954
......@@ -87,12 +87,14 @@ TDengine是一个高效的存储、查询、分析时序大数据的平台,专
* [taosAdapter](/tools/adapter): TDengine 集群和应用之间的 RESTful 接口适配服务。
* [TDinsight](/tools/insight): 监控 TDengine 集群的 Grafana 面板集合。
* [taosdump](/tools/taosdump): TDengine 数据备份工具。使用 taosdump 请安装 taosTools。
* [taosBenchmark](/tools/taosbenchmark): TDengine 压力测试工具。
## [与其他工具的连接](/connections)
* [Grafana](/connections#grafana):获取并可视化保存在TDengine的数据
* [IDEA Database](https://www.taosdata.com/blog/2020/08/27/1767.html):通过IDEA 数据库管理工具可视化使用 TDengine
* [TDengineGUI](https://github.com/skye0207/TDengineGUI):基于Electron开发的跨平台TDengine图形化管理工具
* [DataX](https://www.taosdata.com/blog/2021/10/26/3156.html):支持 TDengine 和其他数据库之间进行数据迁移的工具
## [TDengine集群的安装、管理](/cluster)
......@@ -132,14 +134,6 @@ TDengine是一个高效的存储、查询、分析时序大数据的平台,专
* [devops](/devops/collectd):使用 TDengine + collectd_statsd + Grafana 快速搭建 IT 运维系统
* [最佳实践](/devops/immigrate):OpenTSDB 应用迁移到 TDengine 的最佳实践
## 常用工具
* [TDengine样例导入工具](https://www.taosdata.com/blog/2020/01/18/1166.html)
* [TDengine写入性能测试工具](https://www.taosdata.com/blog/2020/01/18/1166.html)
* [IDEA数据库管理工具可视化使用TDengine](https://www.taosdata.com/blog/2020/08/27/1767.html)
* [基于Electron开发的跨平台TDengine图形化管理工具](https://github.com/skye0207/TDengineGUI)
* [基于DataX的TDeninge数据迁移工具](https://www.taosdata.com/blog/2021/10/26/3156.html)
## TDengine与其他数据库的对比测试
* [用InfluxDB开源的性能测试工具对比InfluxDB和TDengine](https://www.taosdata.com/blog/2020/01/13/1105.html)
......
# 如何使用 taosBenchmark 进行性能测试
自从 TDengine 2019年 7 月开源以来,凭借创新的数据建模设计、快捷的安装方式、易用的编程接口和强大的数据写入查询性能博得了大量时序数据开发者的青睐。其中写入和查询性能往往令刚接触 TDengine 的用户称叹不已。为了便于用户在最短时间内就可以体验到 TDengine 的高性能特点,我们专门开发了一个应用程序 taosBenchmark (曾命名为 taosdemo)用于对 TDengine 进行写入和查询的性能测试,用户可以通过 taosBenchmark 轻松模拟大量设备产生海量数据的场景,并且可以通过 taosBenchmark 参数灵活控制表的列数、数据类型、乱序比例以及并发线程数量。
运行 taosBenchmark 很简单,通过下载 [TDengine 安装包](https://www.taosdata.com/cn/all-downloads/)或者自行下载 [TDengine 代码](https://github.com/taosdata/TDengine)编译都可以在安装目录或者编译结果目录中找到并运行。
接下来本文为大家讲解 taosBenchmark 的使用介绍及注意事项。
## 使用 taosBenchmark 进行写入测试
不使用任何参数的情况下执行 taosBenchmark 命令,输出如下:
```
$ taosBenchmark
......@@ -58,7 +57,9 @@ column[0]:FLOAT column[1]:INT column[2]:FLOAT
Press enter key to continue or Ctrl-C to stop
```
这里显示的是接下来 taosBenchmark 进行数据写入的各项参数。默认不输入任何命令行参数的情况下 taosBenchmark 将模拟生成一个电力行业典型应用的电表数据采集场景数据。即建立一个名为 test 的数据库,并创建一个名为 meters 的超级表,其中表结构为:
```
taos> describe test.meters;
Field | Type | Length | Note |
......@@ -71,7 +72,9 @@ taos> describe test.meters;
location | BINARY | 64 | TAG |
Query OK, 6 row(s) in set (0.002972s)
```
按任意键后 taosBenchmark 将建立数据库 test 和超级表 meters,并按照 TDengine 数据建模的最佳实践,以 meters 超级表为模板生成一万个子表,代表一万个独立上报数据的电表设备。
```
taos> use test;
Database changed.
......@@ -82,7 +85,9 @@ taos> show stables;
meters | 2021-08-27 11:21:01.209 | 4 | 2 | 10000 |
Query OK, 1 row(s) in set (0.001740s)
```
然后 taosBenchmark 为每个电表设备模拟生成一万条记录:
```
...
====thread[3] completed total inserted rows: 6250000, total affected rows: 6250000. 347626.22 records/second====
......@@ -99,9 +104,11 @@ Spent 18.0863 seconds to insert rows: 100000000, affected rows: 100000000 with 1
insert delay, avg: 28.64ms, max: 112.92ms, min: 9.35ms
```
以上信息是在一台具备 8个CPU 64G 内存的普通 PC 服务器上进行实测的结果。显示 taosBenchmark 用了 18 秒的时间插入了 100000000 (一亿)条记录,平均每秒钟插入 552 万 9千零49 条记录。
TDengine 还提供性能更好的参数绑定接口,而在同样的硬件上使用参数绑定接口 (taosBenchmark -I stmt )进行相同数据量的写入,结果如下:
```
...
......@@ -136,12 +143,13 @@ Spent 6.0257 seconds to insert rows: 100000000, affected rows: 100000000 with 16
insert delay, avg: 8.31ms, max: 860.12ms, min: 2.00ms
```
显示 taosBenchmark 用了 6 秒的时间插入了一亿条记录,每秒钟插入性能高达 1659 万 5 千 590 条记录。
由于 taosBenchmark 使用起来非常方便,我们又对 taosBenchmark 做了更多的功能扩充,使其支持更复杂的参数设置,便于进行快速原型开发的样例数据准备和验证工作。
完整的 taosBenchmark 命令行参数列表可以通过 taosBenchmark --help 显示如下:
```
$ taosBenchmark --help
......@@ -188,14 +196,19 @@ Report bugs to <support@taosdata.com>.
```
taosBenchmark 的参数是为了满足数据模拟的需求来设计的。下面介绍几个常用的参数:
```
-I, --interface=INTERFACE The interface (taosc, rest, and stmt) taosBenchmark uses. Default is 'taosc'.
```
前面介绍 taosBenchmark 不同接口的性能差异已经提到, -I 参数为选择不同的接口,目前支持 taosc、stmt 和 rest 几种。其中 taosc 为使用 SQL 语句方式进行数据写入;stmt 为使用参数绑定接口进行数据写入;rest 为使用 RESTful 协议进行数据写入。
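下面给出一个对比三种接口写入性能的示意用法(命令和参数请以 `taosBenchmark --help` 的实际输出为准):
```
# 分别使用 SQL 写入(taosc)、参数绑定(stmt)和 RESTful 协议(rest)写入默认数据集,便于对比性能
taosBenchmark -I taosc -y
taosBenchmark -I stmt -y
taosBenchmark -I rest -y
```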
```
-T, --threads=NUMBER The number of threads. Default is 8.
```
-T 参数设置 taosBenchmark 使用多少个线程进行数据同步写入,通过多线程可以尽最大可能压榨硬件的处理能力。
```
-b, --data-type=DATATYPE The data_type of columns, default: FLOAT, INT, FLOAT.
......@@ -203,36 +216,50 @@ taosBenchmark 的参数是为了满足数据模拟的需求来设计的。下面
-l, --columns=COLUMNS The number of columns per record. Demo mode by default is 3 (float, int, float). Max values is 4095
```
前文提到,taosBenchmark 默认创建一个典型电表数据采集应用场景,每个设备包含电流电压相位3个采集量。对于需要定义不同的采集量,可以使用 -b 参数。TDengine 支持 BOOL、TINYINT、SMALLINT、INT、BIGINT、FLOAT、DOUBLE、BINARY、NCHAR、TIMESTAMP 等多种数据类型。通过 -b 加上以“ , ”(英文逗号)分割定制类型的列表可以使 taosBenchmark 建立对应的超级表和子表并插入相应模拟数据。通过 -w 参数可以指定 BINARY 和 NCHAR 数据类型的列的宽度(默认为 64 )。-l 参数可以在 -b 参数指定数据类型的几列之后补充以 INT 型的总的列数,特别多列的情况下可以减少手工输入的过程,最多支持到 4095 列。
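例如,下面是一个自定义列类型的示意用法(-l 的具体补列行为请以 `taosBenchmark --help` 说明为准):
```
# 4 列自定义类型:INT、DOUBLE、BINARY、NCHAR,其中 BINARY/NCHAR 宽度为 32;
# 再用 -l 将总列数补足到 10 列(其余列按 INT 类型补充)
taosBenchmark -b INT,DOUBLE,BINARY,NCHAR -w 32 -l 10 -y
```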
```
-r, --rec-per-req=NUMBER The number of records per request. Default is 30000.
```
为了达到 TDengine 性能极限,可以使用多客户端、多线程以及一次插入多条数据来进行数据写入。 -r 参数为设置一次写入请求可以拼接的记录条数,默认为30000条。有效的拼接记录条数还和客户端缓冲区大小有关,目前的缓冲区为 1M Bytes,如果记录的列宽度比较大,最大拼接记录条数可以通过 1M 除以列宽(以字节为单位)计算得出。
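例如,假设一行记录宽度约为 200 字节,则一次请求最多可拼接约 1,000,000 / 200 ≈ 5000 条记录,可按如下方式设置(示意用法):
```
# 行宽约 200 字节时,将单次请求拼接的记录条数控制在 5000 左右
taosBenchmark -r 5000 -y
```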
```
-t, --tables=NUMBER The number of tables. Default is 10000.
-n, --records=NUMBER The number of records per table. Default is 10000.
-M, --random The value of records generated are totally random. The default is to simulate power equipment scenario.
```
前面提到 taosBenchmark 默认创建 10000 个表,每个表写入 10000 条记录。可以通过 -t 和 -n 设置表的数量和每个表的记录的数量。默认无参数生成的数据为模拟真实场景,模拟生成的数据为电流电压相位值增加一定的抖动,可以更真实表现 TDengine 高效的数据压缩能力。如果需要模拟生成完全随机数据,可以通过 -M 参数。
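例如,下面的命令模拟 1000 个设备、每个设备写入 10 万条完全随机数据(示意用法):
```
# -t 设置子表数量,-n 设置每个子表的记录条数,-M 生成完全随机数据
taosBenchmark -t 1000 -n 100000 -M -y
```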
```
-y, --answer-yes Default input yes for prompt.
```
前面我们可以看到 taosBenchmark 默认在进行创建数据库或插入数据之前输出将要进行操作的参数列表,方便使用者在插入之前了解即将进行的数据写入的内容。为了方便进行自动测试,-y 参数可以使 taosBenchmark 输出参数后立刻进行数据写入操作。
```
-O, --disorder=NUMBER Insert order mode--0: In order, 1 ~ 50: disorder ratio. Default is in order.
-R, --disorder-range=NUMBER Out of order data's range, ms, default is 1000.
```
在某些场景,接收到的数据并不是完全按时间顺序到来,而是包含一定比例的乱序数据,TDengine 也能进行很好的处理。为了模拟乱序数据的写入,taosBenchmark 提供 -O 和 -R 参数进行设置。-O 参数为 0 和不使用 -O 参数相同为完全有序数据写入。1 到 50 为数据中包含乱序数据的比例。-R 参数为乱序数据时间戳偏移的范围,默认为 1000 毫秒。另外注意,时序数据以时间戳为唯一标识,所以乱序数据可能会生成和之前已经写入数据完全相同的时间戳,这样的数据会根据数据库创建的 update 值或者被丢弃(update 0)或者覆盖已有数据(update 1 或 2),而总的数据条数可能和期待的条数不一致的情况。
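例如,下面的命令写入包含 10% 乱序数据、乱序时间戳偏移范围为 1000 毫秒的数据(示意用法):
```
# 10% 的记录为乱序数据,时间戳在 1000 毫秒范围内随机偏移
taosBenchmark -O 10 -R 1000 -y
```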
```
-g, --debug Print debug info.
```
如果对 taosBenchmark 写入数据过程感兴趣或者数据写入结果不符合预期,可以使用 -g 参数使 taosBenchmark 打印执行过程中间调试信息到屏幕上,或通过 Linux 重定向命令导入到另外一个文件,方便找到发生问题的原因。另外 taosBenchmark 在执行失败后也会把相应执行的语句和调试原因输出到屏幕。可以搜索 reason 来找到 TDengine 服务端返回的错误原因信息。
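例如,可以把调试信息重定向到文件,再搜索 reason 定位服务端返回的错误原因(示意用法):
```
# 打开调试信息并重定向到文件,然后在日志中搜索 reason
taosBenchmark -g -y > benchmark.log 2>&1
grep -i reason benchmark.log
```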
```
-x, --aggr-func Test aggregation functions after insertion.
```
TDengine 不仅仅是插入性能非常强大,由于其先进的数据库引擎设计使查询性能也异常强大。taosBenchmark 提供一个 -x 函数,可以在插入数据结束后进行常用查询操作并输出查询消耗时间。以下为在前述服务器上进行插入一亿条记录后进行常用查询的结果。
可以看到 select * 取出一亿条记录(不输出到屏幕)操作仅消耗1.26秒。而对一亿条记录进行常用的聚合函数操作通常仅需要二十几毫秒,时间最长的 count 函数也不到四十毫秒。
```
taosBenchmark -I stmt -T 48 -y -x
...
......@@ -254,7 +281,9 @@ select min(current) took 0.025812 second(s)
select first(current) took 0.024105 second(s)
...
```
除了命令行方式, taosBenchmark 还支持接受指定一个 JSON 文件做为传入参数的方式来提供更丰富的设置。一个典型的 JSON 文件内容如下:
```
{
"filetype": "insert",
......@@ -317,13 +346,15 @@ select first(current) took 0.024105 second(s)
}]
}
```
例如:我们可以通过 "thread_count" 和 "thread_count_create_tbl" 来为建表和插入数据指定不同数量的线程。可以通过 "child_table_exists"、"childtable_limit" 和 "childtable_offset" 的组合来使用多个 taosBenchmark 进程(甚至可以在不同的电脑上)对同一个超级表的不同范围子表进行同时写入。也可以通过 "data_source" 和 "sample_file" 来指定数据来源为 csv 文件,来实现导入已有数据的功能。
## 使用 taosBenchmark 进行查询和订阅测试
taosBenchmark 不仅仅可以进行数据写入,也可以执行查询和订阅功能。但一个 taosBenchmark 实例只能支持其中的一种功能,不能同时支持三种功能,通过配置文件来指定进行哪种功能的测试。
以下为一个典型查询 JSON 示例文件内容:
```
{
"filetype": "query",
......@@ -363,7 +394,9 @@ taosBenchmark 不仅仅可以进行数据写入,也可以执行查询和订阅
}
}
```
以下为 JSON 文件中和查询相关的特有参数含义:
```
"query_times": 每种查询类型的查询次数
"query_mode": 查询数据接口,"taosc":调用TDengine的c接口;“resetful”:使用restfule接口。可选项。缺省是“taosc”。
......@@ -382,6 +415,7 @@ taosBenchmark 不仅仅可以进行数据写入,也可以执行查询和订阅
```
以下为一个典型订阅 JSON 示例文件内容:
```
{
"filetype":"subscribe",
......@@ -421,7 +455,9 @@ taosBenchmark 不仅仅可以进行数据写入,也可以执行查询和订阅
}
}
```
以下为订阅功能相关的特有参数含义:
```
"interval": 执行订阅的间隔,单位是秒。可选项,缺省是0。
"restart": 订阅重启。"yes":如果订阅已经存在,重新开始,"no": 继续之前的订阅。(请注意执行用户需要对 dataDir 目录有读写权限)
......@@ -429,16 +465,15 @@ taosBenchmark 不仅仅可以进行数据写入,也可以执行查询和订阅
"resubAfterConsume": 配合 keepProgress 使用,在订阅消费了相应次数后调用 unsubscribe 取消订阅并再次订阅。
"result": 查询结果写入的文件名。可选项,缺省是空,表示查询结果不写入文件。 注意:每条sql语句后的保存结果的文件不能重名,且生成结果文件时,文件名会附加线程号。
```
## 结语
TDengine是涛思数据专为物联网、车联网、工业互联网、IT运维等设计和优化的大数据平台。TDengine 由于数据库内核中创新的数据存储和查询引擎设计,展现出远超同类产品的高效性能。并且由于支持 SQL 语法和多种编程语言的连接器(目前支持 Java, Python, Go, C#, NodeJS, Rust 等),易用性极强,学习成本为零。为了便于运维需求,我们还提供数据迁移和监控功能等相关生态工具软件。
为了刚接触 TDengine 的使用者方便进行技术评估和压力测试,我们为 taosBenchmark 开发了丰富的特性。本文即为对 taosBenchmark 的一个简单介绍,随着 TDengine 新功能的不断增加,taosBenchmark 也会继续演化和改进。taosBenchmark 的代码做为 TDengine 的一部分在 GitHub 上完全开源。欢迎就 taosBenchmark 或 TDengine 的使用或实现在 GitHub 或者涛思数据的用户群提出建议或批评。
## 附录 - 完整 taosBenchmark 参数介绍
taosBenchmark支持两种配置参数的模式,一种是命令行参数,一种是使用 JSON 格式的配置文件。
一、命令行参数
......@@ -505,12 +540,12 @@ taosBenchmark支持两种配置参数的模式,一种是命令行参数,一
--help: 打印命令参数列表。
二、JSON 格式的配置文件中所有参数说明
taosBenchmark支持3种功能的测试,包括插入、查询、订阅。但一个taosBenchmark实例不能同时支持三种功能,一个 taosBenchmark 实例只能支持其中的一种功能,通过配置文件来指定进行哪种功能的测试。
1、插入功能测试的 JSON 配置文件
```
{
"filetype": "insert",
......@@ -700,6 +735,7 @@ taosBenchmark支持3种功能的测试,包括插入、查询、订阅。但一
}]
2、查询功能测试的 JSON 配置文件
```
{
"filetype": "query",
......@@ -784,12 +820,12 @@ taosBenchmark支持3种功能的测试,包括插入、查询、订阅。但一
"result": 查询结果写入的文件名。可选项,缺省是空,表示查询结果不写入文件。
注意:每条sql语句后的保存结果的文件不能重名,且生成结果文件时,文件名会附加线程号。
查询结果显示:如果查询线程结束一次查询距开始执行时间超过30秒打印一次查询次数、用时和QPS。所有查询结束时,汇总打印总的查询次数和QPS。
3、订阅功能测试的 JSON 配置文件
```
{
"filetype":"subscribe",
......
# TDengine 安装包的安装和卸载
TDengine 开源版本提供 deb 和 rpm 格式安装包,用户可以根据自己的运行环境选择合适的安装包。其中 deb 支持 Debian/Ubuntu 等系统,rpm 支持 CentOS/RHEL/SUSE 等系统。同时我们也为企业用户提供 tar.gz 格式安装包。
## deb 包的安装和卸载
### 安装 deb
1、从官网下载获得deb安装包,比如TDengine-server-2.0.0.0-Linux-x64.deb;
2、进入到TDengine-server-2.0.0.0-Linux-x64.deb安装包所在目录,执行如下的安装命令:
```
plum@ubuntu:~/git/taosv16$ sudo dpkg -i TDengine-server-2.0.0.0-Linux-x64.deb
Selecting previously unselected package tdengine.
(Reading database ... 233181 files and directories currently installed.)
Preparing to unpack TDengine-server-2.0.0.0-Linux-x64.deb ...
Failed to stop taosd.service: Unit taosd.service not loaded.
Stop taosd service success!
Unpacking tdengine (2.0.0.0) ...
Setting up tdengine (2.0.0.0) ...
Start to install TDEngine...
Synchronizing state of taosd.service with SysV init with /lib/systemd/systemd-sysv-install...
Executing /lib/systemd/systemd-sysv-install enable taosd
insserv: warning: current start runlevel(s) (empty) of script `taosd' overrides LSB defaults (2 3 4 5).
insserv: warning: current stop runlevel(s) (0 1 2 3 4 5 6) of script `taosd' overrides LSB defaults (0 1 6).
Enter FQDN:port (like h1.taosdata.com:6030) of an existing TDengine cluster node to join OR leave it blank to build one :
To configure TDengine : edit /etc/taos/taos.cfg
To start TDengine : sudo systemctl start taosd
To access TDengine : use taos in shell
TDengine is installed successfully!
```
注:当安装第一个节点时,出现 Enter FQDN:提示的时候,不需要输入任何内容。只有当安装第二个或以后更多的节点时,才需要输入已有集群中任何一个可用节点的 FQDN,支持该新节点加入集群。当然也可以不输入,而是在新节点启动前,配置到新节点的配置文件中。
后续两种安装包也是同样的操作。
### 卸载 deb
卸载命令如下:
```
plum@ubuntu:~/git/tdengine/debs$ sudo dpkg -r tdengine
(Reading database ... 233482 files and directories currently installed.)
Removing tdengine (2.0.0.0) ...
TDEngine is removed successfully!
```
## rpm包的安装和卸载
### 安装 rpm
1、从官网下载获得rpm安装包,比如TDengine-server-2.0.0.0-Linux-x64.rpm;
2、进入到TDengine-server-2.0.0.0-Linux-x64.rpm安装包所在目录,执行如下的安装命令:
```
[root@bogon x86_64]# rpm -iv TDengine-server-2.0.0.0-Linux-x64.rpm
Preparing packages...
TDengine-2.0.0.0-3.x86_64
Start to install TDEngine...
Created symlink from /etc/systemd/system/multi-user.target.wants/taosd.service to /etc/systemd/system/taosd.service.
Enter FQDN:port (like h1.taosdata.com:6030) of an existing TDengine cluster node to join OR leave it blank to build one :
To configure TDengine : edit /etc/taos/taos.cfg
To start TDengine : sudo systemctl start taosd
To access TDengine : use taos in shell
TDengine is installed successfully!
```
### 卸载 rpm
卸载命令如下:
```
[root@bogon x86_64]# rpm -e tdengine
TDEngine is removed successfully!
```
## tar.gz 格式安装包的安装和卸载
### 安装 tar.gz 安装包
1、从官网下载获得tar.gz安装包,比如TDengine-server-2.0.0.0-Linux-x64.tar.gz;
2、进入到TDengine-server-2.0.0.0-Linux-x64.tar.gz安装包所在目录,先解压文件后,进入子目录,执行其中的install.sh安装脚本:
```
plum@ubuntu:~/git/tdengine/release$ sudo tar -xzvf TDengine-server-2.0.0.0-Linux-x64.tar.gz
plum@ubuntu:~/git/tdengine/release$ ll
total 3796
drwxr-xr-x 3 root root 4096 Aug 9 14:20 ./
drwxrwxr-x 11 plum plum 4096 Aug 8 11:03 ../
drwxr-xr-x 5 root root 4096 Aug 8 11:03 TDengine-server/
-rw-r--r-- 1 root root 3871844 Aug 8 11:03 TDengine-server-2.0.0.0-Linux-x64.tar.gz
plum@ubuntu:~/git/tdengine/release$ cd TDengine-server/
plum@ubuntu:~/git/tdengine/release/TDengine-server$ ll
total 2640
drwxr-xr-x 5 root root 4096 Aug 8 11:03 ./
drwxr-xr-x 3 root root 4096 Aug 9 14:20 ../
drwxr-xr-x 5 root root 4096 Aug 8 11:03 connector/
drwxr-xr-x 2 root root 4096 Aug 8 11:03 driver/
drwxr-xr-x 8 root root 4096 Aug 8 11:03 examples/
-rwxr-xr-x 1 root root 13095 Aug 8 11:03 install.sh*
-rw-r--r-- 1 root root 2651954 Aug 8 11:03 taos.tar.gz
plum@ubuntu:~/git/tdengine/release/TDengine-server$ sudo ./install.sh
This is ubuntu system
verType=server interactiveFqdn=yes
Start to install TDengine...
Synchronizing state of taosd.service with SysV init with /lib/systemd/systemd-sysv-install...
Executing /lib/systemd/systemd-sysv-install enable taosd
insserv: warning: current start runlevel(s) (empty) of script `taosd' overrides LSB defaults (2 3 4 5).
insserv: warning: current stop runlevel(s) (0 1 2 3 4 5 6) of script `taosd' overrides LSB defaults (0 1 6).
Enter FQDN:port (like h1.taosdata.com:6030) of an existing TDengine cluster node to join OR leave it blank to build one :hostname.taosdata.com:7030
To configure TDengine : edit /etc/taos/taos.cfg
To start TDengine : sudo systemctl start taosd
To access TDengine : use taos in shell
Please run: taos -h hostname.taosdata.com:7030 to login into cluster, then execute : create dnode 'newDnodeFQDN:port'; in TAOS shell to add this new node into the clsuter
TDengine is installed successfully!
```
说明:install.sh 安装脚本在执行过程中,会通过命令行交互界面询问一些配置信息。如果希望采取无交互安装方式,那么可以用 -e no 参数来执行 install.sh 脚本。运行 ./install.sh -h 指令可以查看所有参数的详细说明信息。
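例如,下面分别给出无交互安装和查看帮助的示意命令:
```
# 无交互方式安装,不再通过命令行询问配置信息
sudo ./install.sh -e no
# 查看 install.sh 所有参数的详细说明
./install.sh -h
```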
### tar.gz 安装后的卸载
卸载命令如下:
```
plum@ubuntu:~/git/tdengine/release/TDengine-server$ rmtaos
TDEngine is removed successfully!
```
## 安装目录说明
TDengine成功安装后,主安装目录是/usr/local/taos,目录内容如下:
```
plum@ubuntu:/usr/local/taos$ cd /usr/local/taos
plum@ubuntu:/usr/local/taos$ ll
total 36
drwxr-xr-x 9 root root 4096 7月 30 19:20 ./
drwxr-xr-x 13 root root 4096 7月 30 19:20 ../
drwxr-xr-x 2 root root 4096 7月 30 19:20 bin/
drwxr-xr-x 2 root root 4096 7月 30 19:20 cfg/
lrwxrwxrwx 1 root root 13 7月 30 19:20 data -> /var/lib/taos/
drwxr-xr-x 2 root root 4096 7月 30 19:20 driver/
drwxr-xr-x 8 root root 4096 7月 30 19:20 examples/
drwxr-xr-x 2 root root 4096 7月 30 19:20 include/
drwxr-xr-x 2 root root 4096 7月 30 19:20 init.d/
lrwxrwxrwx 1 root root 13 7月 30 19:20 log -> /var/log/taos/
```
- 自动生成配置文件目录、数据库目录、日志目录。
- 配置文件缺省目录:/etc/taos/taos.cfg, 软链接到/usr/local/taos/cfg/taos.cfg;
- 数据库缺省目录:/var/lib/taos, 软链接到/usr/local/taos/data;
- 日志缺省目录:/var/log/taos, 软链接到/usr/local/taos/log;
- /usr/local/taos/bin目录下的可执行文件,会软链接到/usr/bin目录下;
- /usr/local/taos/driver目录下的动态库文件,会软链接到/usr/lib目录下;
- /usr/local/taos/include目录下的头文件,会软链接到到/usr/include目录下;
## 卸载和更新文件说明
卸载安装包的时候,将保留配置文件、数据库文件和日志文件,即 /etc/taos/taos.cfg 、 /var/lib/taos 、 /var/log/taos 。如果用户确认后不需保留,可以手工删除,但一定要慎重,因为删除后,数据将永久丢失,不可以恢复!
如果是更新安装,当缺省配置文件( /etc/taos/taos.cfg )存在时,仍然使用已有的配置文件,安装包中携带的配置文件修改为taos.cfg.org保存在 /usr/local/taos/cfg/ 目录,可以作为设置配置参数的参考样例;如果不存在配置文件,就使用安装包中自带的配置文件。
## 注意事项
- TDengine提供了多种安装包,但最好不要在一个系统上同时使用 tar.gz 安装包和 deb 或 rpm 安装包。否则会相互影响,导致在使用时出现问题。
- 对于deb包安装后,如果安装目录被手工误删了部分,出现卸载、或重新安装不能成功。此时,需要清除 tdengine 包的安装信息,执行如下命令:
```
plum@ubuntu:~/git/tdengine/$ sudo rm -f /var/lib/dpkg/info/tdengine*
```
然后再重新进行安装就可以了。
- 对于rpm包安装后,如果安装目录被手工误删了部分,出现卸载、或重新安装不能成功。此时,需要清除tdengine包的安装信息,执行如下命令:
```
[root@bogon x86_64]# rpm -e --noscripts tdengine
```
然后再重新进行安装就可以了。
......@@ -2,15 +2,11 @@
## <a class="anchor" id="install"></a>快捷安装
TDengine 包括服务端、客户端和周边生态工具软件,目前 2.0 版服务端仅在 Linux 系统上安装和运行,后续将支持 Windows、Mac OS 等系统。客户端可以在 Windows 或 Linux 上安装和运行。在任何操作系统上的应用都可以使用 RESTful 接口连接服务端程序 taosd,其中 2.4 之后版本默认使用单独运行的独立组件 taosAdapter 提供 http 服务和更多数据写入方式。taosAdapter 需要手动启动。而之前版本 TDengine 使用内置 http 服务
### <a class="anchor" id="source-install"></a>通过源码安装
请参考我们的 [TDengine github 主页](https://github.com/taosdata/TDengine) 下载源码并安装.
TDengine 支持 X64/ARM64/MIPS64/Alpha64 硬件平台,后续将支持 ARM32、RISC-V 等 CPU 架构。
### 通过 Docker 容器安装
```
docker run -d -p 6030-6049:6030-6049 -p 6030-6049:6030-6049/udp tdengine/tdengine
......@@ -18,65 +14,55 @@ docker run -d -p 6030-6049:6030-6049 -p 6030-6049:6030-6049/udp tdengine/tdengin
详细操作方法请参照 [通过 Docker 快速体验 TDengine](https://www.taosdata.com/cn/documentation/getting-started/docker)
注:暂时不建议生产环境采用 Docker 来部署 TDengine 的客户端或服务端,但在开发环境下或初次尝试时,使用 Docker 方式部署是十分方便的。特别是,利用 Docker,可以方便地在 Mac OS X 和 Windows 环境下尝试 TDengine。
### <a class="anchor" id="package-install"></a>通过安装包安装
TDengine 的安装非常简单,从下载到安装成功仅仅只要几秒钟。为方便使用,标准的服务端安装包包含了客户端程序和示例代码;如果您只需要用到服务端程序和客户端连接的 C/C++ 语言支持,也可以仅下载 lite 版本的安装包。在安装包格式上,我们提供 rpm 和 deb 格式,也为企业客户提供 tar.gz 格式安装包,以方便在特定操作系统上使用。发布版本包括稳定版和 Beta 版,Beta版含有更多新功能。正式上线或测试建议安装稳定版。您可以根据需要选择下载:
<ul id="server-packageList" class="package-list"></ul>
具体的安装过程,请参见 [TDengine 多种安装包的安装和卸载](https://www.taosdata.com/blog/2019/08/09/566.html) 以及 [视频教程](https://www.taosdata.com/blog/2020/11/11/1941.html)
## <a class="anchor" id="taosBenchmark"></a> taosBenchmark 详细功能列表
taosBenchmark (曾命名 taosdemo)命令本身带有很多选项,配置表的数目、记录条数等等,请执行 `taosBenchmark --help` 详细列出。您可以设置不同参数进行体验。
taosBenchmark 详细使用方法请参照 [如何使用taosBenchmark对TDengine进行性能测试](https://www.taosdata.com/2021/10/09/3111.html)
## 客户端
如果客户端和服务端运行在不同的电脑上,可以单独安装客户端。下载时请注意,所选择的客户端版本号应该和在上面下载的服务端版本号精确匹配。Linux 和 Windows 安装包如下(其中 lite 版本的安装包仅带有 C/C++ 语言的连接支持,而标准版本的安装包还包含 Java、Python、Go、Node.js 等编程语言的连接器支持和示例代码):
<ul id="client-packagelist" class="package-list"></ul>
## taosTools
taosTools 是多个用于 TDengine 的辅助工具软件集合。
推荐下载 deb 或 rpm 安装包,方便安装依赖软件。如果使用 tar.gz 格式安装包,需要自行安装依赖包。其中:
* Debian/Ubuntu 系统需要安装 libjansson4 和 libsnappy1v5
* CentOS/RHEL 系统需要安装 jansson 和 snappy
以及 TDengine server 或 TDengine client 安装包
具体的安装方法,请参见 [TDengine 多种安装包的安装和卸载](https://www.taosdata.com/cn/getting-started/install) 以及 [视频教程](https://www.taosdata.com/blog/2020/11/11/1941.html)
<ul id="taos-tools" class="package-list"></ul>
**请点击[这里](https://github.com/taosdata/TDengine/releases)查看 release notes。**
## 使用 apt-get 安装
如果使用 Debian 或 Ubuntu 系统,也可以使用 apt-get 工具从官方仓库安装,设置方法为:
```
wget -qO - http://repos.taosdata.com/tdengine.key | sudo apt-key add -
echo "deb [arch=amd64] http://repos.taosdata.com/tdengine-stable stable main" | sudo tee /etc/apt/sources.list.d/tdengine-stable.list
[ 如果安装 Beta 版需要安装包仓库 ] echo "deb [arch=amd64] http://repos.taosdata.com/tdengine-beta beta main" | sudo tee /etc/apt/sources.list.d/tdengine-beta.list
sudo apt-get update
apt-cache policy tdengine
sudo apt-get install tdengine
```
<a class="anchor" id="start"></a>
## 轻松启动
### 仅安装客户端
如果客户端和服务端运行在不同的电脑上,可以单独安装客户端。下载时请注意,所选择的客户端版本号应该和在上面下载的服务端版本号严格匹配。Linux 和 Windows 安装包如下(其中 lite 版本的安装包仅带有 C/C++ 语言的连接支持,而标准版本的安装包还包含示例代码):
<ul id="client-packagelist" class="package-list"></ul>
### <a class="anchor" id="source-install"></a>通过源码安装
如果您希望对 TDengine 贡献代码或对内部实现感兴趣,请参考我们的 [TDengine github 主页](https://github.com/taosdata/TDengine) 下载源码构建和安装.
**下载其他组件、最新 Beta 版及之前版本的安装包,请点击[这里](https://www.taosdata.com/cn/all-downloads/)**
## <a class="anchor" id="start"></a>轻松启动
安装成功后,用户可使用 `systemctl` 命令来启动 TDengine 的服务进程。
```bash
systemctl start taosd
```
检查服务是否正常工作:
```bash
systemctl status taosd
```
如果 TDengine 服务正常工作,那么您可以通过 TDengine 的命令行程序 `taos` 来访问并体验 TDengine。
......@@ -88,30 +74,29 @@ $ systemctl status taosd
- TDengine 采用 FQDN (一般就是 hostname )作为节点的 ID,为保证正常运行,需要给运行 taosd 的服务器配置好 hostname,在客户端应用运行的机器配置好 DNS 服务或 hosts 文件,保证 FQDN 能够解析。
- `systemctl stop taosd` 指令在执行后并不会马上停止 TDengine 服务,而是会等待系统中必要的落盘工作正常完成。在数据量很大的情况下,这可能会消耗较长时间。
TDengine 支持在使用 [`systemd`](https://en.wikipedia.org/wiki/Systemd) 做进程服务管理的 Linux 系统上安装,用 `which systemctl` 命令来检测系统中是否存在 `systemd` 包:
```bash
which systemctl
```
如果系统中不支持 `systemd`,也可以用手动运行 /usr/local/taos/bin/taosd 方式启动 TDengine 服务。
<a class="anchor" id="console"></a>
## TDengine 命令行程序
## <a class="anchor" id="console"></a>使用 TDengine 客户端程序
执行 TDengine 客户端程序,您只要在 Linux 终端执行 `taos` 即可。
```bash
taos
```
如果连接服务成功,将会打印出欢迎消息和版本信息。如果失败,则会打印错误消息出来(请参考 [FAQ](https://www.taosdata.com/cn/documentation/faq/) 来解决终端连接服务端失败的问题)。客户端的提示符号如下:
```cmd
taos>
```
在 TDengine 客户端中,用户可以通过 SQL 命令来创建/删除数据库、表等,并进行插入查询操作。在终端中运行的 SQL 语句需要以分号结束来运行。示例:
```mysql
create database demo;
......@@ -127,23 +112,23 @@ select * from t;
Query OK, 2 row(s) in set (0.003128s)
```
除执行 SQL 语句外,系统管理员还可以从 TDengine 客户端进行检查系统运行状态、添加删除用户账号等操作。
### 命令行参数
您可通过配置命令行参数来改变 TDengine 客户端的行为。以下为常用的几个命令行参数:
- -c, --config-dir: 指定配置文件目录,默认为 `/etc/taos`
- -h, --host: 指定服务的 FQDN 地址或 IP 地址,默认为连接本地服务
- -s, --commands: 在不进入终端的情况下运行 TDengine 命令
- -u, --user: 连接 TDengine 服务的用户名,缺省为 root
- -p, --password: 连接 TDengine 服务端的密码,缺省为 taosdata
- -?, --help: 打印出所有命令行参数
示例:
```bash
taos -h h1.taos.com -s "use db; show tables;"
```
### 运行 SQL 命令脚本
......@@ -165,15 +150,25 @@ taos> source <filename>;
## <a class="anchor" id="demo"></a>TDengine 极速体验
### <a class="anchor" id="taosBenchmark"></a> 使用 taosBenchmark 体验写入速度
启动 TDengine 的服务,在 Linux 终端执行 `taosBenchmark` (曾命名为 taosdemo)。taosBenchmark 在 TDengine 2.4.0.7 和之前发布版本在 taosTools 安装包中发布提供,在后续版本中 taosBenchmark 将在 TDengine 标准安装包中发布。
```bash
taosBenchmark
```
该命令将在数据库 test 下面自动创建一张超级表 meters,该超级表下有 1 万张表,表名为 "d0" 到 "d9999",每张表有 1 万条记录,每条记录有 (ts, current, voltage, phase) 四个字段,时间戳从 "2017-07-14 10:40:00 000" 到 "2017-07-14 10:40:09 999",每张表带有标签 location 和 groupId,groupId 被设置为 1 到 10, location 被设置为 "beijing" 或者 "shanghai"。
这条命令很快完成 1 亿条记录的插入。具体时间取决于硬件性能,即使在一台普通的 PC 服务器往往也仅需十几秒。
#### taosBenchmark 详细功能列表
taosBenchmark 命令本身带有很多选项,配置表的数目、记录条数等等,请执行 `taosBenchmark --help` 详细列出。您可以设置不同参数进行体验。
taosBenchmark 详细使用方法请参照 [如何使用taosBenchmark对TDengine进行性能测试](https://www.taosdata.com/2021/10/09/3111.html)
### <a class="anchor" id="taosshell"></a> 使用 taos shell 体验查询速度
在 TDengine 客户端输入查询命令,体验查询速度。
......@@ -207,10 +202,9 @@ taos> select avg(current), max(voltage), min(phase) from test.meters where group
taos> select avg(current), max(voltage), min(phase) from test.d10 interval(10s);
```
## <a class="anchor" id="platforms"></a>支持平台列表
### TDengine 服务支持的平台列表
| | **CentOS 7/8** | **Ubuntu 16/18/20** | **Other Linux** | **统信 UOS** | **银河/中标麒麟** | **凝思 V60/V80** | **华为 EulerOS** |
| -------------- | --------------------- | ------------------------ | --------------- | --------------- | ------------------------- | --------------------- | --------------------- |
......@@ -248,3 +242,4 @@ taos> select avg(current), max(voltage), min(phase) from test.d10 interval(10s);
请跳转到 [连接器](https://www.taosdata.com/cn/documentation/connector) 查看更详细的信息。
<script src="/wp-includes/js/quick-start.js?v=1"></script>
......@@ -313,16 +313,6 @@ TCollector 是一个在客户侧收集本地收集器并发送数据到 OpenTSDB
taosAdapter 相关配置参数请参考 taosadapter --help 命令输出以及相关文档。
## <a class="anchor" id="bailongma2-prometheus"></a> 使用 Bailongma 2.0 接入 Prometheus 数据写入
**注意:**
TDengine 新版本(2.4.0.4+)包含 taosAdapter 组件,提供更简便的 Prometheus 数据写入以及其他更强大的功能,Bailongma v2 及之前版本将逐步不再维护。
## <a class="anchor" id="bailongma2-telegraf"></a> 使用 Bailongma 2.0 接入 Telegraf 数据写入
**注意:**
TDengine 新版本(2.3.0.0+)包含 taosAdapter 组件,提供更简便的 Telegraf 数据写入以及其他更强大的功能,Bailongma v2 及之前版本将逐步不再维护。
## <a class="anchor" id="emq"></a>EMQ Broker 直接写入
MQTT是流行的物联网数据传输协议,[EMQ](https://github.com/emqx/emqx)是一开源的MQTT Broker软件,无需任何代码,只需要在EMQ Dashboard里使用“规则”做简单配置,即可将MQTT的数据直接写入TDengine。EMQ X 支持通过 发送到 Web 服务的方式保存数据到 TDEngine,也在企业版上提供原生的 TDEngine 驱动实现直接保存。详细使用方法请参考 [EMQ 官方文档](https://docs.emqx.io/broker/latest/cn/rule/rule-example.html#%E4%BF%9D%E5%AD%98%E6%95%B0%E6%8D%AE%E5%88%B0-tdengine)
......
......@@ -15,11 +15,13 @@ Database Memory Size = maxVgroupsPerDb * (blocks * cache + 10MB) + numOfTables *
示例:假设是 4 核机器,cache 是缺省大小 16M, blocks 是缺省值 6,并且一个 DB 中有 10 万张表,标签总长度是 256 字节,则这个 DB 总的内存需求为:4 \* (16 \* 6 + 10) + 100000 \* (0.25 + 0.5) / 1000 = 499M。
在实际的系统运维中,我们通常会更关心 TDengine 服务进程(taosd)会占用的内存量。
```
taosd 内存总量 = vnode 内存 + mnode 内存 + 查询内存
```
其中:
1. “vnode 内存”指的是集群中所有的 Database 存储分摊到当前 taosd 节点上所占用的内存资源。可以按上文“Database Memory Size”计算公式估算每个 DB 的内存占用量进行加总,再按集群中总共的 TDengine 节点数做平均(如果设置为多副本,则还需要乘以对应的副本倍数)。
2. “mnode 内存”指的是集群中管理节点所占用的资源。如果一个 taosd 节点上分布有 mnode 管理节点,则内存消耗还需要增加“0.2KB * 集群中数据表总数”。
3. “查询内存”指的是服务端处理查询请求时所需要占用的内存。单条查询语句至少会占用“0.2KB * 查询涉及的数据表总数”的内存量。
......@@ -33,11 +35,13 @@ taosd 内存总量 = vnode 内存 + mnode 内存 + 查询内存
客户端应用采用 taosc 客户端驱动连接服务端,会有内存需求的开销。
客户端的内存开销主要由写入过程中的 SQL 语句、表的元数据信息缓存、以及结构性开销构成。系统最大容纳的表数量为 N(每个通过超级表创建的表的 meta data 开销约 256 字节),最大并行写入线程数量 T,最大 SQL 语句长度 S(通常是 1 Mbytes)。由此可以进行客户端内存开销的估算(单位 MBytes):
```
M = (T * S * 3 + (N / 4096) + 100)
```
举例如下:用户最大并发写入线程数 100,子表数总数 10,000,000,那么客户端的内存最低要求是:
```
100 * 3 + (10000000 / 4096) + 100 = 2741 (MBytes)
```
......@@ -310,6 +314,7 @@ ALTER DNODE <dnode_id> <config>
> debugFlag < 131 | 135 | 143 > 设置debugFlag为131、135或者143
例如:
```
alter dnode 1 debugFlag 135;
```
......@@ -347,25 +352,33 @@ taos -C 或 taos --dump-config
如果配置文件中不设置charset,在Linux系统中,taos在启动时候,自动读取系统当前的locale信息,并从locale信息中解析提取charset编码格式。如果自动读取locale信息失败,则尝试读取charset配置,如果读取charset配置也失败,则中断启动过程。
在Linux系统中,locale信息包含了字符编码信息,因此正确设置了Linux系统locale以后可以不用再单独设置charset。例如:
```
locale zh_CN.UTF-8
```
在Windows系统中,无法从locale获取系统当前编码。如果无法从配置文件中读取字符串编码信息,taos默认设置为字符编码为CP936。其等效在配置文件中添加如下配置:
```
charset CP936
```
如果需要调整字符编码,请查阅当前操作系统使用的编码,并在配置文件中正确设置。
在Linux系统中,如果用户同时设置了locale和字符集编码charset,并且locale和charset的不一致,后设置的值将覆盖前面设置的值。
```
locale zh_CN.UTF-8
charset GBK
```
则charset的有效值是GBK。
```
charset GBK
locale zh_CN.UTF-8
```
charset的有效值是UTF-8。
日志的配置参数,与server 的配置参数完全一样。
......@@ -377,25 +390,33 @@ taos -C 或 taos --dump-config
为应对多时区的数据写入和查询问题,TDengine 采用 Unix 时间戳(Unix Timestamp)来记录和存储时间戳。Unix 时间戳的特点决定了任一时刻不论在任何时区,产生的时间戳均一致。需要注意的是,Unix时间戳是在客户端完成转换和记录。为了确保客户端其他形式的时间转换为正确的 Unix 时间戳,需要设置正确的时区。
在Linux系统中,客户端会自动读取系统设置的时区信息。用户也可以采用多种方式在配置文件设置时区。例如:
```
timezone UTC-8
timezone GMT-8
timezone Asia/Shanghai
```
均是合法的设置东八区时区的格式。但需注意,Windows 下并不支持 `timezone Asia/Shanghai` 这样的写法,而必须写成 `timezone UTC-8`。
时区的设置对于查询和写入SQL语句中非Unix时间戳的内容(时间戳字符串、关键词now的解析)产生影响。例如:
```sql
SELECT count(*) FROM table_name WHERE TS<'2019-04-11 12:01:08';
```
在东八区,SQL语句等效于
```sql
SELECT count(*) FROM table_name WHERE TS<1554955268000;
```
在UTC时区,SQL语句等效于
```sql
SELECT count(*) FROM table_name WHERE TS<1554984068000;
```
为了避免使用字符串时间格式带来的不确定性,也可以直接使用Unix时间戳。此外,还可以在SQL语句中使用带有时区的时间戳字符串,例如:RFC3339格式的时间戳字符串,2013-04-12T15:52:01.123+08:00或者ISO-8601格式时间戳字符串2013-04-12T15:52:01.123+0800。上述两个字符串转化为Unix时间戳不受系统所在时区的影响。
启动taos时,也可以从命令行指定一个taosd实例的end point,否则就从taos.cfg读取。
......@@ -457,6 +478,7 @@ TDengine也支持在shell对已存在的表从CSV文件中进行数据导入。C
```mysql
insert into tb1 file 'path/data.csv';
```
**注意:如果CSV文件首行存在描述信息,请手动删除后再导入。如某列为空,填NULL,无引号。**
例如,现在存在一个子表d1001, 其表结构如下:
......@@ -472,6 +494,7 @@ taos> DESCRIBE d1001
location | BINARY | 64 | TAG |
groupid | INT | 4 | TAG |
```
要导入的data.csv的格式如下:
```csv
......@@ -485,6 +508,7 @@ taos> DESCRIBE d1001
'2018-10-11 06:38:05.000',17.30000,219,0.32000
'2018-10-12 06:38:05.000',18.30000,219,0.31000
```
那么可以用如下命令导入数据:
```mysql
......@@ -494,7 +518,7 @@ Query OK, 9 row(s) affected (0.004763s)
**taosdump工具导入**
TDengine提供了方便的数据库导入导出工具taosdump。用户可以将taosdump从一个系统导出的数据,导入到其他系统中。具体使用方法,请参见[TDengine 数据备份工具: taosdump](/tools/taosdump)
## <a class="anchor" id="export"></a>数据导出
......@@ -578,10 +602,12 @@ chmod +x TDinsight.sh
准备:
1. TDengine Server 信息:
* TDengine RESTful 服务:对本地而言,可以是 `http://localhost:6041`,使用参数 `-a`。
* TDengine 用户名和密码,使用 `-u` `-p` 参数设置。
2. Grafana 告警通知
* 使用已经存在的Grafana Notification Channel `uid`,参数 `-E`。该参数可以使用 `curl -u admin:admin localhost:3000/api/alert-notifications |jq` 来获取。
```bash
......@@ -602,7 +628,7 @@ chmod +x TDinsight.sh
-T '{"alarm_level":"%s","time":"%s","name":"%s","content":"%s"}'
```
运行程序并重启 Grafana 服务,打开面板:`http://localhost:3000/d/tdinsight`
更多使用场景和限制请参考[TDinsight](https://github.com/taosdata/grafanaplugin/blob/master/dashboards/TDinsight.md) 文档。
......@@ -663,6 +689,7 @@ TDengine 使用 Linux 系统的 systemd/systemctl/service 来管理系统的启
- 查看服务状态:`systemctl status taosd`
如果服务进程处于活动状态,则 status 指令会显示如下的相关信息:
```
......
......@@ -672,6 +699,7 @@ Active: active (running)
```
如果后台服务进程处于停止状态,则 status 指令会显示如下的相关信息:
```
......
......@@ -681,6 +709,7 @@ Active: inactive (dead)
```
卸载 TDengine,只需要执行如下命令:
```
rmtaos
```
......@@ -771,6 +800,7 @@ rmtaos
| COPY | IF | NOW | STABLES | WHERE |
## 转义字符说明
- 转义字符表(转义符的功能从 2.4.0.4 版本开始)
| 字符序列 | **代表的字符** |
......@@ -791,6 +821,7 @@ rmtaos
2. 数据里有转义字符
1. 遇到上面定义的转义字符会转义(%和_见下面说明),如果没有匹配的转义字符会忽略掉转义符\。
2. 对于%和_,因为在like里这两个字符是通配符,所以在模式匹配like里用`\%`%和`\_`表示字符里本身的%和_,如果在like模式匹配上下文之外使用`\%`或`\_`,则它们的计算结果为字符串`\%`和`\_`,而不是%和_。
## 诊断及其他
#### 网络连接诊断
......@@ -909,7 +940,9 @@ taosd 服务端日志文件标志位 debugflag 默认为 131,在 debug 时往
- taosdlog 服务器端生成的日志,记录taosinfo中全部信息外,还根据设置的日志输出级别,记录DEBUG(日志级别135)、TRACE(日志级别是 143)。
### 客户端日志
每个独立运行的客户端(一个进程)生成一个独立的客户端日志,其命名方式采用 taoslog+<序号> 的方式命名。文件标志位 debugflag 默认为 131,在 debug 时往往需要将其提升到 135 或 143 。
- taoslog 客户端(driver)生成的日志,默认记录客户端INFO/ERROR/WARNING 级别日志,还根据设置的日志输出级别,记录DEBUG(日志级别135)、TRACE(日志级别是 143)。
其中,日志文件最大长度由 numOfLogLines 来进行配置,一个 taosd 实例最多保留两个文件。
......
# TDengine Documentation
TDengine is a highly efficient platform to store, query, and analyze time-series data. It is specially designed and optimized for IoT, Internet of Vehicles, Industrial IoT, IT Infrastructure and Application Monitoring, etc. It works like a relational database, such as MySQL, but you are strongly encouraged to read through the following documentation before you experience it, especially the Data Modeling sections. In addition to this document, you should also download and read the technology white paper.
## [TDengine Introduction](/evaluation)
* [TDengine Introduction and Features](/evaluation#intro)
......@@ -84,7 +85,7 @@ TDengine is a highly efficient platform to store, query, and analyze time-series
* [taosAdapter](/tools/adapter): a bridge/adapter between TDengine cluster and applications.
* [TDinsight](/tools/insight): monitoring TDengine cluster with Grafana.
* [taosdump](/tools/taosdump): backup tool for TDengine. Please install `taosTools` package for it.
* [taosBenchmark](/tools/taosbenchmark): stress test tool for TDengine.
## [Connections with Other Tools](/connections)
......@@ -92,6 +93,8 @@ TDengine is a highly efficient platform to store, query, and analyze time-series
- [MATLAB](/connections#matlab): access data stored in TDengine server via JDBC configured within MATLAB
- [R](/connections#r): access data stored in TDengine server via JDBC configured within R
- [IDEA Database](https://www.taosdata.com/blog/2020/08/27/1767.html): use TDengine visually through IDEA Database Management Tool
- [TDengineGUI](https://github.com/skye0207/TDengineGUI): a TDengine management tool with Graphical User Interface
- [DataX](https://github.com/taosdata/datax): a data migration tool that supports TDengine
## [Installation and Management of TDengine Cluster](/cluster)
......@@ -118,6 +121,12 @@ TDengine is a highly efficient platform to store, query, and analyze time-series
- [File Directory Structure](/administrator#directories): directories where TDengine data files and configuration files located
- [Parameter Limits and Reserved Keywords](/administrator#keywords): TDengine’s list of parameter limits and reserved keywords
## Rapidly build an IT DevOps system with TDengine
* [devops](/devops/telegraf): Rapidly build an IT DevOps system with TDengine + Telegraf + Grafana
* [devops](/devops/collectd): Rapidly build an IT DevOps system with TDengine + collectd/StatsD + Grafana
* [immigration](/devops/immigrate): Best practice for migrating from OpenTSDB to TDengine
## Performance: TDengine vs Others
- [Performance: TDengine vs OpenTSDB](https://www.taosdata.com/blog/2019/09/12/710.html)
......
# Quickly experience TDengine with Docker
While it is not recommended to deploy TDengine services via Docker in a production environment, Docker tools do a good job of shielding the environmental differences in the underlying operating system and are well suited for use in development testing or first-time experience with the toolset for installing and running TDengine. In particular, Docker makes it relatively easy to try TDengine on Mac OSX and Windows systems without having to install a virtual machine or rent an additional Linux server. In addition, starting from version 2.0.14.0, TDengine provides images that support both X86-64, X86, arm64, and arm32 platforms, so non-mainstream computers that can run docker, such as NAS, Raspberry Pi, and embedded development boards, can also easily experience TDengine based on this document.
The following article explains how to quickly build a single-node TDengine runtime environment via Docker to support development and testing through a Step by Step style introduction.
......
Since TDengine was open sourced in July 2019, it has gained a lot of popularity among time-series database developers with its innovative data modeling design, simple installation method, easy programming interface, and powerful data insertion and query performance. The insertion and query performance is often astonishing to users who are new to TDengine. To help users experience TDengine's high performance and functionality in the shortest time, we developed an application called `taosBenchmark` (formerly named `taosdemo`) for insertion and query performance testing of TDengine. With it, users can easily simulate the scenario of a large number of devices generating a very large amount of data, and can control the number of columns, data types, disorder ratio, and number of concurrent threads through taosBenchmark's parameters.
Running taosBenchmark is very simple. Just download the [TDengine installation package](https://www.taosdata.com/cn/all-downloads/) or compiling the [TDengine code](https://github.com/taosdata/TDengine). It can be found and run in the installation directory or in the compiled results directory.
# To run an insertion test with taosBenchmark
Executing taosBenchmark without any parameters results in the following output.
```
$ taosBenchmark
......@@ -70,6 +70,7 @@ Query OK, 6 row(s) in set (0.002972s)
```
After pressing any key, taosBenchmark will create the database test and the super table meters, and generate 10,000 sub-tables representing 10,000 individual meter devices that report data independently, using the super table meters as a template according to TDengine data modeling best practices.
```
taos> use test;
Database changed.
......@@ -91,7 +92,9 @@ taos> show stables;
meters | 2021-08-27 11:21:01.209 | 4 | 2 | 10000 |
Query OK, 1 row(s) in set (0.001740s)
```
Then taosBenchmark generates 10,000 records for each meter device.
```
...
====thread[3] completed total inserted rows: 6250000, total affected rows: 6250000. 347626.22 records/second====
......@@ -108,9 +111,11 @@ Spent 18.0863 seconds to insert rows: 100000000, affected rows: 100000000 with 1
insert delay, avg: 28.64ms, max: 112.92ms, min: 9.35ms
```
The above information is the result of a real test on a normal PC server with 8 CPUs and 64 GB RAM. It shows that taosBenchmark inserted 100,000,000 (100 million) records in 18 seconds, an average of 5,529,049 records per second.
TDengine also offers a parameter-bind interface for better performance, and using the parameter-bind interface (taosBenchmark -I stmt) on the same hardware for the same amount of data writes, the results are as follows.
```
...
......@@ -145,12 +150,13 @@ Spent 6.0257 seconds to insert rows: 100000000, affected rows: 100000000 with 16
insert delay, avg: 8.31ms, max: 860.12ms, min: 2.00ms
```
It shows that taosBenchmark inserted 100 million records in 6 seconds, with much higher insertion performance: 16,595,590 records were inserted per second.
Because taosBenchmark is so easy to use, we have extended it with more features to support more complex parameter settings for sample data preparation and validation for rapid prototyping.
The complete list of taosBenchmark command-line arguments can be displayed via taosBenchmark --help as follows.
```
$ taosBenchmark --help
......@@ -197,15 +203,19 @@ Report bugs to <support@taosdata.com>.
```
taosBenchmark's parameters are designed to meet the needs of data simulation. A few commonly used parameters are described below.
```
-I, --interface=INTERFACE The interface (taosc, rest, and stmt) taosBenchmark uses. Default is 'taosc'.
```
As mentioned earlier when describing the performance differences between interfaces, the -I parameter selects the interface; currently taosc, stmt and rest are supported. taosc uses SQL statements to write data, stmt uses the parameter binding interface to write data, and rest uses the RESTful protocol to write data.
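For example, a minimal sketch comparing the three interfaces on the same default data set (all options below follow the `taosBenchmark --help` output):
```
# Write the default data set via SQL (taosc), parameter binding (stmt) and the RESTful protocol (rest)
taosBenchmark -I taosc -y
taosBenchmark -I stmt -y
taosBenchmark -I rest -y
```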
```
-T, --threads=NUMBER The number of threads. Default is 8.
```
The -T parameter sets how many threads taosBenchmark uses to synchronize data writes, so that multiple threads can squeeze as much processing power out of the hardware as possible.
```
-b, --data-type=DATATYPE The data_type of columns, default: FLOAT, INT, FLOAT.
......@@ -213,36 +223,50 @@ The -T parameter sets how many threads taosBenchmark uses to synchronize data wr
-l, --columns=COLUMNS The number of columns per record. Demo mode by default is 3 (float, int, float). Max values is 4095
```
As mentioned earlier, taosBenchmark creates a typical meter data reporting scenario by default, with each device containing three columns: current, voltage and phase. TDengine supports BOOL, TINYINT, SMALLINT, INT, BIGINT, FLOAT, DOUBLE, BINARY, NCHAR, TIMESTAMP data types. Using -b with a comma-separated list of types allows you to specify a column list with customized data types. Using -w specifies the width of BINARY and NCHAR columns (default is 64). The -l parameter sets the total number of columns, padding the columns specified by -b with additional INT columns, which reduces manual input when there is a particularly large number of columns, up to 4095 columns.
```
-r, --rec-per-req=NUMBER The number of records per request. Default is 30000.
```
To reach TDengine's performance limits, data insertion can be executed using multiple clients, multiple threads, and batching many records into one request. The -r parameter sets the number of records that can be batched into a single write request, the default is 30,000. The effective number of batched records is also related to the client buffer size, which is currently 1 MB; if the record row width is large, the maximum number of batched records can be calculated by dividing 1 MB by the row width (in bytes).
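For example, assuming a row width of roughly 200 bytes, about 1,000,000 / 200 ≈ 5000 records fit into one request, so a reasonable setting would be (a sketch, not a tuned value):
```
# Keep the number of records batched per request around 5000 for ~200-byte rows
taosBenchmark -r 5000 -y
```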
```
-t, --tables=NUMBER The number of tables. Default is 10000.
-n, --records=NUMBER The number of records per table. Default is 10000.
-M, --random The value of records generated are totally random. The default is to simulate power equipment scenario.
```
As mentioned earlier, taosBenchmark creates 10,000 tables by default, and each table writes 10,000 records. taosBenchmark can set the number of tables and the number of records in each table by -t and -n. The data generated by default without parameters are simulated real scenarios, and the simulated data are current and voltage phase values with certain jitter, which can more realistically show TDengine's efficient data compression ability. If you need to simulate the generation of completely random data, you can pass the -M parameter.
```
-y, --answer-yes Default input yes for prompt.
```
As we can see above, taosBenchmark outputs a list of parameters for the upcoming operation by default before creating a database or inserting data, so that the user can know what data is about to be written before inserting. To facilitate automatic testing, the -y parameter allows taosBenchmark to write data immediately after outputting the parameters.
```
-O, --disorder=NUMBER Insert order mode--0: In order, 1 ~ 50: disorder ratio. Default is in order.
-R, --disorder-range=NUMBER Out of order data's range, ms, default is 1000.
```
In some scenarios, the received data does not arrive in exact order, but contains a certain percentage of out-of-order data, which TDengine can also handle very well. To simulate writing out-of-order data, taosBenchmark provides the -O and -R parameters. Setting -O to 0 is the same as not using -O at all, i.e., fully ordered writes; values from 1 to 50 give the percentage of out-of-order data. The -R parameter is the range of the timestamp offset of the out-of-order data, default is 1000 milliseconds. Also note that time-series data is uniquely identified by its timestamp, so out-of-order data may generate exactly the same timestamp as previously written data; such data will either be discarded (update 0) or overwrite existing data (update 1 or 2) depending on the update value the database was created with, and the total number of records may not match the expected number.
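For example, the following sketch writes data with 10% out-of-order records whose timestamps are shifted within a 1000 ms range:
```
# 10% of the generated records are out of order, shifted within 1000 ms
taosBenchmark -O 10 -R 1000 -y
```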
```
-g, --debug Print debug info.
```
If you are interested in the taosBenchmark insertion process or if the data insertion result is not as expected, you can use the -g parameter to make taosBenchmark print the debugging information in the process of the execution to the screen or import it to another file with the Linux redirect command to easily find the cause of the problem. In addition, taosBenchmark will also output the corresponding executed statements and debugging reasons to the screen after the execution fails. You can search the word "reason" to find the error reason information returned by the TDengine server.
```
-x, --aggr-func Test aggregation functions after insertion.
```
TDengine is not only very powerful in insertion performance, but also in query performance, thanks to its advanced database engine design. taosBenchmark provides a -x option that performs common query operations and outputs the query time after the data insertion finishes. The following is the result of common queries after inserting 100 million rows on the aforementioned server.
You can see that the select * operation that fetches 100 million rows (without outputting them to the screen) consumes only 1.26 seconds. Most common aggregation functions on 100 million records usually take only about 20 milliseconds, and even the longest, count, takes less than 40 milliseconds.
```
taosBenchmark -I stmt -T 48 -y -x
...
......@@ -264,7 +288,9 @@ select min(current) took 0.025812 second(s)
select first(current) took 0.024105 second(s)
...
```
In addition to the command line approach, taosBenchmark also supports taking a JSON file as an input parameter to provide a richer set of settings. A typical JSON file would look like this.
```
{
"filetype": "insert",
......@@ -327,13 +353,15 @@ In addition to the command line approach, taosBenchmark also supports take a JSO
}]
}
```
For example, we can specify different number of threads for table creation and data insertion with "thread_count" and "thread_count_create_tbl". You can use a combination of "child_table_exists", "childtable_limit" and "childtable_offset" to use multiple taosBenchmark processes (even on different computers) to write to different ranges of child tables of the same super table at the same time. You can also import existing data by specifying the data source as a csv file with "data_source" and "sample_file".
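Assuming the JSON content above is saved as insert.json, the test can then be started from the configuration file (this sketch assumes the file is passed with -f; check `taosBenchmark --help` for the exact option name in your version):
```
# Start an insertion test from a JSON configuration file
taosBenchmark -f insert.json
```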
# Use taosBenchmark for query and subscription testing
taosBenchmark can not only write data, but also perform query and subscription functions. However, a taosBenchmark instance can only support one of these functions, not all three, and the configuration file is used to specify which function to test.
The following is the content of a typical query JSON example file.
```
{
"filetype": "query",
......@@ -373,7 +401,9 @@ The following is the content of a typical query JSON example file.
}
}
```
The following parameters are specific to the query in the JSON file.
```
"query_times": the number of queries per query type
"query_mode": query data interface, "tosc": call TDengine's c interface; "resetful": use restfule interface. Options are available. Default is "taosc".
......@@ -392,6 +422,7 @@ The following parameters are specific to the query in the JSON file.
```
The following is a typical subscription JSON example file content.
```
{
"filetype":"subscribe",
......@@ -431,7 +462,9 @@ The following is a typical subscription JSON example file content.
}
}
```
The following are the meanings of the parameters specific to the subscription function.
```
"interval": interval for executing subscriptions, in seconds. Optional, default is 0.
"restart": subscription restart." yes": restart the subscription if it already exists, "no": continue the previous subscription. (Please note that the executing user needs to have read/write access to the dataDir directory)
......@@ -439,8 +472,9 @@ The following are the meanings of the parameters specific to the subscription fu
"resubAfterConsume": Used in conjunction with keepProgress to call unsubscribe after the subscription has been consumed the appropriate number of times and to subscribe again.
"result": the name of the file to which the query result is written. Optional, default is null, means the query result will not be written to the file. Note: The file to save the result after each sql statement cannot be renamed, and the file name will be appended with the thread number when generating the result file.
```
# Conclusion
TDengine is a big data platform designed and optimized for IoT, Telematics, Industrial Internet, DevOps, etc. TDengine shows performance that far exceeds similar products due to the innovative data storage and query engine design in the database kernel. And with SQL syntax support and connectors for multiple programming languages (currently Java, Python, Go, C#, NodeJS, Rust, etc. are supported), it is extremely easy to use and has zero learning cost. To facilitate operation and maintenance, we also provide data migration and monitoring functions and other related ecological tools and software.
For users who are new to TDengine, we have developed rich features for taosBenchmark to facilitate technical evaluation and stress testing. This article is a brief introduction to taosBenchmark, which will continue to evolve and improve as new features are added to TDengine.
......
# How to install/uninstall TDengine with installation package
TDengine open source version provides `deb` and `rpm` format installation packages. Our users can choose the appropriate installation package according to their own running environment. The `deb` supports Debian/Ubuntu etc. and the `rpm` supports CentOS/RHEL/SUSE etc. We also provide `tar.gz` format installers for enterprise users.
## Install and uninstall deb package
### Install deb package
- Download and obtain the deb installation package from the official website, such as TDengine-server-2.0.0.0-Linux-x64.deb.
- Go to the directory where the TDengine-server-2.0.0.0-Linux-x64.deb installation package is located and execute the following installation command.
```
plum@ubuntu:~/git/taosv16$ sudo dpkg -i TDengine-server-2.0.0.0-Linux-x64.deb
Selecting previously unselected package tdengine.
(Reading database ... 233181 files and directories currently installed.)
Preparing to unpack TDengine-server-2.0.0.0-Linux-x64.deb ...
Failed to stop taosd.service: Unit taosd.service not loaded.
Stop taosd service success!
Unpacking tdengine (2.0.0.0) ...
Setting up tdengine (2.0.0.0) ...
Start to install TDEngine...
Synchronizing state of taosd.service with SysV init with /lib/systemd/systemd-sysv-install...
Executing /lib/systemd/systemd-sysv-install enable taosd
insserv: warning: current start runlevel(s) (empty) of script `taosd' overrides LSB defaults (2 3 4 5).
insserv: warning: current stop runlevel(s) (0 1 2 3 4 5 6) of script `taosd' overrides LSB defaults (0 1 6).
Enter FQDN:port (like h1.taosdata.com:6030) of an existing TDengine cluster node to join OR leave it blank to build one :
To configure TDengine : edit /etc/taos/taos.cfg
To start TDengine : sudo systemctl start taosd
To access TDengine : use taos in shell
TDengine is installed successfully!
```
Note: When the 'Enter FQDN:' prompt appears while installing the first node, nothing needs to be entered. Only when installing the second or later nodes do you need to enter the FQDN of any available node in the existing cluster, so that the new node can join the cluster. You can also leave it blank and instead configure it in the new node's configuration file before the new node starts.
The other installation package formats follow the same procedure.
### Uninstall deb
Uninstall command is below:
```
plum@ubuntu:~/git/tdengine/debs$ sudo dpkg -r tdengine
(Reading database ... 233482 files and directories currently installed.)
Removing tdengine (2.0.0.0) ...
TDEngine is removed successfully!
```
## Install and uninstall rpm package
### Install rpm
- Download and obtain the rpm installation package from the official website, such as TDengine-server-2.0.0.0-Linux-x64.rpm.
- Go to the directory where the TDengine-server-2.0.0.0-Linux-x64.rpm installation package is located and execute the following installation command.
```
[root@bogon x86_64]# rpm -iv TDengine-server-2.0.0.0-Linux-x64.rpm
Preparing packages...
TDengine-2.0.0.0-3.x86_64
Start to install TDEngine...
Created symlink from /etc/systemd/system/multi-user.target.wants/taosd.service to /etc/systemd/system/taosd.service.
Enter FQDN:port (like h1.taosdata.com:6030) of an existing TDengine cluster node to join OR leave it blank to build one :
To configure TDengine : edit /etc/taos/taos.cfg
To start TDengine : sudo systemctl start taosd
To access TDengine : use taos in shell
TDengine is installed successfully!
```
### Uninstall rpm
The uninstall command is as follows:
```
[root@bogon x86_64]# rpm -e tdengine
TDEngine is removed successfully!
```
## Install and uninstall tar.gz
### Install tar.gz
- Download and obtain the tar.gz installation package from the official website, such as `TDengine-server-2.0.0.0-Linux-x64.tar.gz`.
- Go to the directory where the `TDengine-server-2.0.0.0-Linux-x64.tar.gz` installation package is located, unzip the file first, then enter the subdirectory and execute the install.sh installation script in it as follows
```
plum@ubuntu:~/git/tdengine/release$ sudo tar -xzvf TDengine-server-2.0.0.0-Linux-x64.tar.gz
plum@ubuntu:~/git/tdengine/release$ ll
total 3796
drwxr-xr-x 3 root root 4096 Aug 9 14:20 ./
drwxrwxr-x 11 plum plum 4096 Aug 8 11:03 ../
drwxr-xr-x 5 root root 4096 Aug 8 11:03 TDengine-server/
-rw-r--r-- 1 root root 3871844 Aug 8 11:03 TDengine-server-2.0.0.0-Linux-x64.tar.gz
plum@ubuntu:~/git/tdengine/release$ cd TDengine-server/
plum@ubuntu:~/git/tdengine/release/TDengine-server$ ll
total 2640
drwxr-xr-x 5 root root 4096 Aug 8 11:03 ./
drwxr-xr-x 3 root root 4096 Aug 9 14:20 ../
drwxr-xr-x 5 root root 4096 Aug 8 11:03 connector/
drwxr-xr-x 2 root root 4096 Aug 8 11:03 driver/
drwxr-xr-x 8 root root 4096 Aug 8 11:03 examples/
-rwxr-xr-x 1 root root 13095 Aug 8 11:03 install.sh*
-rw-r--r-- 1 root root 2651954 Aug 8 11:03 taos.tar.gz
plum@ubuntu:~/git/tdengine/release/TDengine-server$ sudo ./install.sh
This is ubuntu system
verType=server interactiveFqdn=yes
Start to install TDengine...
Synchronizing state of taosd.service with SysV init with /lib/systemd/systemd-sysv-install...
Executing /lib/systemd/systemd-sysv-install enable taosd
insserv: warning: current start runlevel(s) (empty) of script `taosd' overrides LSB defaults (2 3 4 5).
insserv: warning: current stop runlevel(s) (0 1 2 3 4 5 6) of script `taosd' overrides LSB defaults (0 1 6).
Enter FQDN:port (like h1.taosdata.com:6030) of an existing TDengine cluster node to join OR leave it blank to build one :hostname.taosdata.com:7030
To configure TDengine : edit /etc/taos/taos.cfg
To start TDengine : sudo systemctl start taosd
To access TDengine : use taos in shell
Please run: taos -h hostname.taosdata.com:7030 to login into cluster, then execute : create dnode 'newDnodeFQDN:port'; in TAOS shell to add this new node into the clsuter
TDengine is installed successfully!
```
Note: During execution, the install.sh script asks for some configuration information through an interactive command-line interface. If you prefer a non-interactive installation, you can execute the install.sh script with the `-e no` parameter. Run `./install.sh -h` to see detailed information about all parameters. Both invocations are shown below.
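For reference, the two commands mentioned in the note above (both parameter names are taken directly from the note):
```
sudo ./install.sh -e no   # non-interactive installation
./install.sh -h           # list all supported parameters
```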
### Uninstall TDengine after installation from the tar.gz package
The uninstall command is as follows:
```
plum@ubuntu:~/git/tdengine/release/TDengine-server$ rmtaos
TDEngine is removed successfully!
```
## Installation directory description
After TDengine is successfully installed, the main installation directory is /usr/local/taos, and the directory contents are as follows:
```
plum@ubuntu:/usr/local/taos$ cd /usr/local/taos
plum@ubuntu:/usr/local/taos$ ll
total 36
drwxr-xr-x 9 root root 4096 7 30 19:20 ./
drwxr-xr-x 13 root root 4096 7 30 19:20 ../
drwxr-xr-x 2 root root 4096 7 30 19:20 bin/
drwxr-xr-x 2 root root 4096 7 30 19:20 cfg/
lrwxrwxrwx 1 root root 13 7 30 19:20 data -> /var/lib/taos/
drwxr-xr-x 2 root root 4096 7 30 19:20 driver/
drwxr-xr-x 8 root root 4096 7 30 19:20 examples/
drwxr-xr-x 2 root root 4096 7 30 19:20 include/
drwxr-xr-x 2 root root 4096 7 30 19:20 init.d/
lrwxrwxrwx 1 root root 13 7 30 19:20 log -> /var/log/taos/
```
- The configuration file directory, database directory, and log directory are generated automatically.
- Configuration file default directory: /etc/taos/taos.cfg, softlinked to /usr/local/taos/cfg/taos.cfg.
- Database default directory: /var/lib/taos, softlinked to /usr/local/taos/data.
- Log default directory: /var/log/taos, softlinked to /usr/local/taos/log.
- Executables are in the /usr/local/taos/bin directory and are soft-linked to the /usr/bin directory.
- Dynamic library files are in the /usr/local/taos/driver directory and are soft-linked to the /usr/lib directory.
- Header files are in the /usr/local/taos/include directory and are soft-linked to the /usr/include directory. (The soft links can be verified with the check below.)
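A quick check of the soft links, assuming a default installation with the paths listed above:
```
ls -l /usr/bin | grep taos     # executables linked from /usr/local/taos/bin
ls -l /usr/lib | grep taos     # dynamic libraries linked from /usr/local/taos/driver
```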
## Uninstall and update file instructions
When uninstalling the installation package, the configuration files, database files and log files will be kept, i.e. /etc/taos/taos.cfg, /var/lib/taos, /var/log/taos. If users confirm that they do not need to keep them, they can delete them manually, but must be careful, because after deletion, the data will be permanently lost and cannot be recovered!
If the installation is an update and the default configuration file (/etc/taos/taos.cfg) already exists, the existing configuration file is still used. The configuration file carried in the installation package is renamed to taos.cfg.org and saved in the /usr/local/taos/cfg/ directory, where it can be used as a reference sample for setting configuration parameters. If the default configuration file does not exist, the configuration file that comes with the installation package is used. (A quick way to compare the two files is sketched below.)
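A minimal sketch for comparing the active configuration with the packaged reference sample, using the default paths mentioned above:
```
diff /etc/taos/taos.cfg /usr/local/taos/cfg/taos.cfg.org
```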
## Caution
- TDengine provides several installers, but it is best not to use both the tar.gz installer and the deb or rpm installer on one system. Otherwise, they may affect each other and cause problems when using them.
- For the deb package, if the installation directory is deleted manually by mistake, neither uninstallation nor reinstallation can succeed. In this case, you need to clear the installation information of the tdengine package by executing the following command:
```
plum@ubuntu:~/git/tdengine/$ sudo rm -f /var/lib/dpkg/info/tdengine*
```
Then just reinstall it.
- For the rpm package, if the installation directory is deleted manually by mistake, neither uninstallation nor reinstallation can succeed. In this case, you need to clear the installation information of the tdengine package by executing the following command:
```
[root@bogon x86_64]# rpm -e --noscripts tdengine
```
Then just reinstall it.
......@@ -2,31 +2,33 @@
## <a class="anchor" id="install"></a>Quick Install
TDengine software consists of 3 parts: server, client, and alart module. At the moment, TDengine server only runs on Linux (Windows, mac OS and more OS supports will come soon), but client can run on either Windows or Linux. TDengine client can be installed and run on Windows or Linux. Applications based-on any OSes can all connect to server taosd via a RESTful interface. From 2.4 and later version, TDengine use a stand-alone software, taosAdapteer to provide http service. The early version uses the http server embedded in the taosd. About CPU, TDengine supports X64/ARM64/MIPS64/Alpha64, and ARM32、RISC-V, other more CPU architectures will be supported soon. You can set up and install TDengine server either from the [source code](https://www.taosdata.com/en/getting-started/#Install-from-Source) or the [packages](https://www.taosdata.com/en/getting-started/#Install-from-Package).
TDengine includes server, client, and ecological software and peripheral tools. Currently, version 2.0 of the server can only be installed and run on Linux and will support Windows, macOS, and other OSes in the future. The client can be installed and run on Windows or Linux. Applications on any operating system can use the RESTful interface to connect to the taosd server. After 2.4, TDengine includes taosAdapter to provide an easy-to-use and efficient way to ingest data including RESTful service. taosAdapter needs to be started manually as a stand-alone component. The early version uses an embedded HTTP component to provide the RESTful interface.
### <a class="anchor" id="source-install"></a>Install from Source
Please visit our [TDengine github page](https://github.com/taosdata/TDengine) for instructions on installation from the source code.
### Install from Docker Container
TDengine supports X64/ARM64/MIPS64/Alpha64 hardware platforms and will support ARM32, RISC-V, and other CPU architectures in the future.
For the time being, it is not recommended to use Docker to deploy the client or server side of TDengine in production environments, but it is convenient to use Docker to deploy in development environments or when trying it for the first time. In particular, with Docker, it is easy to try TDengine in Mac OS X and Windows environments.
### Install with Docker Container
```
docker run -d -p 6030-6049:6030-6049 -p 6030-6049:6030-6049/udp tdengine/tdengine
```
Please refer to [Quickly experience TDengine with Docker](https://www.taosdata.com/en/documentation/getting-started/docker) for the details.
Please refer to [Quickly Taste TDengine with Docker](https://www.taosdata.com/en/documentation/getting-started/docker) for the details.
For the time being, using Docker to deploy the client or server of TDengine for production environments is not recommended. However, it is a convenient way to deploy TDengine for development purposes. In particular, it is easy to try TDengine in Mac OS X and Windows environments with Docker.
### <a class="anchor" id="package-install"></a>Install from Package
Three different packages for TDengine server are provided, please pick up the one you like. (Lite packages only have execution files and connector of C/C++, but standard packages support connectors of nearly all programming languages.) Beta version has more features, but we suggest you to install stable version for production or testing.
TDengine is very easy to install, from download to successful installation in just a few seconds. For ease of use, the standard server installation package includes the client application and sample code; if you only need the server application and C/C++ language support for the client connection, you can also download the lite version of the installation package only. The installation packages are available in `rpm` and `deb` formats, as well as `tar.gz` format for enterprise customers who need to facilitate use on specific operating systems. Releases include both stable and beta releases. We recommend the stable release for production use or testing. The beta release may contain more new features. You can choose to download from the following as needed:
<ul id="server-packageList" class="package-list"></ul>
For detailed installation steps, please refer to [How to install/uninstall TDengine with installation package](https://www.taosdata.com/getting-started/install).
Click [here](https://www.taosdata.com/en/getting-started/#Install-from-Package) to download the install package.
**Click [here](https://github.com/taosdata/TDengine/releases) for release notes.**
### Install TDengine by apt-get
If you use Debian or Ubuntu system you can use 'apt-get' command to install TDengine from official repository. Please use following commands to setup:
If you use Debian or Ubuntu system you can use the `apt-get` command to install TDengine from the official repository. Please use the following commands to setup:
```
wget -qO - http://repos.taosdata.com/tdengine.key | sudo apt-key add -
......@@ -37,18 +39,30 @@ apt-get policy tdengine
sudo apt-get install tdengine
```
### Install client only
If the client and server are running on different computers, you can install the client separately. When downloading, please note that the selected client version number should strictly match the server version number downloaded above. Linux and Windows installation packages are as follows (the lite version of the installer comes with connection support for the C/C++ language only, while the standard version of the installer also contains sample code):
<ul id="client-packagelist" class="package-list"></ul>
### <a class="anchor" id="source-install"></a>Install from Source
If you want to contribute to TDengine, please visit [TDengine GitHub page](https://github.com/taosdata/TDengine) for detailed instructions on build and installation from the source code.
**To download other components, beta version, or early releases, please click [here](https://www.taosdata.com/en/all-downloads/).**
## <a class="anchor" id="start"></a>Quick Launch
After installation, you can start the TDengine service by the `systemctl` command.
```bash
$ systemctl start taosd
systemctl start taosd
```
Then check if the service is working now.
```bash
$ systemctl status taosd
systemctl status taosd
```
If the service is running successfully, you can play around through TDengine shell `taos`.
......@@ -56,25 +70,25 @@ If the service is running successfully, you can play around through TDengine she
**Note:**
- The `systemctl` command needs the **root** privilege. Use **sudo** if you are not the **root** user.
- To get better product feedback and improve our solution, TDengine will collect basic usage information, but you can modify the configuration parameter **telemetryReporting** in the system configuration file taos.cfg, and set it to 0 to turn it off.
- TDengine uses FQDN (usually hostname) as the node ID. In order to ensure normal operation, you need to set hostname for the server running taosd, and configure DNS service or hosts file for the machine running client application, to ensure the FQDN can be resolved.
- TDengine supports installation on Linux systems with [systemd](https://en.wikipedia.org/wiki/Systemd) as the process service management, and uses `which systemctl` command to detect whether `systemd` packages exist in the system:
- To get better product feedback and improve our solution, TDengine will collect basic usage information, but you can modify the configuration parameter **telemetryReporting** in the system configuration file `taos.cfg`, and set it to 0 to turn it off.
- TDengine uses FQDN (usually the hostname) as the node ID. To ensure normal operation, you need to set the hostname for the server running `taosd`, and configure a DNS service or the hosts file for the machine running the client application, so that the FQDN can be resolved (a hosts-file sketch follows these notes).
- TDengine supports installation on Linux systems with [systemd](https://en.wikipedia.org/wiki/Systemd) as the process service management and uses `which systemctl` command to detect whether `systemd` packages exist in the system:
```bash
$ which systemctl
which systemctl
```
If `systemd` is not supported in the system, TDengine service can also be launched via `/usr/local/taos/bin/taosd` manually.
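As noted above, the server's FQDN must be resolvable on the client machine. A minimal sketch using the hosts file, with a hypothetical address and FQDN:

```bash
# append to /etc/hosts on the client machine (hypothetical address and FQDN)
echo "192.168.0.1 h1.taosdata.com" | sudo tee -a /etc/hosts
```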
## <a class="anchor" id="console"></a>TDengine Shell Command Line
To launch TDengine shell, the command line interface, in a Linux terminal, type:
To launch TDengine shell, the command-line interface, in a Linux terminal, type:
```bash
$ taos
taos
```
The welcome message is printed if the shell connects to TDengine server successfully, otherwise, an error message will be printed (refer to our [FAQ](https://www.taosdata.com/en/faq) page for troubleshooting the connection error). The TDengine shell prompt is:
The welcome message is printed if the shell connects to the TDengine server successfully, otherwise, an error message will be printed (refer to our [FAQ](https://www.taosdata.com/en/faq) page for troubleshooting the connection error). The TDengine shell prompt is:
```cmd
taos>
......@@ -110,49 +124,59 @@ Besides the SQL commands, the system administrator can check system status, add
### Shell Command Line Parameters
You can configure command parameters to change how TDengine shell executes. Some frequently used options are listed below:
You can configure command parameters to change how the TDengine shell executes. Some frequently used options are listed below:
- -c, --config-dir: set the configuration directory. It is */etc/taos* by default.
- -h, --host: set the IP address of the server it will connect to. Default is localhost.
- -s, --commands: set the command to run without entering the shell.
- -u, -- user: user name to connect to server. Default is root.
- -u, --user: user name to connect to the server/cluster. Default is root.
- -p, --password: password. Default is 'taosdata'.
- -?, --help: get a full list of supported options.
Examples:
```bash
$ taos -h 192.168.0.1 -s "use db; show tables;"
taos -h 192.168.0.1 -s "use db; show tables;"
```
### Run SQL Command Scripts
Inside TDengine shell, you can run SQL scripts in a file with source command.
Inside TDengine shell, you can run SQL scripts in a file with the `source` command.
```mysql
taos> source <filename>;
```
### Shell Tips
### taos shell tips
- Use up/down arrow key to check the command history
- To change the default password, use "alter user" command
- Use the up/down arrow key to check the command history
- To change the default password, use `alter user` command
- Use ctrl+c to interrupt any queries
- To clean the schema of local cached tables, execute command `RESET QUERY CACHE`
- To clean the schema of locally cached tables, execute the command `RESET QUERY CACHE`
## <a class="anchor" id="demo"></a>Taste TDengine’s Lightning Speed
## <a class="anchor" id="demo"></a>Experience TDengine’s Lightning Speed
### <a class="anchor" id="taosBenchmark"></a> Taste insertion speed with taosBenchmark
After starting the TDengine server, you can execute the command `taosBenchmark` (was named `taosdemo`, please install taosTools package if you use TDengine 2.4 or later version) in the Linux terminal.
Once the TDengine server has started, you can execute the command `taosBenchmark` (formerly named `taosdemo`) in the Linux terminal. In 2.4.0.7 and earlier releases, taosBenchmark is distributed within the taosTools package. In later releases, taosBenchmark is included within TDengine again.
```bash
$ taosBenchmark
taosBenchmark
```
Using this command, a STable named `meters` will be created in the database `test`. There are 10k tables under this STable, named from `t0` to `t9999`. In each table there are 100k rows of records, each row with columns (`f1`, `f2` and `f3`. The timestamp is from "2017-07-14 10:40:00 000" to "2017-07-14 10:41:39 999". Each table also has tags `areaid` and `loc`: `areaid` is set from 1 to 10, `loc` is set to "beijing" or "shanghai".
Using this command, a STable named `meters` will be created in the database `test`. There are 10k tables under this STable, named from `d0` to `d9999`. In each table, there are 100k rows of records, each row with columns (`ts`, `current`, `voltage`, and `phase`. The timestamp is from "2017-07-14 10:40:00 000" to "2017-07-14 10:41:39 999". Each table also has tags `location` and `groupId`: `groupId` is set from 1 to 10, `location` is set to "beijing" or "shanghai".
Once execution is finished, 1 billion rows of records will have been inserted. It usually takes only a dozen or so seconds to execute this command on a normal PC server, but the exact time may vary with the performance of the particular hardware platform.
### <a class="anchor" id="taosBenchmark"></a> Using taosBenchmark in detail
You can run the command `taosBenchmark` with many options, such as the number of tables, rows of records, and so on. To learn more about these options, execute `taosBenchmark --help` and then try different combinations; an illustrative invocation is sketched below.
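A minimal sketch of such an invocation. The option letters are an assumption based on `taosBenchmark --help` output and should be verified against your installed version:

```bash
# hypothetical example: create 1000 tables and insert 10000 rows into each
taosBenchmark -t 1000 -n 10000
```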
It takes about 10 minutes to execute this command. Once finished, 1 billion rows of records will be inserted.
For more details on how to use taosBenchmark, please refer to [How to use taosBenchmark to test the performance of TDengine](https://tdengine.com/2021/10/09/3114.html).
In the TDengine client, enter sql query commands and then experience our lightning query speed.
### <a class="anchor" id="taosshell"></a> Taste query speed with taos shell
In the TDengine client, enter sql query commands and then taste our lightning query speed.
- query total rows of records:
......@@ -160,7 +184,7 @@ In the TDengine client, enter sql query commands and then experience our lightni
taos> select count(*) from test.meters;
```
- query average, max and min of the total 1 billion records:
- query average, max, and min of the total 1 billion records:
```mysql
taos> select avg(f1), max(f2), min(f3) from test.meters;
......@@ -184,11 +208,6 @@ taos> select avg(f1), max(f2), min(f3) from test.meters where areaid=10;
taos> select avg(f1), max(f2), min(f3) from test.t10 interval(10s);
```
### <a class="anchor" id="taosBenchmark"></a> Using taosBenchmark in detail
you can run command `taosBenchmark` with many options, like number of tables, rows of records and so on. To know more about these options, you can execute `taosBenchmark --help` and then take a try using different options.
Please refer to [How to use taosBenchmark to test the performance of TDengine](https://tdengine.com/2021/10/09/3114.html) for detail.
## <a class="anchor" id="platforms"></a>List of Supported Platforms
List of platforms supported by TDengine server
......@@ -209,7 +228,7 @@ Note: ● has been verified by official tests; ○ has been verified by unoffici
List of platforms supported by TDengine client and connectors
At the moment, TDengine connectors can support a wide range of platforms, including hardware platforms such as X64/X86/ARM64/ARM32/MIPS/Alpha, and operating system such as Linux/Win64/Win32.
At the moment, TDengine connectors can support a wide range of platforms, including hardware platforms such as X64/X86/ARM64/ARM32/MIPS/Alpha, and operating systems such as Linux/Win64/Win32.
The comparison matrix is as follows:
......@@ -227,3 +246,5 @@ Comparison matrix as following:
Note: ● has been verified by official tests; ○ has been verified by unofficial tests.
Please visit Connectors section for more detailed information.
<script src="/wp-includes/js/quick-start.js?v=1"></script>
......@@ -304,16 +304,6 @@ TCollector is a client-side process that gathers data from local collectors and
Please find taosAdapter configuration and usage from `taosadapter --help` output.
## <a class="anchor" id="bailongma2-prometheus"></a> Insert Prometheus data via Bailongma 2.0
**Notice:**
TDengine 2.4.0.4+ provides taosAdapter to support Prometheus data writing. Bailongma v2 will be abandoned and no more maintained.
## <a class="anchor" id="bailongma2-telegraf"></a> Insert data via Bailongma 2.0 and Telegraf
**Notice:**
TDengine 2.3.0.0+ provides taosAdapter to support Telegraf data writing. Bailongma v2 will be abandoned and no more maintained.
## <a class="anchor" id="emq"></a> Data Writing via EMQ Broker
[EMQ](https://github.com/emqx/emqx) is an open source MQTT Broker software, with no need of coding, only to use "rules" in EMQ Dashboard for simple configuration, and MQTT data can be directly written into TDengine. EMQ X supports storing data to the TDengine by sending it to a Web service, and also provides a native TDengine driver on Enterprise Edition for direct data store. Please refer to [EMQ official documents](https://docs.emqx.io/broker/latest/cn/rule/rule-example.html#%E4%BF%9D%E5%AD%98%E6%95%B0%E6%8D%AE%E5%88%B0-tdengine) for more details.
......
......@@ -193,17 +193,22 @@ Client configuration parameters:
```
locale zh_CN.UTF-8
```
On Windows systems, the current system encoding cannot be obtained from locale. If string encoding information cannot be read from the configuration file, taos defaults to CP936. It is equivalent to adding the following to the configuration file:
```
charset CP936
```
If you need to adjust the character encoding, check the encoding used by the current operating system and set it correctly in the configuration file.
On Linux systems, if a user sets both the locale and the charset, and the two are inconsistent, the value set later overrides the value set earlier.
```
locale zh_CN.UTF-8
charset GBK
```
In the example above, charset is set later, so the effective charset is GBK. If the two settings are reversed, the effective charset is UTF-8 (see the sketch below).
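A minimal sketch of the reversed order, using the same configuration syntax as the example above:
```
charset GBK
locale zh_CN.UTF-8
```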
......@@ -217,6 +222,7 @@ Client configuration parameters:
The time zone of the system in which the client runs. To handle data writing and querying across multiple time zones, TDengine uses Unix timestamps to record and store time values. A Unix timestamp is, by definition, the same at a given moment regardless of the time zone. Note that Unix timestamps are converted and recorded on the client side, so the correct time zone must be set on the client to ensure that other forms of time are converted into the correct Unix timestamp.
In Linux system, the client will automatically read the time zone information set by the system. Users can also set time zones in profiles in a number of ways. For example:
```
timezone UTC-8
timezone GMT-8
......@@ -232,14 +238,17 @@ Client configuration parameters:
```
In the East Eight Zone (UTC+8), the SQL statement is equivalent to
```sql
SELECT count(*) FROM table_name WHERE TS<1554955268000;
```
In the UTC time zone, the SQL statement is equivalent to
```sql
SELECT count(*) FROM table_name WHERE TS<1554984068000;
```
To avoid the uncertainty caused by string time formats, Unix timestamps can also be used directly. In addition, timestamp strings that carry a time zone can be used in SQL statements, for example RFC 3339 strings such as 2013-04-12T15:52:01.123+08:00, or ISO 8601 strings such as 2013-04-12T15:52:01.123+0800. The conversion of these two strings into Unix timestamps is not affected by the time zone in which the system is located (see the sketch below).
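A minimal sketch of such a query, reusing the table and column names from the examples above:

```sql
SELECT count(*) FROM table_name WHERE TS < '2013-04-12T15:52:01.123+08:00';
```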
When starting taos, you can also specify the endpoint of a taosd instance on the command line; otherwise it is read from taos.cfg. A sketch follows.
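A minimal sketch, following the `taos -h FQDN:port` form suggested by the installer output earlier in this document (the host name is hypothetical):

```
taos -h h1.taosdata.com:6030
```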
......@@ -340,7 +349,7 @@ Query OK, 9 row(s) affected (0.004763s)
**Import via taosdump tool**
TDengine provides a convenient database import and export tool, taosdump. Users can import data exported by taosdump from one system into other systems. Please refer to the blog: [User Guide of TDengine DUMP Tool](https://www.taosdata.com/blog/2020/03/09/1334.html).
TDengine provides a convenient database import and export tool, taosdump. Users can import data exported by taosdump from one system into other systems. Please refer to [backup tool for TDengine - taosdump](/tools/taosdump).
## <a class="anchor" id="export"></a> Export Data
......
Subproject commit 1c8924dc668e6aa848214c2fc54e3ace3f5bf8df
......@@ -59,6 +59,7 @@ cp ${compile_dir}/../packaging/tools/set_core.sh ${pkg_dir}${install_home_pat
cp ${compile_dir}/../packaging/tools/taosd-dump-cfg.gdb ${pkg_dir}${install_home_path}/bin
cp ${compile_dir}/build/bin/taosd ${pkg_dir}${install_home_path}/bin
cp ${compile_dir}/build/bin/taosBenchmark ${pkg_dir}${install_home_path}/bin
if [ -f "${compile_dir}/build/bin/taosadapter" ]; then
cp ${compile_dir}/build/bin/taosadapter ${pkg_dir}${install_home_path}/bin ||:
......
......@@ -4,6 +4,7 @@ WORKDIR /root
ARG pkgFile
ARG dirName
ARG cpuType
RUN echo ${pkgFile} && echo ${dirName}
COPY ${pkgFile} /root/
......@@ -21,6 +22,11 @@ ENV LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/lib" \
EXPOSE 6030-6049
EXPOSE 6030-6039/udp
COPY ./bin/* /usr/bin/
ENTRYPOINT ["/usr/bin/entrypoint.sh"]
ENV TINI_VERSION v0.19.0
RUN bash -c 'echo -e "Downloading tini-${cpuType} ..."'
ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini-${cpuType} /tini
RUN chmod +x /tini
ENTRYPOINT ["/tini", "--", "/usr/bin/entrypoint.sh"]
CMD ["taosd"]
VOLUME [ "/var/lib/taos", "/var/log/taos", "/corefile" ]
......@@ -89,7 +89,7 @@ cp -f ${comunityArchiveDir}/${pkgFile} .
echo "dirName=${dirName}"
docker build --rm -f "Dockerfile" --network=host -t tdengine/tdengine-${dockername}:${version} "." --build-arg pkgFile=${pkgFile} --build-arg dirName=${dirName}
docker build --rm -f "Dockerfile" --network=host -t tdengine/tdengine-${dockername}:${version} "." --build-arg pkgFile=${pkgFile} --build-arg dirName=${dirName} --build-arg cpuType=${cpuType}
docker login -u tdengine -p ${passWord} #replace the docker registry username and password
docker push tdengine/tdengine-${dockername}:${version}
......
......@@ -312,9 +312,12 @@ if [ "$osType" != "Darwin" ]; then
echo "====do tar.gz package for all systems===="
cd ${script_dir}/tools
if [ "$verMode" == "cluster" ]; then
${csudo}./makepkg.sh ${compile_dir} ${verNumber} "${build_time}" ${cpuType} ${osType} ${verMode} ${verType} ${pagMode} ${verNumberComp}
${csudo}./makeclient.sh ${compile_dir} ${verNumber} "${build_time}" ${cpuType} ${osType} ${verMode} ${verType} ${pagMode}
${csudo}./makearbi.sh ${compile_dir} ${verNumber} "${build_time}" ${cpuType} ${osType} ${verMode} ${verType} ${pagMode}
# ${csudo}./makeclient.sh ${compile_dir} ${verNumber} "${build_time}" ${cpuType} ${osType} ${verMode} ${verType} ${pagMode}
# ${csudo}./makearbi.sh ${compile_dir} ${verNumber} "${build_time}" ${cpuType} ${osType} ${verMode} ${verType} ${pagMode}
fi
else
# only make client for Darwin
cd ${script_dir}/tools
......
......@@ -68,6 +68,8 @@ cp %{_compiledir}/../packaging/tools/set_core.sh %{buildroot}%{homepath}/bin
cp %{_compiledir}/../packaging/tools/taosd-dump-cfg.gdb %{buildroot}%{homepath}/bin
cp %{_compiledir}/build/bin/taos %{buildroot}%{homepath}/bin
cp %{_compiledir}/build/bin/taosd %{buildroot}%{homepath}/bin
cp %{_compiledir}/build/bin/taosBenchmark %{buildroot}%{homepath}/bin
if [ -f %{_compiledir}/build/bin/taosadapter ]; then
cp %{_compiledir}/build/bin/taosadapter %{buildroot}%{homepath}/bin ||:
fi
......
......@@ -192,6 +192,7 @@ function install_bin() {
${csudo}rm -f ${bin_link_dir}/tarbitrator || :
${csudo}rm -f ${bin_link_dir}/set_core || :
${csudo}rm -f ${bin_link_dir}/run_taosd_and_taosadapter.sh || :
${csudo}rm -f ${bin_link_dir}/TDinsight.sh || :
${csudo}cp -r ${script_dir}/bin/* ${install_main_dir}/bin && ${csudo}chmod 0555 ${install_main_dir}/bin/*
......@@ -201,6 +202,7 @@ function install_bin() {
[ -x ${install_main_dir}/bin/taosadapter ] && ${csudo}ln -s ${install_main_dir}/bin/taosadapter ${bin_link_dir}/taosadapter || :
[ -x ${install_main_dir}/bin/taosBenchmark ] && ${csudo}ln -s ${install_main_dir}/bin/taosBenchmark ${bin_link_dir}/taosdemo || :
[ -x ${install_main_dir}/bin/taosdump ] && ${csudo}ln -s ${install_main_dir}/bin/taosdump ${bin_link_dir}/taosdump || :
[ -x ${install_main_dir}/bin/TDinsight.sh ] && ${csudo}ln -s ${install_main_dir}/bin/TDinsight.sh ${bin_link_dir}/TDinsight.sh || :
[ -x ${install_main_dir}/bin/remove.sh ] && ${csudo}ln -s ${install_main_dir}/bin/remove.sh ${bin_link_dir}/${uninstallScript} || :
[ -x ${install_main_dir}/bin/set_core.sh ] && ${csudo}ln -s ${install_main_dir}/bin/set_core.sh ${bin_link_dir}/set_core || :
[ -x ${install_main_dir}/bin/run_taosd_and_taosadapter.sh ] && ${csudo}ln -s ${install_main_dir}/bin/run_taosd_and_taosadapter.sh ${bin_link_dir}/run_taosd_and_taosadapter.sh || :
......@@ -565,7 +567,7 @@ function install_data() {
}
function install_connector() {
${csudo}cp -rf ${script_dir}/connector/ ${install_main_dir}/
[ -d "${script_dir}/connector/" ] && ${csudo}cp -rf ${script_dir}/connector/ ${install_main_dir}/
}
function install_examples() {
......@@ -691,6 +693,10 @@ function install_service_on_systemd() {
${service_config_dir}/ || :
${csudo}systemctl daemon-reload
[ -f ${script_dir}/cfg/nginxd.service ] &&
${csudo}cp ${script_dir}/cfg/nginxd.service \
${service_config_dir}/ || :
if ! ${csudo}systemctl enable nginxd &>/dev/null; then
${csudo}systemctl daemon-reexec
${csudo}systemctl enable nginxd
......@@ -820,9 +826,9 @@ function update_TDengine() {
install_log
install_header
install_lib
if [ "$pagMode" != "lite" ]; then
install_connector
fi
# if [ "$pagMode" != "lite" ]; then
# install_connector
# fi
install_examples
if [ -z $1 ]; then
install_bin
......@@ -879,7 +885,7 @@ function update_TDengine() {
echo -e "\033[44;32;1m${productName} client is updated successfully!${NC}"
fi
rm -rf $(tar -tf ${tarName})
rm -rf $(tar -tf ${tarName} |grep -v "^\./$")
}
function install_TDengine() {
......@@ -976,7 +982,7 @@ function install_TDengine() {
fi
touch ~/.${historyFile}
rm -rf $(tar -tf ${tarName})
rm -rf $(tar -tf ${tarName} |grep -v "^\./$")
}
## ==============================Main program starts from here============================
......
......@@ -3,7 +3,7 @@
# Generate tar.gz package for all os system
set -e
#set -x
set -x
curr_dir=$(pwd)
compile_dir=$1
......@@ -54,11 +54,21 @@ if [ "$pagMode" == "lite" ]; then
strip ${build_dir}/bin/${serverName}
strip ${build_dir}/bin/${clientName}
# lite version doesn't include taosadapter, which will lead to no restful interface
bin_files="${build_dir}/bin/${serverName} ${build_dir}/bin/${clientName} ${script_dir}/remove.sh ${script_dir}/startPre.sh"
bin_files="${build_dir}/bin/${serverName} ${build_dir}/bin/${clientName} ${script_dir}/remove.sh ${script_dir}/startPre.sh ${build_dir}/bin/taosBenchmark"
taostools_bin_files=""
else
wget https://github.com/taosdata/grafanaplugin/releases/latest/download/TDinsight.sh -O ${build_dir}/bin/TDinsight.sh \
&& echo "TDinsight.sh downloaded!" \
|| echo "failed to download TDinsight.sh"
taostools_bin_files=" ${build_dir}/bin/taosdump \
${build_dir}/bin/TDinsight.sh "
bin_files="${build_dir}/bin/${serverName} \
${build_dir}/bin/${clientName} \
${build_dir}/bin/taosBenchmark \
${taostools_bin_files} \
${build_dir}/bin/taosadapter \
${build_dir}/bin/tarbitrator\
${script_dir}/remove.sh \
......@@ -66,9 +76,6 @@ else
${script_dir}/run_taosd_and_taosadapter.sh \
${script_dir}/startPre.sh \
${script_dir}/taosd-dump-cfg.gdb"
taostools_bin_files=" ${build_dir}/bin/taosdump \
${build_dir}/bin/taosBenchmark"
fi
lib_files="${build_dir}/lib/libtaos.so.${version}"
......@@ -119,36 +126,36 @@ mkdir -p ${install_dir}/init.d && cp ${init_file_rpm} ${install_dir}/init.d/${se
mkdir -p ${install_dir}/init.d && cp ${init_file_tarbitrator_deb} ${install_dir}/init.d/tarbitratord.deb || :
mkdir -p ${install_dir}/init.d && cp ${init_file_tarbitrator_rpm} ${install_dir}/init.d/tarbitratord.rpm || :
if [ -n "${taostools_bin_files}" ]; then
mkdir -p ${taostools_install_dir} || echo -e "failed to create ${taostools_install_dir}"
mkdir -p ${taostools_install_dir}/bin \
&& cp ${taostools_bin_files} ${taostools_install_dir}/bin \
&& chmod a+x ${taostools_install_dir}/bin/* || :
if [ -f ${top_dir}/src/kit/taos-tools/packaging/tools/install-taostools.sh ]; then
cp ${top_dir}/src/kit/taos-tools/packaging/tools/install-taostools.sh \
${taostools_install_dir}/ > /dev/null \
&& chmod a+x ${taostools_install_dir}/install-taostools.sh \
|| echo -e "failed to copy install-taostools.sh"
else
echo -e "install-taostools.sh not found"
fi
#if [ -n "${taostools_bin_files}" ]; then
# mkdir -p ${taostools_install_dir} || echo -e "failed to create ${taostools_install_dir}"
# mkdir -p ${taostools_install_dir}/bin \
# && cp ${taostools_bin_files} ${taostools_install_dir}/bin \
# && chmod a+x ${taostools_install_dir}/bin/* || :
# if [ -f ${top_dir}/src/kit/taos-tools/packaging/tools/install-taostools.sh ]; then
# cp ${top_dir}/src/kit/taos-tools/packaging/tools/install-taostools.sh \
# ${taostools_install_dir}/ > /dev/null \
# && chmod a+x ${taostools_install_dir}/install-taostools.sh \
# || echo -e "failed to copy install-taostools.sh"
# else
# echo -e "install-taostools.sh not found"
# fi
if [ -f ${top_dir}/src/kit/taos-tools/packaging/tools/uninstall-taostools.sh ]; then
cp ${top_dir}/src/kit/taos-tools/packaging/tools/uninstall-taostools.sh \
${taostools_install_dir}/ > /dev/null \
&& chmod a+x ${taostools_install_dir}/uninstall-taostools.sh \
|| echo -e "failed to copy uninstall-taostools.sh"
else
echo -e "uninstall-taostools.sh not found"
fi
# if [ -f ${top_dir}/src/kit/taos-tools/packaging/tools/uninstall-taostools.sh ]; then
# cp ${top_dir}/src/kit/taos-tools/packaging/tools/uninstall-taostools.sh \
# ${taostools_install_dir}/ > /dev/null \
# && chmod a+x ${taostools_install_dir}/uninstall-taostools.sh \
# || echo -e "failed to copy uninstall-taostools.sh"
# else
# echo -e "uninstall-taostools.sh not found"
# fi
if [ -f ${build_dir}/lib/libavro.so.23.0.0 ]; then
mkdir -p ${taostools_install_dir}/avro/{lib,lib/pkgconfig} || echo -e "failed to create ${taostools_install_dir}/avro"
cp ${build_dir}/lib/libavro.* ${taostools_install_dir}/avro/lib
cp ${build_dir}/lib/pkgconfig/avro-c.pc ${taostools_install_dir}/avro/lib/pkgconfig
fi
fi
# if [ -f ${build_dir}/lib/libavro.so.23.0.0 ]; then
# mkdir -p ${taostools_install_dir}/avro/{lib,lib/pkgconfig} || echo -e "failed to create ${taostools_install_dir}/avro"
# cp ${build_dir}/lib/libavro.* ${taostools_install_dir}/avro/lib
# cp ${build_dir}/lib/pkgconfig/avro-c.pc ${taostools_install_dir}/avro/lib/pkgconfig
# fi
#fi
if [ -f ${build_dir}/bin/jemalloc-config ]; then
mkdir -p ${install_dir}/jemalloc/{bin,lib,lib/pkgconfig,include/jemalloc,share/doc/jemalloc,share/man/man3}
......@@ -310,13 +317,14 @@ if [ "$exitcode" != "0" ]; then
exit $exitcode
fi
if [ -n "${taostools_bin_files}" ]; then
tar -zcv -f "$(basename ${taostools_pkg_name}).tar.gz" "$(basename ${taostools_install_dir})" --remove-files || :
exitcode=$?
if [ "$exitcode" != "0" ]; then
echo "tar ${taostools_pkg_name}.tar.gz error !!!"
exit $exitcode
fi
fi
#if [ -n "${taostools_bin_files}" ]; then
# wget https://github.com/taosdata/grafanaplugin/releases/latest/download/TDinsight.sh -O ${taostools_install_dir}/bin/TDinsight.sh && echo "TDinsight.sh downloaded!"|| echo "failed to download TDinsight.sh"
# tar -zcv -f "$(basename ${taostools_pkg_name}).tar.gz" "$(basename ${taostools_install_dir})" --remove-files || :
# exitcode=$?
# if [ "$exitcode" != "0" ]; then
# echo "tar ${taostools_pkg_name}.tar.gz error !!!"
# exit $exitcode
# fi
#fi
cd ${curr_dir}
......@@ -107,6 +107,7 @@ function install_bin() {
${csudo}rm -f ${bin_link_dir}/taos || :
${csudo}rm -f ${bin_link_dir}/taosd || :
${csudo}rm -f ${bin_link_dir}/taosadapter || :
${csudo}rm -f ${bin_link_dir}/taosBenchmark || :
${csudo}rm -f ${bin_link_dir}/taosdemo || :
${csudo}rm -f ${bin_link_dir}/taosdump || :
${csudo}rm -f ${bin_link_dir}/rmtaos || :
......@@ -118,7 +119,8 @@ function install_bin() {
[ -x ${bin_dir}/taos ] && ${csudo}ln -s ${bin_dir}/taos ${bin_link_dir}/taos || :
[ -x ${bin_dir}/taosd ] && ${csudo}ln -s ${bin_dir}/taosd ${bin_link_dir}/taosd || :
[ -x ${bin_dir}/taosadapter ] && ${csudo}ln -s ${bin_dir}/taosadapter ${bin_link_dir}/taosadapter || :
[ -x ${bin_dir}/taosdemo ] && ${csudo}ln -s ${bin_dir}/taosdemo ${bin_link_dir}/taosdemo || :
[ -x ${bin_dir}/taosBenchmark ] && ${csudo}ln -sf ${bin_dir}/taosBenchmark ${bin_link_dir}/taosdemo || :
[ -x ${bin_dir}/TDinsight.sh ] && ${csudo}ln -sf ${bin_dir}/TDinsight.sh ${bin_link_dir}/TDinsight.sh || :
[ -x ${bin_dir}/taosdump ] && ${csudo}ln -s ${bin_dir}/taosdump ${bin_link_dir}/taosdump || :
[ -x ${bin_dir}/set_core.sh ] && ${csudo}ln -s ${bin_dir}/set_core.sh ${bin_link_dir}/set_core || :
}
......
......@@ -121,6 +121,7 @@ clean_service
${csudo}rm -f ${bin_link_dir}/taos || :
${csudo}rm -f ${bin_link_dir}/taosd || :
${csudo}rm -f ${bin_link_dir}/taosadapter || :
${csudo}rm -f ${bin_link_dir}/taosBenchmark || :
${csudo}rm -f ${bin_link_dir}/taosdemo || :
${csudo}rm -f ${bin_link_dir}/set_core || :
${csudo}rm -f ${cfg_link_dir}/*.new || :
......
......@@ -83,12 +83,14 @@ function clean_bin() {
${csudo}rm -f ${bin_link_dir}/${clientName} || :
${csudo}rm -f ${bin_link_dir}/${serverName} || :
${csudo}rm -f ${bin_link_dir}/taosadapter || :
${csudo}rm -f ${bin_link_dir}/taosBenchmark || :
${csudo}rm -f ${bin_link_dir}/taosdemo || :
${csudo}rm -f ${bin_link_dir}/taosdump || :
${csudo}rm -f ${bin_link_dir}/${uninstallScript} || :
${csudo}rm -f ${bin_link_dir}/tarbitrator || :
${csudo}rm -f ${bin_link_dir}/set_core || :
${csudo}rm -f ${bin_link_dir}/run_taosd_and_taosadapter.sh || :
${csudo}rm -f ${bin_link_dir}/TDinsight.sh || :
}
function clean_lib() {
......
......@@ -242,7 +242,7 @@ SExprInfo* tscExprAppend(SQueryInfo* pQueryInfo, int16_t functionId, SColumnInde
int16_t size, int16_t resColId, int16_t interSize, bool isTagCol);
SExprInfo* tscExprUpdate(SQueryInfo* pQueryInfo, int32_t index, int16_t functionId, int16_t srcColumnIndex, int16_t type,
int16_t size);
int32_t size);
size_t tscNumOfExprs(SQueryInfo* pQueryInfo);
int32_t tscExprTopBottomIndex(SQueryInfo* pQueryInfo);
......
......@@ -440,6 +440,15 @@ int32_t tscCreateGlobalMergerEnv(SQueryInfo *pQueryInfo, tExtMemBuffer ***pMemBu
rlen += pExpr->base.resBytes;
}
int32_t pg = DEFAULT_PAGE_SIZE;
int32_t overhead = sizeof(tFilePage);
while((pg - overhead) < rlen * 2) {
pg *= 2;
}
if (*nBufferSizes < pg){
*nBufferSizes = 2 * pg;
}
int32_t capacity = 0;
if (rlen != 0) {
if ((*nBufferSizes) < rlen) {
......@@ -454,12 +463,6 @@ int32_t tscCreateGlobalMergerEnv(SQueryInfo *pQueryInfo, tExtMemBuffer ***pMemBu
return TSDB_CODE_TSC_OUT_OF_MEMORY;
}
int32_t pg = DEFAULT_PAGE_SIZE;
int32_t overhead = sizeof(tFilePage);
while((pg - overhead) < pModel->rowSize * 2) {
pg *= 2;
}
assert(numOfSub <= pTableMetaInfo->vgroupList->numOfVgroups);
for (int32_t i = 0; i < numOfSub; ++i) {
(*pMemBuffer)[i] = createExtMemBuffer(*nBufferSizes, rlen, pg, pModel);
......@@ -593,7 +596,7 @@ static void setTagValueForMultipleRows(SQLFunctionCtx* pCtx, int32_t numOfOutput
}
}
static void doMergeResultImpl(SMultiwayMergeInfo* pInfo, SQLFunctionCtx *pCtx, int32_t numOfExpr, int32_t rowIndex, char** pDataPtr) {
static void doMergeResultImpl(SOperatorInfo* pInfo, SQLFunctionCtx *pCtx, int32_t numOfExpr, int32_t rowIndex, char** pDataPtr) {
for (int32_t j = 0; j < numOfExpr; ++j) {
pCtx[j].pInput = pDataPtr[j] + pCtx[j].inputBytes * rowIndex;
}
......@@ -605,12 +608,19 @@ static void doMergeResultImpl(SMultiwayMergeInfo* pInfo, SQLFunctionCtx *pCtx, i
}
if (functionId < 0) {
SUdfInfo* pUdfInfo = taosArrayGet(pInfo->udfInfo, -1 * functionId - 1);
SUdfInfo* pUdfInfo = taosArrayGet(((SMultiwayMergeInfo*)(pInfo->info))->udfInfo, -1 * functionId - 1);
doInvokeUdf(pUdfInfo, &pCtx[j], 0, TSDB_UDF_FUNC_MERGE);
} else {
assert(!TSDB_FUNC_IS_SCALAR(functionId));
aAggs[functionId].mergeFunc(&pCtx[j]);
}
if (functionId == TSDB_FUNC_UNIQUE &&
(GET_RES_INFO(&(pCtx[j]))->numOfRes > MAX_UNIQUE_RESULT_ROWS || GET_RES_INFO(&(pCtx[j]))->numOfRes == -1)){
tscError("Unique result num is too large. num: %d, limit: %d",
GET_RES_INFO(&(pCtx[j]))->numOfRes, MAX_UNIQUE_RESULT_ROWS);
longjmp(pInfo->pRuntimeEnv->env, TSDB_CODE_QRY_UNIQUE_RESULT_TOO_LARGE);
}
}
}
......@@ -644,7 +654,7 @@ static void doExecuteFinalMerge(SOperatorInfo* pOperator, int32_t numOfExpr, SSD
for(int32_t i = 0; i < pBlock->info.rows; ++i) {
if (pInfo->hasPrev) {
if (needToMerge(pBlock, pInfo->orderColumnList, i, pInfo->prevRow)) {
doMergeResultImpl(pInfo, pCtx, numOfExpr, i, addrPtr);
doMergeResultImpl(pOperator, pCtx, numOfExpr, i, addrPtr);
} else {
doFinalizeResultImpl(pInfo, pCtx, numOfExpr);
......@@ -656,7 +666,7 @@ static void doExecuteFinalMerge(SOperatorInfo* pOperator, int32_t numOfExpr, SSD
for(int32_t j = 0; j < numOfExpr; ++j) {
pCtx[j].pOutput += (pCtx[j].outputBytes * numOfRows);
if (pCtx[j].functionId == TSDB_FUNC_TOP || pCtx[j].functionId == TSDB_FUNC_BOTTOM ||
pCtx[j].functionId == TSDB_FUNC_SAMPLE) {
pCtx[j].functionId == TSDB_FUNC_SAMPLE || pCtx[j].functionId == TSDB_FUNC_UNIQUE) {
if(j > 0) pCtx[j].ptsOutputBuf = pCtx[j - 1].pOutput;
}
}
......@@ -671,10 +681,10 @@ static void doExecuteFinalMerge(SOperatorInfo* pOperator, int32_t numOfExpr, SSD
}
}
doMergeResultImpl(pInfo, pCtx, numOfExpr, i, addrPtr);
doMergeResultImpl(pOperator, pCtx, numOfExpr, i, addrPtr);
}
} else {
doMergeResultImpl(pInfo, pCtx, numOfExpr, i, addrPtr);
doMergeResultImpl(pOperator, pCtx, numOfExpr, i, addrPtr);
}
savePrevOrderColumns(pInfo->prevRow, pInfo->orderColumnList, pBlock, i, &pInfo->hasPrev);
......
此差异已折叠。
......@@ -1045,8 +1045,8 @@ int tscBuildQueryMsg(SSqlObj *pSql, SSqlInfo *pInfo) {
SGroupbyExpr *pGroupbyExpr = query.pGroupbyExpr;
if (pGroupbyExpr != NULL && pGroupbyExpr->numOfGroupCols > 0) {
pQueryMsg->orderByIdx = htons(pGroupbyExpr->orderIndex);
pQueryMsg->orderType = htons(pGroupbyExpr->orderType);
//pQueryMsg->orderByIdx = htons(pGroupbyExpr->orderIndex);
pQueryMsg->groupOrderType = htons(pGroupbyExpr->orderType);
for (int32_t j = 0; j < pGroupbyExpr->numOfGroupCols; ++j) {
SColIndex* pCol = taosArrayGet(pGroupbyExpr->columnInfo, j);
......@@ -1947,7 +1947,6 @@ int tscProcessRetrieveGlobalMergeRsp(SSqlObj *pSql) {
SQueryInfo *pQueryInfo = tscGetQueryInfo(pCmd);
if (pQueryInfo->pQInfo == NULL) {
STableGroupInfo tableGroupInfo = {.numOfTables = 1, .pGroupList = taosArrayInit(1, POINTER_BYTES),};
tableGroupInfo.map = taosHashInit(1, taosGetDefaultHashFunction(TSDB_DATA_TYPE_INT), true, HASH_NO_LOCK);
STableKeyInfo tableKeyInfo = {.pTable = NULL, .lastKey = INT64_MIN};
......@@ -1958,8 +1957,6 @@ int tscProcessRetrieveGlobalMergeRsp(SSqlObj *pSql) {
tscDebug("0x%"PRIx64" create QInfo 0x%"PRIx64" to execute query processing", pSql->self, pSql->self);
pQueryInfo->pQInfo = createQInfoFromQueryNode(pQueryInfo, &tableGroupInfo, NULL, NULL, pRes->pMerger, MERGE_STAGE, pSql->self);
if (pQueryInfo->pQInfo == NULL) {
taosHashCleanup(tableGroupInfo.map);
taosArrayDestroy(&group);
tscAsyncResultOnError(pSql);
pRes->code = TSDB_CODE_QRY_OUT_OF_MEMORY;
return pRes->code;
......
......@@ -3805,6 +3805,7 @@ void* createQInfoFromQueryNode(SQueryInfo* pQueryInfo, STableGroupInfo* pTableGr
assert(pQueryInfo != NULL);
SQInfo *pQInfo = (SQInfo *)calloc(1, sizeof(SQInfo));
if (pQInfo == NULL) {
tsdbDestroyTableGroup(pTableGroupInfo);
goto _cleanup;
}
......@@ -3913,6 +3914,7 @@ void* createQInfoFromQueryNode(SQueryInfo* pQueryInfo, STableGroupInfo* pTableGr
int32_t code = initQInfo(&bufInfo, NULL, pSourceOperator, pQInfo, &param, NULL, 0, merger);
taosArrayDestroy(&pa);
if (code != TSDB_CODE_SUCCESS) {
pQInfo = NULL;
goto _cleanup;
}
......
......@@ -74,11 +74,11 @@ int32_t converToStr(char *str, int type, void *buf, int32_t bufSize, int32_t *le
break;
case TSDB_DATA_TYPE_UINT:
n = sprintf(str, "%d", *(uint32_t*)buf);
n = sprintf(str, "%u", *(uint32_t*)buf);
break;
case TSDB_DATA_TYPE_UBIGINT:
n = sprintf(str, "%" PRId64, *(uint64_t*)buf);
n = sprintf(str, "%" PRIu64, *(uint64_t*)buf);
break;
case TSDB_DATA_TYPE_FLOAT:
......@@ -304,7 +304,7 @@ bool tscNonOrderedProjectionQueryOnSTable(SQueryInfo* pQueryInfo, int32_t tableI
return false;
}
// order by columnIndex exists, not a non-ordered projection query
  // order by columnIndex does not exist, not an ordered projection query
return pQueryInfo->order.orderColId < 0;
}
......@@ -313,7 +313,7 @@ bool tscOrderedProjectionQueryOnSTable(SQueryInfo* pQueryInfo, int32_t tableInde
return false;
}
// order by columnIndex exists, a non-ordered projection query
  // order by columnIndex exists, an ordered projection query
return pQueryInfo->order.orderColId >= 0;
}
......@@ -689,7 +689,8 @@ bool isSimpleAggregateRv(SQueryInfo* pQueryInfo) {
(functionId == TSDB_FUNC_TOP || functionId == TSDB_FUNC_BOTTOM ||
functionId == TSDB_FUNC_TS_COMP ||
functionId == TSDB_FUNC_SAMPLE ||
functionId == TSDB_FUNC_HISTOGRAM)) {
functionId == TSDB_FUNC_HISTOGRAM ||
functionId == TSDB_FUNC_UNIQUE)) {
return true;
}
}
......@@ -1404,8 +1405,6 @@ void handleDownstreamOperator(SSqlObj** pSqlObjList, int32_t numOfUpstream, SQue
}
}
tableGroupInfo.map = taosHashInit(1, taosGetDefaultHashFunction(TSDB_DATA_TYPE_INT), true, HASH_NO_LOCK);
STableKeyInfo tableKeyInfo = {.pTable = NULL, .lastKey = INT64_MIN};
SArray* group = taosArrayInit(1, sizeof(STableKeyInfo));
......@@ -2614,7 +2613,7 @@ SExprInfo* tscExprAppend(SQueryInfo* pQueryInfo, int16_t functionId, SColumnInde
}
SExprInfo* tscExprUpdate(SQueryInfo* pQueryInfo, int32_t index, int16_t functionId, int16_t srcColumnIndex,
int16_t type, int16_t size) {
int16_t type, int32_t size) {
STableMetaInfo* pTableMetaInfo = tscGetMetaInfo(pQueryInfo, 0);
SExprInfo* pExpr = tscExprGet(pQueryInfo, index);
if (pExpr == NULL) {
......@@ -2659,7 +2658,8 @@ int32_t tscExprTopBottomIndex(SQueryInfo* pQueryInfo){
SExprInfo* pExpr = tscExprGet(pQueryInfo, i);
if (pExpr == NULL)
continue;
if (pExpr->base.functionId == TSDB_FUNC_TOP || pExpr->base.functionId == TSDB_FUNC_BOTTOM) {
if (pExpr->base.functionId == TSDB_FUNC_TOP || pExpr->base.functionId == TSDB_FUNC_BOTTOM
|| pExpr->base.functionId == TSDB_FUNC_UNIQUE) {
return i;
}
}
......@@ -4937,8 +4937,12 @@ static int32_t createGlobalAggregateExpr(SQueryAttr* pQueryAttr, SQueryInfo* pQu
pse->colInfo.colIndex = i;
pse->colType = pExpr->base.resType;
if(pExpr->base.resBytes > INT16_MAX && pExpr->base.functionId == TSDB_FUNC_UNIQUE){
pQueryAttr->interBytesForGlobal = pExpr->base.resBytes;
}else{
pse->colBytes = pExpr->base.resBytes;
}
}
{
for (int32_t i = 0; i < pQueryAttr->numOfExpr3; ++i) {
......@@ -5081,6 +5085,7 @@ int32_t tscCreateQueryFromQueryInfo(SQueryInfo* pQueryInfo, SQueryAttr* pQueryAt
pQueryAttr->pUdfInfo = pQueryInfo->pUdfInfo;
pQueryAttr->range = pQueryInfo->range;
if (pQueryInfo->order.order == TSDB_ORDER_ASC) { // TODO refactor
pQueryAttr->window = pQueryInfo->window;
} else {
......@@ -5112,6 +5117,8 @@ int32_t tscCreateQueryFromQueryInfo(SQueryInfo* pQueryInfo, SQueryAttr* pQueryAt
}
}
pQueryAttr->uniqueQuery = isUniqueQuery(numOfOutput, pQueryAttr->pExpr1);
pQueryAttr->tableCols = calloc(numOfCols, sizeof(SColumnInfo));
for(int32_t i = 0; i < numOfCols; ++i) {
SColumn* pCol = taosArrayGetP(pQueryInfo->colList, i);
......@@ -5403,7 +5410,7 @@ int parseJsontoTagData(char* json, SKVRowBuilder* kvRowBuilder, char* errMsg, in
// set json real data
cJSON *root = cJSON_Parse(json);
if (root == NULL){
tscError("json parse error");
tscError("json parse error:%s", json);
return tscSQLSyntaxErrMsg(errMsg, "json parse error", NULL);
}
......
......@@ -54,7 +54,7 @@ typedef struct SSqlExpr {
int32_t resBytes; // length of return value
int32_t interBytes; // inter result buffer size
int16_t colType; // table column type
int16_t colType; // table column type; this should be int32_t, because int16_t is too small for the global merge stage, pQueryAttr->interBytesForGlobal
int16_t colBytes; // table column bytes
int16_t numOfParams; // argument value of each function
......
......@@ -1812,9 +1812,10 @@ static void doInitGlobalConfig(void) {
cfg.ptrLength = 0;
cfg.unitType = TAOS_CFG_UTYPE_NONE;
taosInitConfigOption(cfg);
assert(tsGlobalConfigNum < TSDB_CFG_MAX_NUM);
assert(tsGlobalConfigNum == TSDB_CFG_MAX_NUM);
#else
assert(tsGlobalConfigNum < TSDB_CFG_MAX_NUM);
// if TD_TSZ macro define, have 5 count configs, so must add 5
assert(tsGlobalConfigNum + 5 == TSDB_CFG_MAX_NUM);
#endif
}
......
......@@ -3,6 +3,7 @@ package com.taosdata.jdbc.rs;
import com.alibaba.fastjson.JSON;
import com.alibaba.fastjson.JSONObject;
import com.taosdata.jdbc.*;
import com.taosdata.jdbc.enums.TimestampFormat;
import com.taosdata.jdbc.utils.HttpClientPoolUtil;
import com.taosdata.jdbc.ws.InFlightRequest;
import com.taosdata.jdbc.ws.Transport;
......@@ -77,18 +78,20 @@ public class RestfulDriver extends AbstractDriver {
int maxRequest = props.containsKey(TSDBDriver.PROPERTY_KEY_MAX_CONCURRENT_REQUEST)
? Integer.parseInt(props.getProperty(TSDBDriver.PROPERTY_KEY_MAX_CONCURRENT_REQUEST))
: Transport.DEFAULT_MAX_REQUEST;
InFlightRequest inFlightRequest = new InFlightRequest(timeout, maxRequest);
CountDownLatch latch = new CountDownLatch(1);
Map<String, String> httpHeaders = new HashMap<>();
client = new WSClient(new URI(loginUrl), user, password, database, inFlightRequest, httpHeaders, latch, maxRequest);
client = new WSClient(new URI(loginUrl), user, password, database,
inFlightRequest, httpHeaders, latch, maxRequest);
transport = new Transport(client, inFlightRequest);
if (!client.connectBlocking()) {
if (!client.connectBlocking(timeout, TimeUnit.MILLISECONDS)) {
throw new SQLException("can't create connection with server");
}
if (!latch.await(timeout, TimeUnit.MILLISECONDS)) {
throw new SQLException("auth timeout");
}
if (client.isAuth()) {
if (!client.isAuth()) {
throw new SQLException("auth failure");
}
} catch (URISyntaxException e) {
......@@ -96,7 +99,9 @@ public class RestfulDriver extends AbstractDriver {
} catch (InterruptedException e) {
throw new SQLException("creat websocket connection has been Interrupted ", e);
}
return new WSConnection(url, props, transport, database, true);
// TODO fetch Type from config
props.setProperty(TSDBDriver.PROPERTY_KEY_TIMESTAMP_FORMAT, String.valueOf(TimestampFormat.TIMESTAMP));
return new WSConnection(url, props, transport, database);
}
loginUrl = "http://" + props.getProperty(TSDBDriver.PROPERTY_KEY_HOST) + ":" + props.getProperty(TSDBDriver.PROPERTY_KEY_PORT) + "/rest/login/" + user + "/" + password + "";
int poolSize = Integer.parseInt(props.getProperty("httpPoolSize", HttpClientPoolUtil.DEFAULT_MAX_PER_ROUTE));
......
......@@ -302,6 +302,9 @@ public class RestfulResultSet extends AbstractResultSet implements ResultSet {
this.taos_type = taos_type;
}
public int getTaosType() {
return taos_type;
}
}
@Override
......
package com.taosdata.jdbc.ws;
import com.taosdata.jdbc.*;
import com.taosdata.jdbc.rs.RestfulResultSet;
import com.taosdata.jdbc.rs.RestfulResultSetMetaData;
import com.taosdata.jdbc.ws.entity.*;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.sql.Statement;
import java.time.chrono.IsoChronology;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeFormatterBuilder;
import java.time.format.ResolverStyle;
import java.time.temporal.ChronoField;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
public abstract class AbstractWSResultSet extends AbstractResultSet {
public static DateTimeFormatter rfc3339Parser = new DateTimeFormatterBuilder()
.parseCaseInsensitive()
.appendValue(ChronoField.YEAR, 4)
.appendLiteral('-')
.appendValue(ChronoField.MONTH_OF_YEAR, 2)
.appendLiteral('-')
.appendValue(ChronoField.DAY_OF_MONTH, 2)
.appendLiteral('T')
.appendValue(ChronoField.HOUR_OF_DAY, 2)
.appendLiteral(':')
.appendValue(ChronoField.MINUTE_OF_HOUR, 2)
.appendLiteral(':')
.appendValue(ChronoField.SECOND_OF_MINUTE, 2)
.optionalStart()
.appendFraction(ChronoField.NANO_OF_SECOND, 2, 9, true)
.optionalEnd()
.appendOffset("+HH:MM", "Z").toFormatter()
.withResolverStyle(ResolverStyle.STRICT)
.withChronology(IsoChronology.INSTANCE);
protected final Statement statement;
protected final Transport transport;
protected final RequestFactory factory;
protected final long queryId;
protected boolean isClosed;
// meta
protected final ResultSetMetaData metaData;
protected final List<RestfulResultSet.Field> fields = new ArrayList<>();
protected final List<String> columnNames;
protected List<Integer> fieldLength;
// data
protected List<List<Object>> result = new ArrayList<>();
protected int numOfRows = 0;
protected int rowIndex = 0;
private boolean isCompleted;
public AbstractWSResultSet(Statement statement, Transport transport, RequestFactory factory,
QueryResp response, String database) throws SQLException {
this.statement = statement;
this.transport = transport;
this.factory = factory;
this.queryId = response.getId();
columnNames = Arrays.asList(response.getFieldsNames());
for (int i = 0; i < response.getFieldsCount(); i++) {
String colName = response.getFieldsNames()[i];
int taosType = response.getFieldsTypes()[i];
int jdbcType = TSDBConstants.taosType2JdbcType(taosType);
int length = response.getFieldsLengths()[i];
fields.add(new RestfulResultSet.Field(colName, jdbcType, length, "", taosType));
}
this.metaData = new RestfulResultSetMetaData(database, fields, null);
this.timestampPrecision = response.getPrecision();
}
private boolean forward() {
if (this.rowIndex > this.numOfRows) {
return false;
}
return ((++this.rowIndex) < this.numOfRows);
}
public void reset() {
this.rowIndex = 0;
}
@Override
public boolean next() throws SQLException {
if (isClosed()) {
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_RESULTSET_CLOSED);
}
if (this.forward()) {
return true;
}
Request request = factory.generateFetch(queryId);
CompletableFuture<Response> send = transport.send(request);
try {
Response response = send.get();
FetchResp fetchResp = (FetchResp) response;
if (Code.SUCCESS.getCode() != fetchResp.getCode()) {
// TODO reWrite error type
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_UNKNOWN, fetchResp.getMessage());
}
this.reset();
if (fetchResp.isCompleted()) {
this.isCompleted = true;
return false;
}
fieldLength = Arrays.asList(fetchResp.getLengths());
this.numOfRows = fetchResp.getRows();
this.result = fetchJsonData();
return true;
} catch (InterruptedException | ExecutionException e) {
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_RESTFul_Client_IOException, e.getMessage());
}
}
public abstract List<List<Object>> fetchJsonData() throws SQLException, ExecutionException, InterruptedException;
@Override
public void close() throws SQLException {
this.isClosed = true;
if (result != null && !result.isEmpty() && !isCompleted) {
FetchReq fetchReq = new FetchReq(queryId, queryId);
transport.sendWithoutRep(new Request(Action.FREE_RESULT.getAction(), fetchReq));
}
}
@Override
public ResultSetMetaData getMetaData() throws SQLException {
if (isClosed())
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_RESULTSET_CLOSED);
return this.metaData;
}
@Override
public boolean isClosed() throws SQLException {
return isClosed;
}
}
package com.taosdata.jdbc.ws;
import com.taosdata.jdbc.ws.entity.Action;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.*;
/**
* Unfinished execution
*/
public class InFlightRequest implements AutoCloseable {
public class InFlightRequest {
private final int timeoutSec;
private final Semaphore semaphore;
private final Map<String, ResponseFuture> futureMap = new ConcurrentHashMap<>();
private final ScheduledExecutorService scheduledExecutorService = Executors.newSingleThreadScheduledExecutor();
private final ScheduledFuture<?> scheduledFuture;
private final Map<String, ConcurrentHashMap<Long, ResponseFuture>> futureMap = new HashMap<>();
private final Map<String, PriorityBlockingQueue<ResponseFuture>> expireMap = new HashMap<>();
private final ScheduledExecutorService scheduledExecutorService = Executors.newSingleThreadScheduledExecutor(r -> {
Thread t = new Thread(r);
t.setName("timer-" + t.getId());
return t;
});
public InFlightRequest(int timeoutSec, int concurrentNum) {
this.timeoutSec = timeoutSec;
this.semaphore = new Semaphore(concurrentNum);
this.scheduledFuture = scheduledExecutorService.scheduleAtFixedRate(this::removeTimeoutFuture, timeoutSec, timeoutSec, TimeUnit.MILLISECONDS);
scheduledExecutorService.scheduleWithFixedDelay(this::removeTimeoutFuture,
timeoutSec, timeoutSec, TimeUnit.MILLISECONDS);
Runtime.getRuntime().addShutdownHook(new Thread(scheduledExecutorService::shutdown));
for (Action value : Action.values()) {
String action = value.getAction();
if (Action.CONN.getAction().equals(action))
continue;
futureMap.put(action, new ConcurrentHashMap<>());
expireMap.put(action, new PriorityBlockingQueue<>());
}
}
public void put(ResponseFuture responseFuture) throws InterruptedException, TimeoutException {
public void put(ResponseFuture rf) throws InterruptedException, TimeoutException {
if (semaphore.tryAcquire(timeoutSec, TimeUnit.MILLISECONDS)) {
futureMap.put(responseFuture.getId(), responseFuture);
futureMap.get(rf.getAction()).put(rf.getId(), rf);
expireMap.get(rf.getAction()).put(rf);
} else {
throw new TimeoutException();
}
}
public ResponseFuture remove(String id) {
ResponseFuture future = futureMap.remove(id);
public ResponseFuture remove(String action, Long id) {
ResponseFuture future = futureMap.get(action).remove(id);
if (null != future) {
expireMap.get(action).remove(future);
semaphore.release();
}
return future;
}
private void removeTimeoutFuture() {
futureMap.entrySet().removeIf(entry -> {
if (System.nanoTime() - entry.getValue().getTimestamp() > timeoutSec * 1_000_000L) {
expireMap.forEach((k, v) -> {
while (true) {
ResponseFuture response = v.peek();
if (null == response || (System.nanoTime() - response.getTimestamp()) < timeoutSec * 1_000_000L)
break;
try {
entry.getValue().getFuture().completeExceptionally(new TimeoutException());
}finally {
v.poll();
futureMap.get(k).remove(response.getId());
response.getFuture().completeExceptionally(new TimeoutException());
} finally {
semaphore.release();
}
return true;
} else {
return false;
}
});
}
@Override
public void close() {
scheduledFuture.cancel(true);
scheduledExecutorService.shutdown();
}
}
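The class above bounds concurrent requests with a `Semaphore`, correlates responses through per-action maps, and sweeps expired futures on a scheduler. A simplified, self-contained sketch of that same technique follows; all names are illustrative and none of this is the driver's actual API.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Simplified sketch of the in-flight bookkeeping technique used above:
// a semaphore bounds outstanding requests, a map correlates ids with futures,
// and a scheduled sweep fails expired futures. Illustrative names only.
public class InFlightSketch {
    private final Semaphore slots = new Semaphore(16);
    private final ConcurrentHashMap<Long, CompletableFuture<String>> pending = new ConcurrentHashMap<>();
    private final ConcurrentHashMap<Long, Long> startedAtNanos = new ConcurrentHashMap<>();
    private final ScheduledExecutorService sweeper = Executors.newSingleThreadScheduledExecutor();
    private final long timeoutNanos = TimeUnit.SECONDS.toNanos(3);

    public InFlightSketch() {
        sweeper.scheduleAtFixedRate(this::expire, 1, 1, TimeUnit.SECONDS);
    }

    public CompletableFuture<String> register(long id) throws InterruptedException, TimeoutException {
        if (!slots.tryAcquire(3, TimeUnit.SECONDS))
            throw new TimeoutException("too many in-flight requests");
        CompletableFuture<String> future = new CompletableFuture<>();
        pending.put(id, future);
        startedAtNanos.put(id, System.nanoTime());
        return future;
    }

    public void complete(long id, String response) {
        CompletableFuture<String> future = pending.remove(id);
        startedAtNanos.remove(id);
        if (future != null) {
            future.complete(response);
            slots.release();
        }
    }

    private void expire() {
        long now = System.nanoTime();
        startedAtNanos.forEach((id, started) -> {
            if (now - started > timeoutNanos) {
                CompletableFuture<String> future = pending.remove(id);
                startedAtNanos.remove(id);
                if (future != null) {
                    future.completeExceptionally(new TimeoutException("request " + id + " timed out"));
                    slots.release();
                }
            }
        });
    }

    public static void main(String[] args) throws Exception {
        InFlightSketch inFlight = new InFlightSketch();
        CompletableFuture<String> future = inFlight.register(1L);
        inFlight.complete(1L, "ok");
        System.out.println(future.get()); // prints "ok"
        inFlight.sweeper.shutdown();
    }
}
```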
......@@ -4,18 +4,24 @@ import com.taosdata.jdbc.ws.entity.Response;
import java.util.concurrent.CompletableFuture;
public class ResponseFuture {
private final String id;
public class ResponseFuture implements Comparable<ResponseFuture> {
private final String action;
private final Long id;
private final CompletableFuture<Response> future;
private final long timestamp;
public ResponseFuture(String id, CompletableFuture<Response> future) {
public ResponseFuture(String action, Long id, CompletableFuture<Response> future) {
this.action = action;
this.id = id;
this.future = future;
timestamp = System.nanoTime();
}
public String getId() {
public String getAction() {
return action;
}
public Long getId() {
return id;
}
......@@ -26,4 +32,12 @@ public class ResponseFuture {
long getTimestamp() {
return timestamp;
}
@Override
public int compareTo(ResponseFuture rf) {
long r = this.timestamp - rf.timestamp;
if (r > 0) return 1;
if (r < 0) return -1;
return 0;
}
}
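`compareTo` orders futures by creation time, so the per-action `PriorityBlockingQueue` in `InFlightRequest` keeps its oldest entry at the head and the timeout sweep can stop at the first non-expired element. (`Long.compare(this.timestamp, rf.timestamp)` would express the same ordering more compactly.) A minimal sketch of that oldest-first expiry idea, with purely illustrative names:

```java
import java.util.concurrent.PriorityBlockingQueue;
import java.util.concurrent.TimeUnit;

// Illustrative only: oldest-first expiry with a priority queue, mirroring how
// ResponseFuture's timestamp ordering is meant to be used by the expireMap.
public class ExpiryQueueSketch {
    static final class Entry implements Comparable<Entry> {
        final long id;
        final long createdNanos = System.nanoTime();
        Entry(long id) { this.id = id; }
        @Override
        public int compareTo(Entry other) {
            return Long.compare(this.createdNanos, other.createdNanos);
        }
    }

    public static void main(String[] args) {
        PriorityBlockingQueue<Entry> queue = new PriorityBlockingQueue<>();
        for (long i = 1; i <= 5; i++) queue.put(new Entry(i));
        long timeoutNanos = TimeUnit.MILLISECONDS.toNanos(0); // expire everything for the demo
        // The oldest entry sits at the head, so the sweep stops at the first fresh one.
        for (Entry head = queue.peek();
             head != null && System.nanoTime() - head.createdNanos > timeoutNanos;
             head = queue.peek()) {
            queue.poll();
            System.out.println("expired request " + head.id);
        }
    }
}
```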
......@@ -25,15 +25,19 @@ public class Transport implements AutoCloseable {
public CompletableFuture<Response> send(Request request) {
CompletableFuture<Response> completableFuture = new CompletableFuture<>();
try {
inFlightRequest.put(new ResponseFuture(request.id(), completableFuture));
inFlightRequest.put(new ResponseFuture(request.getAction(), request.id(), completableFuture));
client.send(request.toString());
} catch (Throwable t) {
inFlightRequest.remove(request.id());
inFlightRequest.remove(request.getAction(), request.id());
completableFuture.completeExceptionally(t);
}
return completableFuture;
}
public void sendWithoutRep(Request request) {
client.send(request.toString());
}
public boolean isClosed() throws SQLException {
return client.isClosed();
}
......
......@@ -7,6 +7,8 @@ import org.java_websocket.handshake.ServerHandshake;
import java.net.URI;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.charset.StandardCharsets;
import java.util.Map;
import java.util.concurrent.*;
......@@ -20,6 +22,7 @@ public class WSClient extends WebSocketClient implements AutoCloseable {
ThreadPoolExecutor executor;
private boolean auth;
private int reqId;
public boolean isAuth() {
return auth;
......@@ -54,8 +57,8 @@ public class WSClient extends WebSocketClient implements AutoCloseable {
@Override
public void onOpen(ServerHandshake serverHandshake) {
// certification
Request request = Request.generateConnect(user, password, database);
this.send(request.toString());
ConnectReq connectReq = new ConnectReq(++reqId, user, password, database);
this.send(new Request(Action.CONN.getAction(), connectReq).toString());
}
@Override
......@@ -64,14 +67,15 @@ public class WSClient extends WebSocketClient implements AutoCloseable {
executor.submit(() -> {
JSONObject jsonObject = JSONObject.parseObject(message);
if (Action.CONN.getAction().equals(jsonObject.getString("action"))) {
latch.countDown();
if (Code.SUCCESS.getCode() != jsonObject.getInteger("code")) {
auth = false;
this.close();
} else {
auth = true;
}
latch.countDown();
} else {
Response response = parseMessage(jsonObject);
ResponseFuture remove = inFlightRequest.remove(response.id());
ResponseFuture remove = inFlightRequest.remove(response.getAction(), response.getReqId());
if (null != remove) {
remove.getFuture().complete(response);
}
......@@ -87,7 +91,14 @@ public class WSClient extends WebSocketClient implements AutoCloseable {
@Override
public void onMessage(ByteBuffer bytes) {
super.onMessage(bytes);
bytes.order(ByteOrder.LITTLE_ENDIAN);
long id = bytes.getLong();
ResponseFuture remove = inFlightRequest.remove(Action.FETCH_BLOCK.getAction(), id);
if (null != remove) {
// FetchBlockResp fetchBlockResp = new FetchBlockResp(id, bytes.slice());
FetchBlockResp fetchBlockResp = new FetchBlockResp(id, bytes);
remove.getFuture().complete(fetchBlockResp);
}
}
@Override
......@@ -97,7 +108,6 @@ public class WSClient extends WebSocketClient implements AutoCloseable {
} else {
throw new RuntimeException("close connection: " + reason);
}
}
@Override
......@@ -109,6 +119,42 @@ public class WSClient extends WebSocketClient implements AutoCloseable {
public void close() {
super.close();
executor.shutdown();
inFlightRequest.close();
}
static class ConnectReq extends Payload {
private String user;
private String password;
private String db;
public ConnectReq(long reqId, String user, String password, String db) {
super(reqId);
this.user = user;
this.password = password;
this.db = db;
}
public String getUser() {
return user;
}
public void setUser(String user) {
this.user = user;
}
public String getPassword() {
return password;
}
public void setPassword(String password) {
this.password = password;
}
public String getDb() {
return db;
}
public void setDb(String db) {
this.db = db;
}
}
}
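The binary `onMessage(ByteBuffer)` handler above appears to assume a frame layout of an 8-byte little-endian request id followed by the raw block payload. A small sketch of decoding such a frame under that assumption (the frame contents here are fabricated for the demo):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Illustrative only: decoding a binary fetch_block frame the way onMessage(ByteBuffer)
// above does, assuming the layout is an 8-byte little-endian request id followed by
// the raw block payload. The frame built here is fabricated for the demo.
public class FetchBlockFrameSketch {
    public static void main(String[] args) {
        ByteBuffer frame = ByteBuffer.allocate(12).order(ByteOrder.LITTLE_ENDIAN);
        frame.putLong(42L);       // request id
        frame.putInt(0xCAFEBABE); // stand-in for the column-block payload
        frame.flip();

        frame.order(ByteOrder.LITTLE_ENDIAN);
        long requestId = frame.getLong(); // used to look up the pending future
        ByteBuffer block = frame.slice(); // remaining bytes are the block payload
        System.out.println("request id = " + requestId + ", payload bytes = " + block.remaining());
    }
}
```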
......@@ -5,6 +5,7 @@ import com.taosdata.jdbc.TSDBDriver;
import com.taosdata.jdbc.TSDBError;
import com.taosdata.jdbc.TSDBErrorNumbers;
import com.taosdata.jdbc.rs.RestfulDatabaseMetaData;
import com.taosdata.jdbc.ws.entity.RequestFactory;
import java.sql.DatabaseMetaData;
import java.sql.PreparedStatement;
......@@ -16,14 +17,14 @@ public class WSConnection extends AbstractConnection {
private final Transport transport;
private final DatabaseMetaData metaData;
private final String database;
private boolean fetchType;
private final RequestFactory factory;
public WSConnection(String url, Properties properties, Transport transport, String database, boolean fetchType) {
public WSConnection(String url, Properties properties, Transport transport, String database) {
super(properties);
this.transport = transport;
this.database = database;
this.fetchType = fetchType;
this.metaData = new RestfulDatabaseMetaData(url, properties.getProperty(TSDBDriver.PROPERTY_KEY_USER), this);
this.factory = new RequestFactory();
}
@Override
......@@ -31,8 +32,7 @@ public class WSConnection extends AbstractConnection {
if (isClosed())
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_CONNECTION_CLOSED);
// return new WSStatement(transport, database , fetchType);
return null;
return new WSStatement(transport, database, this, factory);
}
@Override
......
package com.taosdata.jdbc.ws;
import com.taosdata.jdbc.AbstractStatement;
import com.taosdata.jdbc.TSDBError;
import com.taosdata.jdbc.TSDBErrorNumbers;
import com.taosdata.jdbc.utils.SqlSyntaxValidator;
import com.taosdata.jdbc.ws.entity.*;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
public class WSStatement extends AbstractStatement {
private final Transport transport;
private final String database;
private final Connection connection;
private final RequestFactory factory;
private boolean closed;
private ResultSet resultSet;
public WSStatement(Transport transport, String database, Connection connection, RequestFactory factory) {
this.transport = transport;
this.database = database;
this.connection = connection;
this.factory = factory;
}
@Override
public ResultSet executeQuery(String sql) throws SQLException {
if (isClosed())
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_STATEMENT_CLOSED);
if (!SqlSyntaxValidator.isValidForExecuteQuery(sql))
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_INVALID_FOR_EXECUTE_QUERY, "not a valid sql for executeQuery: " + sql);
this.execute(sql);
return this.resultSet;
}
@Override
public int executeUpdate(String sql) throws SQLException {
if (isClosed())
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_STATEMENT_CLOSED);
if (!SqlSyntaxValidator.isValidForExecuteUpdate(sql))
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_INVALID_FOR_EXECUTE_UPDATE, "not a valid sql for executeUpdate: " + sql);
this.execute(sql);
return affectedRows;
}
@Override
public void close() throws SQLException {
if (!isClosed())
this.closed = true;
}
@Override
public boolean execute(String sql) throws SQLException {
if (isClosed())
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_STATEMENT_CLOSED);
Request request = factory.generateQuery(sql);
CompletableFuture<Response> send = transport.send(request);
Response response;
try {
response = send.get();
QueryResp queryResp = (QueryResp) response;
if (Code.SUCCESS.getCode() != queryResp.getCode()) {
throw TSDBError.createSQLException(queryResp.getCode(), queryResp.getMessage());
}
if (queryResp.isUpdate()) {
this.resultSet = null;
this.affectedRows = queryResp.getAffectedRows();
return false;
} else {
this.resultSet = new BlockResultSet(this, this.transport, this.factory, queryResp, this.database);
this.affectedRows = -1;
return true;
}
} catch (InterruptedException | ExecutionException e) {
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_INVALID_WITH_EXECUTEQUERY, e.getMessage());
}
}
@Override
public ResultSet getResultSet() throws SQLException {
if (isClosed())
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_STATEMENT_CLOSED);
return this.resultSet;
}
@Override
public int getUpdateCount() throws SQLException {
if (isClosed())
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_STATEMENT_CLOSED);
return affectedRows;
}
@Override
public Connection getConnection() throws SQLException {
return this.connection;
}
@Override
public boolean isClosed() throws SQLException {
return closed;
}
}
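`WSStatement.execute()` returns `false` for update statements (and records `affectedRows`) and `true` when a `BlockResultSet` is produced. A caller-side sketch of handling both branches, assuming a local taosAdapter and the same URL style used by the tests in this commit:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

// Illustrative only: handling both branches of execute(), matching the behaviour
// of WSStatement.execute(String) above. Host and SQL are assumptions.
public class ExecuteBranchSketch {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:TAOS-RS://127.0.0.1:6041/?user=root&password=taosdata";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement()) {
            boolean hasResultSet = stmt.execute("show databases");
            if (hasResultSet) { // query path: a result set was produced
                try (ResultSet rs = stmt.getResultSet()) {
                    while (rs.next()) {
                        System.out.println(rs.getString(1));
                    }
                }
            } else {            // update path: only an affected-row count is available
                System.out.println("affected rows: " + stmt.getUpdateCount());
            }
        }
    }
}
```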
......@@ -11,8 +11,9 @@ public enum Action {
QUERY("query", QueryResp.class),
FETCH("fetch", FetchResp.class),
FETCH_JSON("fetch_json", FetchJsonResp.class),
// fetch_block's class is meaningless
FETCH_BLOCK("fetch_block", Response.class),
FETCH_BLOCK("fetch_block", FetchBlockResp.class),
// free_result's class is meaningless
FREE_RESULT("free_result", Response.class),
;
private final String action;
private final Class<? extends Response> clazz;
......@@ -35,7 +36,6 @@ public enum Action {
static {
for (Action value : Action.values()) {
actions.put(value.action, value);
IdUtil.init(value.action);
}
}
......
......@@ -5,7 +5,6 @@ package com.taosdata.jdbc.ws.entity;
*/
public enum Code {
SUCCESS(0, "success"),
;
private final int code;
......
package com.taosdata.jdbc.ws.entity;
public class FetchBlockResp {
import java.nio.ByteBuffer;
public class FetchBlockResp extends Response {
private ByteBuffer buffer;
public FetchBlockResp(long id, ByteBuffer buffer) {
this.setAction(Action.FETCH_BLOCK.getAction());
this.setReqId(id);
this.buffer = buffer;
}
public ByteBuffer getBuffer() {
return buffer;
}
public void setBuffer(ByteBuffer buffer) {
this.buffer = buffer;
}
}
package com.taosdata.jdbc.ws.entity;
import com.alibaba.fastjson.JSONArray;
public class FetchJsonResp extends Response{
private long id;
private Object[][] data;
private JSONArray data;
public Object[][] getData() {
public JSONArray getData() {
return data;
}
public void setData(Object[][] data) {
public void setData(JSONArray data) {
this.data = data;
}
......
package com.taosdata.jdbc.ws.entity;
public class FetchReq extends Payload {
private long id;
public FetchReq(long reqId, long id) {
super(reqId);
this.id = id;
}
public long getId() {
return id;
}
public void setId(long id) {
this.id = id;
}
}
......@@ -8,7 +8,7 @@ public class FetchResp extends Response{
private String message;
private long id;
private boolean completed;
private int[] lengths;
private Integer[] lengths;
private int rows;
public int getCode() {
......@@ -43,11 +43,11 @@ public class FetchResp extends Response{
this.completed = completed;
}
public int[] getLengths() {
public Integer[] getLengths() {
return lengths;
}
public void setLengths(int[] lengths) {
public void setLengths(Integer[] lengths) {
this.lengths = lengths;
}
......
package com.taosdata.jdbc.ws.entity;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;
/**
* generate id for request
*/
public class IdUtil {
private static final Map<String, AtomicLong> ids = new HashMap<>();
public static long getId(String action) {
return ids.get(action).incrementAndGet();
}
public static void init(String action) {
ids.put(action, new AtomicLong(0));
}
}
package com.taosdata.jdbc.ws.entity;
import com.alibaba.fastjson.annotation.JSONField;
public class Payload {
@JSONField(name = "req_id")
private final long reqId;
public Payload(long reqId) {
this.reqId = reqId;
}
public long getReqId() {
return reqId;
}
}
\ No newline at end of file
package com.taosdata.jdbc.ws.entity;
public class QueryReq extends Payload {
private String sql;
public QueryReq(long reqId, String sql) {
super(reqId);
this.sql = sql;
}
public String getSql() {
return sql;
}
public void setSql(String sql) {
this.sql = sql;
}
}
package com.taosdata.jdbc.ws.entity;
import com.alibaba.fastjson.JSON;
import com.alibaba.fastjson.annotation.JSONField;
/**
* send to taosadapter
......@@ -15,14 +14,14 @@ public class Request {
this.args = args;
}
public String id() {
return action + "_" + args.getReqId();
}
public String getAction() {
return action;
}
public Long id(){
return args.getReqId();
}
public void setAction(String action) {
this.action = action;
}
......@@ -39,118 +38,4 @@ public class Request {
public String toString() {
return JSON.toJSONString(this);
}
public static Request generateConnect(String user, String password, String db) {
long reqId = IdUtil.getId(Action.CONN.getAction());
ConnectReq connectReq = new ConnectReq(reqId, user, password, db);
return new Request(Action.CONN.getAction(), connectReq);
}
public static Request generateQuery(String sql) {
long reqId = IdUtil.getId(Action.QUERY.getAction());
QueryReq queryReq = new QueryReq(reqId, sql);
return new Request(Action.QUERY.getAction(), queryReq);
}
public static Request generateFetch(long id) {
long reqId = IdUtil.getId(Action.FETCH.getAction());
FetchReq fetchReq = new FetchReq(reqId, id);
return new Request(Action.FETCH.getAction(), fetchReq);
}
public static Request generateFetchJson(long id) {
long reqId = IdUtil.getId(Action.FETCH_JSON.getAction());
FetchReq fetchReq = new FetchReq(reqId, id);
return new Request(Action.FETCH_JSON.getAction(), fetchReq);
}
public static Request generateFetchBlock(long id) {
long reqId = IdUtil.getId(Action.FETCH_BLOCK.getAction());
FetchReq fetchReq = new FetchReq(reqId, id);
return new Request(Action.FETCH_BLOCK.getAction(), fetchReq);
}
}
class Payload {
@JSONField(name = "req_id")
private final long reqId;
public Payload(long reqId) {
this.reqId = reqId;
}
public long getReqId() {
return reqId;
}
}
class ConnectReq extends Payload {
private String user;
private String password;
private String db;
public ConnectReq(long reqId, String user, String password, String db) {
super(reqId);
this.user = user;
this.password = password;
this.db = db;
}
public String getUser() {
return user;
}
public void setUser(String user) {
this.user = user;
}
public String getPassword() {
return password;
}
public void setPassword(String password) {
this.password = password;
}
public String getDb() {
return db;
}
public void setDb(String db) {
this.db = db;
}
}
class QueryReq extends Payload {
private String sql;
public QueryReq(long reqId, String sql) {
super(reqId);
this.sql = sql;
}
public String getSql() {
return sql;
}
public void setSql(String sql) {
this.sql = sql;
}
}
class FetchReq extends Payload {
private long id;
public FetchReq(long reqId, long id) {
super(reqId);
this.id = id;
}
public long getId() {
return id;
}
public void setId(long id) {
this.id = id;
}
}
\ No newline at end of file
package com.taosdata.jdbc.ws.entity;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;
/**
* generate id for request
*/
public class RequestFactory {
private final Map<String, AtomicLong> ids = new HashMap<>();
public long getId(String action) {
return ids.get(action).incrementAndGet();
}
public RequestFactory() {
for (Action value : Action.values()) {
String action = value.getAction();
if (Action.CONN.getAction().equals(action) || Action.FETCH_BLOCK.getAction().equals(action))
continue;
ids.put(action, new AtomicLong(0));
}
}
public Request generateQuery(String sql) {
long reqId = this.getId(Action.QUERY.getAction());
QueryReq queryReq = new QueryReq(reqId, sql);
return new Request(Action.QUERY.getAction(), queryReq);
}
public Request generateFetch(long id) {
long reqId = this.getId(Action.FETCH.getAction());
FetchReq fetchReq = new FetchReq(reqId, id);
return new Request(Action.FETCH.getAction(), fetchReq);
}
public Request generateFetchJson(long id) {
long reqId = this.getId(Action.FETCH_JSON.getAction());
FetchReq fetchReq = new FetchReq(reqId, id);
return new Request(Action.FETCH_JSON.getAction(), fetchReq);
}
public Request generateFetchBlock(long id) {
FetchReq fetchReq = new FetchReq(id, id);
return new Request(Action.FETCH_BLOCK.getAction(), fetchReq);
}
}
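`RequestFactory` keeps one `AtomicLong` per action so each message type gets its own monotonically increasing `req_id` for request/response correlation. A minimal sketch of that per-action counter technique with illustrative names:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative only: per-action monotonically increasing request ids, the same
// technique RequestFactory uses so that each message type keeps its own sequence.
public class PerActionIdSketch {
    private final Map<String, AtomicLong> counters = new ConcurrentHashMap<>();

    public long nextId(String action) {
        return counters.computeIfAbsent(action, a -> new AtomicLong(0)).incrementAndGet();
    }

    public static void main(String[] args) {
        PerActionIdSketch ids = new PerActionIdSketch();
        System.out.println(ids.nextId("query")); // 1
        System.out.println(ids.nextId("query")); // 2
        System.out.println(ids.nextId("fetch")); // 1 -- independent sequence per action
    }
}
```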
......@@ -11,10 +11,6 @@ public class Response {
@JSONField(name = "req_id")
private long reqId;
public String id() {
return action + "_" + reqId;
}
public String getAction() {
return action;
}
......
......@@ -10,15 +10,17 @@ import org.junit.runner.RunWith;
import java.sql.*;
import java.util.Properties;
import java.util.concurrent.TimeUnit;
/**
 * taosAdapter must be running before these tests are executed
*/
@Ignore
@RunWith(CatalogRunner.class)
@TestTarget(alias = "test connection with server", author = "huolibo",version = "2.0.37")
@TestTarget(alias = "test connection with server", author = "huolibo", version = "2.0.37")
public class WSConnectionTest {
private static final String host = "192.168.1.98";
// private static final String host = "192.168.1.98";
private static final String host = "127.0.0.1";
private static final int port = 6041;
private Connection connection;
......@@ -46,13 +48,12 @@ public class WSConnectionTest {
String url = "jdbc:TAOS-RS://" + host + ":" + port + "/";
Properties properties = new Properties();
properties.setProperty(TSDBDriver.PROPERTY_KEY_USER, "root");
properties.setProperty(TSDBDriver.PROPERTY_KEY_PASSWORD,"taosdata");
properties.setProperty(TSDBDriver.PROPERTY_KEY_PASSWORD, "taosdata");
properties.setProperty(TSDBDriver.PROPERTY_KEY_BATCH_LOAD, "true");
connection = DriverManager.getConnection(url, properties);
}
@Test
// @Test(expected = SQLException.class)
@Test(expected = SQLException.class)
@Description("wrong password or user")
public void wrongUserOrPasswordConection() throws SQLException {
String url = "jdbc:TAOS-RS://" + host + ":" + port + "/test?user=abc&password=taosdata";
......@@ -60,4 +61,21 @@ public class WSConnectionTest {
properties.setProperty(TSDBDriver.PROPERTY_KEY_BATCH_LOAD, "true");
connection = DriverManager.getConnection(url, properties);
}
@Test
@Description("sleep keep connection")
public void keepConnection() throws SQLException, InterruptedException {
String url = "jdbc:TAOS-RS://" + host + ":" + port + "/?user=root&password=taosdata";
Properties properties = new Properties();
properties.setProperty(TSDBDriver.PROPERTY_KEY_BATCH_LOAD, "true");
connection = DriverManager.getConnection(url, properties);
TimeUnit.MINUTES.sleep(1);
Statement statement = connection.createStatement();
ResultSet resultSet = statement.executeQuery("show databases");
TimeUnit.MINUTES.sleep(1);
resultSet.next();
System.out.println(resultSet.getTimestamp(1));
resultSet.close();
statement.close();
}
}
package com.taosdata.jdbc.ws;
import com.taosdata.jdbc.TSDBDriver;
import com.taosdata.jdbc.annotation.CatalogRunner;
import com.taosdata.jdbc.annotation.Description;
import com.taosdata.jdbc.annotation.TestTarget;
import org.junit.*;
import org.junit.runner.RunWith;
import java.sql.*;
import java.util.Properties;
import java.util.concurrent.TimeUnit;
import java.util.stream.IntStream;
@Ignore
@RunWith(CatalogRunner.class)
@TestTarget(alias = "query test", author = "huolibo", version = "2.0.38")
@FixMethodOrder
public class WSQueryTest {
private static final String host = "192.168.1.98";
private static final int port = 6041;
private static final String databaseName = "ws_query";
private static final String tableName = "wq";
private Connection connection;
private long now;
@Description("query")
@Test
public void queryBlock() throws SQLException, InterruptedException {
IntStream.range(1, 100).limit(1000).parallel().forEach(x -> {
try {
Statement statement = connection.createStatement();
statement.execute("insert into " + databaseName + "." + tableName + " values(now+100s, 100)");
ResultSet resultSet = statement.executeQuery("select * from " + databaseName + "." + tableName);
resultSet.next();
Assert.assertEquals(100, resultSet.getInt(2));
statement.close();
TimeUnit.SECONDS.sleep(10);
} catch (SQLException e) {
e.printStackTrace();
} catch (InterruptedException e) {
e.printStackTrace();
}
});
}
@Before
public void before() throws SQLException {
String url = "jdbc:TAOS-RS://" + host + ":" + port + "/test?user=root&password=taosdata";
Properties properties = new Properties();
properties.setProperty(TSDBDriver.PROPERTY_KEY_BATCH_LOAD, "true");
connection = DriverManager.getConnection(url, properties);
Statement statement = connection.createStatement();
statement.execute("drop database if exists " + databaseName);
statement.execute("create database " + databaseName);
statement.execute("use " + databaseName);
statement.execute("create table if not exists " + databaseName + "." + tableName + "(ts timestamp, f int)");
statement.close();
}
}
package com.taosdata.jdbc.ws;
import com.taosdata.jdbc.TSDBDriver;
import com.taosdata.jdbc.enums.TimestampFormat;
import org.junit.BeforeClass;
import org.junit.Ignore;
import org.junit.Test;
import java.sql.*;
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;
import java.util.concurrent.TimeUnit;
@Ignore
public class WSSelectTest {
// private static final String host = "192.168.1.98";
private static final String host = "127.0.0.1";
private static final int port = 6041;
private static Connection connection;
private static final String databaseName = "driver";
private static void testInsert() throws SQLException {
Statement statement = connection.createStatement();
long cur = System.currentTimeMillis();
List<String> timeList = new ArrayList<>();
for (long i = 0L; i < 3000; i++) {
long t = cur + i;
timeList.add("insert into " + databaseName + ".alltype_query values(" + t + ",1,1,1,1,1,1,1,1,1,1,1,'test_binary','test_nchar')");
}
for (int i = 0; i < 3000; i++) {
statement.execute(timeList.get(i));
}
statement.close();
}
@Test
public void testWSSelect() throws SQLException {
Statement statement = connection.createStatement();
int count = 0;
long start = System.nanoTime();
for (int i = 0; i < 1000; i++) {
ResultSet resultSet = statement.executeQuery("select ts,c1,c2,c3,c4,c5,c6,c7,c8,c9,c10,c11,c12,c13 from " + databaseName + ".alltype_query limit 3000");
while (resultSet.next()) {
count++;
resultSet.getTimestamp(1);
resultSet.getBoolean(2);
resultSet.getInt(3);
resultSet.getInt(4);
resultSet.getInt(5);
resultSet.getLong(6);
resultSet.getInt(7);
resultSet.getInt(8);
resultSet.getLong(9);
resultSet.getLong(10);
resultSet.getFloat(11);
resultSet.getDouble(12);
resultSet.getString(13);
resultSet.getString(14);
}
}
long d = System.nanoTime() - start;
System.out.println(d / 1000);
System.out.println(count);
statement.close();
}
@BeforeClass
public static void beforeClass() throws SQLException {
String url = "jdbc:TAOS-RS://" + host + ":" + port + "/?user=root&password=taosdata";
Properties properties = new Properties();
properties.setProperty(TSDBDriver.PROPERTY_KEY_TIMESTAMP_FORMAT, String.valueOf(TimestampFormat.UTC));
properties.setProperty(TSDBDriver.PROPERTY_KEY_MESSAGE_WAIT_TIMEOUT, "100000");
properties.setProperty(TSDBDriver.PROPERTY_KEY_BATCH_LOAD, "true");
connection = DriverManager.getConnection(url, properties);
Statement statement = connection.createStatement();
statement.execute("drop database if exists " + databaseName);
statement.execute("create database " + databaseName);
statement.execute("create table " + databaseName + ".alltype_query(ts timestamp, c1 bool,c2 tinyint, c3 smallint, c4 int, c5 bigint, c6 tinyint unsigned, c7 smallint unsigned, c8 int unsigned, c9 bigint unsigned, c10 float, c11 double, c12 binary(20), c13 nchar(30) )");
statement.close();
testInsert();
}
}
......@@ -1192,9 +1192,9 @@
}
},
"follow-redirects": {
"version": "1.14.7",
"resolved": "https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.14.7.tgz",
"integrity": "sha512-+hbxoLbFMbRKDwohX8GkTataGqO6Jb7jGwpAlwgy2bIz25XtRm7KEzJM76R1WiNT5SwZkX4Y75SwBolkpmE7iQ=="
"version": "1.14.8",
"resolved": "https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.14.8.tgz",
"integrity": "sha512-1x0S9UVJHsQprFcEC/qnNzBLcIxsjAV905f/UkQxbclCsoTWlacCNOpQa/anodLl2uaEKFhfWOvM2Qg77+15zA=="
},
"form-data": {
"version": "4.0.0",
......
const taos = require('../tdengine');
var conn = taos.connect({ host: "localhost" });
var cursor = conn.cursor();
function executeUpdate(updateSql) {
console.log(updateSql);
cursor.execute(updateSql);
}
function executeQuery(querySql) {
let query = cursor.query(querySql);
query.execute().then((result => {
console.log(querySql);
result.pretty();
}));
}
function stmtBindParamBatchSample() {
let db = 'node_test_db';
let table = 'stmt_taos_bind_param_batch';
let createDB = `create database if not exists ${db} keep 3650;`;
let dropDB = `drop database if exists ${db};`;
let useDB = `use ${db}`;
let createTable = `create table if not exists ${table} ` +
`(ts timestamp,` +
`bl bool,` +
`i8 tinyint,` +
`i16 smallint,` +
`i32 int,` +
`i64 bigint,` +
`f32 float,` +
`d64 double,` +
`bnr binary(20),` +
`blob nchar(20),` +
`u8 tinyint unsigned,` +
`u16 smallint unsigned,` +
`u32 int unsigned,` +
`u64 bigint unsigned` +
`)tags(` +
`t_bl bool,` +
`t_i8 tinyint,` +
`t_i16 smallint,` +
`t_i32 int,` +
`t_i64 bigint,` +
`t_f32 float,` +
`t_d64 double,` +
`t_bnr binary(20),` +
`t_blob nchar(20),` +
`t_u8 tinyint unsigned,` +
`t_u16 smallint unsigned,` +
`t_u32 int unsigned,` +
`t_u64 bigint unsigned` +
`);`;
let querySql = `select * from ${table};`;
let insertSql = `insert into ? using ${table} tags(?,?,?,?,?,?,?,?,?,?,?,?,?) values (?,?,?,?,?,?,?,?,?,?,?,?,?,?);`;
executeUpdate(dropDB);
executeUpdate(createDB);
executeUpdate(useDB);
executeUpdate(createTable);
let mBinds = new taos.TaosMultiBindArr(14);
mBinds.multiBindTimestamp([1642435200000, 1642435300000, 1642435400000, 1642435500000, 1642435600000]);
mBinds.multiBindBool([true, false, true, undefined, null]);
mBinds.multiBindTinyInt([-127, 3, 127, null, undefined]);
mBinds.multiBindSmallInt([-256, 0, 256, null, undefined]);
mBinds.multiBindInt([-1299, 0, 1233, null, undefined]);
mBinds.multiBindBigInt([16424352000002222n, -16424354000001111n, 0, null, undefined]);
mBinds.multiBindFloat([12.33, 0, -3.1415, null, undefined]);
mBinds.multiBindDouble([3.141592653, 0, -3.141592653, null, undefined]);
mBinds.multiBindBinary(['TDengine_Binary', '', 'taosdata涛思数据', null, undefined]);
mBinds.multiBindNchar(['taos_data_nchar', 'taosdata涛思数据', '', null, undefined]);
mBinds.multiBindUTinyInt([0, 127, 254, null, undefined]);
mBinds.multiBindUSmallInt([0, 256, 512, null, undefined]);
mBinds.multiBindUInt([0, 1233, 4294967294, null, undefined]);
mBinds.multiBindUBigInt([16424352000002222n, 36424354000001111n, 0, null, undefined]);
let tags = new taos.TaosBind(13);
tags.bindBool(true);
tags.bindTinyInt(127);
tags.bindSmallInt(32767);
tags.bindInt(1234555);
tags.bindBigInt(-164243520000011111n);
tags.bindFloat(214.02);
tags.bindDouble(2.01);
tags.bindBinary('taosdata涛思数据');
tags.bindNchar('TDengine数据');
tags.bindUTinyInt(254);
tags.bindUSmallInt(65534);
tags.bindUInt(4294967290 / 2);
tags.bindUBigInt(164243520000011111n);
cursor.stmtInit();
cursor.stmtPrepare(insertSql);
cursor.stmtSetTbnameTags('s_01', tags.getBind());
cursor.stmtBindParamBatch(mBinds.getMultiBindArr());
cursor.stmtAddBatch();
cursor.stmtExecute();
cursor.stmtClose();
executeQuery(querySql);
executeUpdate(dropDB);
}
stmtBindParamBatchSample();
setTimeout(() => {
conn.close();
}, 2000);
// const TaosBind = require('../nodetaos/taosBind');
const taos = require('../tdengine');
var conn = taos.connect({ host: "localhost" });
var cursor = conn.cursor();
function executeUpdate(updateSql) {
console.log(updateSql);
cursor.execute(updateSql);
}
function executeQuery(querySql) {
let query = cursor.query(querySql);
query.execute().then((result => {
console.log(querySql);
result.pretty();
}));
}
function stmtBindParamSample() {
let db = 'node_test_db';
let table = 'stmt_taos_bind_sample';
let createDB = `create database if not exists ${db} keep 3650;`;
let dropDB = `drop database if exists ${db};`;
let useDB = `use ${db}`;
let createTable = `create table if not exists ${table} ` +
`(ts timestamp,` +
`nil int,` +
`bl bool,` +
`i8 tinyint,` +
`i16 smallint,` +
`i32 int,` +
`i64 bigint,` +
`f32 float,` +
`d64 double,` +
`bnr binary(20),` +
`blob nchar(20),` +
`u8 tinyint unsigned,` +
`u16 smallint unsigned,` +
`u32 int unsigned,` +
`u64 bigint unsigned);`;
let querySql = `select * from ${table};`;
let insertSql = `insert into ? values(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?);`
executeUpdate(dropDB);
executeUpdate(createDB);
executeUpdate(useDB);
executeUpdate(createTable);
let binds = new taos.TaosBind(15);
binds.bindTimestamp(1642435200000);
binds.bindNil();
binds.bindBool(true);
binds.bindTinyInt(127);
binds.bindSmallInt(32767);
binds.bindInt(1234555);
binds.bindBigInt(-164243520000011111n);
binds.bindFloat(214.02);
binds.bindDouble(2.01);
binds.bindBinary('taosdata涛思数据');
binds.bindNchar('TDengine数据');
binds.bindUTinyInt(254);
binds.bindUSmallInt(65534);
binds.bindUInt(4294967294);
binds.bindUBigInt(164243520000011111n);
cursor.stmtInit();
cursor.stmtPrepare(insertSql);
cursor.stmtSetTbname(table);
cursor.stmtBindParam(binds.getBind());
cursor.stmtAddBatch();
cursor.stmtExecute();
cursor.stmtClose();
executeQuery(querySql);
executeUpdate(dropDB);
}
stmtBindParamSample();
setTimeout(() => {
conn.close();
}, 2000);
\ No newline at end of file
......@@ -373,7 +373,7 @@ function CTaosInterface(config = null, pass = false) {
, 'taos_stmt_execute': [ref.types.int, [ref.types.void_ptr]]
// TAOS_RES* taos_stmt_use_result(TAOS_STMT *stmt)
, 'taos_stmt_use_result': [ref.types.int, [ref.types.void_ptr]]
, 'taos_stmt_use_result': [ref.types.void_ptr, [ref.types.void_ptr]]
// int taos_stmt_close(TAOS_STMT *stmt)
, 'taos_stmt_close': [ref.types.int, [ref.types.void_ptr]]
......@@ -934,7 +934,15 @@ CTaosInterface.prototype.stmtUseResult = function stmtUseResult(stmt) {
* @returns 0 for success, non-zero for failure.
*/
CTaosInterface.prototype.loadTableInfo = function loadTableInfo(taos, tableList) {
return this.libtaos.taos_load_table_info(taos, tableList)
let _tableListBuf = Buffer.alloc(ref.sizeof.pointer);
let _listStr = tableList.toString();
    if (_.isString(tableList) || _.isArray(tableList)) {
ref.set(_tableListBuf, 0, ref.allocCString(_listStr), ref.types.char_ptr);
return this.libtaos.taos_load_table_info(taos, _tableListBuf);
} else {
        throw new errors.InterfaceError("Unsupported tableList input");
}
}
/**
......
var TDengineConnection = require('./nodetaos/connection.js')
const TDengineConstant = require('./nodetaos/constants.js')
const TaosBind = require('./nodetaos/taosBind')
const { TaosMultiBind } = require('./nodetaos/taosMultiBind')
const TaosMultiBindArr = require('./nodetaos/taosMultiBindArr')
module.exports = {
connect: function (connection = {}) {
return new TDengineConnection(connection);
......@@ -8,4 +11,6 @@ module.exports = {
SCHEMALESS_PROTOCOL: TDengineConstant.SCHEMALESS_PROTOCOL,
SCHEMALESS_PRECISION: TDengineConstant.SCHEMALESS_PRECISION,
TaosBind,
TaosMultiBind,
TaosMultiBindArr,
}
\ No newline at end of file
......@@ -293,6 +293,7 @@ int32_t* taosGetErrno();
#define TSDB_CODE_QRY_SYS_ERROR TAOS_DEF_ERROR_CODE(0, 0x070D) //"System error")
#define TSDB_CODE_QRY_INVALID_TIME_CONDITION TAOS_DEF_ERROR_CODE(0, 0x070E) //"invalid time condition")
#define TSDB_CODE_QRY_INVALID_SCHEMA_VERSION TAOS_DEF_ERROR_CODE(0, 0x0710) //"invalid schema version")
#define TSDB_CODE_QRY_UNIQUE_RESULT_TOO_LARGE TAOS_DEF_ERROR_CODE(0, 0x0711) //"unique result num is too large")
// grant
#define TSDB_CODE_GRANT_EXPIRED TAOS_DEF_ERROR_CODE(0, 0x0800) //"License expired"
......
......@@ -503,8 +503,8 @@ typedef struct {
uint32_t tagCondLen; // tag length in current query
int32_t colCondLen; // column length in current query
int16_t numOfGroupCols; // num of group by columns
int16_t orderByIdx;
int16_t orderType; // used in group by xx order by xxx
int16_t orderByIdx; // useless
int16_t groupOrderType; // used for group order
int64_t vgroupLimit; // limit the number of rows for each table, used in order by + limit in stable projection query.
int16_t prjOrder; // global order in super table projection query.
int64_t limit;
......
Subproject commit 28ff2899fd0238f81c14cb76ea6dbdefa83570b3
Subproject commit ca4a90027ddfd5faa858a676e695ddcdd56ef2b5
......@@ -704,7 +704,7 @@ void leakFloat() {
void leakTest(){
for(int i=0; i< 90000000000000; i++){
for(int i=0; i< 90000000; i++){
if(i%10000==0)
printf(" ---------- %d ---------------- \n", i);
leakFloat();
......