Commit 44d8099d authored by W wpan

Merge branch 'develop' into feature/TD-5925

[submodule "src/connector/go"] [submodule "src/connector/go"]
path = src/connector/go path = src/connector/go
url = git@github.com:taosdata/driver-go.git url = https://github.com/taosdata/driver-go.git
[submodule "src/connector/grafanaplugin"] [submodule "src/connector/grafanaplugin"]
path = src/connector/grafanaplugin path = src/connector/grafanaplugin
url = git@github.com:taosdata/grafanaplugin.git url = https://github.com/taosdata/grafanaplugin.git
[submodule "src/connector/hivemq-tdengine-extension"] [submodule "src/connector/hivemq-tdengine-extension"]
path = src/connector/hivemq-tdengine-extension path = src/connector/hivemq-tdengine-extension
url = git@github.com:taosdata/hivemq-tdengine-extension.git url = https://github.com/taosdata/hivemq-tdengine-extension.git
[submodule "tests/examples/rust"] [submodule "tests/examples/rust"]
path = tests/examples/rust path = tests/examples/rust
url = https://github.com/songtianyi/tdengine-rust-bindings.git url = https://github.com/songtianyi/tdengine-rust-bindings.git
......
@@ -271,12 +271,12 @@ pipeline {
                '''
            }
            timeout(time: 60, unit: 'MINUTES'){
                sh '''
                cd ${WKC}/tests/pytest
                rm -rf /var/lib/taos/*
                rm -rf /var/log/taos/*
                ./handle_crash_gen_val_log.sh
                '''
                sh '''
                cd ${WKC}/tests/pytest
                rm -rf /var/lib/taos/*
......
@@ -107,6 +107,12 @@ Go 连接器和 Grafana 插件在其他独立仓库,如果安装它们的话
git submodule update --init --recursive
```
如果使用 https 协议下载比较慢,可以通过修改 ~/.gitconfig 文件添加以下两行设置使用 ssh 协议下载。需要首先上传 ssh 密钥到 GitHub,详细方法请参考 GitHub 官方文档。
```
[url "git@github.com:"]
insteadOf = https://github.com/
```
## 构建 TDengine
### Linux 系统
......
@@ -101,6 +101,12 @@ so you should run this command in the TDengine directory to install them:
git submodule update --init --recursive
```
You can modify the file ~/.gitconfig to use the ssh protocol instead of https for faster downloads. You need to upload your ssh public key to GitHub first; please refer to the official GitHub documentation for details.
```
[url "git@github.com:"]
insteadOf = https://github.com/
```
## Build TDengine
### On Linux platform
......
@@ -40,7 +40,7 @@ TDengine是一个高效的存储、查询、分析时序大数据的平台,专
* [超级表管理](/taos-sql#super-table):添加、删除、查看、修改超级表
* [标签管理](/taos-sql#tags):增加、删除、修改标签
* [数据写入](/taos-sql#insert):支持单表单条、多条、多表多条写入,支持历史数据写入
* [数据查询](/taos-sql#select):支持时间段、值过滤、排序、嵌套查询、UNION、JOIN、查询结果手动分页等
* [SQL函数](/taos-sql#functions):支持各种聚合函数、选择函数、计算函数,如avg, min, diff等
* [窗口切分聚合](/taos-sql#aggregation):将表中数据按照时间段等方式进行切割后聚合,降维处理
* [边界限制](/taos-sql#limitation):库、表、SQL等边界限制条件
......
@@ -2,28 +2,27 @@
## <a class="anchor" id="intro"></a>TDengine 简介
TDengine 是涛思数据面对高速增长的物联网大数据市场和技术挑战推出的创新性的大数据处理产品,它不依赖任何第三方软件,也不是优化或包装了一个开源的数据库或流式计算产品,而是在吸取众多传统关系型数据库、NoSQL 数据库、流式计算引擎、消息队列等软件的优点之后自主开发的产品,TDengine 在时序空间大数据处理上,有着自己独到的优势。
TDengine 的模块之一是时序数据库。但除此之外,为减少研发的复杂度、系统维护的难度,TDengine 还提供缓存、消息队列、订阅、流式计算等功能,为物联网和工业互联网大数据的处理提供全栈的技术方案,是一个高效易用的物联网大数据平台。与 Hadoop 等典型的大数据平台相比,TDengine 具有如下鲜明的特点:
* __10 倍以上的性能提升__:定义了创新的数据存储结构,单核每秒能处理至少 2 万次请求,插入数百万个数据点,读出一千万以上数据点,比现有通用数据库快十倍以上。
* __硬件或云服务成本降至 1/5__:由于超强性能,计算资源不到通用大数据方案的 1/5;通过列式存储和先进的压缩算法,存储占用不到通用数据库的 1/10。
* __全栈时序数据处理引擎__:将数据库、消息队列、缓存、流式计算等功能融为一体,应用无需再集成 Kafka/Redis/HBase/Spark/HDFS 等软件,大幅降低应用开发和维护的复杂度成本。
* __强大的分析功能__:无论是十年前还是一秒钟前的数据,指定时间范围即可查询。数据可在时间轴上或多个设备上进行聚合。即席查询可通过 Shell, Python, R, MATLAB 随时进行。
* __高可用性和水平扩展__:通过分布式架构和一致性算法,借助多副本与集群特性,TDengine 确保了高可用性和水平扩展性,以支持关键任务应用。
* __零运维成本、零学习成本__:安装集群简单快捷,无需分库分表,实时备份。类似标准 SQL,支持 RESTful,支持 Python/Java/C/C++/C#/Go/Node.js,与 MySQL 相似,零学习成本。
* __核心开源__:除了一些辅助功能外,TDengine 的核心是开源的。企业再也不会被数据库绑定了。这使生态更加强大,产品更加稳定,开发者社区更加活跃。
采用 TDengine,可将典型的物联网、车联网、工业互联网大数据平台的总拥有成本大幅降低。但需要指出的是,因充分利用了物联网时序数据的特点,它无法用来处理网络爬虫、微博、微信、电商、ERP、CRM 等通用型数据。
![TDengine技术生态图](page://images/eco_system.png)
<center>图 1. TDengine技术生态图</center>
## <a class="anchor" id="scenes"></a>TDengine 总体适用场景
作为一个 IoT 大数据平台,TDengine 的典型适用场景是在 IoT 范畴,而且用户有一定的数据量。本文后续的介绍主要针对这个范畴里面的系统。范畴之外的系统,比如 CRM,ERP 等,不在本文讨论范围内。
### 数据源特点和需求
从数据源角度,设计人员可以从下面几个角度分析 TDengine 在目标应用系统里面的适用性。
@@ -64,4 +63,3 @@ TDengine 的模块之一是时序数据库。但除此之外,为减少研发
|要求系统可靠运行| | | √ | TDengine 的系统架构非常稳定可靠,日常维护也简单便捷,对维护人员的要求简洁明了,最大程度上杜绝人为错误和事故。|
|要求运维学习成本可控| | | √ |同上。|
|要求市场有大量人才储备| √ | | | TDengine 作为新一代产品,目前人才市场里面有经验的人员还有限。但是学习成本低,我们作为厂家也提供运维的培训和辅助服务。|
@@ -3,7 +3,7 @@
## <a class="anchor" id="queries"></a>主要查询功能
TDengine 采用 SQL 作为查询语言。应用程序可以通过 C/C++, Java, Go, C#, Python, Node.js 连接器发送 SQL 语句,用户可以通过 TDengine 提供的命令行(Command Line Interface, CLI)工具 TAOS Shell 手动执行 SQL 即席查询(Ad-Hoc Query)。TDengine 支持如下查询功能(本节末尾附有一个通过 Java 连接器发起查询的示例):
- 单列、多列数据查询
- 标签和数值的多种过滤条件:>, <, =, <>, like 等
......
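下面是一个通过 Java 连接器执行即席查询的最小示意(仅为草稿:表 t1、列 temperature 沿用本文其他示例,连接地址与账号均为占位值,请按实际环境调整):

```java
import java.sql.*;

public class AdHocQueryDemo {
    public static void main(String[] args) throws SQLException {
        // taos-jdbcdriver 的原生连接串格式;主机、端口与账号均为占位值
        String url = "jdbc:TAOS://localhost:6030/test?user=root&password=taosdata";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             // 值过滤 + 按时间戳排序 + 手动分页(LIMIT),对应上文列出的查询功能
             ResultSet rs = stmt.executeQuery(
                     "SELECT ts, temperature FROM t1"
                     + " WHERE temperature > 20 ORDER BY ts DESC LIMIT 10")) {
            while (rs.next()) {
                System.out.printf("%s  %.1f%n",
                        rs.getTimestamp("ts"), rs.getFloat("temperature"));
            }
        }
    }
}
```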
@@ -68,18 +68,18 @@ INSERT INTO test.t1 USING test.weather (ts, temperature) TAGS('beijing') VALUES(
TDengine 目前支持时间戳、数字、字符、布尔类型,与 Java 对应类型转换如下:

| TDengine DataType | JDBCType (driver 版本 < 2.0.24) | JDBCType (driver 版本 >= 2.0.24) |
| ----------------- | ------------------ | ------------------ |
| TIMESTAMP | java.lang.Long | java.sql.Timestamp |
| INT | java.lang.Integer | java.lang.Integer |
| BIGINT | java.lang.Long | java.lang.Long |
| FLOAT | java.lang.Float | java.lang.Float |
| DOUBLE | java.lang.Double | java.lang.Double |
| SMALLINT | java.lang.Short | java.lang.Short |
| TINYINT | java.lang.Byte | java.lang.Byte |
| BOOL | java.lang.Boolean | java.lang.Boolean |
| BINARY | java.lang.String | byte array |
| NCHAR | java.lang.String | java.lang.String |
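基于上表的映射关系,下面的示意代码演示了一种对驱动版本变化稳健的时间戳读取方式(草稿性质:表名 t1 沿用上文 INSERT 示例,仅作演示):

```java
import java.sql.*;

public class TimestampMappingDemo {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:TAOS://localhost:6030/test?user=root&password=taosdata";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT ts, temperature FROM t1 LIMIT 1")) {
            if (rs.next()) {
                // 2.0.24 及以上版本的驱动把 TIMESTAMP 列映射为 java.sql.Timestamp;
                // 更早版本返回 java.lang.Long(纪元毫秒),这里统一转换为 Timestamp
                Object raw = rs.getObject("ts");
                Timestamp ts = (raw instanceof Timestamp)
                        ? (Timestamp) raw
                        : new Timestamp((Long) raw);
                System.out.println(ts + " -> " + rs.getFloat("temperature"));
            }
        }
    }
}
```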
## 安装Java Connector
......
@@ -99,22 +99,32 @@ gcc -g -O0 -fPIC -shared add_one.c -o add_one.so
在创建 UDF 时,需要区分标量函数和聚合函数。如果创建时声明了错误的函数类别,则可能导致通过 SQL 指令调用函数时出错。
- 创建标量函数:`CREATE FUNCTION ids(X) AS ids(Y) OUTPUTTYPE typename(Z) bufsize B;`
  * ids(X):标量函数未来在 SQL 指令中被调用时的函数名,必须与函数实现中 udfNormalFunc 的实际名称一致;
  * ids(Y):包含 UDF 函数实现的动态链接库的库文件路径(指的是库文件在当前客户端所在主机上的保存路径,通常是指向一个 .so 文件),这个路径需要用英文单引号或英文双引号括起来;
  * typename(Z):此函数计算结果的数据类型,与上文中 udfNormalFunc 的 itype 参数不同,这里不是使用数字表示法,而是直接写类型名称即可;
  * B:系统使用的中间临时缓冲区大小,单位是字节,最小 0,最大 512,通常可以设置为 128。

例如,如下语句可以把 add_one.so 创建为系统中可用的 UDF:
```sql
CREATE FUNCTION add_one AS "/home/taos/udf_example/add_one.so" OUTPUTTYPE INT bufsize 128;
```
- 创建聚合函数:`CREATE AGGREGATE FUNCTION ids(X) AS ids(Y) OUTPUTTYPE typename(Z) bufsize B;`
  * ids(X):聚合函数未来在 SQL 指令中被调用时的函数名,必须与函数实现中 udfNormalFunc 的实际名称一致;
  * ids(Y):包含 UDF 函数实现的动态链接库的库文件路径(指的是库文件在当前客户端所在主机上的保存路径,通常是指向一个 .so 文件),这个路径需要用英文单引号或英文双引号括起来;
  * typename(Z):此函数计算结果的数据类型,与上文中 udfNormalFunc 的 itype 参数不同,这里不是使用数字表示法,而是直接写类型名称即可;
  * B:系统使用的中间临时缓冲区大小,单位是字节,最小 0,最大 512,通常可以设置为 128。
例如,如下语句可以把 abs_max.so 创建为系统中可用的 UDF:
```sql
CREATE AGGREGATE FUNCTION abs_max AS "/home/taos/udf_example/abs_max.so" OUTPUTTYPE BIGINT bufsize 128;
```
### 管理 UDF
- 删除指定名称的用户定义函数:`DROP FUNCTION ids(X);`
  * ids(X):此参数的含义与 CREATE 指令中的 ids(X) 参数一致,也即要删除的函数的名字,例如 `DROP FUNCTION add_one;`。
- 显示系统中当前可用的所有 UDF:`SHOW FUNCTIONS;`
### 调用 UDF
@@ -129,8 +139,9 @@ SELECT X(c) FROM table/stable;
## UDF 的一些使用限制
在当前版本下,使用 UDF 存在如下这些限制:
1. 在创建和调用 UDF 时,服务端和客户端都只支持 Linux 操作系统;
2. UDF 不能与系统内建的 SQL 函数混合使用;
3. UDF 只支持以单个数据列作为输入;
4. UDF 只要创建成功,就会被持久化存储到 MNode 节点中;
5. 无法通过 RESTful 接口来创建 UDF;
6. UDF 在 SQL 中定义的函数名,必须与 .so 库文件实现中的接口函数名前缀保持一致,也即必须是 udfNormalFunc 的名称,而且不可与 TDengine 中已有的内建 SQL 函数重名。
@@ -1453,8 +1453,6 @@ SELECT function_list FROM tb_name
SELECT function_list FROM stb_name
  [WHERE where_condition]
  [INTERVAL(interval [, offset]) [SLIDING sliding]]
  [FILL({NONE | VALUE | PREV | NULL | LINEAR | NEXT})]
  [GROUP BY tags]
@@ -1465,8 +1463,8 @@ SELECT function_list FROM stb_name
1. 时间窗口:聚合时间段的窗口宽度由关键词 INTERVAL 指定,最短时间间隔 10 毫秒(10a);并且支持偏移 offset(偏移必须小于间隔),也即时间窗口划分与“UTC 时刻 0”相比的偏移量。SLIDING 语句用于指定聚合时间段的前向增量,也即每次窗口向前滑动的时长。当 SLIDING 与 INTERVAL 取值相等的时候,滑动窗口即为翻转窗口。
   * 从 2.1.5.0 版本开始,INTERVAL 语句允许的最短时间间隔调整为 1 微秒(1u),当然如果所查询的 DATABASE 的时间精度设置为毫秒级,那么允许的最短时间间隔为 1 毫秒(1a)。
   * **注意:**用到 INTERVAL 语句时,除非极特殊的情况,都要求把客户端和服务端的 taos.cfg 配置文件中的 timezone 参数配置为相同的取值,以避免时间处理函数频繁进行跨时区转换而导致的严重性能影响。
2. 状态窗口:使用整数或布尔值来标识产生记录时设备的状态量,产生的记录如果具有相同的状态量取值则归属于同一个状态窗口,数值改变后该窗口关闭。状态量所对应的列作为 STATE_WINDOW 语句的参数来指定。(状态窗口暂不支持对超级表使用)
3. 会话窗口:时间戳所在的列由 SESSION 语句的 ts_col 参数指定,会话窗口根据相邻两条记录的时间戳差值来确定是否属于同一个会话——如果时间戳差异在 tol_val 以内,则认为记录仍属于同一个窗口;如果时间变化超过 tol_val,则自动开启下一个窗口。(会话窗口暂不支持对超级表使用)
- WHERE 语句可以指定查询的起止时间和其他过滤条件。
- FILL 语句指定某一窗口区间数据缺失的情况下的填充模式。填充模式包括以下几种:
  1. 不进行填充:NONE(默认填充模式)。
......
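下面是一个把 INTERVAL、SLIDING 与 FILL 组合起来、通过 Java 连接器执行的窗口查询示意(草稿性质:表 t1、列 temperature 与时间范围均为假设):

```java
import java.sql.*;

public class WindowQueryDemo {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:TAOS://localhost:6030/test?user=root&password=taosdata";
        // 10 分钟窗口,每 5 分钟向前滑动一次;缺数据的窗口用前一个非 NULL 值填充
        String sql = "SELECT avg(temperature) FROM t1"
                + " WHERE ts >= '2021-07-01 00:00:00' AND ts < '2021-07-02 00:00:00'"
                + " INTERVAL(10m) SLIDING(5m) FILL(PREV)";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(sql)) {
            while (rs.next()) {
                // 结果集的第一列是各窗口的起始时间戳
                System.out.println(rs.getTimestamp(1) + "  avg=" + rs.getDouble(2));
            }
        }
    }
}
```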
@@ -6,17 +6,16 @@ TDengine is an innovative Big Data processing product launched by TAOS Data in t
One of the modules of TDengine is the time-series database. However, in addition to this, to reduce the complexity of research and development and the difficulty of system operation, TDengine also provides functions such as caching, message queuing, subscription, stream computing, etc. TDengine provides a full-stack technical solution for the processing of IoT and Industrial Internet Big Data. It is an efficient and easy-to-use IoT Big Data platform. Compared with typical Big Data platforms such as Hadoop, TDengine has the following distinct characteristics:
- **Performance improvement over 10 times**: An innovative data storage structure is defined, with which every single core can process at least 20,000 requests per second, insert millions of data points, and read more than 10 million data points, which is more than 10 times faster than other existing general databases.
- **Reduce the cost of hardware or cloud services to 1/5**: Due to its ultra-performance, TDengine’s computing resources consumption is less than 1/5 of other common Big Data solutions; through columnar storage and advanced compression algorithms, the storage consumption is less than 1/10 of other general databases.
- **Full-stack time-series data processing engine**: Integrate database, message queue, cache, stream computing, and other functions, and the applications do not need to integrate with software such as Kafka/Redis/HBase/Spark/HDFS, thus greatly reducing the complexity cost of application development and maintenance.
- **Highly Available and Horizontally Scalable**: With the distributed architecture and consistency algorithm, via multi-replication and clustering features, TDengine ensures high availability and horizontal scalability to support mission-critical applications.
- **Zero operation cost & zero learning cost**: Installing clusters is simple and quick, with real-time backup built-in, and no need to split libraries or tables. Similar to standard SQL, TDengine can support RESTful, Python/Java/C/C++/C#/Go/Node.js, and similar to MySQL with zero learning cost.
- **Core is Open Sourced**: Except for some auxiliary features, the core of TDengine is open-sourced. Enterprises won't be locked in by the database anymore. The ecosystem is stronger, products are more stable, and developer communities are more active.
With TDengine, the total cost of ownership of typical IoT, Internet of Vehicles, and Industrial Internet Big Data platforms can be greatly reduced. However, since it makes full use of the characteristics of IoT time-series data, TDengine cannot be used to process general data from web crawlers, microblogs, WeChat, e-commerce, ERP, CRM, and other sources.
![TDengine Technology Ecosystem](page://images/eco_system.png)
<center>Figure 1. TDengine Technology Ecosystem</center>
## <a class="anchor" id="scenes"></a>Overall Scenarios of TDengine ## <a class="anchor" id="scenes"></a>Overall Scenarios of TDengine
...@@ -62,4 +61,4 @@ From the perspective of data sources, designers can analyze the applicability of ...@@ -62,4 +61,4 @@ From the perspective of data sources, designers can analyze the applicability of
| ------------------------------------------------- | ------------------ | ----------------------- | ------------------- | ------------------------------------------------------------ | | ------------------------------------------------- | ------------------ | ----------------------- | ------------------- | ------------------------------------------------------------ |
| Require system with high-reliability | | | √ | TDengine has a very robust and reliable system architecture to implement simple and convenient daily operation with streamlined experiences for operators, thus human errors and accidents are eliminated to the greatest extent. | | Require system with high-reliability | | | √ | TDengine has a very robust and reliable system architecture to implement simple and convenient daily operation with streamlined experiences for operators, thus human errors and accidents are eliminated to the greatest extent. |
| Require controllable operation learning cost | | | √ | As above. | | Require controllable operation learning cost | | | √ | As above. |
| Require abundant talent supply | √ | | | As a new-generation product, it’s still difficult to find talents with TDengine experiences from market. However, the learning cost is low. As the vendor, we also provide extensive operation training and counselling services. | | Require abundant talent supply | √ | | | As a new-generation product, it’s still difficult to find talents with TDengine experiences from the market. However, the learning cost is low. As the vendor, we also provide extensive operation training and counseling services. |
\ No newline at end of file
@@ -155,9 +155,7 @@ The design of TDengine is based on the assumption that one single node or softwa
The logical structure diagram of TDengine's distributed architecture is as follows:
![TDengine architecture diagram](page://images/architecture/structure.png)
<center> Figure 1: TDengine architecture diagram </center>
A complete TDengine system runs on one or more physical nodes. Logically, it includes data node (dnode), TDengine application driver (TAOSC) and application (app). There are one or more data nodes in the system, which form a cluster. The application interacts with the TDengine cluster through TAOSC's API. The following is a brief introduction to each logical unit.
@@ -199,8 +197,8 @@ A complete TDengine system runs on one or more physical nodes. Logically, it inc
To explain the relationship between vnode, mnode, TAOSC and application and their respective roles, the following is an analysis of a typical data writing process.
![typical process of TDengine](page://images/architecture/message.png)
<center> Figure 2: Typical process of TDengine </center>
1. Application initiates a request to insert data through JDBC, ODBC, or other APIs.
2. TAOSC checks if meta data exists for the table in the cache. If so, go straight to Step 4. If not, TAOSC sends a get meta-data request to mnode.
@@ -268,7 +266,8 @@ If a database has N replicas, thus a virtual node group has N virtual nodes, but
The master vnode uses a writing process as follows:
![TDengine Master Writing Process](page://images/architecture/write_master.png)
<center> Figure 3: TDengine Master writing process </center>
1. Master vnode receives the application data insertion request, verifies, and moves to next step;
2. If the system configuration parameter `walLevel` is greater than 0, vnode will write the original request packet into database log file WAL. If walLevel is set to 2 and fsync is set to 0, TDengine will make WAL data written immediately to ensure that even if the system goes down, all data can be recovered from the database log file;
@@ -281,8 +280,8 @@ Figure 3: TDengine Master writing process
For a slave vnode, the write process is as follows:
![TDengine Slave Writing Process](page://images/architecture/write_slave.png)
<center> Figure 4: TDengine Slave Writing Process </center>
1. Slave vnode receives a data insertion request forwarded by Master vnode;
2. If the system configuration parameter `walLevel` is greater than 0, vnode will write the original request packet into database log file WAL. If walLevel is set to 2 and fsync is set to 0, TDengine will make WAL data written immediately to ensure that even if the system goes down, all data can be recovered from the database log file;
@@ -355,8 +354,6 @@ When data is written to disk, it is decided whether to compress the data accordi
By default, TDengine saves all data in /var/lib/taos directory, and the data files of each vnode are saved in a different directory under this directory. In order to expand the storage space, minimize the bottleneck of file reading and improve the data throughput rate, TDengine can configure the system parameter “dataDir” to allow multiple mounted hard disks to be used by system at the same time. In addition, TDengine also provides the function of tiered data storage, i.e. storage on different storage media according to the time stamps of data files. For example, the latest data is stored on SSD, the data for more than one week is stored on local hard disk, and the data for more than four weeks is stored on network storage device, thus reducing the storage cost and ensuring efficient data access. The movement of data on different storage media is automatically done by the system and completely transparent to applications. Tiered storage of data is also configured through the system parameter “dataDir”.
The dataDir format is as follows:
```
dataDir data_path [tier_level]
```
Where data_path is the folder path of the mount point and tier_level is the media storage-tier. The higher the media storage-tier, the older the data file. Multiple hard disks can be mounted at the same storage-tier, and data files on the same storage-tier are distributed on all hard disks within the tier. TDengine supports up to 3 tiers of storage, so tier_level values are 0, 1, and 2. When configuring dataDir, there must be only one mount path without specifying tier_level, which is called the special mount disk (path). The mount path defaults to level 0 storage media and contains special file links, which cannot be removed, otherwise it will have a devastating impact on the written data.
Suppose a physical node has six mountable hard disks /mnt/disk1, /mnt/disk2, …, /mnt/disk6, where disk1 and disk2 need to be designated as level 0 storage media, disk3 and disk4 are level 1 storage media, and disk5 and disk6 are level 2 storage media. Disk1 is the special mount disk; you can configure it in /etc/taos/taos.cfg as follows:
```
dataDir /mnt/disk1/taos
dataDir /mnt/disk2/taos 0
dataDir /mnt/disk3/taos 1
dataDir /mnt/disk4/taos 1
dataDir /mnt/disk5/taos 2
dataDir /mnt/disk6/taos 2
```
Mounted disks can also be non-local network disks, as long as the system can access them.
Note: Tiered Storage is only supported in Enterprise Edition.
## <a class="anchor" id="query"></a>Data Query
@@ -419,7 +413,7 @@ For the data collected by device D1001, the number of records per hour is counte
TDengine creates a separate table for each data collection point, but in practical applications, it is often necessary to aggregate data from different data collection points. In order to perform aggregation operations efficiently, TDengine introduces the concept of STable. STable is used to represent a specific type of data collection point. It is a table set containing multiple tables. The schema of each table in the set is the same, but each table has its own static tag. The tags can be multiple and be added, deleted and modified at any time. Applications can aggregate or statistically operate all or a subset of tables under a STable by specifying tag filters, thus greatly simplifying the development of applications. The process is shown in the following figure:
![Diagram of multi-table aggregation query](page://images/architecture/multi_tables.png)
<center> Figure 5: Diagram of multi-table aggregation query </center>
1. Application sends a query condition to system;
2. TAOSC sends the STable name to Meta Node (management node);
......
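A minimal sketch of such a tag-filtered aggregation issued through the JDBC connector (the STable name `meters` and tag `location` are illustrative assumptions, not defined in this document):

```java
import java.sql.*;

public class STableAggregationDemo {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:TAOS://localhost:6030/power?user=root&password=taosdata";
        // One aggregation over all tables of the STable whose tag matches the filter;
        // the grouping tag is returned as an additional result column.
        String sql = "SELECT avg(current) FROM meters"
                + " WHERE location LIKE 'beijing%' GROUP BY location";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(sql)) {
            while (rs.next()) {
                System.out.println(rs.getString("location")
                        + "  avg(current)=" + rs.getDouble(1));
            }
        }
    }
}
```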
@@ -2,7 +2,7 @@
## <a class="anchor" id="queries"></a> Main Query Features
TDengine uses SQL as the query language. Applications can send SQL statements through the C/C++, Java, Go, C#, Python, and Node.js connectors, and users can manually execute ad-hoc SQL queries through the Command Line Interface (CLI) tool TAOS Shell provided by TDengine. TDengine supports the following query functions:
- Single-column and multi-column data query
- Multiple filters for tags and numeric values: >, <, =, <>, like, etc.
@@ -96,4 +96,4 @@ Query OK, 5 row(s) in set (0.001521s)
In IoT scenarios, it is difficult to synchronize the time stamps of data collected at each point, but many analysis algorithms (such as FFT) need to align the collected data strictly at equal intervals of time. In many systems, users have to write their own programs to handle this, but the down sampling operation of TDengine can be used to solve the problem easily. If there is no collected data in an interval, TDengine also provides interpolation calculation functions.
For details of syntax rules, please refer to the [Time-dimension Aggregation section of TAOS SQL](https://www.taosdata.com/en/documentation/taos-sql#aggregation).
\ No newline at end of file
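A minimal sketch of such a down sampling query with interpolation, issued through the JDBC connector (table, column, and time-range values are illustrative; see the TAOS SQL link above for the authoritative syntax):

```java
import java.sql.*;

public class DownSamplingDemo {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:TAOS://localhost:6030/test?user=root&password=taosdata";
        // Align readings to 1-minute windows; empty windows are linearly interpolated.
        String sql = "SELECT avg(temperature) FROM weather"
                + " WHERE ts >= '2021-07-01 00:00:00' AND ts < '2021-07-01 01:00:00'"
                + " INTERVAL(1m) FILL(LINEAR)";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(sql)) {
            while (rs.next()) {
                // First column is the start timestamp of each window.
                System.out.println(rs.getTimestamp(1) + "  avg=" + rs.getDouble(2));
            }
        }
    }
}
```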
@@ -69,18 +69,18 @@ INSERT INTO test.t1 USING test.weather (ts, temperature) TAGS('beijing') VALUES(
TDengine supports the following data types, which map to Java data types as follows:

| TDengine DataType | JDBCType (driver version < 2.0.24) | JDBCType (driver version >= 2.0.24) |
| ----------------- | ------------------ | ------------------ |
| TIMESTAMP | java.lang.Long | java.sql.Timestamp |
| INT | java.lang.Integer | java.lang.Integer |
| BIGINT | java.lang.Long | java.lang.Long |
| FLOAT | java.lang.Float | java.lang.Float |
| DOUBLE | java.lang.Double | java.lang.Double |
| SMALLINT | java.lang.Short | java.lang.Short |
| TINYINT | java.lang.Byte | java.lang.Byte |
| BOOL | java.lang.Boolean | java.lang.Boolean |
| BINARY | java.lang.String | byte array |
| NCHAR | java.lang.String | java.lang.String |
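Building on this mapping, the sketch below reads a BINARY column with a driver version >= 2.0.24, where the value now arrives as a byte array rather than a String (the table and column names here are hypothetical):

```java
import java.nio.charset.StandardCharsets;
import java.sql.*;

public class BinaryColumnDemo {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:TAOS://localhost:6030/test?user=root&password=taosdata";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             // "sensor_log" and its "note" BINARY column are made-up example names
             ResultSet rs = stmt.executeQuery("SELECT ts, note FROM sensor_log LIMIT 1")) {
            if (rs.next()) {
                Timestamp ts = rs.getTimestamp("ts"); // java.sql.Timestamp since 2.0.24
                byte[] note = rs.getBytes("note");    // BINARY now maps to byte[]
                System.out.println(ts + ": " + new String(note, StandardCharsets.UTF_8));
            }
        }
    }
}
```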
## Install Java connector
......
@@ -2499,6 +2499,9 @@ int32_t addExprAndResultField(SSqlCmd* pCmd, SQueryInfo* pQueryInfo, int32_t col
    case TSDB_FUNC_MAX:
    case TSDB_FUNC_DIFF:
    case TSDB_FUNC_DERIVATIVE:
    case TSDB_FUNC_CEIL:
    case TSDB_FUNC_FLOOR:
    case TSDB_FUNC_ROUND:
    case TSDB_FUNC_STDDEV:
    case TSDB_FUNC_LEASTSQR: {
      // 1. valid the number of parameters
@@ -2689,6 +2692,10 @@ int32_t addExprAndResultField(SSqlCmd* pCmd, SQueryInfo* pQueryInfo, int32_t col
      pTableMetaInfo = tscGetMetaInfo(pQueryInfo, index.tableIndex);
      if (pParamElem->pNode->columnName.z == NULL) {
        return invalidOperationMsg(tscGetErrorMsgPayload(pCmd), msg2);
      }
      // functions can not be applied to tags
      if ((index.columnIndex >= tscGetNumOfColumns(pTableMetaInfo->pTableMeta)) || (index.columnIndex < 0)) {
        return invalidOperationMsg(tscGetErrorMsgPayload(pCmd), msg6);
@@ -3422,6 +3429,7 @@ static bool functionCompatibleCheck(SQueryInfo* pQueryInfo, bool joinQuery, bool
  int32_t scalarUdf = 0;
  int32_t prjNum = 0;
  int32_t aggNum = 0;
  int32_t scalNum = 0;
  size_t numOfExpr = tscNumOfExprs(pQueryInfo);
  assert(numOfExpr > 0);
@@ -3453,6 +3461,10 @@ static bool functionCompatibleCheck(SQueryInfo* pQueryInfo, bool joinQuery, bool
      ++prjNum;
    }
    if (functionId == TSDB_FUNC_CEIL || functionId == TSDB_FUNC_FLOOR || functionId == TSDB_FUNC_ROUND) {
      ++scalNum;
    }
    if (functionId == TSDB_FUNC_PRJ && (pExpr1->base.colInfo.colId == PRIMARYKEY_TIMESTAMP_COL_INDEX || TSDB_COL_IS_UD_COL(pExpr1->base.colInfo.flag))) {
      continue;
    }
@@ -3474,15 +3486,19 @@ static bool functionCompatibleCheck(SQueryInfo* pQueryInfo, bool joinQuery, bool
    }
  }
  aggNum = (int32_t)size - prjNum - scalNum - aggUdf - scalarUdf;
  assert(aggNum >= 0);
  if (aggUdf > 0 && (prjNum > 0 || aggNum > 0 || scalNum > 0 || scalarUdf > 0)) {
    return false;
  }
  if (scalarUdf > 0 && (aggNum > 0 || scalNum > 0)) {
    return false;
  }
  if (aggNum > 0 && scalNum > 0) {
    return false;
  }
@@ -6259,7 +6275,9 @@ int32_t validateFunctionsInIntervalOrGroupbyQuery(SSqlCmd* pCmd, SQueryInfo* pQu
    }
    int32_t f = pExpr->base.functionId;
    if ((f == TSDB_FUNC_PRJ && pExpr->base.numOfParams == 0) || f == TSDB_FUNC_DIFF || f == TSDB_FUNC_ARITHM || f == TSDB_FUNC_DERIVATIVE ||
        f == TSDB_FUNC_CEIL || f == TSDB_FUNC_FLOOR || f == TSDB_FUNC_ROUND)
    {
      isProjectionFunction = true;
      break;
    }
@@ -6861,6 +6879,7 @@ static int32_t checkUpdateTagPrjFunctions(SQueryInfo* pQueryInfo, char* msg) {
  const char* msg2 = "aggregation function should not be mixed up with projection";
  bool tagTsColExists = false;
  int16_t numOfScalar = 0;
  int16_t numOfSelectivity = 0;
  int16_t numOfAggregation = 0;
@@ -6894,6 +6913,8 @@ static int32_t checkUpdateTagPrjFunctions(SQueryInfo* pQueryInfo, char* msg) {
    if ((aAggs[functionId].status & TSDB_FUNCSTATE_SELECTIVITY) != 0) {
      numOfSelectivity++;
    } else if ((aAggs[functionId].status & TSDB_FUNCSTATE_SCALAR) != 0) {
      numOfScalar++;
    } else {
      numOfAggregation++;
    }
......
@@ -269,7 +269,10 @@ bool tscIsProjectionQueryOnSTable(SQueryInfo* pQueryInfo, int32_t tableIndex) {
        functionId != TSDB_FUNC_DIFF &&
        functionId != TSDB_FUNC_DERIVATIVE &&
        functionId != TSDB_FUNC_TS_DUMMY &&
        functionId != TSDB_FUNC_TID_TAG &&
        functionId != TSDB_FUNC_CEIL &&
        functionId != TSDB_FUNC_FLOOR &&
        functionId != TSDB_FUNC_ROUND) {
      return false;
    }
  }
@@ -1466,7 +1469,12 @@ void tscFreeSubobj(SSqlObj* pSql) {
  tscDebug("0x%"PRIx64" start to free sub SqlObj, numOfSub:%d", pSql->self, pSql->subState.numOfSub);
  for(int32_t i = 0; i < pSql->subState.numOfSub; ++i) {
    if (pSql->pSubs[i] != NULL) {
      tscDebug("0x%"PRIx64" free sub SqlObj:0x%"PRIx64", index:%d", pSql->self, pSql->pSubs[i]->self, i);
    } else {
      /* just for python error test case */
      tscDebug("0x%"PRIx64" free sub SqlObj:0x0, index:%d", pSql->self, i);
    }
    taos_free_result(pSql->pSubs[i]);
    pSql->pSubs[i] = NULL;
  }
......
@@ -165,12 +165,14 @@ def _crow_binary_to_python_block(data, num_of_rows, nbytes=None, precision=Field
    assert nbytes is not None
    res = []
    for i in range(abs(num_of_rows)):
        rbyte = ctypes.cast(data + nbytes * i, ctypes.POINTER(ctypes.c_short))[:1].pop()
        chars = ctypes.cast(c_char_p(data + nbytes * i + 2), ctypes.POINTER(c_char * rbyte))
        buffer = create_string_buffer(rbyte + 1)
        buffer[:rbyte] = chars[0][:rbyte]
        if rbyte == 1 and buffer[0] == b'\xff':
            res.append(None)
        else:
            res.append(cast(buffer, c_char_p).value.decode())
    return res
@@ -179,11 +181,14 @@ def _crow_nchar_to_python_block(data, num_of_rows, nbytes=None, precision=FieldT
    assert nbytes is not None
    res = []
    for i in range(abs(num_of_rows)):
        rbyte = ctypes.cast(data + nbytes * i, ctypes.POINTER(ctypes.c_short))[:1].pop()
        chars = ctypes.cast(c_char_p(data + nbytes * i + 2), ctypes.POINTER(c_char * rbyte))
        buffer = create_string_buffer(rbyte + 1)
        buffer[:rbyte] = chars[0][:rbyte]
        if rbyte == 4 and buffer[:4] == b'\xff'*4:
            res.append(None)
        else:
            res.append(cast(buffer, c_char_p).value.decode())
    return res
......
@@ -104,6 +104,7 @@ extern char configDir[];
#define DATATYPE_BUFF_LEN (SMALL_BUFF_LEN*3)
#define NOTE_BUFF_LEN (SMALL_BUFF_LEN*16)
#define DEFAULT_NTHREADS 8
#define DEFAULT_TIMESTAMP_STEP 1
#define DEFAULT_INTERLACE_ROWS 0
#define DEFAULT_DATATYPE_NUM 1
@@ -227,7 +228,7 @@ typedef struct SArguments_S {
    char *   sqlFile;
    bool     use_metric;
    bool     drop_database;
    bool     aggr_func;
    bool     answer_yes;
    bool     debug_print;
    bool     verbose_print;
@@ -375,8 +376,7 @@ typedef struct SDbs_S {
    char     password[SHELL_MAX_PASSWORD_LEN];
    char     resultFile[MAX_FILE_NAME_LEN];
    bool     use_metric;
    bool     aggr_func;
    bool     asyncMode;
    uint32_t threadCount;
@@ -605,6 +605,9 @@ char *g_rand_current_buff = NULL;
char *g_rand_phase_buff = NULL;
char *g_randdouble_buff = NULL;
char *g_aggreFuncDemo[] = {"*", "count(*)", "avg(current)", "sum(current)",
    "max(current)", "min(current)", "first(current)", "last(current)"};
char *g_aggreFunc[] = {"*", "count(*)", "avg(C0)", "sum(C0)",
    "max(C0)", "min(C0)", "first(C0)", "last(C0)"};
@@ -628,7 +631,7 @@ SArguments g_args = {
    NULL,            // sqlFile
    true,            // use_metric
    true,            // drop_database
    false,           // aggr_func
    false,           // debug_print
    false,           // verbose_print
    false,           // performance statistic print
@@ -646,7 +649,7 @@ SArguments g_args = {
    64,              // binwidth
    4,               // columnCount, timestamp + float + int + float
    20 + FLOAT_BUFF_LEN + INT_BUFF_LEN + FLOAT_BUFF_LEN, // lenOfOneRow
    DEFAULT_NTHREADS, // nthreads
    0,               // insert_interval
    DEFAULT_TIMESTAMP_STEP, // timestamp_step
    1,               // query_times
...@@ -748,19 +751,19 @@ static void printHelp() { ...@@ -748,19 +751,19 @@ static void printHelp() {
char indent[10] = " "; char indent[10] = " ";
printf("%s\n\n", "Usage: taosdemo [OPTION...]"); printf("%s\n\n", "Usage: taosdemo [OPTION...]");
printf("%s%s%s%s\n", indent, "-f, --file=FILE", "\t\t", printf("%s%s%s%s\n", indent, "-f, --file=FILE", "\t\t",
"The meta file to the execution procedure. Default is './meta.json'."); "The meta file to the execution procedure.");
printf("%s%s%s%s\n", indent, "-u, --user=USER", "\t\t", printf("%s%s%s%s\n", indent, "-u, --user=USER", "\t\t",
"The user name to use when connecting to the server."); "The user name to use when connecting to the server.");
#ifdef _TD_POWER_ #ifdef _TD_POWER_
printf("%s%s%s%s\n", indent, "-p, --password", "\t\t", printf("%s%s%s%s\n", indent, "-p, --password", "\t\t",
"The password to use when connecting to the server. Default is 'powerdb'"); "The password to use when connecting to the server. By default is 'powerdb'");
printf("%s%s%s%s\n", indent, "-c, --config-dir=CONFIG_DIR", "\t", printf("%s%s%s%s\n", indent, "-c, --config-dir=CONFIG_DIR", "\t",
"Configuration directory. Default is '/etc/power/'."); "Configuration directory. By default is '/etc/power/'.");
#elif (_TD_TQ_ == true) #elif (_TD_TQ_ == true)
printf("%s%s%s%s\n", indent, "-p, --password", "\t\t", printf("%s%s%s%s\n", indent, "-p, --password", "\t\t",
"The password to use when connecting to the server. Default is 'tqueue'"); "The password to use when connecting to the server. By default is 'tqueue'");
printf("%s%s%s%s\n", indent, "-c, --config-dir=CONFIG_DIR", "\t", printf("%s%s%s%s\n", indent, "-c, --config-dir=CONFIG_DIR", "\t",
"Configuration directory. Default is '/etc/tq/'."); "Configuration directory. By default is '/etc/tq/'.");
#else #else
printf("%s%s%s%s\n", indent, "-p, --password", "\t\t", printf("%s%s%s%s\n", indent, "-p, --password", "\t\t",
"The password to use when connecting to the server."); "The password to use when connecting to the server.");
...@@ -772,24 +775,24 @@ static void printHelp() { ...@@ -772,24 +775,24 @@ static void printHelp() {
printf("%s%s%s%s\n", indent, "-P, --port=PORT", "\t\t", printf("%s%s%s%s\n", indent, "-P, --port=PORT", "\t\t",
"The TCP/IP port number to use for the connection."); "The TCP/IP port number to use for the connection.");
printf("%s%s%s%s\n", indent, "-I, --interface=INTERFACE", "\t", printf("%s%s%s%s\n", indent, "-I, --interface=INTERFACE", "\t",
"The interface (taosc, rest, and stmt) taosdemo uses. Default is 'taosc'."); "The interface (taosc, rest, and stmt) taosdemo uses. By default use 'taosc'.");
printf("%s%s%s%s\n", indent, "-d, --database=DATABASE", "\t", printf("%s%s%s%s\n", indent, "-d, --database=DATABASE", "\t",
"Destination database. Default is 'test'."); "Destination database. By default is 'test'.");
printf("%s%s%s%s\n", indent, "-a, --replica=REPLICA", "\t\t", printf("%s%s%s%s\n", indent, "-a, --replica=REPLICA", "\t\t",
"Set the replica parameters of the database, Default 1, min: 1, max: 3."); "Set the replica parameters of the database, By default use 1, min: 1, max: 3.");
printf("%s%s%s%s\n", indent, "-m, --table-prefix=TABLEPREFIX", "\t", printf("%s%s%s%s\n", indent, "-m, --table-prefix=TABLEPREFIX", "\t",
"Table prefix name. Default is 'd'."); "Table prefix name. By default use 'd'.");
printf("%s%s%s%s\n", indent, "-s, --sql-file=FILE", "\t\t", printf("%s%s%s%s\n", indent, "-s, --sql-file=FILE", "\t\t",
"The select sql file."); "The select sql file.");
printf("%s%s%s%s\n", indent, "-N, --normal-table", "\t\t", "Use normal table flag."); printf("%s%s%s%s\n", indent, "-N, --normal-table", "\t\t", "Use normal table flag.");
printf("%s%s%s%s\n", indent, "-o, --output=FILE", "\t\t", printf("%s%s%s%s\n", indent, "-o, --output=FILE", "\t\t",
"Direct output to the named file. Default is './output.txt'."); "Direct output to the named file. By default use './output.txt'.");
printf("%s%s%s%s\n", indent, "-q, --query-mode=MODE", "\t\t", printf("%s%s%s%s\n", indent, "-q, --query-mode=MODE", "\t\t",
"Query mode -- 0: SYNC, 1: ASYNC. Default is SYNC."); "Query mode -- 0: SYNC, 1: ASYNC. By default use SYNC.");
printf("%s%s%s%s\n", indent, "-b, --data-type=DATATYPE", "\t", printf("%s%s%s%s\n", indent, "-b, --data-type=DATATYPE", "\t",
"The data_type of columns, default: FLOAT, INT, FLOAT."); "The data_type of columns, By default use: FLOAT, INT, FLOAT.");
printf("%s%s%s%s%d\n", indent, "-w, --binwidth=WIDTH", "\t\t", printf("%s%s%s%s%d\n", indent, "-w, --binwidth=WIDTH", "\t\t",
"The width of data_type 'BINARY' or 'NCHAR'. Default is ", "The width of data_type 'BINARY' or 'NCHAR'. By default use ",
g_args.binwidth); g_args.binwidth);
printf("%s%s%s%s%d%s%d\n", indent, "-l, --columns=COLUMNS", "\t\t", printf("%s%s%s%s%d%s%d\n", indent, "-l, --columns=COLUMNS", "\t\t",
"The number of columns per record. Demo mode by default is ", "The number of columns per record. Demo mode by default is ",
...@@ -798,32 +801,32 @@ static void printHelp() { ...@@ -798,32 +801,32 @@ static void printHelp() {
MAX_NUM_COLUMNS); MAX_NUM_COLUMNS);
printf("%s%s%s%s\n", indent, indent, indent, printf("%s%s%s%s\n", indent, indent, indent,
"\t\t\t\tAll of the new column(s) type is INT. If use -b to specify column type, -l will be ignored."); "\t\t\t\tAll of the new column(s) type is INT. If use -b to specify column type, -l will be ignored.");
printf("%s%s%s%s\n", indent, "-T, --threads=NUMBER", "\t\t", printf("%s%s%s%s%d.\n", indent, "-T, --threads=NUMBER", "\t\t",
"The number of threads. Default is 10."); "The number of threads. By default use ", DEFAULT_NTHREADS);
printf("%s%s%s%s\n", indent, "-i, --insert-interval=NUMBER", "\t", printf("%s%s%s%s\n", indent, "-i, --insert-interval=NUMBER", "\t",
"The sleep time (ms) between insertion. Default is 0."); "The sleep time (ms) between insertion. By default is 0.");
printf("%s%s%s%s%d.\n", indent, "-S, --time-step=TIME_STEP", "\t", printf("%s%s%s%s%d.\n", indent, "-S, --time-step=TIME_STEP", "\t",
"The timestamp step between insertion. Default is ", "The timestamp step between insertion. By default is ",
DEFAULT_TIMESTAMP_STEP); DEFAULT_TIMESTAMP_STEP);
printf("%s%s%s%s%d.\n", indent, "-B, --interlace-rows=NUMBER", "\t", printf("%s%s%s%s%d.\n", indent, "-B, --interlace-rows=NUMBER", "\t",
"The interlace rows of insertion. Default is ", "The interlace rows of insertion. By default is ",
DEFAULT_INTERLACE_ROWS); DEFAULT_INTERLACE_ROWS);
printf("%s%s%s%s\n", indent, "-r, --rec-per-req=NUMBER", "\t", printf("%s%s%s%s\n", indent, "-r, --rec-per-req=NUMBER", "\t",
"The number of records per request. Default is 30000."); "The number of records per request. By default is 30000.");
printf("%s%s%s%s\n", indent, "-t, --tables=NUMBER", "\t\t", printf("%s%s%s%s\n", indent, "-t, --tables=NUMBER", "\t\t",
"The number of tables. Default is 10000."); "The number of tables. By default is 10000.");
printf("%s%s%s%s\n", indent, "-n, --records=NUMBER", "\t\t", printf("%s%s%s%s\n", indent, "-n, --records=NUMBER", "\t\t",
"The number of records per table. Default is 10000."); "The number of records per table. By default is 10000.");
printf("%s%s%s%s\n", indent, "-M, --random", "\t\t\t", printf("%s%s%s%s\n", indent, "-M, --random", "\t\t\t",
"The value of records generated are totally random."); "The value of records generated are totally random.");
printf("%s\n", "\t\t\t\tThe default is to simulate power equipment scenario."); printf("%s\n", "\t\t\t\tBy default to simulate power equipment scenario.");
printf("%s%s%s%s\n", indent, "-x, --no-insert", "\t\t", printf("%s%s%s%s\n", indent, "-x, --aggr-func", "\t\t",
"No-insert flag."); "Test aggregation functions after insertion.");
printf("%s%s%s%s\n", indent, "-y, --answer-yes", "\t\t", "Default input yes for prompt."); printf("%s%s%s%s\n", indent, "-y, --answer-yes", "\t\t", "Input yes for prompt.");
printf("%s%s%s%s\n", indent, "-O, --disorder=NUMBER", "\t\t", printf("%s%s%s%s\n", indent, "-O, --disorder=NUMBER", "\t\t",
"Insert order mode--0: In order, 1 ~ 50: disorder ratio. Default is in order."); "Insert order mode--0: In order, 1 ~ 50: disorder ratio. By default is in order.");
printf("%s%s%s%s\n", indent, "-R, --disorder-range=NUMBER", "\t", printf("%s%s%s%s\n", indent, "-R, --disorder-range=NUMBER", "\t",
"Out of order data's range, ms, default is 1000."); "Out of order data's range. Unit is ms. By default is 1000.");
printf("%s%s%s%s\n", indent, "-g, --debug", "\t\t\t", printf("%s%s%s%s\n", indent, "-g, --debug", "\t\t\t",
"Print debug info."); "Print debug info.");
printf("%s%s%s%s\n", indent, "-?, --help\t", "\t\t", printf("%s%s%s%s\n", indent, "-?, --help\t", "\t\t",
...@@ -1712,13 +1715,14 @@ static void parse_args(int argc, char *argv[], SArguments *arguments) { ...@@ -1712,13 +1715,14 @@ static void parse_args(int argc, char *argv[], SArguments *arguments) {
} }
} else if ((strcmp(argv[i], "-N") == 0) } else if ((strcmp(argv[i], "-N") == 0)
|| (0 == strcmp(argv[i], "--normal-table"))) { || (0 == strcmp(argv[i], "--normal-table"))) {
arguments->demo_mode = false;
arguments->use_metric = false; arguments->use_metric = false;
} else if ((strcmp(argv[i], "-M") == 0) } else if ((strcmp(argv[i], "-M") == 0)
|| (0 == strcmp(argv[i], "--random"))) { || (0 == strcmp(argv[i], "--random"))) {
arguments->demo_mode = false; arguments->demo_mode = false;
} else if ((strcmp(argv[i], "-x") == 0) } else if ((strcmp(argv[i], "-x") == 0)
|| (0 == strcmp(argv[i], "--no-insert"))) { || (0 == strcmp(argv[i], "--aggr-func"))) {
arguments->insert_only = false; arguments->aggr_func = true;
} else if ((strcmp(argv[i], "-y") == 0) } else if ((strcmp(argv[i], "-y") == 0)
|| (0 == strcmp(argv[i], "--answer-yes"))) { || (0 == strcmp(argv[i], "--answer-yes"))) {
arguments->answer_yes = true; arguments->answer_yes = true;
...@@ -2429,10 +2433,11 @@ static void init_rand_data() { ...@@ -2429,10 +2433,11 @@ static void init_rand_data() {
static int printfInsertMeta() { static int printfInsertMeta() {
SHOW_PARSE_RESULT_START(); SHOW_PARSE_RESULT_START();
if (g_args.demo_mode) if (g_args.demo_mode) {
printf("\ntaosdemo is simulating data generated by power equipments monitoring...\n\n"); printf("\ntaosdemo is simulating data generated by power equipment monitoring...\n\n");
else } else {
printf("\ntaosdemo is simulating random data as you request..\n\n"); printf("\ntaosdemo is simulating random data as you request..\n\n");
}
if (g_args.iface != INTERFACE_BUT) { if (g_args.iface != INTERFACE_BUT) {
// first time if no iface specified // first time if no iface specified
...@@ -10065,11 +10070,10 @@ static void startMultiThreadInsertData(int threads, char* db_name, ...@@ -10065,11 +10070,10 @@ static void startMultiThreadInsertData(int threads, char* db_name,
free(infos); free(infos);
} }
static void *readTable(void *sarg) { static void *queryNtableAggrFunc(void *sarg) {
#if 1
threadInfo *pThreadInfo = (threadInfo *)sarg; threadInfo *pThreadInfo = (threadInfo *)sarg;
TAOS *taos = pThreadInfo->taos; TAOS *taos = pThreadInfo->taos;
setThreadName("readTable"); setThreadName("queryNtableAggrFunc");
char *command = calloc(1, BUFFER_SIZE); char *command = calloc(1, BUFFER_SIZE);
assert(command); assert(command);
...@@ -10092,10 +10096,20 @@ static void *readTable(void *sarg) { ...@@ -10092,10 +10096,20 @@ static void *readTable(void *sarg) {
int64_t ntables = pThreadInfo->ntables; // pThreadInfo->end_table_to - pThreadInfo->start_table_from + 1; int64_t ntables = pThreadInfo->ntables; // pThreadInfo->end_table_to - pThreadInfo->start_table_from + 1;
int64_t totalData = insertRows * ntables; int64_t totalData = insertRows * ntables;
bool do_aggreFunc = g_Dbs.do_aggreFunc; bool aggr_func = g_Dbs.aggr_func;
char **aggreFunc;
int n;
int n = do_aggreFunc ? (sizeof(g_aggreFunc) / sizeof(g_aggreFunc[0])) : 2; if (g_args.demo_mode) {
if (!do_aggreFunc) { aggreFunc = g_aggreFuncDemo;
n = aggr_func?(sizeof(g_aggreFuncDemo) / sizeof(g_aggreFuncDemo[0])) : 2;
} else {
aggreFunc = g_aggreFunc;
n = aggr_func?(sizeof(g_aggreFunc) / sizeof(g_aggreFunc[0])) : 2;
}
if (!aggr_func) {
printf("\nThe first field is either Binary or Bool. Aggregation functions are not supported.\n"); printf("\nThe first field is either Binary or Bool. Aggregation functions are not supported.\n");
} }
printf("%"PRId64" records:\n", totalData); printf("%"PRId64" records:\n", totalData);
...@@ -10106,9 +10120,11 @@ static void *readTable(void *sarg) { ...@@ -10106,9 +10120,11 @@ static void *readTable(void *sarg) {
uint64_t count = 0; uint64_t count = 0;
for (int64_t i = 0; i < ntables; i++) { for (int64_t i = 0; i < ntables; i++) {
sprintf(command, "SELECT %s FROM %s%"PRId64" WHERE ts>= %" PRIu64, sprintf(command, "SELECT %s FROM %s%"PRId64" WHERE ts>= %" PRIu64,
g_aggreFunc[j], tb_prefix, i, startTime); aggreFunc[j], tb_prefix, i, startTime);
double t = taosGetTimestampMs(); double t = taosGetTimestampUs();
debugPrint("%s() LN%d, sql command: %s\n",
__func__, __LINE__, command);
TAOS_RES *pSql = taos_query(taos, command); TAOS_RES *pSql = taos_query(taos, command);
int32_t code = taos_errno(pSql); int32_t code = taos_errno(pSql);
...@@ -10125,29 +10141,27 @@ static void *readTable(void *sarg) { ...@@ -10125,29 +10141,27 @@ static void *readTable(void *sarg) {
count++; count++;
} }
t = taosGetTimestampMs() - t; t = taosGetTimestampUs() - t;
totalT += t; totalT += t;
taos_free_result(pSql); taos_free_result(pSql);
} }
fprintf(fp, "|%10s | %"PRId64" | %12.2f | %10.2f |\n", fprintf(fp, "|%10s | %"PRId64" | %12.2f | %10.2f |\n",
g_aggreFunc[j][0] == '*' ? " * " : g_aggreFunc[j], totalData, aggreFunc[j][0] == '*' ? " * " : aggreFunc[j], totalData,
(double)(ntables * insertRows) / totalT, totalT * 1000); (double)(ntables * insertRows) / totalT, totalT / 1000000);
printf("select %10s took %.6f second(s)\n", g_aggreFunc[j], totalT * 1000); printf("select %10s took %.6f second(s)\n", aggreFunc[j], totalT / 1000000);
} }
fprintf(fp, "\n"); fprintf(fp, "\n");
fclose(fp); fclose(fp);
free(command); free(command);
#endif
return NULL; return NULL;
} }
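Note that queryNtableAggrFunc now times in microseconds (taosGetTimestampUs) and divides by 1,000,000 when reporting seconds. A minimal sketch of that timing pattern, with gettimeofday standing in for the internal taosGetTimestampUs helper and a placeholder SQL string:

```c
#include <stdio.h>
#include <sys/time.h>
#include <taos.h>   /* TDengine C client header */

/* Stand-in for taosGetTimestampUs(): current time in microseconds. */
static double now_us(void) {
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec * 1000000.0 + tv.tv_usec;
}

/* Time one query and report elapsed seconds, mirroring the loop above. */
static void timed_query(TAOS *taos, const char *sql) {
    double t = now_us();                        /* start, in microseconds */
    TAOS_RES *res = taos_query(taos, sql);
    if (taos_errno(res) == 0) {
        while (taos_fetch_row(res) != NULL) {}  /* drain the result set */
    }
    taos_free_result(res);
    t = now_us() - t;                           /* elapsed microseconds */
    printf("%s took %.6f second(s)\n", sql, t / 1000000.0);
}
```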
static void *readMetric(void *sarg) { static void *queryStableAggrFunc(void *sarg) {
#if 1
threadInfo *pThreadInfo = (threadInfo *)sarg; threadInfo *pThreadInfo = (threadInfo *)sarg;
TAOS *taos = pThreadInfo->taos; TAOS *taos = pThreadInfo->taos;
setThreadName("readMetric"); setThreadName("queryStableAggrFunc");
char *command = calloc(1, BUFFER_SIZE); char *command = calloc(1, BUFFER_SIZE);
assert(command); assert(command);
...@@ -10161,12 +10175,23 @@ static void *readMetric(void *sarg) { ...@@ -10161,12 +10175,23 @@ static void *readMetric(void *sarg) {
int64_t insertRows = pThreadInfo->stbInfo->insertRows; int64_t insertRows = pThreadInfo->stbInfo->insertRows;
int64_t ntables = pThreadInfo->ntables; // pThreadInfo->end_table_to - pThreadInfo->start_table_from + 1; int64_t ntables = pThreadInfo->ntables; // pThreadInfo->end_table_to - pThreadInfo->start_table_from + 1;
int64_t totalData = insertRows * ntables; int64_t totalData = insertRows * ntables;
bool do_aggreFunc = g_Dbs.do_aggreFunc; bool aggr_func = g_Dbs.aggr_func;
char **aggreFunc;
int n;
int n = do_aggreFunc ? (sizeof(g_aggreFunc) / sizeof(g_aggreFunc[0])) : 2; if (g_args.demo_mode) {
if (!do_aggreFunc) { aggreFunc = g_aggreFuncDemo;
n = aggr_func?(sizeof(g_aggreFuncDemo) / sizeof(g_aggreFuncDemo[0])) : 2;
} else {
aggreFunc = g_aggreFunc;
n = aggr_func?(sizeof(g_aggreFunc) / sizeof(g_aggreFunc[0])) : 2;
}
if (!aggr_func) {
printf("\nThe first field is either Binary or Bool. Aggregation functions are not supported.\n"); printf("\nThe first field is either Binary or Bool. Aggregation functions are not supported.\n");
} }
printf("%"PRId64" records:\n", totalData); printf("%"PRId64" records:\n", totalData);
fprintf(fp, "Querying On %"PRId64" records:\n", totalData); fprintf(fp, "Querying On %"PRId64" records:\n", totalData);
...@@ -10178,18 +10203,29 @@ static void *readMetric(void *sarg) { ...@@ -10178,18 +10203,29 @@ static void *readMetric(void *sarg) {
for (int64_t i = 1; i <= m; i++) { for (int64_t i = 1; i <= m; i++) {
if (i == 1) { if (i == 1) {
sprintf(tempS, "t1 = %"PRId64"", i); if (g_args.demo_mode) {
sprintf(tempS, "groupid = %"PRId64"", i);
} else {
sprintf(tempS, "t0 = %"PRId64"", i);
}
} else { } else {
sprintf(tempS, " or t1 = %"PRId64" ", i); if (g_args.demo_mode) {
sprintf(tempS, " or groupid = %"PRId64" ", i);
} else {
sprintf(tempS, " or t0 = %"PRId64" ", i);
}
} }
strncat(condition, tempS, COND_BUF_LEN - 1); strncat(condition, tempS, COND_BUF_LEN - strlen(condition) - 1);
sprintf(command, "SELECT %s FROM meters WHERE %s", g_aggreFunc[j], condition); sprintf(command, "SELECT %s FROM meters WHERE %s", aggreFunc[j], condition);
printf("Where condition: %s\n", condition); printf("Where condition: %s\n", condition);
debugPrint("%s() LN%d, sql command: %s\n",
__func__, __LINE__, command);
fprintf(fp, "%s\n", command); fprintf(fp, "%s\n", command);
double t = taosGetTimestampMs(); double t = taosGetTimestampUs();
TAOS_RES *pSql = taos_query(taos, command); TAOS_RES *pSql = taos_query(taos, command);
int32_t code = taos_errno(pSql); int32_t code = taos_errno(pSql);
...@@ -10206,11 +10242,11 @@ static void *readMetric(void *sarg) { ...@@ -10206,11 +10242,11 @@ static void *readMetric(void *sarg) {
while(taos_fetch_row(pSql) != NULL) { while(taos_fetch_row(pSql) != NULL) {
count++; count++;
} }
t = taosGetTimestampMs() - t; t = taosGetTimestampUs() - t;
fprintf(fp, "| Speed: %12.2f(per s) | Latency: %.4f(ms) |\n", fprintf(fp, "| Speed: %12.2f(per s) | Latency: %.4f(ms) |\n",
ntables * insertRows / (t * 1000.0), t); ntables * insertRows / (t / 1000000), t / 1000);
printf("select %10s took %.6f second(s)\n\n", g_aggreFunc[j], t * 1000.0); printf("select %10s took %.6f second(s)\n\n", aggreFunc[j], t / 1000000);
taos_free_result(pSql); taos_free_result(pSql);
} }
...@@ -10218,7 +10254,7 @@ static void *readMetric(void *sarg) { ...@@ -10218,7 +10254,7 @@ static void *readMetric(void *sarg) {
} }
fclose(fp); fclose(fp);
free(command); free(command);
#endif
return NULL; return NULL;
} }
...@@ -11225,9 +11261,8 @@ static void setParaFromArg() { ...@@ -11225,9 +11261,8 @@ static void setParaFromArg() {
tstrncpy(g_Dbs.resultFile, g_args.output_file, MAX_FILE_NAME_LEN); tstrncpy(g_Dbs.resultFile, g_args.output_file, MAX_FILE_NAME_LEN);
g_Dbs.use_metric = g_args.use_metric; g_Dbs.use_metric = g_args.use_metric;
g_Dbs.insert_only = g_args.insert_only;
g_Dbs.do_aggreFunc = true; g_Dbs.aggr_func = g_args.aggr_func;
char dataString[TSDB_MAX_BYTES_PER_ROW]; char dataString[TSDB_MAX_BYTES_PER_ROW];
char *data_type = g_args.data_type; char *data_type = g_args.data_type;
...@@ -11238,7 +11273,7 @@ static void setParaFromArg() { ...@@ -11238,7 +11273,7 @@ static void setParaFromArg() {
if ((data_type[0] == TSDB_DATA_TYPE_BINARY) if ((data_type[0] == TSDB_DATA_TYPE_BINARY)
|| (data_type[0] == TSDB_DATA_TYPE_BOOL) || (data_type[0] == TSDB_DATA_TYPE_BOOL)
|| (data_type[0] == TSDB_DATA_TYPE_NCHAR)) { || (data_type[0] == TSDB_DATA_TYPE_NCHAR)) {
g_Dbs.do_aggreFunc = false; g_Dbs.aggr_func = false;
} }
if (g_args.use_metric) { if (g_args.use_metric) {
...@@ -11420,7 +11455,7 @@ static void testMetaFile() { ...@@ -11420,7 +11455,7 @@ static void testMetaFile() {
} }
} }
static void queryResult() { static void queryAggrFunc() {
// query data // query data
pthread_t read_id; pthread_t read_id;
...@@ -11429,7 +11464,6 @@ static void queryResult() { ...@@ -11429,7 +11464,6 @@ static void queryResult() {
pThreadInfo->start_time = DEFAULT_START_TIME; // 2017-07-14 10:40:00.000 pThreadInfo->start_time = DEFAULT_START_TIME; // 2017-07-14 10:40:00.000
pThreadInfo->start_table_from = 0; pThreadInfo->start_table_from = 0;
//pThreadInfo->do_aggreFunc = g_Dbs.do_aggreFunc;
if (g_args.use_metric) { if (g_args.use_metric) {
pThreadInfo->ntables = g_Dbs.db[0].superTbls[0].childTblCount; pThreadInfo->ntables = g_Dbs.db[0].superTbls[0].childTblCount;
pThreadInfo->end_table_to = g_Dbs.db[0].superTbls[0].childTblCount - 1; pThreadInfo->end_table_to = g_Dbs.db[0].superTbls[0].childTblCount - 1;
...@@ -11458,9 +11492,9 @@ static void queryResult() { ...@@ -11458,9 +11492,9 @@ static void queryResult() {
tstrncpy(pThreadInfo->filePath, g_Dbs.resultFile, MAX_FILE_NAME_LEN); tstrncpy(pThreadInfo->filePath, g_Dbs.resultFile, MAX_FILE_NAME_LEN);
if (!g_Dbs.use_metric) { if (!g_Dbs.use_metric) {
pthread_create(&read_id, NULL, readTable, pThreadInfo); pthread_create(&read_id, NULL, queryNtableAggrFunc, pThreadInfo);
} else { } else {
pthread_create(&read_id, NULL, readMetric, pThreadInfo); pthread_create(&read_id, NULL, queryStableAggrFunc, pThreadInfo);
} }
pthread_join(read_id, NULL); pthread_join(read_id, NULL);
taos_close(pThreadInfo->taos); taos_close(pThreadInfo->taos);
...@@ -11482,8 +11516,9 @@ static void testCmdLine() { ...@@ -11482,8 +11516,9 @@ static void testCmdLine() {
g_args.test_mode = INSERT_TEST; g_args.test_mode = INSERT_TEST;
insertTestProcess(); insertTestProcess();
if (false == g_Dbs.insert_only) if (g_Dbs.aggr_func) {
queryResult(); queryAggrFunc();
}
} }
int main(int argc, char *argv[]) { int main(int argc, char *argv[]) {
......
...@@ -181,6 +181,7 @@ typedef struct { ...@@ -181,6 +181,7 @@ typedef struct {
int32_t threadIndex; int32_t threadIndex;
int32_t totalThreads; int32_t totalThreads;
char dbName[TSDB_DB_NAME_LEN]; char dbName[TSDB_DB_NAME_LEN];
int precision;
void *taosCon; void *taosCon;
int64_t rowsOfDumpOut; int64_t rowsOfDumpOut;
int64_t tablesOfDumpOut; int64_t tablesOfDumpOut;
...@@ -246,11 +247,6 @@ static struct argp_option options[] = { ...@@ -246,11 +247,6 @@ static struct argp_option options[] = {
{"avro", 'v', 0, 0, "Dump apache avro format data file. By default, dump sql command sequence.", 2}, {"avro", 'v', 0, 0, "Dump apache avro format data file. By default, dump sql command sequence.", 2},
{"start-time", 'S', "START_TIME", 0, "Start time to dump. Either epoch or ISO8601/RFC3339 format is acceptable. ISO8601 format example: 2017-10-01T00:00:00.000+0800 or 2017-10-0100:00:00:000+0800 or '2017-10-01 00:00:00.000+0800'", 4}, {"start-time", 'S', "START_TIME", 0, "Start time to dump. Either epoch or ISO8601/RFC3339 format is acceptable. ISO8601 format example: 2017-10-01T00:00:00.000+0800 or 2017-10-0100:00:00:000+0800 or '2017-10-01 00:00:00.000+0800'", 4},
{"end-time", 'E', "END_TIME", 0, "End time to dump. Either epoch or ISO8601/RFC3339 format is acceptable. ISO8601 format example: 2017-10-01T00:00:00.000+0800 or 2017-10-0100:00:00.000+0800 or '2017-10-01 00:00:00.000+0800'", 5}, {"end-time", 'E', "END_TIME", 0, "End time to dump. Either epoch or ISO8601/RFC3339 format is acceptable. ISO8601 format example: 2017-10-01T00:00:00.000+0800 or 2017-10-0100:00:00.000+0800 or '2017-10-01 00:00:00.000+0800'", 5},
#if TSDB_SUPPORT_NANOSECOND == 1
{"precision", 'C', "PRECISION", 0, "Specify precision for converting human-readable time to epoch. Valid value is one of ms, us, and ns. Default is ms.", 6},
#else
{"precision", 'C', "PRECISION", 0, "Use specified precision to convert human-readable time. Valid value is one of ms and us. Default is ms.", 6},
#endif
{"data-batch", 'B', "DATA_BATCH", 0, "Number of data point per insert statement. Max value is 32766. Default is 1.", 3}, {"data-batch", 'B', "DATA_BATCH", 0, "Number of data point per insert statement. Max value is 32766. Default is 1.", 3},
{"max-sql-len", 'L', "SQL_LEN", 0, "Max length of one sql. Default is 65480.", 3}, {"max-sql-len", 'L', "SQL_LEN", 0, "Max length of one sql. Default is 65480.", 3},
{"table-batch", 't', "TABLE_BATCH", 0, "Number of table dumpout into one output file. Default is 1.", 3}, {"table-batch", 't', "TABLE_BATCH", 0, "Number of table dumpout into one output file. Default is 1.", 3},
...@@ -281,8 +277,11 @@ typedef struct arguments { ...@@ -281,8 +277,11 @@ typedef struct arguments {
bool with_property; bool with_property;
bool avro; bool avro;
int64_t start_time; int64_t start_time;
char humanStartTime[28];
int64_t end_time; int64_t end_time;
char humanEndTime[28];
char precision[8]; char precision[8];
int32_t data_batch; int32_t data_batch;
int32_t max_sql_len; int32_t max_sql_len;
int32_t table_batch; // num of table which will be dump into one output file. int32_t table_batch; // num of table which will be dump into one output file.
...@@ -296,6 +295,8 @@ typedef struct arguments { ...@@ -296,6 +295,8 @@ typedef struct arguments {
bool debug_print; bool debug_print;
bool verbose_print; bool verbose_print;
bool performance_print; bool performance_print;
int dbCount;
} SArguments; } SArguments;
/* Our argp parser. */ /* Our argp parser. */
...@@ -318,13 +319,17 @@ static void taosDumpCreateTableClause(STableDef *tableDes, int numOfCols, ...@@ -318,13 +319,17 @@ static void taosDumpCreateTableClause(STableDef *tableDes, int numOfCols,
static void taosDumpCreateMTableClause(STableDef *tableDes, char *metric, static void taosDumpCreateMTableClause(STableDef *tableDes, char *metric,
int numOfCols, FILE *fp, char* dbName); int numOfCols, FILE *fp, char* dbName);
static int32_t taosDumpTable(char *tbName, char *metric, static int32_t taosDumpTable(char *tbName, char *metric,
FILE *fp, TAOS* taosCon, char* dbName); FILE *fp, TAOS* taosCon, char* dbName, int precision);
static int taosDumpTableData(FILE *fp, char *tbName, static int taosDumpTableData(FILE *fp, char *tbName,
TAOS* taosCon, char* dbName, TAOS* taosCon, char* dbName,
int precision,
char *jsonAvroSchema); char *jsonAvroSchema);
static int taosCheckParam(struct arguments *arguments); static int taosCheckParam(struct arguments *arguments);
static void taosFreeDbInfos(); static void taosFreeDbInfos();
static void taosStartDumpOutWorkThreads(int32_t numOfThread, char *dbName); static void taosStartDumpOutWorkThreads(
int32_t numOfThread,
char *dbName,
int precision);
struct arguments g_args = { struct arguments g_args = {
// connection option // connection option
...@@ -349,8 +354,10 @@ struct arguments g_args = { ...@@ -349,8 +354,10 @@ struct arguments g_args = {
false, // schemeonly false, // schemeonly
true, // with_property true, // with_property
false, // avro format false, // avro format
-INT64_MAX, // start_time -INT64_MAX + 1, // start_time
{0}, // humanStartTime
INT64_MAX, // end_time INT64_MAX, // end_time
{0}, // humanEndTime
"ms", // precision "ms", // precision
1, // data_batch 1, // data_batch
TSDB_MAX_SQL_LEN, // max_sql_len TSDB_MAX_SQL_LEN, // max_sql_len
...@@ -364,7 +371,8 @@ struct arguments g_args = { ...@@ -364,7 +371,8 @@ struct arguments g_args = {
false, // isDumpIn false, // isDumpIn
false, // debug_print false, // debug_print
false, // verbose_print false, // verbose_print
false // performance_print false, // performance_print
0, // dbCount
}; };
static void errorPrintReqArg2(char *program, char *wrong_arg) static void errorPrintReqArg2(char *program, char *wrong_arg)
...@@ -472,12 +480,8 @@ static error_t parse_opt(int key, char *arg, struct argp_state *state) { ...@@ -472,12 +480,8 @@ static error_t parse_opt(int key, char *arg, struct argp_state *state) {
break; break;
case 'S': case 'S':
// parse time here. // parse time here.
g_args.start_time = atol(arg);
break; break;
case 'E': case 'E':
g_args.end_time = atol(arg);
break;
case 'C':
break; break;
case 'B': case 'B':
g_args.data_batch = atoi(arg); g_args.data_batch = atoi(arg);
...@@ -550,7 +554,7 @@ static int queryDbImpl(TAOS *taos, char *command) { ...@@ -550,7 +554,7 @@ static int queryDbImpl(TAOS *taos, char *command) {
return 0; return 0;
} }
static void parse_precision_first( UNUSED_FUNC static void parse_precision_first(
int argc, char *argv[], SArguments *arguments) { int argc, char *argv[], SArguments *arguments) {
for (int i = 1; i < argc; i++) { for (int i = 1; i < argc; i++) {
if (strcmp(argv[i], "-C") == 0) { if (strcmp(argv[i], "-C") == 0) {
...@@ -616,6 +620,73 @@ static void parse_args( ...@@ -616,6 +620,73 @@ static void parse_args(
} }
} }
static void copyHumanTimeToArg(char *timeStr, bool isStartTime)
{
    if (isStartTime)
        tstrncpy(g_args.humanStartTime, timeStr, sizeof(g_args.humanStartTime));
    else
        tstrncpy(g_args.humanEndTime, timeStr, sizeof(g_args.humanEndTime));
}
static void copyTimestampToArg(char *timeStr, bool isStartTime)
{
if (isStartTime)
g_args.start_time = atol(timeStr);
else
g_args.end_time = atol(timeStr);
}
static void parse_timestamp(
int argc, char *argv[], SArguments *arguments) {
for (int i = 1; i < argc; i++) {
char *tmp;
bool isStartTime = false;
bool isEndTime = false;
if (strcmp(argv[i], "-S") == 0) {
isStartTime = true;
} else if (strcmp(argv[i], "-E") == 0) {
isEndTime = true;
}
if (isStartTime || isEndTime) {
if (NULL == argv[i+1]) {
errorPrint("%s need a valid value following!\n", argv[i]);
exit(-1);
}
            tmp = strdup(argv[i+1]);
            if (strchr(tmp, ':') && strchr(tmp, '-')) {
                copyHumanTimeToArg(tmp, isStartTime);
            } else {
                copyTimestampToArg(tmp, isStartTime);
            }
            free(tmp);   // release the strdup'ed buffer once copied
        }
}
}
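The rewritten parse_timestamp no longer converts at option-parse time; it only classifies each -S/-E value as human-readable (it contains both ':' and '-') or as a raw epoch number, deferring conversion until the database's precision is known. A small self-contained sketch of that classification rule (the sample inputs are illustrative):

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* parse_timestamp's rule: a value with both ':' and '-' is treated as
 * human-readable time; anything else as a raw epoch value. */
static bool is_human_readable(const char *s) {
    return (strchr(s, ':') != NULL) && (strchr(s, '-') != NULL);
}

int main(void) {
    const char *samples[] = { "2017-10-01 00:00:00.000", "1506787200000" };
    for (int i = 0; i < 2; i++) {
        printf("%-25s -> %s\n", samples[i],
               is_human_readable(samples[i]) ? "human-readable" : "epoch");
    }
    return 0;
}
```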
static int getPrecisionByString(char *precision)
{
if (0 == strncasecmp(precision,
"ms", 2)) {
return TSDB_TIME_PRECISION_MILLI;
} else if (0 == strncasecmp(precision,
"us", 2)) {
return TSDB_TIME_PRECISION_MICRO;
#if TSDB_SUPPORT_NANOSECOND == 1
} else if (0 == strncasecmp(precision,
"ns", 2)) {
return TSDB_TIME_PRECISION_NANO;
#endif
} else {
errorPrint("Invalid time precision: %s",
precision);
}
return -1;
}
/*
static void parse_timestamp( static void parse_timestamp(
int argc, char *argv[], SArguments *arguments) { int argc, char *argv[], SArguments *arguments) {
for (int i = 1; i < argc; i++) { for (int i = 1; i < argc; i++) {
...@@ -634,6 +705,7 @@ static void parse_timestamp( ...@@ -634,6 +705,7 @@ static void parse_timestamp(
int64_t tmpEpoch; int64_t tmpEpoch;
if (strchr(tmp, ':') && strchr(tmp, '-')) { if (strchr(tmp, ':') && strchr(tmp, '-')) {
strcpy(g_args.humanStartTime, tmp)
int32_t timePrec; int32_t timePrec;
if (0 == strncasecmp(arguments->precision, if (0 == strncasecmp(arguments->precision,
"ms", strlen("ms"))) { "ms", strlen("ms"))) {
...@@ -672,6 +744,7 @@ static void parse_timestamp( ...@@ -672,6 +744,7 @@ static void parse_timestamp(
} }
} }
} }
*/
int main(int argc, char *argv[]) { int main(int argc, char *argv[]) {
static char verType[32] = {0}; static char verType[32] = {0};
...@@ -682,7 +755,7 @@ int main(int argc, char *argv[]) { ...@@ -682,7 +755,7 @@ int main(int argc, char *argv[]) {
/* Parse our arguments; every option seen by parse_opt will be /* Parse our arguments; every option seen by parse_opt will be
reflected in arguments. */ reflected in arguments. */
if (argc > 1) { if (argc > 1) {
parse_precision_first(argc, argv, &g_args); // parse_precision_first(argc, argv, &g_args);
parse_timestamp(argc, argv, &g_args); parse_timestamp(argc, argv, &g_args);
parse_args(argc, argv, &g_args); parse_args(argc, argv, &g_args);
} }
...@@ -714,7 +787,9 @@ int main(int argc, char *argv[]) { ...@@ -714,7 +787,9 @@ int main(int argc, char *argv[]) {
printf("with_property: %s\n", g_args.with_property?"true":"false"); printf("with_property: %s\n", g_args.with_property?"true":"false");
printf("avro format: %s\n", g_args.avro?"true":"false"); printf("avro format: %s\n", g_args.avro?"true":"false");
printf("start_time: %" PRId64 "\n", g_args.start_time); printf("start_time: %" PRId64 "\n", g_args.start_time);
printf("human readable start time: %s \n", g_args.humanStartTime);
printf("end_time: %" PRId64 "\n", g_args.end_time); printf("end_time: %" PRId64 "\n", g_args.end_time);
printf("human readable end time: %s \n", g_args.humanEndTime);
printf("precision: %s\n", g_args.precision); printf("precision: %s\n", g_args.precision);
printf("data_batch: %d\n", g_args.data_batch); printf("data_batch: %d\n", g_args.data_batch);
printf("max_sql_len: %d\n", g_args.max_sql_len); printf("max_sql_len: %d\n", g_args.max_sql_len);
...@@ -759,7 +834,9 @@ int main(int argc, char *argv[]) { ...@@ -759,7 +834,9 @@ int main(int argc, char *argv[]) {
fprintf(g_fpOfResult, "with_property: %s\n", g_args.with_property?"true":"false"); fprintf(g_fpOfResult, "with_property: %s\n", g_args.with_property?"true":"false");
fprintf(g_fpOfResult, "avro format: %s\n", g_args.avro?"true":"false"); fprintf(g_fpOfResult, "avro format: %s\n", g_args.avro?"true":"false");
fprintf(g_fpOfResult, "start_time: %" PRId64 "\n", g_args.start_time); fprintf(g_fpOfResult, "start_time: %" PRId64 "\n", g_args.start_time);
fprintf(g_fpOfResult, "human readable start time: %s \n", g_args.humanStartTime);
fprintf(g_fpOfResult, "end_time: %" PRId64 "\n", g_args.end_time); fprintf(g_fpOfResult, "end_time: %" PRId64 "\n", g_args.end_time);
fprintf(g_fpOfResult, "human readable end time: %s \n", g_args.humanEndTime);
fprintf(g_fpOfResult, "precision: %s\n", g_args.precision); fprintf(g_fpOfResult, "precision: %s\n", g_args.precision);
fprintf(g_fpOfResult, "data_batch: %d\n", g_args.data_batch); fprintf(g_fpOfResult, "data_batch: %d\n", g_args.data_batch);
fprintf(g_fpOfResult, "max_sql_len: %d\n", g_args.max_sql_len); fprintf(g_fpOfResult, "max_sql_len: %d\n", g_args.max_sql_len);
...@@ -816,7 +893,8 @@ int main(int argc, char *argv[]) { ...@@ -816,7 +893,8 @@ int main(int argc, char *argv[]) {
static void taosFreeDbInfos() { static void taosFreeDbInfos() {
if (g_dbInfos == NULL) return; if (g_dbInfos == NULL) return;
for (int i = 0; i < 128; i++) tfree(g_dbInfos[i]); for (int i = 0; i < g_args.dbCount; i++)
tfree(g_dbInfos[i]);
tfree(g_dbInfos); tfree(g_dbInfos);
} }
...@@ -1046,6 +1124,88 @@ static int32_t taosSaveTableOfMetricToTempFile( ...@@ -1046,6 +1124,88 @@ static int32_t taosSaveTableOfMetricToTempFile(
return 0; return 0;
} }
static int getDbCount()
{
int count = 0;
TAOS *taos = NULL;
TAOS_RES *result = NULL;
char *command = NULL;
TAOS_ROW row;
command = (char *)malloc(COMMAND_SIZE);
if (command == NULL) {
errorPrint("%s() LN%d, failed to allocate command buffer\n", __func__, __LINE__);
return 0;
}
/* Connect to server */
taos = taos_connect(g_args.host, g_args.user, g_args.password,
NULL, g_args.port);
if (NULL == taos) {
errorPrint("Failed to connect to TDengine server %s\n", g_args.host);
free(command);
return 0;
}
sprintf(command, "show databases");
result = taos_query(taos, command);
int32_t code = taos_errno(result);
if (0 != code) {
errorPrint("%s() LN%d, failed to run command: %s, reason: %s\n",
__func__, __LINE__, command, taos_errstr(result));
        taos_free_result(result);
        taos_close(taos);
        free(command);
        return 0;
}
TAOS_FIELD *fields = taos_fetch_fields(result);
while ((row = taos_fetch_row(result)) != NULL) {
// skip the system database 'log' unless dumping system databases is allowed
if ((strncasecmp(row[TSDB_SHOW_DB_NAME_INDEX], "log",
fields[TSDB_SHOW_DB_NAME_INDEX].bytes) == 0)
&& (!g_args.allow_sys)) {
continue;
}
if (g_args.databases) { // input multi dbs
for (int i = 0; g_args.arg_list[i]; i++) {
if (strncasecmp(g_args.arg_list[i],
(char *)row[TSDB_SHOW_DB_NAME_INDEX],
fields[TSDB_SHOW_DB_NAME_INDEX].bytes) == 0)
goto _dump_db_point;
}
continue;
} else if (!g_args.all_databases) { // only input one db
if (strncasecmp(g_args.arg_list[0],
(char *)row[TSDB_SHOW_DB_NAME_INDEX],
fields[TSDB_SHOW_DB_NAME_INDEX].bytes) == 0)
goto _dump_db_point;
else
continue;
}
_dump_db_point:
count++;
if (g_args.databases) {
if (count > g_args.arg_list_len) break;
} else if (!g_args.all_databases) {
if (count >= 1) break;
}
}
if (count == 0) {
errorPrint("%d databases valid to dump\n", count);
}
    taos_free_result(result);
    taos_close(taos);
    free(command);
    return count;
}
static int taosDumpOut() { static int taosDumpOut() {
TAOS *taos = NULL; TAOS *taos = NULL;
TAOS_RES *result = NULL; TAOS_RES *result = NULL;
...@@ -1070,7 +1230,14 @@ static int taosDumpOut() { ...@@ -1070,7 +1230,14 @@ static int taosDumpOut() {
return -1; return -1;
} }
g_dbInfos = (SDbInfo **)calloc(128, sizeof(SDbInfo *)); g_args.dbCount = getDbCount();
if (0 == g_args.dbCount) {
errorPrint("%d databases valid to dump\n", g_args.dbCount);
return -1;
}
g_dbInfos = (SDbInfo **)calloc(g_args.dbCount, sizeof(SDbInfo *));
if (g_dbInfos == NULL) { if (g_dbInfos == NULL) {
errorPrint("%s() LN%d, failed to allocate memory\n", errorPrint("%s() LN%d, failed to allocate memory\n",
__func__, __LINE__); __func__, __LINE__);
...@@ -1165,9 +1332,9 @@ _dump_db_point: ...@@ -1165,9 +1332,9 @@ _dump_db_point:
g_dbInfos[count]->comp = (int8_t)(*((int8_t *)row[TSDB_SHOW_DB_COMP_INDEX])); g_dbInfos[count]->comp = (int8_t)(*((int8_t *)row[TSDB_SHOW_DB_COMP_INDEX]));
g_dbInfos[count]->cachelast = (int8_t)(*((int8_t *)row[TSDB_SHOW_DB_CACHELAST_INDEX])); g_dbInfos[count]->cachelast = (int8_t)(*((int8_t *)row[TSDB_SHOW_DB_CACHELAST_INDEX]));
tstrncpy(g_dbInfos[count]->precision, (char *)row[TSDB_SHOW_DB_PRECISION_INDEX], tstrncpy(g_dbInfos[count]->precision,
min(8, fields[TSDB_SHOW_DB_PRECISION_INDEX].bytes + 1)); (char *)row[TSDB_SHOW_DB_PRECISION_INDEX],
//g_dbInfos[count]->precision = *((int8_t *)row[TSDB_SHOW_DB_PRECISION_INDEX]); DB_PRECISION_LEN);
g_dbInfos[count]->update = *((int8_t *)row[TSDB_SHOW_DB_UPDATE_INDEX]); g_dbInfos[count]->update = *((int8_t *)row[TSDB_SHOW_DB_UPDATE_INDEX]);
} }
count++; count++;
...@@ -1263,8 +1430,10 @@ _dump_db_point: ...@@ -1263,8 +1430,10 @@ _dump_db_point:
} }
// start multi threads to dumpout // start multi threads to dumpout
taosStartDumpOutWorkThreads(totalNumOfThread, taosStartDumpOutWorkThreads(totalNumOfThread,
g_dbInfos[0]->name); g_dbInfos[0]->name,
getPrecisionByString(g_dbInfos[0]->precision));
char tmpFileName[MAX_FILE_NAME_LEN]; char tmpFileName[MAX_FILE_NAME_LEN];
_clean_tmp_file: _clean_tmp_file:
...@@ -1453,7 +1622,7 @@ static int convertSchemaToAvroSchema(STableDef *stableDes, char **avroSchema) ...@@ -1453,7 +1622,7 @@ static int convertSchemaToAvroSchema(STableDef *stableDes, char **avroSchema)
static int32_t taosDumpTable( static int32_t taosDumpTable(
char *tbName, char *metric, char *tbName, char *metric,
FILE *fp, TAOS* taosCon, char* dbName) { FILE *fp, TAOS* taosCon, char* dbName, int precision) {
int count = 0; int count = 0;
STableDef *tableDes = (STableDef *)calloc(1, sizeof(STableDef) STableDef *tableDes = (STableDef *)calloc(1, sizeof(STableDef)
...@@ -1504,7 +1673,7 @@ static int32_t taosDumpTable( ...@@ -1504,7 +1673,7 @@ static int32_t taosDumpTable(
int32_t ret = 0; int32_t ret = 0;
if (!g_args.schemaonly) { if (!g_args.schemaonly) {
ret = taosDumpTableData(fp, tbName, taosCon, dbName, ret = taosDumpTableData(fp, tbName, taosCon, dbName, precision,
jsonAvroSchema); jsonAvroSchema);
} }
...@@ -1595,7 +1764,8 @@ static void* taosDumpOutWorkThreadFp(void *arg) ...@@ -1595,7 +1764,8 @@ static void* taosDumpOutWorkThreadFp(void *arg)
int ret = taosDumpTable( int ret = taosDumpTable(
tableRecord.name, tableRecord.metric, tableRecord.name, tableRecord.metric,
fp, pThread->taosCon, pThread->dbName); fp, pThread->taosCon, pThread->dbName,
pThread->precision);
if (ret >= 0) { if (ret >= 0) {
// TODO: sum table count and table rows by self // TODO: sum table count and table rows by self
pThread->tablesOfDumpOut++; pThread->tablesOfDumpOut++;
...@@ -1644,7 +1814,7 @@ static void* taosDumpOutWorkThreadFp(void *arg) ...@@ -1644,7 +1814,7 @@ static void* taosDumpOutWorkThreadFp(void *arg)
return NULL; return NULL;
} }
static void taosStartDumpOutWorkThreads(int32_t numOfThread, char *dbName) static void taosStartDumpOutWorkThreads(int32_t numOfThread, char *dbName, int precision)
{ {
pthread_attr_t thattr; pthread_attr_t thattr;
SThreadParaObj *threadObj = SThreadParaObj *threadObj =
...@@ -1663,6 +1833,7 @@ static void taosStartDumpOutWorkThreads(int32_t numOfThread, char *dbName) ...@@ -1663,6 +1833,7 @@ static void taosStartDumpOutWorkThreads(int32_t numOfThread, char *dbName)
pThread->threadIndex = t; pThread->threadIndex = t;
pThread->totalThreads = numOfThread; pThread->totalThreads = numOfThread;
tstrncpy(pThread->dbName, dbName, TSDB_DB_NAME_LEN); tstrncpy(pThread->dbName, dbName, TSDB_DB_NAME_LEN);
pThread->precision = precision;
pThread->taosCon = taos_connect(g_args.host, g_args.user, g_args.password, pThread->taosCon = taos_connect(g_args.host, g_args.user, g_args.password,
NULL, g_args.port); NULL, g_args.port);
if (pThread->taosCon == NULL) { if (pThread->taosCon == NULL) {
...@@ -1912,7 +2083,8 @@ static int taosDumpDb(SDbInfo *dbInfo, FILE *fp, TAOS *taosCon) { ...@@ -1912,7 +2083,8 @@ static int taosDumpDb(SDbInfo *dbInfo, FILE *fp, TAOS *taosCon) {
} }
// start multi threads to dumpout // start multi threads to dumpout
taosStartDumpOutWorkThreads(numOfThread, dbInfo->name); taosStartDumpOutWorkThreads(numOfThread, dbInfo->name,
getPrecisionByString(dbInfo->precision));
for (int loopCnt = 0; loopCnt < numOfThread; loopCnt++) { for (int loopCnt = 0; loopCnt < numOfThread; loopCnt++) {
sprintf(tmpBuf, ".tables.tmp.%d", loopCnt); sprintf(tmpBuf, ".tables.tmp.%d", loopCnt);
(void)remove(tmpBuf); (void)remove(tmpBuf);
...@@ -2190,14 +2362,38 @@ static int64_t writeResultToSql(TAOS_RES *res, FILE *fp, char *dbName, char *tbN ...@@ -2190,14 +2362,38 @@ static int64_t writeResultToSql(TAOS_RES *res, FILE *fp, char *dbName, char *tbN
} }
static int taosDumpTableData(FILE *fp, char *tbName, static int taosDumpTableData(FILE *fp, char *tbName,
TAOS* taosCon, char* dbName, TAOS* taosCon, char* dbName, int precision,
char *jsonAvroSchema) { char *jsonAvroSchema) {
int64_t totalRows = 0; int64_t totalRows = 0;
char sqlstr[1024] = {0}; char sqlstr[1024] = {0};
int64_t start_time, end_time;
if (strlen(g_args.humanStartTime)) {
if (TSDB_CODE_SUCCESS != taosParseTime(
g_args.humanStartTime, &start_time, strlen(g_args.humanStartTime),
precision, 0)) {
errorPrint("Input %s, time format error!\n", g_args.humanStartTime);
return -1;
}
} else {
start_time = g_args.start_time;
}
if (strlen(g_args.humanEndTime)) {
if (TSDB_CODE_SUCCESS != taosParseTime(
g_args.humanEndTime, &end_time, strlen(g_args.humanEndTime),
precision, 0)) {
errorPrint("Input %s, time format error!\n", g_args.humanEndTime);
return -1;
}
} else {
end_time = g_args.end_time;
}
sprintf(sqlstr, sprintf(sqlstr,
"select * from %s.%s where _c0 >= %" PRId64 " and _c0 <= %" PRId64 " order by _c0 asc;", "select * from %s.%s where _c0 >= %" PRId64 " and _c0 <= %" PRId64 " order by _c0 asc;",
dbName, tbName, g_args.start_time, g_args.end_time); dbName, tbName, start_time, end_time);
TAOS_RES* res = taos_query(taosCon, sqlstr); TAOS_RES* res = taos_query(taosCon, sqlstr);
int32_t code = taos_errno(res); int32_t code = taos_errno(res);
......
...@@ -70,14 +70,14 @@ extern "C" { ...@@ -70,14 +70,14 @@ extern "C" {
#define TSDB_FUNC_DERIVATIVE 32 #define TSDB_FUNC_DERIVATIVE 32
#define TSDB_FUNC_BLKINFO 33 #define TSDB_FUNC_BLKINFO 33
#define TSDB_FUNC_CEIL 34
#define TSDB_FUNC_HISTOGRAM 34 #define TSDB_FUNC_FLOOR 35
#define TSDB_FUNC_HLL 35 #define TSDB_FUNC_ROUND 36
#define TSDB_FUNC_MODE 36
#define TSDB_FUNC_SAMPLE 37 #define TSDB_FUNC_HISTOGRAM 37
#define TSDB_FUNC_CEIL 38 #define TSDB_FUNC_HLL 38
#define TSDB_FUNC_FLOOR 39 #define TSDB_FUNC_MODE 39
#define TSDB_FUNC_ROUND 40 #define TSDB_FUNC_SAMPLE 40
#define TSDB_FUNC_MAVG 41 #define TSDB_FUNC_MAVG 41
#define TSDB_FUNC_CSUM 42 #define TSDB_FUNC_CSUM 42
...@@ -88,6 +88,7 @@ extern "C" { ...@@ -88,6 +88,7 @@ extern "C" {
#define TSDB_FUNCSTATE_OF 0x10u // outer forward #define TSDB_FUNCSTATE_OF 0x10u // outer forward
#define TSDB_FUNCSTATE_NEED_TS 0x20u // timestamp is required during query processing #define TSDB_FUNCSTATE_NEED_TS 0x20u // timestamp is required during query processing
#define TSDB_FUNCSTATE_SELECTIVITY 0x40u // selectivity functions, can exists along with tag columns #define TSDB_FUNCSTATE_SELECTIVITY 0x40u // selectivity functions, can exists along with tag columns
#define TSDB_FUNCSTATE_SCALAR 0x80u
#define TSDB_BASE_FUNC_SO TSDB_FUNCSTATE_SO | TSDB_FUNCSTATE_STREAM | TSDB_FUNCSTATE_STABLE | TSDB_FUNCSTATE_OF #define TSDB_BASE_FUNC_SO TSDB_FUNCSTATE_SO | TSDB_FUNCSTATE_STREAM | TSDB_FUNCSTATE_STABLE | TSDB_FUNCSTATE_OF
#define TSDB_BASE_FUNC_MO TSDB_FUNCSTATE_MO | TSDB_FUNCSTATE_STREAM | TSDB_FUNCSTATE_STABLE | TSDB_FUNCSTATE_OF #define TSDB_BASE_FUNC_MO TSDB_FUNCSTATE_MO | TSDB_FUNCSTATE_STREAM | TSDB_FUNCSTATE_STABLE | TSDB_FUNCSTATE_OF
......
...@@ -179,7 +179,9 @@ int32_t getResultDataInfo(int32_t dataType, int32_t dataBytes, int32_t functionI ...@@ -179,7 +179,9 @@ int32_t getResultDataInfo(int32_t dataType, int32_t dataBytes, int32_t functionI
if (functionId == TSDB_FUNC_TS || functionId == TSDB_FUNC_TS_DUMMY || functionId == TSDB_FUNC_TAG_DUMMY || if (functionId == TSDB_FUNC_TS || functionId == TSDB_FUNC_TS_DUMMY || functionId == TSDB_FUNC_TAG_DUMMY ||
functionId == TSDB_FUNC_DIFF || functionId == TSDB_FUNC_PRJ || functionId == TSDB_FUNC_TAGPRJ || functionId == TSDB_FUNC_DIFF || functionId == TSDB_FUNC_PRJ || functionId == TSDB_FUNC_TAGPRJ ||
functionId == TSDB_FUNC_TAG || functionId == TSDB_FUNC_INTERP) { functionId == TSDB_FUNC_TAG || functionId == TSDB_FUNC_INTERP || functionId == TSDB_FUNC_CEIL ||
functionId == TSDB_FUNC_FLOOR || functionId == TSDB_FUNC_ROUND)
{
*type = (int16_t)dataType; *type = (int16_t)dataType;
*bytes = (int16_t)dataBytes; *bytes = (int16_t)dataBytes;
...@@ -405,7 +407,7 @@ int32_t getResultDataInfo(int32_t dataType, int32_t dataBytes, int32_t functionI ...@@ -405,7 +407,7 @@ int32_t getResultDataInfo(int32_t dataType, int32_t dataBytes, int32_t functionI
// TODO use hash table // TODO use hash table
int32_t isValidFunction(const char* name, int32_t len) { int32_t isValidFunction(const char* name, int32_t len) {
for(int32_t i = 0; i <= TSDB_FUNC_BLKINFO; ++i) { for(int32_t i = 0; i <= TSDB_FUNC_ROUND; ++i) {
int32_t nameLen = (int32_t) strlen(aAggs[i].name); int32_t nameLen = (int32_t) strlen(aAggs[i].name);
if (len != nameLen) { if (len != nameLen) {
continue; continue;
...@@ -4256,6 +4258,231 @@ void blockinfo_func_finalizer(SQLFunctionCtx* pCtx) { ...@@ -4256,6 +4258,231 @@ void blockinfo_func_finalizer(SQLFunctionCtx* pCtx) {
doFinalizer(pCtx); doFinalizer(pCtx);
} }
#define CFR_SET_VAL(type, data, pCtx, func, i, step, notNullElems) \
do { \
type *pData = (type *) data; \
type *pOutput = (type *) pCtx->pOutput; \
\
for (; i < pCtx->size && i >= 0; i += step) { \
if (pCtx->hasNull && isNull((const char*) &pData[i], pCtx->inputType)) { \
continue; \
} \
\
*pOutput++ = (type) func((double) pData[i]); \
\
notNullElems++; \
} \
} while (0)
#define CFR_SET_VAL_DOUBLE(data, pCtx, func, i, step, notNullElems) \
do { \
double *pData = (double *) data; \
double *pOutput = (double *) pCtx->pOutput; \
\
for (; i < pCtx->size && i >= 0; i += step) { \
if (pCtx->hasNull && isNull((const char*) &pData[i], pCtx->inputType)) { \
continue; \
} \
\
SET_DOUBLE_VAL(pOutput, func(pData[i])); \
pOutput++; \
\
notNullElems++; \
} \
} while (0)
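For review purposes, here is roughly what CFR_SET_VAL(int32_t, data, pCtx, ceil, i, step, notNullElems) expands to, pulled out as a plain function; MiniCtx and the isNullAt callback are simplified stand-ins for the engine's own SQLFunctionCtx and isNull():

```c
#include <math.h>
#include <stdbool.h>
#include <stdint.h>

/* Simplified stand-in for SQLFunctionCtx. */
typedef struct {
    int32_t size;       /* rows in the input block */
    bool    hasNull;
    char   *pOutput;    /* output buffer */
} MiniCtx;

/* Walk the block in query order (step is +1 or -1), skip NULLs,
 * apply ceil(), and pack results forward into the output buffer. */
static int32_t ceil_int32_block(const int32_t *pData, MiniCtx *pCtx,
                                int32_t i, int32_t step,
                                bool (*isNullAt)(int32_t idx)) {
    int32_t notNullElems = 0;
    int32_t *pOutput = (int32_t *) pCtx->pOutput;
    for (; i < pCtx->size && i >= 0; i += step) {
        if (pCtx->hasNull && isNullAt(i)) {
            continue;                /* NULL inputs produce no output row */
        }
        *pOutput++ = (int32_t) ceil((double) pData[i]);
        notNullElems++;
    }
    return notNullElems;
}
```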
static void ceil_function(SQLFunctionCtx *pCtx) {
void *data = GET_INPUT_DATA_LIST(pCtx);
int32_t notNullElems = 0;
int32_t step = GET_FORWARD_DIRECTION_FACTOR(pCtx->order);
int32_t i = (pCtx->order == TSDB_ORDER_ASC) ? 0 : pCtx->size - 1;
switch (pCtx->inputType) {
case TSDB_DATA_TYPE_INT: {
CFR_SET_VAL(int32_t, data, pCtx, ceil, i, step, notNullElems);
break;
};
case TSDB_DATA_TYPE_UINT: {
CFR_SET_VAL(uint32_t, data, pCtx, ceil, i, step, notNullElems);
break;
};
case TSDB_DATA_TYPE_BIGINT: {
CFR_SET_VAL(int64_t, data, pCtx, ceil, i, step, notNullElems);
break;
}
case TSDB_DATA_TYPE_UBIGINT: {
CFR_SET_VAL(uint64_t, data, pCtx, ceil, i, step, notNullElems);
break;
}
case TSDB_DATA_TYPE_DOUBLE: {
CFR_SET_VAL_DOUBLE(data, pCtx, ceil, i, step, notNullElems);
break;
}
case TSDB_DATA_TYPE_FLOAT: {
CFR_SET_VAL(float, data, pCtx, ceil, i, step, notNullElems);
break;
}
case TSDB_DATA_TYPE_SMALLINT: {
CFR_SET_VAL(int16_t, data, pCtx, ceil, i, step, notNullElems);
break;
}
case TSDB_DATA_TYPE_USMALLINT: {
CFR_SET_VAL(uint16_t, data, pCtx, ceil, i, step, notNullElems);
break;
}
case TSDB_DATA_TYPE_TINYINT: {
CFR_SET_VAL(int8_t, data, pCtx, ceil, i, step, notNullElems);
break;
}
case TSDB_DATA_TYPE_UTINYINT: {
CFR_SET_VAL(uint8_t, data, pCtx, ceil, i, step, notNullElems);
break;
}
default:
qError("error input type");
}
if (notNullElems <= 0) {
/*
* current block may be null value
*/
assert(pCtx->hasNull);
} else {
GET_RES_INFO(pCtx)->numOfRes += notNullElems;
}
}
static void floor_function(SQLFunctionCtx *pCtx) {
void *data = GET_INPUT_DATA_LIST(pCtx);
int32_t notNullElems = 0;
int32_t step = GET_FORWARD_DIRECTION_FACTOR(pCtx->order);
int32_t i = (pCtx->order == TSDB_ORDER_ASC) ? 0 : pCtx->size - 1;
switch (pCtx->inputType) {
case TSDB_DATA_TYPE_INT: {
CFR_SET_VAL(int32_t, data, pCtx, floor, i, step, notNullElems);
break;
};
case TSDB_DATA_TYPE_UINT: {
CFR_SET_VAL(uint32_t, data, pCtx, floor, i, step, notNullElems);
break;
};
case TSDB_DATA_TYPE_BIGINT: {
CFR_SET_VAL(int64_t, data, pCtx, floor, i, step, notNullElems);
break;
}
case TSDB_DATA_TYPE_UBIGINT: {
CFR_SET_VAL(uint64_t, data, pCtx, floor, i, step, notNullElems);
break;
}
case TSDB_DATA_TYPE_DOUBLE: {
CFR_SET_VAL_DOUBLE(data, pCtx, floor, i, step, notNullElems);
break;
}
case TSDB_DATA_TYPE_FLOAT: {
CFR_SET_VAL(float, data, pCtx, floor, i, step, notNullElems);
break;
}
case TSDB_DATA_TYPE_SMALLINT: {
CFR_SET_VAL(int16_t, data, pCtx, floor, i, step, notNullElems);
break;
}
case TSDB_DATA_TYPE_USMALLINT: {
CFR_SET_VAL(uint16_t, data, pCtx, floor, i, step, notNullElems);
break;
}
case TSDB_DATA_TYPE_TINYINT: {
CFR_SET_VAL(int8_t, data, pCtx, floor, i, step, notNullElems);
break;
}
case TSDB_DATA_TYPE_UTINYINT: {
CFR_SET_VAL(uint8_t, data, pCtx, floor, i, step, notNullElems);
break;
}
default:
qError("error input type");
}
if (notNullElems <= 0) {
/*
* current block may be null value
*/
assert(pCtx->hasNull);
} else {
GET_RES_INFO(pCtx)->numOfRes += notNullElems;
}
}
static void round_function(SQLFunctionCtx *pCtx) {
void *data = GET_INPUT_DATA_LIST(pCtx);
int32_t notNullElems = 0;
int32_t step = GET_FORWARD_DIRECTION_FACTOR(pCtx->order);
int32_t i = (pCtx->order == TSDB_ORDER_ASC) ? 0 : pCtx->size - 1;
switch (pCtx->inputType) {
case TSDB_DATA_TYPE_INT: {
CFR_SET_VAL(int32_t, data, pCtx, round, i, step, notNullElems);
break;
};
case TSDB_DATA_TYPE_UINT: {
CFR_SET_VAL(uint32_t, data, pCtx, round, i, step, notNullElems);
break;
};
case TSDB_DATA_TYPE_BIGINT: {
CFR_SET_VAL(int64_t, data, pCtx, round, i, step, notNullElems);
break;
}
case TSDB_DATA_TYPE_UBIGINT: {
CFR_SET_VAL(uint64_t, data, pCtx, round, i, step, notNullElems);
break;
}
case TSDB_DATA_TYPE_DOUBLE: {
CFR_SET_VAL_DOUBLE(data, pCtx, round, i, step, notNullElems);
break;
}
case TSDB_DATA_TYPE_FLOAT: {
CFR_SET_VAL(float, data, pCtx, round, i, step, notNullElems);
break;
}
case TSDB_DATA_TYPE_SMALLINT: {
CFR_SET_VAL(int16_t, data, pCtx, round, i, step, notNullElems);
break;
}
case TSDB_DATA_TYPE_USMALLINT: {
CFR_SET_VAL(uint16_t, data, pCtx, round, i, step, notNullElems);
break;
}
case TSDB_DATA_TYPE_TINYINT: {
CFR_SET_VAL(int8_t, data, pCtx, round, i, step, notNullElems);
break;
}
case TSDB_DATA_TYPE_UTINYINT: {
CFR_SET_VAL(uint8_t, data, pCtx, round, i, step, notNullElems);
break;
}
default:
qError("error input type");
}
if (notNullElems <= 0) {
/*
* current block may be null value
*/
assert(pCtx->hasNull);
} else {
GET_RES_INFO(pCtx)->numOfRes += notNullElems;
}
}
#undef CFR_SET_VAL
#undef CFR_SET_VAL_DOUBLE
///////////////////////////////////////////////////////////////////////////////////////////// /////////////////////////////////////////////////////////////////////////////////////////////
/* /*
* function compatible list. * function compatible list.
...@@ -4274,8 +4501,8 @@ int32_t functionCompatList[] = { ...@@ -4274,8 +4501,8 @@ int32_t functionCompatList[] = {
4, -1, -1, 1, 1, 1, 1, 1, 1, -1, 4, -1, -1, 1, 1, 1, 1, 1, 1, -1,
// tag, colprj, tagprj, arithmetic, diff, first_dist, last_dist, stddev_dst, interp rate irate // tag, colprj, tagprj, arithmetic, diff, first_dist, last_dist, stddev_dst, interp rate irate
1, 1, 1, 1, -1, 1, 1, 1, 5, 1, 1, 1, 1, 1, 1, -1, 1, 1, 1, 5, 1, 1,
// tid_tag, derivative, blk_info // tid_tag, derivative, blk_info, ceil, floor, round
6, 8, 7, 6, 8, 7, 1, 1, 1
}; };
SAggFunctionInfo aAggs[] = {{ SAggFunctionInfo aAggs[] = {{
...@@ -4678,7 +4905,7 @@ SAggFunctionInfo aAggs[] = {{ ...@@ -4678,7 +4905,7 @@ SAggFunctionInfo aAggs[] = {{
dataBlockRequired, dataBlockRequired,
}, },
{ {
// 33 // 33
"_block_dist", // return table id and the corresponding tags for join match and subscribe "_block_dist", // return table id and the corresponding tags for join match and subscribe
TSDB_FUNC_BLKINFO, TSDB_FUNC_BLKINFO,
TSDB_FUNC_BLKINFO, TSDB_FUNC_BLKINFO,
...@@ -4688,4 +4915,40 @@ SAggFunctionInfo aAggs[] = {{ ...@@ -4688,4 +4915,40 @@ SAggFunctionInfo aAggs[] = {{
blockinfo_func_finalizer, blockinfo_func_finalizer,
block_func_merge, block_func_merge,
dataBlockRequired, dataBlockRequired,
},
{
// 34
"ceil",
TSDB_FUNC_CEIL,
TSDB_FUNC_CEIL,
TSDB_FUNCSTATE_MO | TSDB_FUNCSTATE_STABLE | TSDB_FUNCSTATE_NEED_TS | TSDB_FUNCSTATE_SCALAR,
function_setup,
ceil_function,
doFinalizer,
noop1,
dataBlockRequired
},
{
// 35
"floor",
TSDB_FUNC_FLOOR,
TSDB_FUNC_FLOOR,
TSDB_FUNCSTATE_MO | TSDB_FUNCSTATE_STABLE | TSDB_FUNCSTATE_NEED_TS | TSDB_FUNCSTATE_SCALAR,
function_setup,
floor_function,
doFinalizer,
noop1,
dataBlockRequired
},
{
// 36
"round",
TSDB_FUNC_ROUND,
TSDB_FUNC_ROUND,
TSDB_FUNCSTATE_MO | TSDB_FUNCSTATE_STABLE | TSDB_FUNCSTATE_NEED_TS | TSDB_FUNCSTATE_SCALAR,
function_setup,
round_function,
doFinalizer,
noop1,
dataBlockRequired
}}; }};
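With these three entries registered in aAggs (and isValidFunction extended above), ceil, floor, and round become callable from SQL. A hedged usage sketch through the C client; the connection parameters, the meters table, and its float column current are assumptions for illustration, not part of this change:

```c
#include <stdio.h>
#include <stdlib.h>
#include <taos.h>

int main(void) {
    /* Host, credentials, and database "test" are assumptions. */
    TAOS *taos = taos_connect("localhost", "root", "taosdata", "test", 0);
    if (taos == NULL) {
        fprintf(stderr, "failed to connect\n");
        return EXIT_FAILURE;
    }
    TAOS_RES *res = taos_query(taos,
        "select ceil(current), floor(current), round(current) from meters");
    if (taos_errno(res) != 0) {
        fprintf(stderr, "query failed: %s\n", taos_errstr(res));
    } else {
        TAOS_ROW row;
        while ((row = taos_fetch_row(res)) != NULL) {
            /* scalar functions return the input column's type (float here) */
            printf("%f %f %f\n", *(float *)row[0], *(float *)row[1],
                   *(float *)row[2]);
        }
    }
    taos_free_result(res);
    taos_close(taos);
    return EXIT_SUCCESS;
}
```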
...@@ -405,6 +405,25 @@ static bool isSelectivityWithTagsQuery(SQLFunctionCtx *pCtx, int32_t numOfOutput ...@@ -405,6 +405,25 @@ static bool isSelectivityWithTagsQuery(SQLFunctionCtx *pCtx, int32_t numOfOutput
return (numOfSelectivity > 0 && hasTags); return (numOfSelectivity > 0 && hasTags);
} }
static bool isScalarWithTagsQuery(SQLFunctionCtx *pCtx, int32_t numOfOutput) {
bool hasTags = false;
int32_t numOfScalar = 0;
for (int32_t i = 0; i < numOfOutput; ++i) {
int32_t functId = pCtx[i].functionId;
if (functId == TSDB_FUNC_TAG_DUMMY || functId == TSDB_FUNC_TS_DUMMY) {
hasTags = true;
continue;
}
if ((aAggs[functId].status & TSDB_FUNCSTATE_SCALAR) != 0) {
numOfScalar++;
}
}
return (numOfScalar > 0 && hasTags);
}
static bool isProjQuery(SQueryAttr *pQueryAttr) { static bool isProjQuery(SQueryAttr *pQueryAttr) {
for (int32_t i = 0; i < pQueryAttr->numOfOutput; ++i) { for (int32_t i = 0; i < pQueryAttr->numOfOutput; ++i) {
int32_t functId = pQueryAttr->pExpr1[i].base.functionId; int32_t functId = pQueryAttr->pExpr1[i].base.functionId;
...@@ -1939,7 +1958,7 @@ void setBlockStatisInfo(SQLFunctionCtx *pCtx, SSDataBlock* pSDataBlock, SColInde ...@@ -1939,7 +1958,7 @@ void setBlockStatisInfo(SQLFunctionCtx *pCtx, SSDataBlock* pSDataBlock, SColInde
// set the output buffer for the selectivity + tag query // set the output buffer for the selectivity + tag query
static int32_t setCtxTagColumnInfo(SQLFunctionCtx *pCtx, int32_t numOfOutput) { static int32_t setCtxTagColumnInfo(SQLFunctionCtx *pCtx, int32_t numOfOutput) {
if (!isSelectivityWithTagsQuery(pCtx, numOfOutput)) { if (!isSelectivityWithTagsQuery(pCtx, numOfOutput) && !isScalarWithTagsQuery(pCtx, numOfOutput)) {
return TSDB_CODE_SUCCESS; return TSDB_CODE_SUCCESS;
} }
...@@ -1958,7 +1977,7 @@ static int32_t setCtxTagColumnInfo(SQLFunctionCtx *pCtx, int32_t numOfOutput) { ...@@ -1958,7 +1977,7 @@ static int32_t setCtxTagColumnInfo(SQLFunctionCtx *pCtx, int32_t numOfOutput) {
if (functionId == TSDB_FUNC_TAG_DUMMY || functionId == TSDB_FUNC_TS_DUMMY) { if (functionId == TSDB_FUNC_TAG_DUMMY || functionId == TSDB_FUNC_TS_DUMMY) {
tagLen += pCtx[i].outputBytes; tagLen += pCtx[i].outputBytes;
pTagCtx[num++] = &pCtx[i]; pTagCtx[num++] = &pCtx[i];
} else if ((aAggs[functionId].status & TSDB_FUNCSTATE_SELECTIVITY) != 0) { } else if ((aAggs[functionId].status & TSDB_FUNCSTATE_SELECTIVITY) != 0 || (aAggs[functionId].status & TSDB_FUNCSTATE_SCALAR) != 0) {
p = &pCtx[i]; p = &pCtx[i];
} else if (functionId == TSDB_FUNC_TS || functionId == TSDB_FUNC_TAG) { } else if (functionId == TSDB_FUNC_TS || functionId == TSDB_FUNC_TAG) {
// tag function may be the group by tag column // tag function may be the group by tag column
......
...@@ -645,6 +645,12 @@ SArray* createExecOperatorPlan(SQueryAttr* pQueryAttr) { ...@@ -645,6 +645,12 @@ SArray* createExecOperatorPlan(SQueryAttr* pQueryAttr) {
} else { } else {
op = OP_Project; op = OP_Project;
taosArrayPush(plan, &op); taosArrayPush(plan, &op);
if (pQueryAttr->pExpr2 != NULL) {
op = OP_Project;
taosArrayPush(plan, &op);
}
if (pQueryAttr->distinct) { if (pQueryAttr->distinct) {
op = OP_Distinct; op = OP_Distinct;
taosArrayPush(plan, &op); taosArrayPush(plan, &op);
......
...@@ -18109,3 +18109,72 @@ ...@@ -18109,3 +18109,72 @@
fun:_PyEval_EvalCodeWithName fun:_PyEval_EvalCodeWithName
fun:_PyFunction_Vectorcall fun:_PyFunction_Vectorcall
} }
{
<insert_a_suppression_name_here>
Memcheck:Leak
match-leak-kinds: definite
fun:malloc
fun:lib_build_and_cache_attr
fun:lib_getattr
fun:PyObject_GetAttr
fun:_PyEval_EvalFrameDefault
fun:_PyFunction_Vectorcall
fun:_PyEval_EvalFrameDefault
fun:_PyEval_EvalCodeWithName
fun:PyEval_EvalCode
obj:/usr/bin/python3.8
obj:/usr/bin/python3.8
fun:PyVectorcall_Call
}
{
<insert_a_suppression_name_here>
Memcheck:Leak
match-leak-kinds: definite
fun:malloc
fun:lib_build_and_cache_attr
fun:lib_getattr
fun:PyObject_GetAttr
obj:/usr/bin/python3.8
obj:/usr/bin/python3.8
fun:_PyEval_EvalFrameDefault
fun:_PyFunction_Vectorcall
fun:_PyEval_EvalFrameDefault
obj:/usr/bin/python3.8
fun:_PyEval_EvalFrameDefault
obj:/usr/bin/python3.8
}
{
<insert_a_suppression_name_here>
Memcheck:Leak
match-leak-kinds: definite
fun:malloc
fun:_my_Py_InitModule
fun:b_init_cffi_1_0_external_module
obj:/usr/bin/python3.8
obj:/usr/bin/python3.8
fun:PyObject_CallMethod
fun:PyInit__constant_time
fun:_PyImport_LoadDynamicModuleWithSpec
obj:/usr/bin/python3.8
obj:/usr/bin/python3.8
fun:PyVectorcall_Call
fun:_PyEval_EvalFrameDefault
fun:_PyEval_EvalCodeWithName
}
{
<insert_a_suppression_name_here>
Memcheck:Leak
match-leak-kinds: definite
fun:malloc
fun:lib_build_cpython_func.isra.87
fun:lib_build_and_cache_attr
fun:lib_getattr
fun:PyObject_GetAttr
obj:/usr/bin/python3.8
obj:/usr/bin/python3.8
fun:_PyEval_EvalFrameDefault
fun:_PyFunction_Vectorcall
fun:_PyEval_EvalFrameDefault
obj:/usr/bin/python3.8
fun:_PyEval_EvalFrameDefault
}
\ No newline at end of file
...@@ -181,7 +181,7 @@ python3 test.py -f tools/taosdemoAllTest/NanoTestCase/taosdemoTestSupportNanoIns ...@@ -181,7 +181,7 @@ python3 test.py -f tools/taosdemoAllTest/NanoTestCase/taosdemoTestSupportNanoIns
python3 test.py -f tools/taosdemoAllTest/NanoTestCase/taosdemoTestSupportNanoQuery.py python3 test.py -f tools/taosdemoAllTest/NanoTestCase/taosdemoTestSupportNanoQuery.py
python3 test.py -f tools/taosdemoAllTest/NanoTestCase/taosdemoTestSupportNanosubscribe.py python3 test.py -f tools/taosdemoAllTest/NanoTestCase/taosdemoTestSupportNanosubscribe.py
python3 test.py -f tools/taosdemoAllTest/NanoTestCase/taosdemoTestInsertTime_step.py python3 test.py -f tools/taosdemoAllTest/NanoTestCase/taosdemoTestInsertTime_step.py
python3 test.py -f tools/taosdemoAllTest/NanoTestCase/taosdumpTestNanoSupport.py python3 test.py -f tools/taosdumpTestNanoSupport.py
# #
python3 ./test.py -f tsdb/tsdbComp.py python3 ./test.py -f tsdb/tsdbComp.py
...@@ -359,6 +359,9 @@ python3 ./test.py -f functions/queryTestCases.py ...@@ -359,6 +359,9 @@ python3 ./test.py -f functions/queryTestCases.py
python3 ./test.py -f functions/function_stateWindow.py python3 ./test.py -f functions/function_stateWindow.py
python3 ./test.py -f functions/function_derivative.py python3 ./test.py -f functions/function_derivative.py
python3 ./test.py -f functions/function_irate.py python3 ./test.py -f functions/function_irate.py
python3 ./test.py -f functions/function_ceil.py
python3 ./test.py -f functions/function_floor.py
python3 ./test.py -f functions/function_round.py
python3 ./test.py -f insert/unsignedInt.py python3 ./test.py -f insert/unsignedInt.py
python3 ./test.py -f insert/unsignedBigint.py python3 ./test.py -f insert/unsignedBigint.py
......
This diff has been collapsed.
This diff has been collapsed.
This diff has been collapsed.
...@@ -55,7 +55,7 @@ class TDTestCase: ...@@ -55,7 +55,7 @@ class TDTestCase:
tdSql.execute('''drop database if exists test_updata_0 ;''') tdSql.execute('''drop database if exists test_updata_0 ;''')
# update 0: no update; update 1: overwrite update; update 2: merge update # update 0: no update; update 1: overwrite update; update 2: merge update
tdLog.info("========== test database update = 0 ==========") tdLog.info("========== test database update = 0 ==========")
tdSql.execute('''create database test_updata_0 update 0 minrows 10 maxrows 200 ;''') tdSql.execute('''create database test_updata_0 update 0 minrows 10 maxrows 200 keep 36500;''')
tdSql.execute('''use test_updata_0;''') tdSql.execute('''use test_updata_0;''')
tdSql.execute('''create stable stable_1 tdSql.execute('''create stable stable_1
(ts timestamp , q_int int , q_bigint bigint , q_smallint smallint , q_tinyint tinyint, (ts timestamp , q_int int , q_bigint bigint , q_smallint smallint , q_tinyint tinyint,
......
...@@ -55,7 +55,7 @@ class TDTestCase: ...@@ -55,7 +55,7 @@ class TDTestCase:
tdSql.execute('''drop database if exists test_updata_1 ;''') tdSql.execute('''drop database if exists test_updata_1 ;''')
# update 0: no update; update 1: overwrite update; update 2: merge update # update 0: no update; update 1: overwrite update; update 2: merge update
tdLog.info("========== test database update = 1 ==========") tdLog.info("========== test database update = 1 ==========")
tdSql.execute('''create database test_updata_1 update 1 minrows 10 maxrows 200 ;''') tdSql.execute('''create database test_updata_1 update 1 minrows 10 maxrows 200 keep 36500;''')
tdSql.execute('''use test_updata_1;''') tdSql.execute('''use test_updata_1;''')
tdSql.execute('''create stable stable_1 tdSql.execute('''create stable stable_1
(ts timestamp , q_int int , q_bigint bigint , q_smallint smallint , q_tinyint tinyint, (ts timestamp , q_int int , q_bigint bigint , q_smallint smallint , q_tinyint tinyint,
......
...@@ -55,7 +55,7 @@ class TDTestCase: ...@@ -55,7 +55,7 @@ class TDTestCase:
tdSql.execute('''drop database if exists test_updata_2 ;''') tdSql.execute('''drop database if exists test_updata_2 ;''')
# update 0: no update; update 1: overwrite update; update 2: merge update # update 0: no update; update 1: overwrite update; update 2: merge update
tdLog.info("========== test database update = 2 ==========") tdLog.info("========== test database update = 2 ==========")
tdSql.execute('''create database test_updata_2 update 2 minrows 10 maxrows 200 ;''') tdSql.execute('''create database test_updata_2 update 2 minrows 10 maxrows 200 keep 36500;''')
tdSql.execute('''use test_updata_2;''') tdSql.execute('''use test_updata_2;''')
tdSql.execute('''create stable stable_1 tdSql.execute('''create stable stable_1
(ts timestamp , q_int int , q_bigint bigint , q_smallint smallint , q_tinyint tinyint, (ts timestamp , q_int int , q_bigint bigint , q_smallint smallint , q_tinyint tinyint,
......
...@@ -17,6 +17,7 @@ from util.log import tdLog ...@@ -17,6 +17,7 @@ from util.log import tdLog
from util.cases import tdCases from util.cases import tdCases
from util.sql import tdSql from util.sql import tdSql
import random import random
import time
class TDTestCase: class TDTestCase:
...@@ -24,7 +25,8 @@ class TDTestCase: ...@@ -24,7 +25,8 @@ class TDTestCase:
tdLog.debug("start to execute %s" % __file__) tdLog.debug("start to execute %s" % __file__)
tdSql.init(conn.cursor(), logSql) tdSql.init(conn.cursor(), logSql)
self.ts = 1600000000000 now = time.time()
self.ts = int(round(now * 1000))
self.num = 10 self.num = 10
def run(self): def run(self):
......
...@@ -25,7 +25,8 @@ class TDTestCase: ...@@ -25,7 +25,8 @@ class TDTestCase:
tdLog.debug("start to execute %s" % __file__) tdLog.debug("start to execute %s" % __file__)
tdSql.init(conn.cursor(), logSql) tdSql.init(conn.cursor(), logSql)
self.ts = 1600000000000 now = time.time()
self.ts = int(round(now * 1000))
self.num = 10 self.num = 10
def run(self): def run(self):
......
...@@ -22,7 +22,7 @@ ...@@ -22,7 +22,7 @@
"cache": 50, "cache": 50,
"blocks": 8, "blocks": 8,
"precision": "ms", "precision": "ms",
"keep": 365, "keep": 36500,
"minRows": 100, "minRows": 100,
"maxRows": 4096, "maxRows": 4096,
"comp":2, "comp":2,
......
@@ -13,6 +13,7 @@
import sys
import os
+import time
from util.log import *
from util.cases import *
from util.sql import *
@@ -24,6 +25,9 @@ class TDTestCase:
tdLog.debug("start to execute %s" % __file__)
tdSql.init(conn.cursor(), logSql)
+now = time.time()
+self.ts = int(round(now * 1000))
def getBuildPath(self):
selfPath = os.path.dirname(os.path.realpath(__file__))
@@ -50,6 +54,7 @@ class TDTestCase:
# insert: create one or multiple tables per sql and insert multiple rows per sql
# test case for https://jira.taosdata.com:18080/browse/TD-4985
+os.system("rm -rf tools/taosdemoAllTest/TD-4985/query-limit-offset.py.sql")
os.system("%staosdemo -f tools/taosdemoAllTest/TD-4985/query-limit-offset.json -y " % binPath)
tdSql.execute("use db")
tdSql.query("select count (tbname) from stb0")
@@ -57,25 +62,25 @@ class TDTestCase:
for i in range(1000):
tdSql.execute('''insert into stb00_9999 values(%d, %d, %d,'test99.%s')'''
-% (1600000000000 + i, i, -10000+i, i))
+% (self.ts + i, i, -10000+i, i))
tdSql.execute('''insert into stb00_8888 values(%d, %d, %d,'test98.%s')'''
-% (1600000000000 + i, i, -10000+i, i))
+% (self.ts + i, i, -10000+i, i))
tdSql.execute('''insert into stb00_7777 values(%d, %d, %d,'test97.%s')'''
-% (1600000000000 + i, i, -10000+i, i))
+% (self.ts + i, i, -10000+i, i))
tdSql.execute('''insert into stb00_6666 values(%d, %d, %d,'test96.%s')'''
-% (1600000000000 + i, i, -10000+i, i))
+% (self.ts + i, i, -10000+i, i))
tdSql.execute('''insert into stb00_5555 values(%d, %d, %d,'test95.%s')'''
-% (1600000000000 + i, i, -10000+i, i))
+% (self.ts + i, i, -10000+i, i))
tdSql.execute('''insert into stb00_4444 values(%d, %d, %d,'test94.%s')'''
-% (1600000000000 + i, i, -10000+i, i))
+% (self.ts + i, i, -10000+i, i))
tdSql.execute('''insert into stb00_3333 values(%d, %d, %d,'test93.%s')'''
-% (1600000000000 + i, i, -10000+i, i))
+% (self.ts + i, i, -10000+i, i))
tdSql.execute('''insert into stb00_2222 values(%d, %d, %d,'test92.%s')'''
-% (1600000000000 + i, i, -10000+i, i))
+% (self.ts + i, i, -10000+i, i))
tdSql.execute('''insert into stb00_1111 values(%d, %d, %d,'test91.%s')'''
-% (1600000000000 + i, i, -10000+i, i))
+% (self.ts + i, i, -10000+i, i))
tdSql.execute('''insert into stb00_100 values(%d, %d, %d,'test90.%s')'''
-% (1600000000000 + i, i, -10000+i, i))
+% (self.ts + i, i, -10000+i, i))
tdSql.query("select * from stb0 where c2 like 'test99%' ")
tdSql.checkRows(1000)
tdSql.query("select * from stb0 where tbname like 'stb00_9999' limit 10" )
@@ -176,7 +181,7 @@ class TDTestCase:
tdSql.checkData(0, 1, 5)
tdSql.checkData(1, 1, 6)
tdSql.checkData(2, 1, 7)
-os.system("rm -rf tools/taosdemoAllTest/TD-4985/query-limit-offset.py.sql")
def stop(self):
tdSql.close()
......
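Note that the `rm -rf` of the generated `query-limit-offset.py.sql` log moves from teardown to just before the taosdemo run, so leftovers from a previous crashed run cannot skew this one. A generic sketch of that setup-time cleanup pattern (the function name is invented):

```python
import os

SQL_LOG = "tools/taosdemoAllTest/TD-4985/query-limit-offset.py.sql"

def clean_before_run():
    # Remove stale output from any earlier, possibly crashed run
    # before generating new data, instead of cleaning up afterwards.
    os.system("rm -rf %s" % SQL_LOG)
```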
@@ -26,7 +26,10 @@ class TDTestCase:
tdLog.debug("start to execute %s" % __file__)
tdSql.init(conn.cursor(), logSql)
-self.ts = 1538548685000
+os.system("rm -rf tools/taosdemoAllTest/TD-5213/insert4096columns_not_use_taosdemo.py.sql")
+now = time.time()
+self.ts = int(round(now * 1000))
self.num = 100
def get_random_string(self, length):
@@ -691,7 +694,7 @@ class TDTestCase:
tdSql.query("describe table_40")
tdSql.checkRows(4096)
-os.system("rm -rf tools/taosdemoAllTest/TD-5213/insert4096columns_not_use_taosdemo.py.sql")
def stop(self):
......
@@ -22,7 +22,7 @@
"cache": 50,
"blocks": 8,
"precision": "ms",
-"keep": 365,
+"keep": 36500,
"minRows": 100,
"maxRows": 4096,
"comp":2,
......
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.