Commit bbda67eb
Authored on Sep 14, 2021 by Shenglian Zhou

Merge branch 'develop' into feature/szhou/csum-sample-mavg

Parents: cfce6c17, 2ba315ff

Showing 56 changed files with 2,501 additions and 286 deletions (+2501 -286)
documentation20/cn/05.insert/docs.md  +81 -16
documentation20/cn/08.connector/docs.md  +19 -0
documentation20/cn/11.administrator/docs.md  +29 -0
documentation20/cn/12.taos-sql/docs.md  +29 -7
src/client/CMakeLists.txt  +8 -8
src/client/inc/tscUtil.h  +8 -2
src/client/inc/tsclient.h  +2 -13
src/client/src/tscAsync.c  +9 -1
src/client/src/tscParseLineProtocol.c  +5 -4
src/client/src/tscParseOpenTSDB.c  +556 -7
src/client/src/tscPrepare.c  +0 -1
src/client/src/tscSQLParser.c  +3 -6
src/client/src/tscServer.c  +30 -14
src/client/src/tscSubquery.c  +15 -16
src/client/src/tscUtil.c  +45 -46
src/connector/python/taos/cinterface.py  +7 -1
src/connector/python/taos/connection.py  +19 -0
src/connector/python/taos/error.py  +11 -1
src/inc/taos.h  +2 -0
src/inc/taoserror.h  +3 -0
src/inc/taosmsg.h  +1 -12
src/plugins/http/inc/httpUtil.h  +1 -0
src/plugins/http/src/httpHandle.c  +1 -0
src/plugins/http/src/httpParser.c  +0 -4
src/plugins/http/src/httpResp.c  +2 -0
src/plugins/http/src/httpRestJson.c  +19 -4
src/plugins/http/src/httpServer.c  +5 -3
src/plugins/http/src/httpSql.c  +0 -1
src/plugins/http/src/httpUtil.c  +12 -0
src/query/inc/qExecutor.h  +9 -1
src/query/src/qExecutor.c  +10 -7
src/query/src/qUtil.c  +77 -13
src/tsdb/inc/tsdbMeta.h  +1 -1
src/tsdb/src/tsdbRead.c  +1 -5
src/util/inc/tlosertree.h  +2 -3
src/util/src/hash.c  +5 -3
src/util/src/tarray.c  +2 -1
src/util/src/terror.c  +3 -0
src/util/src/tlosertree.c  +3 -2
tests/examples/JDBC/JDBCDemo/pom.xml  +1 -1
tests/examples/JDBC/JDBCDemo/src/main/java/com/taosdata/example/JdbcDemo.java  +7 -11
tests/examples/JDBC/JDBCDemo/src/main/java/com/taosdata/example/JdbcRestfulDemo.java  +11 -12
tests/examples/JDBC/JDBCDemo/src/main/java/com/taosdata/example/SubscribeDemo.java  +1 -2
tests/examples/JDBC/springbootdemo/pom.xml  +6 -3
tests/examples/JDBC/springbootdemo/src/main/java/com/taosdata/example/springbootdemo/SpringbootdemoApplication.java  +1 -1
tests/examples/JDBC/springbootdemo/src/main/java/com/taosdata/example/springbootdemo/controller/WeatherController.java  +5 -19
tests/examples/JDBC/springbootdemo/src/main/java/com/taosdata/example/springbootdemo/dao/WeatherMapper.java  +2 -0
tests/examples/JDBC/springbootdemo/src/main/java/com/taosdata/example/springbootdemo/dao/WeatherMapper.xml  +43 -11
tests/examples/JDBC/springbootdemo/src/main/java/com/taosdata/example/springbootdemo/domain/Weather.java  +9 -0
tests/examples/JDBC/springbootdemo/src/main/java/com/taosdata/example/springbootdemo/service/WeatherService.java  +18 -0
tests/examples/JDBC/springbootdemo/src/main/java/com/taosdata/example/springbootdemo/util/TaosAspect.java  +36 -0
tests/examples/JDBC/springbootdemo/src/main/resources/application.properties  +4 -6
tests/examples/c/apitest.c  +751 -14
tests/examples/c/makefile  +2 -2
tests/pytest/fulltest.sh  +1 -1
tests/pytest/insert/insertJSONPayload.py  +568 -0
documentation20/cn/05.insert/docs.md

TDengine supports writing data through several interfaces, including SQL, Prometheus, Telegraf, EMQ MQTT Broker, HiveMQ Broker, and CSV files, with Kafka, OPC, and other interfaces planned. Data can be inserted one record at a time or in batches, for a single data collection point or for many collection points at once. Multi-threaded insertion, out-of-order timestamps, and historical data are all supported.

## <a class="anchor" id="sql"></a>SQL writing

Applications insert data by executing SQL insert statements through the C/C++, JDBC, GO, C#, or Python connector; users can also enter SQL insert statements manually in the TAOS Shell. For example, the following insert writes one record into table d1001:

```mysql
INSERT INTO d1001 VALUES (1538548685000, 10.3, 219, 0.31) (1538548695000, 12.6, ...
```
...
- For a given table, if a newly inserted record carries a timestamp that already exists, the new record is discarded by default (UPDATE=0): timestamps must be unique within a table. If the application generates records automatically, duplicate timestamps are likely, so the number of records successfully inserted may be smaller than the number submitted. If the database was created with the UPDATE 1 option, a new record with a duplicate timestamp overwrites the existing record instead.
- The timestamp of written data must be later than the current time minus the keep parameter. If keep is set to 3650 days, data older than 3650 days cannot be written. The timestamp also cannot be later than the current time plus the days parameter; if days is 2, data more than 2 days in the future cannot be written.

## <a class="anchor" id="schemaless"></a>Schemaless writing

IoT applications usually collect many data items for intelligent control, business analytics, and device monitoring. Because of application upgrades or hardware changes on the devices themselves, the set of collected items can change fairly often. To make data collection convenient in such cases, TDengine provides Schemaless writing starting from version 2.2.0.0: there is no need to create super tables or data sub-tables in advance; the storage structures matching the data are created automatically as the data is written, and Schemaless adds any columns that become necessary so that the written data is stored correctly. At present the TDengine C/C++ connector provides the Schemaless interfaces; see the [Schemaless writing API](https://www.taosdata.com/cn/documentation/connector#schemaless) section for details. This section describes the data format that Schemaless accepts.

### Schemaless data-row protocol

Schemaless uses a single string to express one data row (multiple strings can be passed to the Schemaless write API in one call to write a batch of rows). The format is:

```json
measurement,tag_set field_set timestamp
```

where:

* measurement is used as the data table name. It is separated from tag_set by a comma.
* tag_set carries the tag data in the form `<tag_key>=<tag_value>,<tag_key>=<tag_value>`, i.e. multiple tags are separated by commas. It is separated from field_set by a space.
* field_set carries the regular column data in the form `<field_key>=<field_value>,<field_key>=<field_value>`, again with commas between columns. It is separated from timestamp by a space.
* timestamp is the primary-key timestamp of the row.

In the Schemaless row protocol, every item in tag_set and field_set must describe its own data type:

* A value in double quotes is of type BINARY(32), for example `"abc"`.
* A value in double quotes with an L prefix is of type NCHAR(32), for example `L"报错信息"`.
* Spaces, equal signs (=), commas (,), and double quotes (") must be escaped with a backslash (\). (All of these refer to half-width ASCII characters.)
* Numeric values are typed by their suffix:
  - no suffix: FLOAT;
  - suffix f32: FLOAT;
  - suffix f64: DOUBLE;
  - suffix i8: TINYINT (INT8);
  - suffix i16: SMALLINT (INT16);
  - suffix i32: INT (INT32);
  - suffix i64: BIGINT (INT64);
  - suffix b: BOOL.
* t, T, true, True, TRUE, f, F, false, False are treated directly as BOOL values.

The timestamp declares its precision with a suffix:

* a long integer without any suffix is treated as microseconds;
* suffix s: seconds;
* suffix ms: milliseconds;
* suffix us: microseconds;
* suffix ns: nanoseconds;
* a timestamp of 0 means the client's current time (so within one submitted batch, timestamp 0 is interpreted as the same instant, which can lead to duplicate timestamps).

For example, the following Schemaless row writes, into the data sub-table of super table st whose tag t1 is 3 (BIGINT), tag t2 is 4 (DOUBLE), and tag t3 is "t3" (BINARY), one row whose column c1 is 3 (BIGINT), c2 is false (BOOL), c3 is "passit" (NCHAR), and c4 is 4 (DOUBLE), with primary-key timestamp 1626006833639000000 (nanosecond precision).

```json
st,t1=3i64,t2=4f64,t3="t3" c1=3i64,c3=L"passit",c2=false,c4=4f64 1626006833639000000ns
```

### How Schemaless processes rows

Schemaless processes row data according to the following rules (two illustrative rows follow this section):

1. If tag_set contains an ID field, its value is used as the name of the data sub-table.
2. Without an ID field, the md5 hash of `measurement + tag_value1 + tag_value2 + ...` is used as the sub-table name.
3. If the specified super table does not exist, Schemaless creates it.
4. If the specified data sub-table does not exist, Schemaless creates it using the tag values.
5. If a tag column or regular column referenced by a row does not exist yet, Schemaless adds it to the super table (columns are only ever added, never removed).
6. If the super table contains tag columns or regular columns for which a row provides no value, those columns are set to NULL in that row.
7. For BINARY or NCHAR columns, if a value in a row exceeds the column's declared length, Schemaless raises the column's maximum length (again, only ever increased) so the data is stored in full.
8. If the data sub-table already exists and the tag values supplied in a row differ from the stored values, the values from the newest row overwrite the old tag values.
9. Any error encountered during processing aborts the write and returns an error code.

**Note:** All Schemaless processing still observes TDengine's underlying limits on data structures; for example, the total length of one row cannot exceed 16K bytes. See the [TAOS SQL boundary limits](https://www.taosdata.com/cn/documentation/taos-sql#limitation) section for these constraints.

String encoding and time-zone handling for Schemaless follow the settings of the TAOSC client.
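To illustrate rules 1 and 2 above with two hypothetical rows: the first row names its sub-table explicitly through an ID tag (the name `child_table_1` is purely illustrative), while the second row carries no ID tag, so its sub-table name is derived from the md5 hash of the measurement and the tag values.

```json
st,ID=child_table_1,t1=3i64 c1=3i64,c3=L"passit" 1626006833639000000ns
st,t1=3i64 c1=3i64,c3=L"passit" 1626006833640000000ns
```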
## <a class="anchor" id="prometheus"></a>Prometheus direct writing

[Prometheus](https://www.prometheus.io/), a graduated Cloud Native Computing Foundation project, is widely used for performance monitoring, including Kubernetes monitoring. TDengine provides a small tool, [Bailongma](https://github.com/taosdata/Bailongma), which, with only a simple configuration change in Prometheus and no code at all, writes the data collected by Prometheus directly into TDengine and automatically creates the database and the related tables according to fixed rules. The blog post [Quickly build a DevOps monitoring demo with Docker containers](https://www.taosdata.com/blog/2020/02/03/1189.html) shows how Bailongma is used to write Prometheus and Telegraf data into TDengine and can serve as a reference.

### Compiling blm_prometheus from source

Download the [Bailongma](https://github.com/taosdata/Bailongma) source code from GitHub and build the executable with the Golang compiler. Before building, prepare the following:

- A server running Linux
...
go build
...
If everything goes well, an executable named blm_prometheus is produced in the corresponding directory.

### Installing Prometheus

Download and install Prometheus from the official site; see the [download page](https://prometheus.io/download/).

### Configuring Prometheus

Following the Prometheus [configuration documentation](https://prometheus.io/docs/prometheus/latest/configuration/configuration/), add the following to the <remote_write> section of the Prometheus configuration file:
...
After starting Prometheus, you can confirm that data is being written successfully by querying it with the taos client.

### Starting the blm_prometheus program

The blm_prometheus program accepts the following options, which can be set at startup to configure it:

```bash
--tdengine-name
...
```
...
remote_write:
  - url: "http://10.1.2.3:8088/receive"

### Querying the data written by prometheus

The data produced by prometheus looks like this (excerpt):

```json
{
  ...
  instance="192.168.99.116:8443",
  job="kubernetes-apiservers",
  le="125000",
  resource="persistentvolumes",
  scope="cluster",
  verb="LIST",
  version="v1"
  }
}
```
...
use prometheus;
select * from apiserver_request_latencies_bucket;

## <a class="anchor" id="telegraf"></a>Telegraf direct writing

[Telegraf](https://www.influxdata.com/time-series-platform/telegraf/) is a popular open-source tool for collecting IT operations data. TDengine provides a small tool, [Bailongma](https://github.com/taosdata/Bailongma), which, with only a simple configuration change in Telegraf and no code at all, writes the data collected by Telegraf directly into TDengine and automatically creates the database and the related tables according to fixed rules. The blog post [Quickly build a DevOps monitoring demo with Docker containers](https://www.taosdata.com/blog/2020/02/03/1189.html) shows how Bailongma is used to write Prometheus and Telegraf data into TDengine and can serve as a reference.

### Compiling blm_telegraf from source

Download the [Bailongma](https://github.com/taosdata/Bailongma) source code from GitHub and build the executable with the Golang compiler. Before building, prepare the following:
...
go build
...
If everything goes well, an executable named blm_telegraf is produced in the corresponding directory.

### Installing Telegraf

TDengine currently supports Telegraf 1.7.4 and later. Download the installation package for your operating system from the Telegraf official site and install it: https://portal.influxdata.com/downloads .

### Configuring Telegraf

Edit the TDengine-related settings in the Telegraf configuration file /etc/telegraf/telegraf.conf.
...
For how to collect data with Telegraf and more information on using it, see the official Telegraf [documentation](https://docs.influxdata.com/telegraf/v1.11/).

### Starting the blm_telegraf program

The blm_telegraf program accepts the following options, which can be set at startup to configure it:

```bash
...
```
...
The port on which blm_telegraf serves telegraf:

  url = "http://10.1.2.3:8089/telegraf"

### Querying the data written by telegraf

The data produced by telegraf looks like this:

```json
...
```
...
documentation20/cn/08.connector/docs.md

...
typedef struct TAOS_MULTI_BIND {
...
(Added in version 2.1.3.0)
Used to obtain error information when another stmt API returns an error (an error code or a null pointer).

<a class="anchor" id="schemaless"></a>
### Schemaless writing API

Besides writing data with SQL or with the parameter-binding API, data can also be written in Schemaless fashion. Schemaless removes the need to create the structure of super tables and data sub-tables in advance; data can be written directly, and TDengine creates and maintains the required table structures automatically based on the written content. See the [Schemaless writing](https://www.taosdata.com/cn/documentation/insert#schemaless) section for how to use it; this section describes the accompanying C/C++ API.

- `int taos_insert_lines(TAOS* taos, char* lines[], int numLines)`
  (Added in version 2.2.0.0)
  Writes multiple rows in Schemaless format, where:
  * taos: the database connection returned by taos_connect.
  * lines: an array of char-string pointers, pointing to the rows to be written in this call.
  * numLines: the total number of rows in lines.
  The return value is 0 on success and non-zero on error; see the [taoserror.h](https://github.com/taosdata/TDengine/blob/develop/src/inc/taoserror.h) file for the specific error codes.
  Notes:
  1. This is a synchronous, blocking API, used in the same situations as `taos_query()`.
  2. `taos_select_db()` must be called before this API to determine which DB the data is written into (see the sketch below).
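Putting the two notes above together, a minimal sketch of calling this API might look as follows. The connection parameters, database name, and the sample line are illustrative assumptions, and error handling is reduced to printing the return code.

```c
#include <stdio.h>
#include <taos.h>

int main(void) {
  // Placeholder connection parameters; adjust host/user/password as needed.
  TAOS *taos = taos_connect("localhost", "root", "taosdata", NULL, 0);
  if (taos == NULL) {
    printf("failed to connect to TDengine\n");
    return 1;
  }

  // taos_insert_lines writes into the currently selected DB,
  // so a database has to be selected first (note 2 above).
  TAOS_RES *res = taos_query(taos, "CREATE DATABASE IF NOT EXISTS demo");
  taos_free_result(res);
  taos_select_db(taos, "demo");

  // One schemaless row in the line protocol described in the insert chapter.
  char *lines[] = {
      "st,t1=3i64,t2=4f64,t3=\"t3\" c1=3i64,c3=L\"passit\",c2=false,c4=4f64 1626006833639000000ns"
  };

  int code = taos_insert_lines(taos, lines, 1);
  if (code != 0) {
    printf("taos_insert_lines failed, code: %d\n", code);
  }

  taos_close(taos);
  return 0;
}
```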
### Continuous query API

TDengine provides a time-driven, real-time stream computing API. At a specified interval it can run various real-time aggregations over one or more tables (data streams) in a database. The API is simple, consisting only of calls to open and close a stream, as follows:
...
documentation20/cn/11.administrator/docs.md

...
The COMPACT command starts defragmentation for one or more specified VGroups; the system will ...
Note that defragmentation consumes a large amount of disk I/O. While it is in progress it may therefore affect write and query performance on the node, and in extreme cases it can even block writes for a short time.

<a class="anchor" id="tsz_compress"></a>
## Lossy compression of floating-point data

In IoT scenarios such as connected vehicles, large volumes of floating-point data are collected and stored. Compressing such data more efficiently not only saves storage hardware but also improves system performance by reducing the amount of disk I/O.

Starting from version 2.1.6.0, TDengine provides a new data compression algorithm named TSZ which, whether configured as lossy or lossless, significantly improves the compression ratio of floating-point data. The feature is currently shipped as an optional module and is enabled through a dedicated build parameter (that is, it is not included in the regular installation packages).

**Note that once enabled the effect is global: it applies to all FLOAT and DOUBLE data in the system. In addition, data written while lossy floating-point compression is enabled cannot be loaded by a build without this feature, which may cause the database service to exit with an error.**

### Building a TDengine version that supports the TSZ compression algorithm

The TSZ module is kept in a separate repository, https://github.com/taosdata/TSZ. A TDengine build containing this module can be created as follows:

1. TDengine plugins can currently only be fetched and built over SSH, so first set up an environment that can pull GitHub code over SSH.
2. `git clone git@github.com:taosdata/TDengine -b your_branchname --recurse-submodules` — the `--recurse-submodules` option downloads the source code of the dependent module as well.
3. `mkdir debug && cd debug` — enter a separate build directory.
4. `cmake .. -DTSZ_ENABLED=true` — the `-DTSZ_ENABLED=true` parameter adds support for the TSZ plugin to the build. If the TSZ module is activated successfully, the CMake output contains the words `build with TSZ enabled`.
5. After a successful build, the TSZ floating-point compression plugin is compiled into TDengine and can be used by adjusting the configuration parameters in taos.cfg.

### Enabling the TSZ compression algorithm through the configuration file

To enable the TSZ compression algorithm, besides enabling the TSZ module at build time, the following parameters must be set in the taos.cfg configuration file (an illustrative snippet follows the note below):

* lossyColumns: the floating-point data types to compress lossily. The value is a string: empty — lossy compression disabled; float — lossy compression only for FLOAT; double — only for DOUBLE; float|double — for both FLOAT and DOUBLE. The default is empty, i.e. lossy compression disabled.
* fPrecision: the compression precision for FLOAT values; digits of the mantissa smaller than this value are truncated. The value is a FLOAT with a minimum of 0.0 and a maximum of 100,000.0; the default is 0.00000001 (1E-8).
* dPrecision: the compression precision for DOUBLE values; digits of the mantissa smaller than this value are truncated. The value is a DOUBLE with a minimum of 0.0 and a maximum of 100,000.0; the default is 0.0000000000000001 (1E-16).
* maxRange: the maximum fluctuation range of the data. It normally needs no adjustment; when the data has particular characteristics it can be combined with range to reach very high compression ratios. The default is 500.
* range: the approximate fluctuation range of the data. It normally needs no adjustment; when the data has particular characteristics it can be combined with maxRange to reach very high compression ratios. The default is 100.

**Note:** Any change to the parameter values in the cfg configuration file requires a restart of taosd to take effect. These are global options: once set, they apply to the FLOAT and DOUBLE fields of all tables in all databases.
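As an illustration only, a taos.cfg fragment that enables lossy compression for both floating-point types might look like the following; the precision values are simply the documented defaults, not tuning advice.

```
# enable TSZ lossy compression for FLOAT and DOUBLE columns
lossyColumns  float|double
fPrecision    0.00000001
dPrecision    0.0000000000000001
maxRange      500
range         100
```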
## <a class="anchor" id="directories"></a>File directory structure

After TDengine is installed, the following directories and files are created in the operating system by default:
...
documentation20/cn/12.taos-sql/docs.md

...
Query OK, 1 row(s) in set (0.001091s)

5. Starting from version 2.0.17.0, filtering supports the BETWEEN AND syntax; for example `WHERE col2 BETWEEN 1.5 AND 3.25` expresses the condition "1.5 ≤ col2 ≤ 3.25".
6. Starting from version 2.1.4.0, filtering supports the IN operator, for example `WHERE city IN ('Beijing', 'Shanghai')`. Notes: BOOL values may be written as `{true, false}` or `{0, 1}`, but not as integers other than 0 and 1; FLOAT and DOUBLE values are subject to floating-point precision, and a set member matches a row value only if they are equal within that precision; TIMESTAMP is supported for non-primary-key columns.<!-- REPLACE_OPEN_TO_ENTERPRISE__IN_OPERATOR_AND_UNSIGNED_INTEGER -->

<a class="anchor" id="join"></a>
### JOIN clause

Starting from version 2.2.0.0, TDengine fully supports natural joins within inner joins (INNER JOIN): natural joins between regular tables, between super tables, and between subqueries are all supported. The main difference between a natural join and a general inner join is that a natural join requires the joined fields in the different tables/super tables to have the same name; in other words, TDengine expresses join relationships only as equality between identically named data columns or tag columns.

In a JOIN between regular tables, only equality between the primary-key timestamps may be used. For example:

```sql
SELECT *
FROM temp_tb_1 t1, pressure_tb_1 t2
WHERE t1.ts = t2.ts
```

In a JOIN between super tables, besides equality of the primary-key timestamps, equality of tag columns that establish a one-to-one correspondence is also required. For example:

```sql
SELECT *
FROM temp_stable t1, temp_stable t2
WHERE t1.ts = t2.ts AND t1.deviceid = t2.deviceid AND t1.status=0;
```

Similarly, JOIN operations can be applied to the results of multiple subqueries.

Note that JOIN operations are subject to the following restrictions:

1. At most 10 tables/super tables can take part in the JOIN operations of a single statement.
2. FILL is not supported in query statements that contain JOIN.
3. Arithmetic on the aggregated results of the joined tables is not yet supported.
4. GROUP BY on only some of the joined tables is not supported.
5. The filter conditions on different tables in a JOIN query cannot be combined with OR.

<a class="anchor" id="nested"></a>
### Nested queries
...
SELECT ... FROM (SELECT ... FROM ...) ...;
...
* The outer query does not support GROUP BY.

<a class="anchor" id="union"></a>
### UNION ALL clause

```mysql
SELECT ...
```
...
SELECT AVG(current), MAX(current), LEASTSQUARES(current, start_val, step_val), P...

TAOS SQL supports GROUP BY on tags and on TBNAME, and also on a regular column, provided that only one column is used and that column has fewer than 100,000 distinct values.

- **Restrictions on JOIN operations**
- TAOS SQL supports joining the columns of two tables on the primary-key timestamp; arithmetic on the aggregated results of the two tables is not yet supported.
- The filter conditions on different tables in a JOIN query cannot be combined with OR.

**Applicability of IS NOT NULL and the not-empty expression**

IS NOT NULL is supported for columns of all types. The not-empty expression is <>"", and it applies only to columns of non-numeric types.
...
src/client/CMakeLists.txt

@@ -13,13 +13,13 @@ IF (TD_LINUX)
   # set the static lib name
   ADD_LIBRARY(taos_static STATIC ${SRC})
-  TARGET_LINK_LIBRARIES(taos_static common query trpc tutil pthread m rt ${VAR_TSZ})
+  TARGET_LINK_LIBRARIES(taos_static common query trpc tutil pthread m rt cJson ${VAR_TSZ})
   SET_TARGET_PROPERTIES(taos_static PROPERTIES OUTPUT_NAME "taos_static")
   SET_TARGET_PROPERTIES(taos_static PROPERTIES CLEAN_DIRECT_OUTPUT 1)
 
   # generate dynamic library (*.so)
   ADD_LIBRARY(taos SHARED ${SRC})
-  TARGET_LINK_LIBRARIES(taos common query trpc tutil pthread m rt)
+  TARGET_LINK_LIBRARIES(taos common query trpc tutil pthread m rt cJson)
   IF (TD_LINUX_64)
     TARGET_LINK_LIBRARIES(taos lua)
   ENDIF ()

@@ -39,13 +39,13 @@ ELSEIF (TD_DARWIN)
   # set the static lib name
   ADD_LIBRARY(taos_static STATIC ${SRC})
-  TARGET_LINK_LIBRARIES(taos_static common query trpc tutil pthread m lua)
+  TARGET_LINK_LIBRARIES(taos_static common query trpc tutil pthread m lua cJson)
   SET_TARGET_PROPERTIES(taos_static PROPERTIES OUTPUT_NAME "taos_static")
   SET_TARGET_PROPERTIES(taos_static PROPERTIES CLEAN_DIRECT_OUTPUT 1)
 
   # generate dynamic library (*.dylib)
   ADD_LIBRARY(taos SHARED ${SRC})
-  TARGET_LINK_LIBRARIES(taos common query trpc tutil pthread m lua)
+  TARGET_LINK_LIBRARIES(taos common query trpc tutil pthread m lua cJson)
   SET_TARGET_PROPERTIES(taos PROPERTIES CLEAN_DIRECT_OUTPUT 1)
 
   #set version of .dylib

@@ -63,26 +63,26 @@ ELSEIF (TD_WINDOWS)
   CONFIGURE_FILE("${TD_COMMUNITY_DIR}/src/client/src/taos.rc.in" "${TD_COMMUNITY_DIR}/src/client/src/taos.rc")
   ADD_LIBRARY(taos_static STATIC ${SRC})
-  TARGET_LINK_LIBRARIES(taos_static trpc tutil query)
+  TARGET_LINK_LIBRARIES(taos_static trpc tutil query cJson)
 
   # generate dynamic library (*.dll)
   ADD_LIBRARY(taos SHARED ${SRC} ${TD_COMMUNITY_DIR}/src/client/src/taos.rc)
   IF (NOT TD_GODLL)
     SET_TARGET_PROPERTIES(taos PROPERTIES LINK_FLAGS /DEF:${TD_COMMUNITY_DIR}/src/client/src/taos.def)
   ENDIF ()
-  TARGET_LINK_LIBRARIES(taos trpc tutil query lua)
+  TARGET_LINK_LIBRARIES(taos trpc tutil query lua cJson)
 
 ELSEIF (TD_DARWIN)
   SET(CMAKE_MACOSX_RPATH 1)
   INCLUDE_DIRECTORIES(${TD_COMMUNITY_DIR}/deps/jni/linux)
   ADD_LIBRARY(taos_static STATIC ${SRC})
-  TARGET_LINK_LIBRARIES(taos_static query trpc tutil pthread m lua)
+  TARGET_LINK_LIBRARIES(taos_static query trpc tutil pthread m lua cJson)
   SET_TARGET_PROPERTIES(taos_static PROPERTIES OUTPUT_NAME "taos_static")
 
   # generate dynamic library (*.dylib)
   ADD_LIBRARY(taos SHARED ${SRC})
-  TARGET_LINK_LIBRARIES(taos query trpc tutil pthread m lua)
+  TARGET_LINK_LIBRARIES(taos query trpc tutil pthread m lua cJson)
   SET_TARGET_PROPERTIES(taos PROPERTIES CLEAN_DIRECT_OUTPUT 1)
...
src/client/inc/tscUtil.h

@@ -92,7 +92,7 @@ typedef struct SMergeTsCtx {
 } SMergeTsCtx;
 
 typedef struct SVgroupTableInfo {
-  SVgroupInfo vgInfo;
+  SVgroupMsg  vgInfo;
   SArray     *itemList;   // SArray<STableIdInfo>
 } SVgroupTableInfo;

@@ -174,7 +174,9 @@ void tscClearInterpInfo(SQueryInfo* pQueryInfo);
 bool tscIsInsertData(char* sqlstr);
 
-int tscAllocPayload(SSqlCmd* pCmd, int size);
+// the memory is not reset in case of fast allocate payload function
+int32_t tscAllocPayloadFast(SSqlCmd *pCmd, size_t size);
+int32_t tscAllocPayload(SSqlCmd* pCmd, int size);
 
 TAOS_FIELD tscCreateField(int8_t type, const char* name, int16_t bytes);

@@ -288,7 +290,11 @@ void doExecuteQuery(SSqlObj* pSql, SQueryInfo* pQueryInfo);
 SVgroupsInfo* tscVgroupInfoClone(SVgroupsInfo *pInfo);
 void* tscVgroupInfoClear(SVgroupsInfo *pInfo);
 
+#if 0
 void tscSVgroupInfoCopy(SVgroupInfo* dst, const SVgroupInfo* src);
+#endif
 
 /**
  * The create object function must be successful expect for the out of memory issue.
  *
...
src/client/inc/tsclient.h

@@ -234,7 +234,6 @@ typedef struct STableDataBlocks {
 typedef struct {
   STableMeta   *pTableMeta;
   SArray       *vgroupIdList;
-  //SVgroupsInfo *pVgroupsInfo;
 } STableMetaVgroupInfo;
 
 typedef struct SInsertStatementParam {

@@ -286,20 +285,14 @@
   int32_t      resColumnId;
 } SSqlCmd;
 
-typedef struct SResRec {
-  int numOfRows;
-  int numOfTotal;
-} SResRec;
-
 typedef struct {
   int32_t  numOfRows;         // num of results in current retrieval
-  int64_t  numOfRowsGroup;    // num of results of current group
   int64_t  numOfTotal;        // num of total results
   int64_t  numOfClauseTotal;  // num of total result in current subclause
   char    *pRsp;
   int32_t  rspType;
   int32_t  rspLen;
-  uint64_t qId;
+  uint64_t qId;               // query id of SQInfo
   int64_t  useconds;
   int64_t  offset;            // offset value from vnode during projection query of stable
   int32_t  row;

@@ -307,8 +300,6 @@
   int16_t  precision;
   bool     completed;
   int32_t  code;
-  int32_t  numOfGroups;
-  SResRec *pGroupRec;
   char    *data;
   TAOS_ROW tsrow;
   TAOS_ROW urow;

@@ -317,7 +308,6 @@
   SColumnIndex       *pColumnIndex;
   TAOS_FIELD         *final;
-  SArithmeticSupport *pArithSup;   // support the arithmetic expression calculation on agg functions
   struct SGlobalMerger *pMerger;
 } SSqlRes;

@@ -377,7 +367,6 @@ typedef struct SSqlObj {
   tsem_t   rspSem;
   SSqlCmd  cmd;
   SSqlRes  res;
-  bool     isBind;
   SSubqueryState  subState;
   struct SSqlObj **pSubs;
...
src/client/src/tscAsync.c

@@ -60,17 +60,25 @@ void doAsyncQuery(STscObj* pObj, SSqlObj* pSql, __async_cb_func_t fp, void* para
   tscDebugL("0x%"PRIx64" SQL: %s", pSql->self, pSql->sqlstr);
   pCmd->resColumnId = TSDB_RES_COL_ID;
 
+  taosAcquireRef(tscObjRef, pSql->self);
   int32_t code = tsParseSql(pSql, true);
-  if (code == TSDB_CODE_TSC_ACTION_IN_PROGRESS) return;
+  if (code == TSDB_CODE_TSC_ACTION_IN_PROGRESS) {
+    taosReleaseRef(tscObjRef, pSql->self);
+    return;
+  }
 
   if (code != TSDB_CODE_SUCCESS) {
     pSql->res.code = code;
     tscAsyncResultOnError(pSql);
+    taosReleaseRef(tscObjRef, pSql->self);
     return;
   }
 
   SQueryInfo* pQueryInfo = tscGetQueryInfo(pCmd);
   executeQuery(pSql, pQueryInfo);
+  taosReleaseRef(tscObjRef, pSql->self);
 }
 
 // TODO return the correct error code to client in tscQueueAsyncError
...
src/client/src/tscParseLineProtocol.c

@@ -2128,11 +2128,12 @@ int32_t tscParseLines(char* lines[], int numLines, SArray* points, SArray* faile
 int taos_insert_lines(TAOS* taos, char* lines[], int numLines) {
   int32_t code = 0;
 
-  SSmlLinesInfo* info = calloc(1, sizeof(SSmlLinesInfo));
+  SSmlLinesInfo* info = tcalloc(1, sizeof(SSmlLinesInfo));
   info->id = genLinesSmlId();
 
   if (numLines <= 0 || numLines > 65536) {
     tscError("SML:0x%"PRIx64" taos_insert_lines numLines should be between 1 and 65536. numLines: %d", info->id, numLines);
+    tfree(info);
     code = TSDB_CODE_TSC_APP_ERROR;
     return code;
   }

@@ -2140,7 +2141,7 @@ int taos_insert_lines(TAOS* taos, char* lines[], int numLines) {
   for (int i = 0; i < numLines; ++i) {
     if (lines[i] == NULL) {
       tscError("SML:0x%"PRIx64" taos_insert_lines line %d is NULL", info->id, i);
-      free(info);
+      tfree(info);
       code = TSDB_CODE_TSC_APP_ERROR;
       return code;
     }

@@ -2149,7 +2150,7 @@ int taos_insert_lines(TAOS* taos, char* lines[], int numLines) {
   SArray* lpPoints = taosArrayInit(numLines, sizeof(TAOS_SML_DATA_POINT));
   if (lpPoints == NULL) {
     tscError("SML:0x%"PRIx64" taos_insert_lines failed to allocate memory", info->id);
-    free(info);
+    tfree(info);
     return TSDB_CODE_TSC_OUT_OF_MEMORY;
   }

@@ -2177,7 +2178,7 @@ cleanup:
   taosArrayDestroy(lpPoints);
-  free(info);
+  tfree(info);
   return code;
 }
src/client/src/tscParseOpenTSDB.c

@@ -3,6 +3,7 @@
 #include <stdlib.h>
 #include <string.h>
 
+#include "cJSON.h"
 #include "hash.h"
 #include "taos.h"

@@ -12,9 +13,12 @@
 #include "tscParseLine.h"
 
-#define MAX_TELNET_FILEDS_NUM 2
-#define OTS_TIMESTAMP_COLUMN_NAME "ts"
-#define OTS_METRIC_VALUE_COLUMN_NAME "value"
+#define OTD_MAX_FIELDS_NUM 2
+#define OTD_JSON_SUB_FIELDS_NUM 2
+#define OTD_JSON_FIELDS_NUM 4
+#define OTD_TIMESTAMP_COLUMN_NAME "ts"
+#define OTD_METRIC_VALUE_COLUMN_NAME "value"
 
 /* telnet style API parser */
 static uint64_t HandleId = 0;

@@ -77,12 +81,12 @@ static int32_t parseTelnetTimeStamp(TAOS_SML_KV **pTS, int *num_kvs, const char
   const char *start, *cur;
   int32_t ret = TSDB_CODE_SUCCESS;
   int len = 0;
-  char key[] = OTS_TIMESTAMP_COLUMN_NAME;
+  char key[] = OTD_TIMESTAMP_COLUMN_NAME;
   char *value = NULL;
 
   start = cur = *index;
   //allocate fields for timestamp and value
-  *pTS = tcalloc(MAX_TELNET_FILEDS_NUM, sizeof(TAOS_SML_KV));
+  *pTS = tcalloc(OTD_MAX_FIELDS_NUM, sizeof(TAOS_SML_KV));
 
   while(*cur != '\0') {
     if (*cur == ' ') {

@@ -123,7 +127,7 @@ static int32_t parseTelnetMetricValue(TAOS_SML_KV **pKVs, int *num_kvs, const ch
   const char *start, *cur;
   int32_t ret = TSDB_CODE_SUCCESS;
   int len = 0;
-  char key[] = OTS_METRIC_VALUE_COLUMN_NAME;
+  char key[] = OTD_METRIC_VALUE_COLUMN_NAME;
   char *value = NULL;
 
   start = cur = *index;

@@ -405,7 +409,7 @@ cleanup:
   tscDebug("OTD:0x%"PRIx64" taos_insert_telnet_lines finish inserting %d lines. code: %d", info->id, numLines, code);
   points = TARRAY_GET_START(lpPoints);
   numPoints = taosArrayGetSize(lpPoints);
   for (int i = 0; i < numPoints; ++i) {
     destroySmlDataPoint(points + i);
   }

@@ -422,3 +426,548 @@ int taos_telnet_insert(TAOS* taos, TAOS_SML_DATA_POINT* points, int numPoint) {
   tfree(info);
   return code;
 }

/* telnet style API parser */
int32_t parseMetricFromJSON(cJSON *root, TAOS_SML_DATA_POINT* pSml, SSmlLinesInfo* info) {
  cJSON *metric = cJSON_GetObjectItem(root, "metric");
  if (!cJSON_IsString(metric)) {
    return TSDB_CODE_TSC_INVALID_JSON;
  }

  size_t stableLen = strlen(metric->valuestring);
  if (stableLen > TSDB_TABLE_NAME_LEN) {
    tscError("OTD:0x%"PRIx64" Metric cannot exceeds 193 characters in JSON", info->id);
    return TSDB_CODE_TSC_INVALID_TABLE_ID_LENGTH;
  }

  pSml->stableName = tcalloc(stableLen + 1, sizeof(char));
  if (pSml->stableName == NULL){
    return TSDB_CODE_TSC_OUT_OF_MEMORY;
  }
  if (isdigit(metric->valuestring[0])) {
    tscError("OTD:0x%"PRIx64" Metric cannnot start with digit in JSON", info->id);
    tfree(pSml->stableName);
    return TSDB_CODE_TSC_INVALID_JSON;
  }

  tstrncpy(pSml->stableName, metric->valuestring, stableLen + 1);

  return TSDB_CODE_SUCCESS;
}

int32_t parseTimestampFromJSONObj(cJSON *root, int64_t *tsVal, SSmlLinesInfo* info) {
  int32_t size = cJSON_GetArraySize(root);
  if (size != OTD_JSON_SUB_FIELDS_NUM) {
    return TSDB_CODE_TSC_INVALID_JSON;
  }

  cJSON *value = cJSON_GetObjectItem(root, "value");
  if (!cJSON_IsNumber(value)) {
    return TSDB_CODE_TSC_INVALID_JSON;
  }

  cJSON *type = cJSON_GetObjectItem(root, "type");
  if (!cJSON_IsString(type)) {
    return TSDB_CODE_TSC_INVALID_JSON;
  }

  *tsVal = value->valueint;
  //if timestamp value is 0 use current system time
  if (*tsVal == 0) {
    *tsVal = taosGetTimestampNs();
    return TSDB_CODE_SUCCESS;
  }

  size_t typeLen = strlen(type->valuestring);
  if (typeLen == 1 && type->valuestring[0] == 's') {
    //seconds
    *tsVal = (int64_t)(*tsVal * 1e9);
  } else if (typeLen == 2 && type->valuestring[1] == 's') {
    switch (type->valuestring[0]) {
      case 'm':
        //milliseconds
        *tsVal = convertTimePrecision(*tsVal, TSDB_TIME_PRECISION_MILLI, TSDB_TIME_PRECISION_NANO);
        break;
      case 'u':
        //microseconds
        *tsVal = convertTimePrecision(*tsVal, TSDB_TIME_PRECISION_MICRO, TSDB_TIME_PRECISION_NANO);
        break;
      case 'n':
        //nanoseconds
        *tsVal = *tsVal * 1;
        break;
      default:
        return TSDB_CODE_TSC_INVALID_JSON;
    }
  }

  return TSDB_CODE_SUCCESS;
}

int32_t parseTimestampFromJSON(cJSON *root, TAOS_SML_KV **pTS, int *num_kvs, SSmlLinesInfo* info) {
  //Timestamp must be the first KV to parse
  assert(*num_kvs == 0);
  int64_t tsVal;
  char key[] = OTD_TIMESTAMP_COLUMN_NAME;

  cJSON *timestamp = cJSON_GetObjectItem(root, "timestamp");
  if (cJSON_IsNumber(timestamp)) {
    //timestamp value 0 indicates current system time
    if (timestamp->valueint == 0) {
      tsVal = taosGetTimestampNs();
    } else {
      tsVal = convertTimePrecision(timestamp->valueint, TSDB_TIME_PRECISION_MICRO, TSDB_TIME_PRECISION_NANO);
    }
  } else if (cJSON_IsObject(timestamp)) {
    int32_t ret = parseTimestampFromJSONObj(timestamp, &tsVal, info);
    if (ret != TSDB_CODE_SUCCESS) {
      tscError("OTD:0x%"PRIx64" Failed to parse timestamp from JSON Obj", info->id);
      return ret;
    }
  } else {
    return TSDB_CODE_TSC_INVALID_JSON;
  }

  //allocate fields for timestamp and value
  *pTS = tcalloc(OTD_MAX_FIELDS_NUM, sizeof(TAOS_SML_KV));
  (*pTS)->key = tcalloc(sizeof(key), 1);
  memcpy((*pTS)->key, key, sizeof(key));
  (*pTS)->type = TSDB_DATA_TYPE_TIMESTAMP;
  (*pTS)->length = (int16_t)tDataTypes[(*pTS)->type].bytes;
  (*pTS)->value = tcalloc((*pTS)->length, 1);
  memcpy((*pTS)->value, &tsVal, (*pTS)->length);

  *num_kvs += 1;
  return TSDB_CODE_SUCCESS;
}

int32_t convertJSONBool(TAOS_SML_KV *pVal, char* typeStr, int64_t valueInt, SSmlLinesInfo* info) {
  if (strcasecmp(typeStr, "bool") != 0) {
    tscError("OTD:0x%"PRIx64" invalid type(%s) for JSON Bool", info->id, typeStr);
    return TSDB_CODE_TSC_INVALID_JSON_TYPE;
  }
  pVal->type = TSDB_DATA_TYPE_BOOL;
  pVal->length = (int16_t)tDataTypes[pVal->type].bytes;
  pVal->value = tcalloc(pVal->length, 1);
  *(bool *)(pVal->value) = valueInt ? true : false;

  return TSDB_CODE_SUCCESS;
}

int32_t convertJSONNumber(TAOS_SML_KV *pVal, char* typeStr, cJSON *value, SSmlLinesInfo* info) {
  //tinyint
  if (strcasecmp(typeStr, "i8") == 0 || strcasecmp(typeStr, "tinyint") == 0) {
    if (!IS_VALID_TINYINT(value->valueint)) {
      tscError("OTD:0x%"PRIx64" JSON value(%"PRId64") cannot fit in type(tinyint)", info->id, value->valueint);
      return TSDB_CODE_TSC_VALUE_OUT_OF_RANGE;
    }
    pVal->type = TSDB_DATA_TYPE_TINYINT;
    pVal->length = (int16_t)tDataTypes[pVal->type].bytes;
    pVal->value = tcalloc(pVal->length, 1);
    *(int8_t *)(pVal->value) = (int8_t)(value->valueint);
    return TSDB_CODE_SUCCESS;
  }
  //smallint
  if (strcasecmp(typeStr, "i16") == 0 || strcasecmp(typeStr, "smallint") == 0) {
    if (!IS_VALID_SMALLINT(value->valueint)) {
      tscError("OTD:0x%"PRIx64" JSON value(%"PRId64") cannot fit in type(smallint)", info->id, value->valueint);
      return TSDB_CODE_TSC_VALUE_OUT_OF_RANGE;
    }
    pVal->type = TSDB_DATA_TYPE_SMALLINT;
    pVal->length = (int16_t)tDataTypes[pVal->type].bytes;
    pVal->value = tcalloc(pVal->length, 1);
    *(int16_t *)(pVal->value) = (int16_t)(value->valueint);
    return TSDB_CODE_SUCCESS;
  }
  //int
  if (strcasecmp(typeStr, "i32") == 0 || strcasecmp(typeStr, "int") == 0) {
    if (!IS_VALID_INT(value->valueint)) {
      tscError("OTD:0x%"PRIx64" JSON value(%"PRId64") cannot fit in type(int)", info->id, value->valueint);
      return TSDB_CODE_TSC_VALUE_OUT_OF_RANGE;
    }
    pVal->type = TSDB_DATA_TYPE_INT;
    pVal->length = (int16_t)tDataTypes[pVal->type].bytes;
    pVal->value = tcalloc(pVal->length, 1);
    *(int32_t *)(pVal->value) = (int32_t)(value->valueint);
    return TSDB_CODE_SUCCESS;
  }
  //bigint
  if (strcasecmp(typeStr, "i64") == 0 || strcasecmp(typeStr, "bigint") == 0) {
    if (!IS_VALID_BIGINT(value->valueint)) {
      tscError("OTD:0x%"PRIx64" JSON value(%"PRId64") cannot fit in type(bigint)", info->id, value->valueint);
      return TSDB_CODE_TSC_VALUE_OUT_OF_RANGE;
    }
    pVal->type = TSDB_DATA_TYPE_BIGINT;
    pVal->length = (int16_t)tDataTypes[pVal->type].bytes;
    pVal->value = tcalloc(pVal->length, 1);
    *(int64_t *)(pVal->value) = (int64_t)(value->valueint);
    return TSDB_CODE_SUCCESS;
  }
  //float
  if (strcasecmp(typeStr, "f32") == 0 || strcasecmp(typeStr, "float") == 0) {
    if (!IS_VALID_FLOAT(value->valuedouble)) {
      tscError("OTD:0x%"PRIx64" JSON value(%f) cannot fit in type(float)", info->id, value->valuedouble);
      return TSDB_CODE_TSC_VALUE_OUT_OF_RANGE;
    }
    pVal->type = TSDB_DATA_TYPE_FLOAT;
    pVal->length = (int16_t)tDataTypes[pVal->type].bytes;
    pVal->value = tcalloc(pVal->length, 1);
    *(float *)(pVal->value) = (float)(value->valuedouble);
    return TSDB_CODE_SUCCESS;
  }
  //double
  if (strcasecmp(typeStr, "f64") == 0 || strcasecmp(typeStr, "double") == 0) {
    if (!IS_VALID_DOUBLE(value->valuedouble)) {
      tscError("OTD:0x%"PRIx64" JSON value(%f) cannot fit in type(double)", info->id, value->valuedouble);
      return TSDB_CODE_TSC_VALUE_OUT_OF_RANGE;
    }
    pVal->type = TSDB_DATA_TYPE_DOUBLE;
    pVal->length = (int16_t)tDataTypes[pVal->type].bytes;
    pVal->value = tcalloc(pVal->length, 1);
    *(double *)(pVal->value) = (double)(value->valuedouble);
    return TSDB_CODE_SUCCESS;
  }

  //if reach here means type is unsupported
  tscError("OTD:0x%"PRIx64" invalid type(%s) for JSON Number", info->id, typeStr);
  return TSDB_CODE_TSC_INVALID_JSON_TYPE;
}

int32_t convertJSONString(TAOS_SML_KV *pVal, char* typeStr, cJSON *value, SSmlLinesInfo* info) {
  if (strcasecmp(typeStr, "binary") == 0) {
    pVal->type = TSDB_DATA_TYPE_BINARY;
  } else if (strcasecmp(typeStr, "nchar") == 0) {
    pVal->type = TSDB_DATA_TYPE_NCHAR;
  } else {
    tscError("OTD:0x%"PRIx64" invalid type(%s) for JSON String", info->id, typeStr);
    return TSDB_CODE_TSC_INVALID_JSON_TYPE;
  }
  pVal->length = (int16_t)strlen(value->valuestring);
  pVal->value = tcalloc(pVal->length + 1, 1);
  memcpy(pVal->value, value->valuestring, pVal->length);
  return TSDB_CODE_SUCCESS;
}

int32_t parseValueFromJSONObj(cJSON *root, TAOS_SML_KV *pVal, SSmlLinesInfo* info) {
  int32_t ret = TSDB_CODE_SUCCESS;
  int32_t size = cJSON_GetArraySize(root);

  if (size != OTD_JSON_SUB_FIELDS_NUM) {
    return TSDB_CODE_TSC_INVALID_JSON;
  }

  cJSON *value = cJSON_GetObjectItem(root, "value");
  if (value == NULL) {
    return TSDB_CODE_TSC_INVALID_JSON;
  }

  cJSON *type = cJSON_GetObjectItem(root, "type");
  if (!cJSON_IsString(type)) {
    return TSDB_CODE_TSC_INVALID_JSON;
  }

  switch (value->type) {
    case cJSON_True:
    case cJSON_False: {
      ret = convertJSONBool(pVal, type->valuestring, value->valueint, info);
      if (ret != TSDB_CODE_SUCCESS) {
        return ret;
      }
      break;
    }
    case cJSON_Number: {
      ret = convertJSONNumber(pVal, type->valuestring, value, info);
      if (ret != TSDB_CODE_SUCCESS) {
        return ret;
      }
      break;
    }
    case cJSON_String: {
      ret = convertJSONString(pVal, type->valuestring, value, info);
      if (ret != TSDB_CODE_SUCCESS) {
        return ret;
      }
      break;
    }
    default:
      return TSDB_CODE_TSC_INVALID_JSON_TYPE;
  }

  return TSDB_CODE_SUCCESS;
}

int32_t parseValueFromJSON(cJSON *root, TAOS_SML_KV *pVal, SSmlLinesInfo* info) {
  int type = root->type;

  switch (type) {
    case cJSON_True:
    case cJSON_False: {
      pVal->type = TSDB_DATA_TYPE_BOOL;
      pVal->length = (int16_t)tDataTypes[pVal->type].bytes;
      pVal->value = tcalloc(pVal->length, 1);
      *(bool *)(pVal->value) = root->valueint ? true : false;
      break;
    }
    case cJSON_Number: {
      //convert default JSON Number type to float
      pVal->type = TSDB_DATA_TYPE_FLOAT;
      pVal->length = (int16_t)tDataTypes[pVal->type].bytes;
      pVal->value = tcalloc(pVal->length, 1);
      *(float *)(pVal->value) = (float)(root->valuedouble);
      break;
    }
    case cJSON_String: {
      //convert default JSON String type to nchar
      pVal->type = TSDB_DATA_TYPE_NCHAR;
      //pVal->length = wcslen((wchar_t *)root->valuestring) * TSDB_NCHAR_SIZE;
      pVal->length = (int16_t)strlen(root->valuestring);
      pVal->value = tcalloc(pVal->length + 1, 1);
      memcpy(pVal->value, root->valuestring, pVal->length);
      break;
    }
    case cJSON_Object: {
      int32_t ret = parseValueFromJSONObj(root, pVal, info);
      if (ret != TSDB_CODE_SUCCESS) {
        tscError("OTD:0x%"PRIx64" Failed to parse timestamp from JSON Obj", info->id);
        return ret;
      }
      break;
    }
    default:
      return TSDB_CODE_TSC_INVALID_JSON;
  }

  return TSDB_CODE_SUCCESS;
}

int32_t parseMetricValueFromJSON(cJSON *root, TAOS_SML_KV **pKVs, int *num_kvs, SSmlLinesInfo* info) {
  //skip timestamp
  TAOS_SML_KV *pVal = *pKVs + 1;
  char key[] = OTD_METRIC_VALUE_COLUMN_NAME;

  cJSON *metricVal = cJSON_GetObjectItem(root, "value");
  if (metricVal == NULL) {
    return TSDB_CODE_TSC_INVALID_JSON;
  }

  int32_t ret = parseValueFromJSON(metricVal, pVal, info);
  if (ret != TSDB_CODE_SUCCESS) {
    return ret;
  }

  pVal->key = tcalloc(sizeof(key), 1);
  memcpy(pVal->key, key, sizeof(key));

  *num_kvs += 1;
  return TSDB_CODE_SUCCESS;
}

int32_t parseTagsFromJSON(cJSON *root, TAOS_SML_KV **pKVs, int *num_kvs, char **childTableName, SSmlLinesInfo* info) {
  int32_t ret = TSDB_CODE_SUCCESS;

  cJSON *tags = cJSON_GetObjectItem(root, "tags");
  if (tags == NULL || tags->type != cJSON_Object) {
    return TSDB_CODE_TSC_INVALID_JSON;
  }

  //only pick up the first ID value as child table name
  cJSON *id = cJSON_GetObjectItem(tags, "ID");
  if (id != NULL) {
    size_t idLen = strlen(id->valuestring);
    ret = isValidChildTableName(id->valuestring, (int16_t)idLen);
    if (ret != TSDB_CODE_SUCCESS) {
      return ret;
    }
    *childTableName = tcalloc(idLen + 1, sizeof(char));
    memcpy(*childTableName, id->valuestring, idLen);
    //remove all ID fields from tags list no case sensitive
    while (id != NULL) {
      cJSON_DeleteItemFromObject(tags, "ID");
      id = cJSON_GetObjectItem(tags, "ID");
    }
  }

  int32_t tagNum = cJSON_GetArraySize(tags);
  //at least one tag pair required
  if (tagNum <= 0) {
    return TSDB_CODE_TSC_INVALID_JSON;
  }

  //allocate memory for tags
  *pKVs = tcalloc(tagNum, sizeof(TAOS_SML_KV));
  TAOS_SML_KV *pkv = *pKVs;

  for (int32_t i = 0; i < tagNum; ++i) {
    cJSON *tag = cJSON_GetArrayItem(tags, i);
    if (tag == NULL) {
      return TSDB_CODE_TSC_INVALID_JSON;
    }
    //key
    size_t keyLen = strlen(tag->string);
    pkv->key = tcalloc(keyLen + 1, sizeof(char));
    strncpy(pkv->key, tag->string, keyLen);
    //value
    ret = parseValueFromJSON(tag, pkv, info);
    if (ret != TSDB_CODE_SUCCESS) {
      return ret;
    }
    *num_kvs += 1;
    pkv++;
  }

  return ret;
}

int32_t tscParseJSONPayload(cJSON *root, TAOS_SML_DATA_POINT* pSml, SSmlLinesInfo* info) {
  int32_t ret = TSDB_CODE_SUCCESS;

  if (!cJSON_IsObject(root)) {
    tscError("OTD:0x%"PRIx64" data point needs to be JSON object", info->id);
    return TSDB_CODE_TSC_INVALID_JSON;
  }

  int32_t size = cJSON_GetArraySize(root);
  //outmost json fields has to be exactly 4
  if (size != OTD_JSON_FIELDS_NUM) {
    tscError("OTD:0x%"PRIx64" Invalid number of JSON fields in data point %d", info->id, size);
    return TSDB_CODE_TSC_INVALID_JSON;
  }

  //Parse metric
  ret = parseMetricFromJSON(root, pSml, info);
  if (ret != TSDB_CODE_SUCCESS) {
    tscError("OTD:0x%"PRIx64" Unable to parse metric from JSON payload", info->id);
    return ret;
  }
  tscDebug("OTD:0x%"PRIx64" Parse metric from JSON payload finished", info->id);

  //Parse timestamp
  ret = parseTimestampFromJSON(root, &pSml->fields, &pSml->fieldNum, info);
  if (ret) {
    tscError("OTD:0x%"PRIx64" Unable to parse timestamp from JSON payload", info->id);
    return ret;
  }
  tscDebug("OTD:0x%"PRIx64" Parse timestamp from JSON payload finished", info->id);

  //Parse metric value
  ret = parseMetricValueFromJSON(root, &pSml->fields, &pSml->fieldNum, info);
  if (ret) {
    tscError("OTD:0x%"PRIx64" Unable to parse metric value from JSON payload", info->id);
    return ret;
  }
  tscDebug("OTD:0x%"PRIx64" Parse metric value from JSON payload finished", info->id);

  //Parse tags
  ret = parseTagsFromJSON(root, &pSml->tags, &pSml->tagNum, &pSml->childTableName, info);
  if (ret) {
    tscError("OTD:0x%"PRIx64" Unable to parse tags from JSON payload", info->id);
    return ret;
  }
  tscDebug("OTD:0x%"PRIx64" Parse tags from JSON payload finished", info->id);

  return TSDB_CODE_SUCCESS;
}

int32_t tscParseMultiJSONPayload(char* payload, SArray* points, SSmlLinesInfo* info) {
  int32_t payloadNum, ret;
  ret = TSDB_CODE_SUCCESS;

  if (payload == NULL) {
    tscError("OTD:0x%"PRIx64" empty JSON Payload", info->id);
    return TSDB_CODE_TSC_INVALID_JSON;
  }

  cJSON *root = cJSON_Parse(payload);
  //multiple data points must be sent in JSON array
  if (cJSON_IsObject(root)) {
    payloadNum = 1;
  } else if (cJSON_IsArray(root)) {
    payloadNum = cJSON_GetArraySize(root);
  } else {
    tscError("OTD:0x%"PRIx64" Invalid JSON Payload", info->id);
    ret = TSDB_CODE_TSC_INVALID_JSON;
    goto PARSE_JSON_OVER;
  }

  for (int32_t i = 0; i < payloadNum; ++i) {
    TAOS_SML_DATA_POINT point = {0};
    cJSON *dataPoint = (payloadNum == 1) ? root : cJSON_GetArrayItem(root, i);

    ret = tscParseJSONPayload(dataPoint, &point, info);
    if (ret != TSDB_CODE_SUCCESS) {
      tscError("OTD:0x%"PRIx64" JSON data point parse failed", info->id);
      destroySmlDataPoint(&point);
      goto PARSE_JSON_OVER;
    } else {
      tscDebug("OTD:0x%"PRIx64" JSON data point parse success", info->id);
    }
    taosArrayPush(points, &point);
  }

PARSE_JSON_OVER:
  cJSON_Delete(root);
  return ret;
}

int taos_insert_json_payload(TAOS* taos, char* payload) {
  int32_t code = 0;

  SSmlLinesInfo* info = tcalloc(1, sizeof(SSmlLinesInfo));
  info->id = genUID();

  if (payload == NULL) {
    tscError("OTD:0x%"PRIx64" taos_insert_json_payload payload is NULL", info->id);
    tfree(info);
    code = TSDB_CODE_TSC_APP_ERROR;
    return code;
  }

  SArray* lpPoints = taosArrayInit(1, sizeof(TAOS_SML_DATA_POINT));
  if (lpPoints == NULL) {
    tscError("OTD:0x%"PRIx64" taos_insert_json_payload failed to allocate memory", info->id);
    tfree(info);
    return TSDB_CODE_TSC_OUT_OF_MEMORY;
  }

  tscDebug("OTD:0x%"PRIx64" taos_insert_telnet_lines begin inserting %d points", info->id, 1);
  code = tscParseMultiJSONPayload(payload, lpPoints, info);

  size_t numPoints = taosArrayGetSize(lpPoints);

  if (code != 0) {
    goto cleanup;
  }

  TAOS_SML_DATA_POINT* points = TARRAY_GET_START(lpPoints);
  code = tscSmlInsert(taos, points, (int)numPoints, info);
  if (code != 0) {
    tscError("OTD:0x%"PRIx64" taos_insert_json_payload error: %s", info->id, tstrerror((code)));
  }

cleanup:
  tscDebug("OTD:0x%"PRIx64" taos_insert_json_payload finish inserting 1 Point. code: %d", info->id, code);
  points = TARRAY_GET_START(lpPoints);
  numPoints = taosArrayGetSize(lpPoints);
  for (int i = 0; i < numPoints; ++i) {
    destroySmlDataPoint(points + i);
  }

  taosArrayDestroy(lpPoints);
  tfree(info);
  return code;
}
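For orientation, a minimal sketch of how the new entry point added above might be called is shown below. The JSON payload follows the four-field object (metric, timestamp, value, tags) that tscParseJSONPayload expects, using the sub-object timestamp form handled by parseTimestampFromJSONObj; the connection parameters and database name are illustrative assumptions, and it is assumed the function is exported through taos.h alongside taos_insert_lines.

```c
#include <stdio.h>
#include <taos.h>

int main(void) {
  // Placeholder connection parameters.
  TAOS *taos = taos_connect("localhost", "root", "taosdata", NULL, 0);
  if (taos == NULL) return 1;

  // The JSON writer, like taos_insert_lines, writes into the currently selected DB.
  TAOS_RES *res = taos_query(taos, "CREATE DATABASE IF NOT EXISTS otd");
  taos_free_result(res);
  taos_select_db(taos, "otd");

  // One OpenTSDB-style data point: metric, timestamp, value and tags,
  // matching the OTD_JSON_FIELDS_NUM == 4 check in tscParseJSONPayload.
  char *payload =
      "{"
      "  \"metric\": \"sys_cpu_nice\","
      "  \"timestamp\": {\"value\": 1626006833, \"type\": \"s\"},"
      "  \"value\": 18,"
      "  \"tags\": {\"host\": \"web01\", \"dc\": \"lga\"}"
      "}";

  int code = taos_insert_json_payload(taos, payload);
  if (code != 0) {
    printf("taos_insert_json_payload failed, code: %d\n", code);
  }

  taos_close(taos);
  return 0;
}
```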
src/client/src/tscPrepare.c

@@ -1491,7 +1491,6 @@ TAOS_STMT* taos_stmt_init(TAOS* taos) {
   pSql->signature = pSql;
   pSql->pTscObj   = pObj;
   pSql->maxRetry  = TSDB_MAX_REPLICA;
-  pSql->isBind    = true;
 
   pStmt->pSql = pSql;
   pStmt->last = STMT_INIT;
...
src/client/src/tscSQLParser.c

@@ -8685,7 +8685,7 @@ static int32_t doLoadAllTableMeta(SSqlObj* pSql, SQueryInfo* pQueryInfo, SSqlNod
     if (p->vgroupIdList != NULL) {
       size_t s = taosArrayGetSize(p->vgroupIdList);
 
-      size_t vgroupsz = sizeof(SVgroupInfo) * s + sizeof(SVgroupsInfo);
+      size_t vgroupsz = sizeof(SVgroupMsg) * s + sizeof(SVgroupsInfo);
       pTableMetaInfo->vgroupList = calloc(1, vgroupsz);
       if (pTableMetaInfo->vgroupList == NULL) {
         return TSDB_CODE_TSC_OUT_OF_MEMORY;

@@ -8700,14 +8700,11 @@ static int32_t doLoadAllTableMeta(SSqlObj* pSql, SQueryInfo* pQueryInfo, SSqlNod
         taosHashGetClone(tscVgroupMap, id, sizeof(*id), NULL, &existVgroupInfo);
         assert(existVgroupInfo.inUse >= 0);
 
-        SVgroupInfo *pVgroup = &pTableMetaInfo->vgroupList->vgroups[j];
+        SVgroupMsg *pVgroup = &pTableMetaInfo->vgroupList->vgroups[j];
         pVgroup->numOfEps = existVgroupInfo.numOfEps;
         pVgroup->vgId = existVgroupInfo.vgId;
-        for (int32_t k = 0; k < existVgroupInfo.numOfEps; ++k) {
-          pVgroup->epAddr[k].port = existVgroupInfo.ep[k].port;
-          pVgroup->epAddr[k].fqdn = strndup(existVgroupInfo.ep[k].fqdn, TSDB_FQDN_LEN);
-        }
+        memcpy(&pVgroup->epAddr, &existVgroupInfo.ep, sizeof(pVgroup->epAddr));
       }
     }
   }
...
src/client/src/tscServer.c
@@ -73,7 +73,7 @@ static int32_t removeDupVgid(int32_t *src, int32_t sz) {
   return ret;
 }

-static void tscSetDnodeEpSet(SRpcEpSet* pEpSet, SVgroupInfo* pVgroupInfo) {
+static void tscSetDnodeEpSet(SRpcEpSet* pEpSet, SVgroupMsg* pVgroupInfo) {
   assert(pEpSet != NULL && pVgroupInfo != NULL && pVgroupInfo->numOfEps > 0);

   // Issue the query to one of the vnode among a vgroup randomly.
@@ -93,6 +93,7 @@ static void tscSetDnodeEpSet(SRpcEpSet* pEpSet, SVgroupInfo* pVgroupInfo) {
       existed = true;
     }
   }
   assert(existed);
 }
@@ -723,7 +724,7 @@ static char *doSerializeTableInfo(SQueryTableMsg *pQueryMsg, SSqlObj *pSql, STab
     int32_t index = pTableMetaInfo->vgroupIndex;
     assert(index >= 0);

-    SVgroupInfo* pVgroupInfo = NULL;
+    SVgroupMsg* pVgroupInfo = NULL;
     if (pTableMetaInfo->vgroupList && pTableMetaInfo->vgroupList->numOfVgroups > 0) {
       assert(index < pTableMetaInfo->vgroupList->numOfVgroups);
       pVgroupInfo = &pTableMetaInfo->vgroupList->vgroups[index];
@@ -861,8 +862,8 @@ static int32_t serializeSqlExpr(SSqlExpr* pExpr, STableMetaInfo* pTableMetaInfo,
   (*pMsg) += sizeof(SSqlExpr);
   for (int32_t j = 0; j < pExpr->numOfParams; ++j) {  // todo add log
-    pSqlExpr->param[j].nType = htons((uint16_t)pExpr->param[j].nType);
-    pSqlExpr->param[j].nLen = htons(pExpr->param[j].nLen);
+    pSqlExpr->param[j].nType = htonl(pExpr->param[j].nType);
+    pSqlExpr->param[j].nLen = htonl(pExpr->param[j].nLen);

     if (pExpr->param[j].nType == TSDB_DATA_TYPE_BINARY) {
       memcpy((*pMsg), pExpr->param[j].pz, pExpr->param[j].nLen);
@@ -880,17 +881,22 @@ static int32_t serializeSqlExpr(SSqlExpr* pExpr, STableMetaInfo* pTableMetaInfo,
 int tscBuildQueryMsg(SSqlObj *pSql, SSqlInfo *pInfo) {
   SSqlCmd* pCmd = &pSql->cmd;
+  SQueryInfo* pQueryInfo = NULL;
+  STableMeta* pTableMeta = NULL;
+  STableMetaInfo* pTableMetaInfo = NULL;

   int32_t code = TSDB_CODE_SUCCESS;
   int32_t size = tscEstimateQueryMsgSize(pSql);
+  assert(size > 0);

-  if (TSDB_CODE_SUCCESS != tscAllocPayload(pCmd, size)) {
+  if (TSDB_CODE_SUCCESS != tscAllocPayloadFast(pCmd, size)) {
     tscError("%p failed to malloc for query msg", pSql);
     return TSDB_CODE_TSC_INVALID_OPERATION;  // todo add test for this
   }

-  SQueryInfo *pQueryInfo = tscGetQueryInfo(pCmd);
-  STableMetaInfo *pTableMetaInfo = tscGetMetaInfo(pQueryInfo, 0);
-  STableMeta *pTableMeta = pTableMetaInfo->pTableMeta;
+  pQueryInfo = tscGetQueryInfo(pCmd);
+  pTableMetaInfo = tscGetMetaInfo(pQueryInfo, 0);
+  pTableMeta = pTableMetaInfo->pTableMeta;

   SQueryAttr query = {{0}};
   tscCreateQueryFromQueryInfo(pQueryInfo, &query, pSql);
@@ -941,7 +947,6 @@ int tscBuildQueryMsg(SSqlObj *pSql, SSqlInfo *pInfo) {
   pQueryMsg->pointInterpQuery = query.pointInterpQuery;
   pQueryMsg->needReverseScan  = query.needReverseScan;
   pQueryMsg->stateWindow      = query.stateWindow;
   pQueryMsg->numOfTags        = htonl(numOfTags);
   pQueryMsg->sqlstrLen        = htonl(sqlLen);
   pQueryMsg->sw.gap           = htobe64(query.sw.gap);
@@ -968,7 +973,7 @@ int tscBuildQueryMsg(SSqlObj *pSql, SSqlInfo *pInfo) {
     pQueryMsg->tableCols[i].type  = htons(pCol->type);
     //pQueryMsg->tableCols[i].flist.numOfFilters = htons(pCol->flist.numOfFilters);
     pQueryMsg->tableCols[i].flist.numOfFilters = 0;
+    pQueryMsg->tableCols[i].flist.filterInfo = 0;
     // append the filter information after the basic column information
     //serializeColFilterInfo(pCol->flist.filterInfo, pCol->flist.numOfFilters, &pMsg);
   }
@@ -981,6 +986,8 @@ int tscBuildQueryMsg(SSqlObj *pSql, SSqlInfo *pInfo) {
       pMsg += pCond->len;
     }
+  } else {
+    pQueryMsg->colCondLen = 0;
   }

   for (int32_t i = 0; i < query.numOfOutput; ++i) {
@@ -1060,6 +1067,8 @@ int tscBuildQueryMsg(SSqlObj *pSql, SSqlInfo *pInfo) {
       pMsg += pCond->len;
     }
+  } else {
+    pQueryMsg->tagCondLen = 0;
   }

   if (pQueryInfo->bufLen > 0) {
@@ -1089,6 +1098,9 @@ int tscBuildQueryMsg(SSqlObj *pSql, SSqlInfo *pInfo) {
     pQueryMsg->tsBuf.tsOrder = htonl(pQueryInfo->tsBuf->tsOrder);
     pQueryMsg->tsBuf.tsLen = htonl(pQueryMsg->tsBuf.tsLen);
     pQueryMsg->tsBuf.tsNumOfBlocks = htonl(pQueryMsg->tsBuf.tsNumOfBlocks);
+  } else {
+    pQueryMsg->tsBuf.tsLen = 0;
+    pQueryMsg->tsBuf.tsNumOfBlocks = 0;
   }

   int32_t numOfOperator = (int32_t)taosArrayGetSize(queryOperator);
@@ -1126,6 +1138,9 @@ int tscBuildQueryMsg(SSqlObj *pSql, SSqlInfo *pInfo) {
       pMsg += pUdfInfo->contLen;
     }
+  } else {
+    pQueryMsg->udfContentOffset = 0;
+    pQueryMsg->udfContentLen = 0;
   }

   memcpy(pMsg, pSql->sqlstr, sqlLen);
@@ -2146,7 +2161,7 @@ static SVgroupsInfo* createVgroupInfoFromMsg(char* pMsg, int32_t* size, uint64_t
   *size = (int32_t)(sizeof(SVgroupMsg) * pVgroupMsg->numOfVgroups + sizeof(SVgroupsMsg));

-  size_t vgroupsz = sizeof(SVgroupInfo) * pVgroupMsg->numOfVgroups + sizeof(SVgroupsInfo);
+  size_t vgroupsz = sizeof(SVgroupMsg) * pVgroupMsg->numOfVgroups + sizeof(SVgroupsInfo);
   SVgroupsInfo *pVgroupInfo = calloc(1, vgroupsz);
   assert(pVgroupInfo != NULL);
@@ -2156,7 +2171,7 @@ static SVgroupsInfo* createVgroupInfoFromMsg(char* pMsg, int32_t* size, uint64_t
   } else {
     for (int32_t j = 0; j < pVgroupInfo->numOfVgroups; ++j) {
       // just init, no need to lock
-      SVgroupInfo *pVgroup = &pVgroupInfo->vgroups[j];
+      SVgroupMsg *pVgroup = &pVgroupInfo->vgroups[j];

       SVgroupMsg *vmsg = &pVgroupMsg->vgroups[j];
       vmsg->vgId = htonl(vmsg->vgId);
@@ -2168,7 +2183,8 @@ static SVgroupsInfo* createVgroupInfoFromMsg(char* pMsg, int32_t* size, uint64_t
       pVgroup->vgId = vmsg->vgId;
       for (int32_t k = 0; k < vmsg->numOfEps; ++k) {
         pVgroup->epAddr[k].port = vmsg->epAddr[k].port;
-        pVgroup->epAddr[k].fqdn = strndup(vmsg->epAddr[k].fqdn, TSDB_FQDN_LEN);
+        tstrncpy(pVgroup->epAddr[k].fqdn, vmsg->epAddr[k].fqdn, TSDB_FQDN_LEN);
+//        pVgroup->epAddr[k].fqdn = strndup(vmsg->epAddr[k].fqdn, TSDB_FQDN_LEN);
       }

       doUpdateVgroupInfo(pVgroup->vgId, vmsg);
src/client/src/tscSubquery.c
@@ -623,13 +623,12 @@ static int32_t tscLaunchRealSubqueries(SSqlObj* pSql) {
       int16_t colId = tscGetJoinTagColIdByUid(&pQueryInfo->tagCond, pTableMetaInfo->pTableMeta->id.uid);

       // set the tag column id for executor to extract correct tag value
-      tVariant* pVariant = &pExpr->base.param[0];
-      pVariant->i64   = colId;
-      pVariant->nType = TSDB_DATA_TYPE_BIGINT;
-      pVariant->nLen  = sizeof(int64_t);
+#ifndef _TD_NINGSI_60
+      pExpr->base.param[0] = (tVariant){.i64 = colId, .nType = TSDB_DATA_TYPE_BIGINT, .nLen = sizeof(int64_t)};
+#else
+      pExpr->base.param[0].i64 = colId;
+      pExpr->base.param[0].nType = TSDB_DATA_TYPE_BIGINT;
+      pExpr->base.param[0].nLen = sizeof(int64_t);
+#endif
       pExpr->base.numOfParams = 1;
     }
@@ -748,10 +747,11 @@ void tscBuildVgroupTableInfo(SSqlObj* pSql, STableMetaInfo* pTableMetaInfo, SArr
       SVgroupTableInfo info = {{0}};
       for (int32_t m = 0; m < pvg->numOfVgroups; ++m) {
         if (tt->vgId == pvg->vgroups[m].vgId) {
-          tscSVgroupInfoCopy(&info.vgInfo, &pvg->vgroups[m]);
+          memcpy(&info.vgInfo, &pvg->vgroups[m], sizeof(info.vgInfo));
           break;
         }
       }
       assert(info.vgInfo.numOfEps != 0);

       vgTables = taosArrayInit(4, sizeof(STableIdInfo));
@@ -2463,7 +2463,7 @@ static void doConcurrentlySendSubQueries(SSqlObj* pSql) {
   SSubqueryState *pState = &pSql->subState;

   // concurrently sent the query requests.
-  const int32_t MAX_REQUEST_PER_TASK = 8;
+  const int32_t MAX_REQUEST_PER_TASK = 4;

   int32_t numOfTasks = (pState->numOfSub + MAX_REQUEST_PER_TASK - 1) / MAX_REQUEST_PER_TASK;
   assert(numOfTasks >= 1);
@@ -2550,13 +2550,14 @@ int32_t tscHandleMasterSTableQuery(SSqlObj *pSql) {
     trs->pExtMemBuffer = pMemoryBuf;
     trs->pOrderDescriptor = pDesc;

-    trs->localBuffer = (tFilePage *)calloc(1, nBufferSize + sizeof(tFilePage));
+    trs->localBuffer = (tFilePage *)malloc(nBufferSize + sizeof(tFilePage));
     if (trs->localBuffer == NULL) {
       tscError("0x%"PRIx64" failed to malloc buffer for local buffer, orderOfSub:%d, reason:%s", pSql->self, i, strerror(errno));
       tfree(trs);
       break;
     }

+    trs->localBuffer->num = 0;
     trs->subqueryIndex = i;
     trs->pParentSql = pSql;
@@ -2651,7 +2652,7 @@ static int32_t tscReissueSubquery(SRetrieveSupport *oriTrs, SSqlObj *pSql, int32
   int32_t subqueryIndex = trsupport->subqueryIndex;

   STableMetaInfo* pTableMetaInfo = tscGetTableMetaInfoFromCmd(&pSql->cmd, 0);
-  SVgroupInfo* pVgroup = &pTableMetaInfo->vgroupList->vgroups[0];
+  SVgroupMsg* pVgroup = &pTableMetaInfo->vgroupList->vgroups[0];

   tExtMemBufferClear(trsupport->pExtMemBuffer[subqueryIndex]);
@@ -2879,7 +2880,6 @@ static void tscAllDataRetrievedFromDnode(SRetrieveSupport *trsupport, SSqlObj* p
   pParentSql->res.precision = pSql->res.precision;
   pParentSql->res.numOfRows = 0;
   pParentSql->res.row = 0;
-  pParentSql->res.numOfGroups = 0;

   tscFreeRetrieveSup(pSql);
@@ -2930,7 +2930,7 @@ static void tscRetrieveFromDnodeCallBack(void *param, TAOS_RES *tres, int numOfR
   SSubqueryState* pState = &pParentSql->subState;

   STableMetaInfo *pTableMetaInfo = tscGetTableMetaInfoFromCmd(&pSql->cmd, 0);
-  SVgroupInfo *pVgroup = &pTableMetaInfo->vgroupList->vgroups[0];
+  SVgroupMsg *pVgroup = &pTableMetaInfo->vgroupList->vgroups[0];

   if (pParentSql->res.code != TSDB_CODE_SUCCESS) {
     trsupport->numOfRetry = MAX_NUM_OF_SUBQUERY_RETRY;
@@ -3058,7 +3058,7 @@ void tscRetrieveDataRes(void *param, TAOS_RES *tres, int code) {
   assert(pQueryInfo->numOfTables == 1);

   STableMetaInfo *pTableMetaInfo = tscGetTableMetaInfoFromCmd(&pSql->cmd, 0);
-  SVgroupInfo* pVgroup = &pTableMetaInfo->vgroupList->vgroups[trsupport->subqueryIndex];
+  SVgroupMsg* pVgroup = &pTableMetaInfo->vgroupList->vgroups[trsupport->subqueryIndex];

   // stable query killed or other subquery failed, all query stopped
   if (pParentSql->res.code != TSDB_CODE_SUCCESS) {
@@ -3404,7 +3404,6 @@ static void doBuildResFromSubqueries(SSqlObj* pSql) {
     return;
   }

-  // tscRestoreFuncForSTableQuery(pQueryInfo);
   int32_t rowSize = tscGetResRowLength(pQueryInfo->exprList);

   assert(numOfRes * rowSize > 0);
src/client/src/tscUtil.c
@@ -1347,14 +1347,7 @@ static void tscDestroyResPointerInfo(SSqlRes* pRes) {
     tfree(pRes->buffer);
     tfree(pRes->urow);

-    tfree(pRes->pGroupRec);
     tfree(pRes->pColumnIndex);
-
-    if (pRes->pArithSup != NULL) {
-      tfree(pRes->pArithSup->data);
-      tfree(pRes->pArithSup);
-    }
-
     tfree(pRes->final);

     pRes->data = NULL;  // pRes->data points to the buffer of pRsp, no need to free
@@ -2087,32 +2080,35 @@ bool tscIsInsertData(char* sqlstr) {
   } while (1);
 }

-int tscAllocPayload(SSqlCmd* pCmd, int size) {
+int32_t tscAllocPayloadFast(SSqlCmd *pCmd, size_t size) {
   if (pCmd->payload == NULL) {
     assert(pCmd->allocSize == 0);

-    pCmd->payload = (char*)calloc(1, size);
-    if (pCmd->payload == NULL) {
-      return TSDB_CODE_TSC_OUT_OF_MEMORY;
-    }
-    pCmd->allocSize = size;
-  } else {
-    if (pCmd->allocSize < (uint32_t)size) {
-      char* b = realloc(pCmd->payload, size);
-      if (b == NULL) {
-        return TSDB_CODE_TSC_OUT_OF_MEMORY;
-      }
-      pCmd->payload = b;
-      pCmd->allocSize = size;
-    }
-
-    memset(pCmd->payload, 0, pCmd->allocSize);
+    pCmd->payload = malloc(size);
+    pCmd->allocSize = (uint32_t)size;
+  } else if (pCmd->allocSize < size) {
+    char* tmp = realloc(pCmd->payload, size);
+    if (tmp == NULL) {
+      return TSDB_CODE_TSC_OUT_OF_MEMORY;
+    }
+
+    pCmd->payload = tmp;
+    pCmd->allocSize = (uint32_t)size;
   }

-  assert(pCmd->allocSize >= (uint32_t)size && size > 0);
+  assert(pCmd->allocSize >= size);
   return TSDB_CODE_SUCCESS;
 }

+int32_t tscAllocPayload(SSqlCmd* pCmd, int size) {
+  assert(size > 0);
+
+  int32_t code = tscAllocPayloadFast(pCmd, (size_t)size);
+  if (code == TSDB_CODE_SUCCESS) {
+    memset(pCmd->payload, 0, pCmd->allocSize);
+  }
+
+  return code;
+}
+
 TAOS_FIELD tscCreateField(int8_t type, const char* name, int16_t bytes) {
@@ -3369,11 +3365,11 @@ void tscFreeVgroupTableInfo(SArray* pVgroupTables) {
   size_t num = taosArrayGetSize(pVgroupTables);
   for (size_t i = 0; i < num; i++) {
     SVgroupTableInfo* pInfo = taosArrayGet(pVgroupTables, i);
-
+#if 0
     for(int32_t j = 0; j < pInfo->vgInfo.numOfEps; ++j) {
       tfree(pInfo->vgInfo.epAddr[j].fqdn);
     }
-
+#endif
     taosArrayDestroy(pInfo->itemList);
   }
@@ -3387,9 +3383,9 @@ void tscRemoveVgroupTableGroup(SArray* pVgroupTable, int32_t index) {
   assert(size > index);

   SVgroupTableInfo* pInfo = taosArrayGet(pVgroupTable, index);
-  for(int32_t j = 0; j < pInfo->vgInfo.numOfEps; ++j) {
-    tfree(pInfo->vgInfo.epAddr[j].fqdn);
-  }
+//  for(int32_t j = 0; j < pInfo->vgInfo.numOfEps; ++j) {
+//    tfree(pInfo->vgInfo.epAddr[j].fqdn);
+//  }

   taosArrayDestroy(pInfo->itemList);
   taosArrayRemove(pVgroupTable, index);
@@ -3399,9 +3395,12 @@ void tscVgroupTableCopy(SVgroupTableInfo* info, SVgroupTableInfo* pInfo) {
   memset(info, 0, sizeof(SVgroupTableInfo));

   info->vgInfo = pInfo->vgInfo;
+#if 0
   for(int32_t j = 0; j < pInfo->vgInfo.numOfEps; ++j) {
     info->vgInfo.epAddr[j].fqdn = strdup(pInfo->vgInfo.epAddr[j].fqdn);
   }
+#endif

   if (pInfo->itemList) {
     info->itemList = taosArrayDup(pInfo->itemList);
@@ -3464,13 +3463,9 @@ STableMetaInfo* tscAddTableMetaInfo(SQueryInfo* pQueryInfo, SName* name, STableM
   }

   pTableMetaInfo->pTableMeta = pTableMeta;
-  if (pTableMetaInfo->pTableMeta == NULL) {
-    pTableMetaInfo->tableMetaSize = 0;
-  } else {
-    pTableMetaInfo->tableMetaSize = tscGetTableMetaSize(pTableMeta);
-  }
+  pTableMetaInfo->tableMetaSize = (pTableMetaInfo->pTableMeta == NULL) ? 0 : tscGetTableMetaSize(pTableMeta);

   pTableMetaInfo->tableMetaCapacity = (size_t)(pTableMetaInfo->tableMetaSize);

   if (vgroupList != NULL) {
     pTableMetaInfo->vgroupList = tscVgroupInfoClone(vgroupList);
@@ -3718,8 +3713,8 @@ SSqlObj* createSubqueryObj(SSqlObj* pSql, int16_t tableIndex, __async_cb_func_t
       terrno = TSDB_CODE_TSC_OUT_OF_MEMORY;
       goto _error;
     }

     pNewQueryInfo->numOfFillVal = pQueryInfo->fieldsInfo.numOfOutput;
     memcpy(pNewQueryInfo->fillVal, pQueryInfo->fillVal, pQueryInfo->fieldsInfo.numOfOutput * sizeof(int64_t));
   }
@@ -3760,7 +3755,6 @@ SSqlObj* createSubqueryObj(SSqlObj* pSql, int16_t tableIndex, __async_cb_func_t
     pFinalInfo = tscAddTableMetaInfo(pNewQueryInfo, &pTableMetaInfo->name, pTableMeta, pTableMetaInfo->vgroupList,
                                      pTableMetaInfo->tagColList, pTableMetaInfo->pVgroupTables);
   } else {  // transfer the ownership of pTableMeta to the newly create sql object.
     STableMetaInfo* pPrevInfo = tscGetTableMetaInfoFromCmd(&pPrevSql->cmd, 0);
     if (pPrevInfo->pTableMeta && pPrevInfo->pTableMeta->tableType < 0) {
@@ -3770,8 +3764,8 @@ SSqlObj* createSubqueryObj(SSqlObj* pSql, int16_t tableIndex, __async_cb_func_t
     STableMeta*  pPrevTableMeta = tscTableMetaDup(pPrevInfo->pTableMeta);
     SVgroupsInfo* pVgroupsInfo = pPrevInfo->vgroupList;
     pFinalInfo = tscAddTableMetaInfo(pNewQueryInfo, &pTableMetaInfo->name, pPrevTableMeta, pVgroupsInfo,
                                      pTableMetaInfo->tagColList, pTableMetaInfo->pVgroupTables);
   }

   // this case cannot be happened
@@ -4415,8 +4409,8 @@ SVgroupsInfo* tscVgroupInfoClone(SVgroupsInfo *vgroupList) {
     return NULL;
   }

-  size_t size = sizeof(SVgroupsInfo) + sizeof(SVgroupInfo) * vgroupList->numOfVgroups;
-  SVgroupsInfo* pNew = calloc(1, size);
+  size_t size = sizeof(SVgroupsInfo) + sizeof(SVgroupMsg) * vgroupList->numOfVgroups;
+  SVgroupsInfo* pNew = malloc(size);
   if (pNew == NULL) {
     return NULL;
   }
@@ -4424,15 +4418,15 @@ SVgroupsInfo* tscVgroupInfoClone(SVgroupsInfo *vgroupList) {
   pNew->numOfVgroups = vgroupList->numOfVgroups;

   for(int32_t i = 0; i < vgroupList->numOfVgroups; ++i) {
-    SVgroupInfo* pNewVInfo = &pNew->vgroups[i];
+    SVgroupMsg* pNewVInfo = &pNew->vgroups[i];

-    SVgroupInfo* pvInfo = &vgroupList->vgroups[i];
+    SVgroupMsg* pvInfo = &vgroupList->vgroups[i];
     pNewVInfo->vgId = pvInfo->vgId;
     pNewVInfo->numOfEps = pvInfo->numOfEps;

     for(int32_t j = 0; j < pvInfo->numOfEps; ++j) {
-      pNewVInfo->epAddr[j].fqdn = strdup(pvInfo->epAddr[j].fqdn);
       pNewVInfo->epAddr[j].port = pvInfo->epAddr[j].port;
+      tstrncpy(pNewVInfo->epAddr[j].fqdn, pvInfo->epAddr[j].fqdn, TSDB_FQDN_LEN);
     }
   }
@@ -4444,8 +4438,9 @@ void* tscVgroupInfoClear(SVgroupsInfo *vgroupList) {
     return NULL;
   }
+#if 0
   for(int32_t i = 0; i < vgroupList->numOfVgroups; ++i) {
-    SVgroupInfo* pVgroupInfo = &vgroupList->vgroups[i];
+    SVgroupMsg* pVgroupInfo = &vgroupList->vgroups[i];

     for(int32_t j = 0; j < pVgroupInfo->numOfEps; ++j) {
       tfree(pVgroupInfo->epAddr[j].fqdn);
@@ -4456,10 +4451,11 @@ void* tscVgroupInfoClear(SVgroupsInfo *vgroupList) {
     }
   }
+#endif

   tfree(vgroupList);
   return NULL;
 }

+#if 0
 void tscSVgroupInfoCopy(SVgroupInfo* dst, const SVgroupInfo* src) {
   dst->vgId = src->vgId;
   dst->numOfEps = src->numOfEps;
@@ -4472,6 +4468,8 @@ void tscSVgroupInfoCopy(SVgroupInfo* dst, const SVgroupInfo* src) {
   }
 }
+#endif
+
 char* serializeTagData(STagData* pTagData, char* pMsg) {
   int32_t n = (int32_t)strlen(pTagData->name);
   *(int32_t*)pMsg = htonl(n);
@@ -4612,11 +4610,12 @@ STableMeta* tscTableMetaDup(STableMeta* pTableMeta) {
 SVgroupsInfo* tscVgroupsInfoDup(SVgroupsInfo* pVgroupsInfo) {
   assert(pVgroupsInfo != NULL);

-  size_t size = sizeof(SVgroupInfo) * pVgroupsInfo->numOfVgroups + sizeof(SVgroupsInfo);
+  size_t size = sizeof(SVgroupMsg) * pVgroupsInfo->numOfVgroups + sizeof(SVgroupsInfo);
   SVgroupsInfo* pInfo = calloc(1, size);
   pInfo->numOfVgroups = pVgroupsInfo->numOfVgroups;
   for (int32_t m = 0; m < pVgroupsInfo->numOfVgroups; ++m) {
-    tscSVgroupInfoCopy(&pInfo->vgroups[m], &pVgroupsInfo->vgroups[m]);
+    memcpy(&pInfo->vgroups[m], &pVgroupsInfo->vgroups[m], sizeof(SVgroupMsg));
+//    tscSVgroupInfoCopy(&pInfo->vgroups[m], &pVgroupsInfo->vgroups[m]);
   }
   return pInfo;
 }
src/connector/python/taos/cinterface.py
@@ -835,8 +835,14 @@ def taos_insert_telnet_lines(connection, lines):
     p_lines = lines_type(*lines)
     errno = _libtaos.taos_insert_telnet_lines(connection, p_lines, num_of_lines)
     if errno != 0:
-        raise LinesError("insert telnet lines error", errno)
+        raise TelnetLinesError("insert telnet lines error", errno)
+
+
+def taos_insert_json_payload(connection, payload):
+    # type: (c_void_p, list[str] | tuple(str)) -> None
+    payload = payload.encode("utf-8")
+    errno = _libtaos.taos_insert_json_payload(connection, payload)
+    if errno != 0:
+        raise JsonPayloadError("insert json payload error", errno)


 class CTaosInterface(object):
     def __init__(self, config=None):
src/connector/python/taos/connection.py
@@ -154,6 +154,25 @@ class TaosConnection(object):
         """
         return taos_insert_telnet_lines(self._conn, lines)

+    def insert_json_payload(self, payload):
+        """OpenTSDB HTTP JSON format support
+
+        ## Example
+        "{
+            "metric":   "cpu_load_0",
+            "timestamp": 1626006833610123,
+            "value":    55.5,
+            "tags":
+                {
+                    "host": "ubuntu",
+                    "interface": "eth0",
+                    "Id": "tb0"
+                }
+        }"
+        """
+        return taos_insert_json_payload(self._conn, payload)
+
     def cursor(self):
         # type: () -> TaosCursor
         """Return a new Cursor object using the connection."""
src/connector/python/taos/error.py
@@ -84,3 +84,13 @@ class LinesError(DatabaseError):
     """taos_insert_lines errors."""
     pass

+
+class TelnetLinesError(DatabaseError):
+    """taos_insert_telnet_lines errors."""
+    pass
+
+
+class JsonPayloadError(DatabaseError):
+    """taos_insert_json_payload errors."""
+    pass
src/inc/taos.h
@@ -174,6 +174,8 @@ DLL_EXPORT int taos_insert_lines(TAOS* taos, char* lines[], int numLines);
 DLL_EXPORT int taos_insert_telnet_lines(TAOS* taos, char* lines[], int numLines);

+DLL_EXPORT int taos_insert_json_payload(TAOS* taos, char* payload);
+
 #ifdef __cplusplus
 }
 #endif
src/inc/taoserror.h
@@ -108,6 +108,9 @@ int32_t* taosGetErrno();
 #define TSDB_CODE_TSC_INVALID_TAG_LENGTH        TAOS_DEF_ERROR_CODE(0, 0x021E)  //"Invalid tag length")
 #define TSDB_CODE_TSC_INVALID_COLUMN_LENGTH     TAOS_DEF_ERROR_CODE(0, 0x021F)  //"Invalid column length")
 #define TSDB_CODE_TSC_DUP_TAG_NAMES             TAOS_DEF_ERROR_CODE(0, 0x0220)  //"duplicated tag names")
+#define TSDB_CODE_TSC_INVALID_JSON              TAOS_DEF_ERROR_CODE(0, 0x0221)  //"Invalid JSON format")
+#define TSDB_CODE_TSC_INVALID_JSON_TYPE         TAOS_DEF_ERROR_CODE(0, 0x0222)  //"Invalid JSON data type")
+#define TSDB_CODE_TSC_VALUE_OUT_OF_RANGE        TAOS_DEF_ERROR_CODE(0, 0x0223)  //"Value out of range")

 // mnode
 #define TSDB_CODE_MND_MSG_NOT_PROCESSED         TAOS_DEF_ERROR_CODE(0, 0x0300)  //"Message not processed")
src/inc/taosmsg.h
@@ -766,27 +766,16 @@ typedef struct SSTableVgroupMsg {
   int32_t numOfTables;
 } SSTableVgroupMsg, SSTableVgroupRspMsg;

-typedef struct {
-  int32_t  vgId;
-  int8_t   numOfEps;
-  SEpAddr1 epAddr[TSDB_MAX_REPLICA];
-} SVgroupInfo;
-
 typedef struct {
   int32_t    vgId;
   int8_t     numOfEps;
   SEpAddrMsg epAddr[TSDB_MAX_REPLICA];
 } SVgroupMsg;

-typedef struct {
-  int32_t     numOfVgroups;
-  SVgroupInfo vgroups[];
-} SVgroupsInfo;
-
 typedef struct {
   int32_t    numOfVgroups;
   SVgroupMsg vgroups[];
-} SVgroupsMsg;
+} SVgroupsMsg, SVgroupsInfo;

 typedef struct STableMetaMsg {
   int32_t contLen;
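The header change above drops the old SVgroupInfo layout (per-endpoint heap-allocated fqdn pointers) and makes SVgroupsInfo an alias of SVgroupsMsg, which is why the client-side hunks switch from strdup/tscSVgroupInfoCopy to memcpy/tstrncpy. A rough sketch of the allocation pattern this enables; the helper name is made up for illustration and is not part of the commit:

#include <stdlib.h>
#include <string.h>
#include "taosmsg.h"

// Illustrative helper only: with SVgroupsInfo aliased to SVgroupsMsg, a vgroup
// list is one flat block (header plus fixed-size SVgroupMsg entries), so it can
// be cloned with a single memcpy instead of per-endpoint strdup calls.
static SVgroupsInfo *cloneVgroupList(const SVgroupsInfo *src) {
  size_t size = sizeof(SVgroupsInfo) + sizeof(SVgroupMsg) * src->numOfVgroups;
  SVgroupsInfo *dst = malloc(size);
  if (dst != NULL) {
    memcpy(dst, src, size);
  }
  return dst;
}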
src/plugins/http/inc/httpUtil.h
@@ -17,6 +17,7 @@
 #define TDENGINE_HTTP_UTIL_H

 bool httpCheckUsedbSql(char *sql);
+bool httpCheckAlterSql(char *sql);
 void httpTimeToString(int32_t t, char *buf, int32_t buflen);
 bool httpUrlMatch(HttpContext *pContext, int32_t pos, char *cmp);
src/plugins/http/src/httpHandle.c
@@ -35,6 +35,7 @@ bool httpProcessData(HttpContext* pContext) {
   if (!httpAlterContextState(pContext, HTTP_CONTEXT_STATE_READY, HTTP_CONTEXT_STATE_HANDLING)) {
     httpTrace("context:%p, fd:%d, state:%s not in ready state, stop process request", pContext, pContext->fd,
               httpContextStateStr(pContext->state));
+    pContext->error = true;
     httpCloseContextByApp(pContext);
     return false;
   }
src/plugins/http/src/httpParser.c
@@ -1157,10 +1157,6 @@ static int32_t httpParseChar(HttpParser *parser, const char c, int32_t *again) {
     httpOnError(parser, HTTP_CODE_INTERNAL_SERVER_ERROR, TSDB_CODE_HTTP_PARSE_ERROR_STATE);
   }

-  if (ok != 0) {
-    pContext->error = true;
-  }
-
   return ok;
 }
src/plugins/http/src/httpResp.c
@@ -147,6 +147,8 @@ void httpSendErrorResp(HttpContext *pContext, int32_t errNo) {
     httpCode = pContext->parser->httpCode;
   }

+  pContext->error = true;
+
   char *httpCodeStr = httpGetStatusDesc(httpCode);
   httpSendErrorRespImp(pContext, httpCode, httpCodeStr, errNo & 0XFFFF, tstrerror(errNo));
 }
src/plugins/http/src/httpRestJson.c
@@ -16,6 +16,7 @@
 #define _DEFAULT_SOURCE
 #include "os.h"
 #include "tglobal.h"
+#include "tsclient.h"
 #include "httpLog.h"
 #include "httpJson.h"
 #include "httpRestHandle.h"
@@ -62,15 +63,23 @@ void restStartSqlJson(HttpContext *pContext, HttpSqlCmd *cmd, TAOS_RES *result)
   httpJsonItemToken(jsonBuf);
   httpJsonToken(jsonBuf, JsonArrStt);

+  SSqlObj *pObj = (SSqlObj *) result;
+  bool isAlterSql = (pObj->sqlstr == NULL) ? false : httpCheckAlterSql(pObj->sqlstr);
+
   if (num_fields == 0) {
     httpJsonItemToken(jsonBuf);
     httpJsonString(jsonBuf, REST_JSON_AFFECT_ROWS, REST_JSON_AFFECT_ROWS_LEN);
   } else {
+    if (isAlterSql == true) {
+      httpJsonItemToken(jsonBuf);
+      httpJsonString(jsonBuf, REST_JSON_AFFECT_ROWS, REST_JSON_AFFECT_ROWS_LEN);
+    } else {
       for (int32_t i = 0; i < num_fields; ++i) {
         httpJsonItemToken(jsonBuf);
         httpJsonString(jsonBuf, fields[i].name, (int32_t)strlen(fields[i].name));
       }
+    }
   }

   // head array end
   httpJsonToken(jsonBuf, JsonArrEnd);
@@ -99,8 +108,14 @@ void restStartSqlJson(HttpContext *pContext, HttpSqlCmd *cmd, TAOS_RES *result)
     httpJsonItemToken(jsonBuf);
     httpJsonToken(jsonBuf, JsonArrStt);

+    if (isAlterSql == true) {
+      httpJsonItemToken(jsonBuf);
+      httpJsonString(jsonBuf, REST_JSON_AFFECT_ROWS, REST_JSON_AFFECT_ROWS_LEN);
+    } else {
       httpJsonItemToken(jsonBuf);
       httpJsonString(jsonBuf, fields[i].name, (int32_t)strlen(fields[i].name));
+    }

     httpJsonItemToken(jsonBuf);
     httpJsonInt(jsonBuf, fields[i].type);
     httpJsonItemToken(jsonBuf);
src/plugins/http/src/httpServer.c
@@ -191,8 +191,6 @@ static void httpProcessHttpData(void *param) {
         if (httpReadData(pContext)) {
           (*(pThread->processData))(pContext);
           atomic_fetch_add_32(&pServer->requestNum, 1);
-        } else {
-          httpReleaseContext(pContext/*, false*/);
         }
       }
     }
@@ -402,13 +400,17 @@ static bool httpReadData(HttpContext *pContext) {
     } else if (nread < 0) {
       if (errno == EINTR || errno == EAGAIN || errno == EWOULDBLOCK) {
         httpDebug("context:%p, fd:%d, read from socket error:%d, wait another event", pContext, pContext->fd, errno);
-        return false;  // later again
+        continue;  // later again
       } else {
         httpError("context:%p, fd:%d, read from socket error:%d, close connect", pContext, pContext->fd, errno);
+        taosCloseSocket(pContext->fd);
+        httpReleaseContext(pContext/*, false */);
         return false;
       }
     } else {
       httpError("context:%p, fd:%d, nread:%d, wait another event", pContext, pContext->fd, nread);
+      taosCloseSocket(pContext->fd);
+      httpReleaseContext(pContext/*, false */);
       return false;
     }
   }
src/plugins/http/src/httpSql.c
@@ -405,7 +405,6 @@ void httpProcessRequestCb(void *param, TAOS_RES *result, int32_t code) {
   if (pContext->session == NULL) {
     httpSendErrorResp(pContext, TSDB_CODE_HTTP_SESSION_FULL);
-    httpCloseContextByApp(pContext);
   } else {
     httpExecCmd(pContext);
   }
src/plugins/http/src/httpUtil.c
@@ -21,6 +21,7 @@
 #include "httpResp.h"
 #include "httpSql.h"
 #include "httpUtil.h"
+#include "ttoken.h"

 bool httpCheckUsedbSql(char *sql) {
   if (strstr(sql, "use ") != NULL) {
@@ -29,6 +30,17 @@ bool httpCheckUsedbSql(char *sql) {
   return false;
 }

+bool httpCheckAlterSql(char *sql) {
+  int32_t index = 0;
+
+  do {
+    SStrToken t0 = tStrGetToken(sql, &index, false);
+    if (t0.type != TK_LP) {
+      return t0.type == TK_ALTER;
+    }
+  } while (1);
+}
+
 void httpTimeToString(int32_t t, char *buf, int32_t buflen) {
   memset(buf, 0, (size_t)buflen);
   char ts[32] = {0};
src/query/inc/qExecutor.h
@@ -86,11 +86,18 @@ typedef struct SResultRow {
   char       *key;   // start key of current result row
 } SResultRow;

+typedef struct SResultRowCell {
+  uint64_t    groupId;
+  SResultRow *pRow;
+} SResultRowCell;
+
 typedef struct SGroupResInfo {
   int32_t totalGroup;
   int32_t currentGroup;
   int32_t index;
   SArray* pRows;      // SArray<SResultRow*>
+  bool    ordered;
+  int32_t position;
 } SGroupResInfo;

 /**
@@ -284,8 +291,9 @@ typedef struct SQueryRuntimeEnv {
   SDiskbasedResultBuf* pResultBuf;           // query result buffer based on blocked-wised disk file
   SHashObj*            pResultRowHashTable;  // quick locate the window object for each result
   SHashObj*            pResultRowListSet;    // used to check if current ResultRowInfo has ResultRow object or not
+  SArray*              pResultRowArrayList;  // The array list that contains the Result rows
   char*                keyBuf;               // window key buffer
-  SResultRowPool*      pool;                 // window result object pool
+  SResultRowPool*      pool;                 // The window result objects pool, all the resultRow Objects are allocated and managed by this object.
   char**               prevRow;
   SArray*              prevResult;           // intermediate result, SArray<SInterResult>
src/query/src/qExecutor.c
@@ -544,6 +544,8 @@ static SResultRow* doSetResultOutBufByKey(SQueryRuntimeEnv* pRuntimeEnv, SResult
       // add a new result set for a new group
       taosHashPut(pRuntimeEnv->pResultRowHashTable, pRuntimeEnv->keyBuf, GET_RES_WINDOW_KEY_LEN(bytes), &pResult, POINTER_BYTES);
+      SResultRowCell cell = {.groupId = tableGroupId, .pRow = pResult};
+      taosArrayPush(pRuntimeEnv->pResultRowArrayList, &cell);
     } else {
       pResult = *p1;
     }
@@ -2107,9 +2109,10 @@ static int32_t setupQueryRuntimeEnv(SQueryRuntimeEnv *pRuntimeEnv, int32_t numOf
   pRuntimeEnv->pQueryAttr = pQueryAttr;

   pRuntimeEnv->pResultRowHashTable = taosHashInit(numOfTables, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY), true, HASH_NO_LOCK);
-  pRuntimeEnv->pResultRowListSet = taosHashInit(numOfTables, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY), false, HASH_NO_LOCK);
+  pRuntimeEnv->pResultRowListSet = taosHashInit(numOfTables * 10, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY), false, HASH_NO_LOCK);
   pRuntimeEnv->keyBuf  = malloc(pQueryAttr->maxTableColumnWidth + sizeof(int64_t) + POINTER_BYTES);
   pRuntimeEnv->pool    = initResultRowPool(getResultRowSize(pRuntimeEnv));
+  pRuntimeEnv->pResultRowArrayList = taosArrayInit(numOfTables, sizeof(SResultRowCell));

   pRuntimeEnv->prevRow = malloc(POINTER_BYTES * pQueryAttr->numOfCols + pQueryAttr->srcRowSize);
   pRuntimeEnv->tagVal  = malloc(pQueryAttr->tagLen);
@@ -2384,6 +2387,7 @@ static void teardownQueryRuntimeEnv(SQueryRuntimeEnv *pRuntimeEnv) {
   pRuntimeEnv->pool = destroyResultRowPool(pRuntimeEnv->pool);
   taosArrayDestroyEx(pRuntimeEnv->prevResult, freeInterResult);
+  taosArrayDestroy(pRuntimeEnv->pResultRowArrayList);
   pRuntimeEnv->prevResult = NULL;
 }
@@ -4808,7 +4812,6 @@ int32_t doInitQInfo(SQInfo* pQInfo, STSBuf* pTsBuf, void* tsdb, void* sourceOptr
   SQueryAttr *pQueryAttr = pQInfo->runtimeEnv.pQueryAttr;

   pQueryAttr->tsdb = tsdb;
   if (tsdb != NULL) {
     int32_t code = setupQueryHandle(tsdb, pRuntimeEnv, pQInfo->qId, pQueryAttr->stableQuery);
     if (code != TSDB_CODE_SUCCESS) {
@@ -6379,6 +6382,7 @@ static SSDataBlock* hashGroupbyAggregate(void* param, bool* newgroup) {
     if (!pRuntimeEnv->pQueryAttr->stableQuery) {
       sortGroupResByOrderList(&pRuntimeEnv->groupResInfo, pRuntimeEnv, pInfo->binfo.pRes);
     }

     toSSDataBlock(&pRuntimeEnv->groupResInfo, pRuntimeEnv, pInfo->binfo.pRes);

     if (pInfo->binfo.pRes->info.rows == 0 || !hasRemainDataInCurrentGroup(&pRuntimeEnv->groupResInfo)) {
@@ -7600,8 +7604,8 @@ int32_t convertQueryMsg(SQueryTableMsg *pQueryMsg, SQueryParam* param) {
       pMsg += sizeof(SSqlExpr);

       for (int32_t j = 0; j < pExprMsg->numOfParams; ++j) {
-        pExprMsg->param[j].nType = htons(pExprMsg->param[j].nType);
-        pExprMsg->param[j].nLen = htons(pExprMsg->param[j].nLen);
+        pExprMsg->param[j].nType = htonl(pExprMsg->param[j].nType);
+        pExprMsg->param[j].nLen = htonl(pExprMsg->param[j].nLen);

         if (pExprMsg->param[j].nType == TSDB_DATA_TYPE_BINARY) {
           pExprMsg->param[j].pz = pMsg;
@@ -7648,8 +7652,8 @@ int32_t convertQueryMsg(SQueryTableMsg *pQueryMsg, SQueryParam* param) {
       pMsg += sizeof(SSqlExpr);

       for (int32_t j = 0; j < pExprMsg->numOfParams; ++j) {
-        pExprMsg->param[j].nType = htons(pExprMsg->param[j].nType);
-        pExprMsg->param[j].nLen = htons(pExprMsg->param[j].nLen);
+        pExprMsg->param[j].nType = htonl(pExprMsg->param[j].nType);
+        pExprMsg->param[j].nLen = htonl(pExprMsg->param[j].nLen);

         if (pExprMsg->param[j].nType == TSDB_DATA_TYPE_BINARY) {
           pExprMsg->param[j].pz = pMsg;
@@ -8648,7 +8652,6 @@ int32_t initQInfo(STsBufInfo* pTsBufInfo, void* tsdb, void* sourceOptr, SQInfo*
   SArray* prevResult = NULL;
   if (prevResultLen > 0) {
     prevResult = interResFromBinary(param->prevResult, prevResultLen);
     pRuntimeEnv->prevResult = prevResult;
   }
src/query/src/qUtil.c
浏览文件 @
bbda67eb
...
@@ -436,13 +436,13 @@ static int32_t tableResultComparFn(const void *pLeft, const void *pRight, void *
...
@@ -436,13 +436,13 @@ static int32_t tableResultComparFn(const void *pLeft, const void *pRight, void *
}
}
STableQueryInfo
**
pList
=
supporter
->
pTableQueryInfo
;
STableQueryInfo
**
pList
=
supporter
->
pTableQueryInfo
;
SResultRow
*
pWindowRes1
=
pList
[
left
]
->
resInfo
.
pResult
[
leftPos
];
SResultRowInfo
*
pWindowResInfo1
=
&
(
pList
[
left
]
->
resInfo
);
// SResultRow * pWindowRes1 = getResultRow(&(pList[left]->resInfo), leftPos);
SResultRow
*
pWindowRes1
=
getResultRow
(
pWindowResInfo1
,
leftPos
);
TSKEY
leftTimestamp
=
pWindowRes1
->
win
.
skey
;
TSKEY
leftTimestamp
=
pWindowRes1
->
win
.
skey
;
SResultRowInfo
*
pWindowResInfo2
=
&
(
pList
[
right
]
->
resInfo
);
// SResultRowInfo *pWindowResInfo2 = &(pList[right]->resInfo);
SResultRow
*
pWindowRes2
=
getResultRow
(
pWindowResInfo2
,
rightPos
);
// SResultRow * pWindowRes2 = getResultRow(pWindowResInfo2, rightPos);
SResultRow
*
pWindowRes2
=
pList
[
right
]
->
resInfo
.
pResult
[
rightPos
];
TSKEY
rightTimestamp
=
pWindowRes2
->
win
.
skey
;
TSKEY
rightTimestamp
=
pWindowRes2
->
win
.
skey
;
if
(
leftTimestamp
==
rightTimestamp
)
{
if
(
leftTimestamp
==
rightTimestamp
)
{
...
@@ -456,7 +456,77 @@ static int32_t tableResultComparFn(const void *pLeft, const void *pRight, void *
   }
 }
 
-static int32_t mergeIntoGroupResultImpl(SQueryRuntimeEnv *pRuntimeEnv, SGroupResInfo* pGroupResInfo, SArray *pTableList,
+int32_t tsAscOrder(const void* p1, const void* p2) {
+  SResultRowCell* pc1 = (SResultRowCell*) p1;
+  SResultRowCell* pc2 = (SResultRowCell*) p2;
+
+  if (pc1->groupId == pc2->groupId) {
+    if (pc1->pRow->win.skey == pc2->pRow->win.skey) {
+      return 0;
+    } else {
+      return (pc1->pRow->win.skey < pc2->pRow->win.skey)? -1:1;
+    }
+  } else {
+    return (pc1->groupId < pc2->groupId)? -1:1;
+  }
+}
+
+int32_t tsDescOrder(const void* p1, const void* p2) {
+  SResultRowCell* pc1 = (SResultRowCell*) p1;
+  SResultRowCell* pc2 = (SResultRowCell*) p2;
+
+  if (pc1->groupId == pc2->groupId) {
+    if (pc1->pRow->win.skey == pc2->pRow->win.skey) {
+      return 0;
+    } else {
+      return (pc1->pRow->win.skey < pc2->pRow->win.skey)? 1:-1;
+    }
+  } else {
+    return (pc1->groupId < pc2->groupId)? -1:1;
+  }
+}
+
+void orderTheResultRows(SQueryRuntimeEnv* pRuntimeEnv) {
+  __compar_fn_t fn = NULL;
+  if (pRuntimeEnv->pQueryAttr->order.order == TSDB_ORDER_ASC) {
+    fn = tsAscOrder;
+  } else {
+    fn = tsDescOrder;
+  }
+
+  taosArraySort(pRuntimeEnv->pResultRowArrayList, fn);
+}
+
+static int32_t mergeIntoGroupResultImplRv(SQueryRuntimeEnv *pRuntimeEnv, SGroupResInfo* pGroupResInfo, uint64_t groupId, int32_t* rowCellInfoOffset) {
+  if (!pGroupResInfo->ordered) {
+    orderTheResultRows(pRuntimeEnv);
+    pGroupResInfo->ordered = true;
+  }
+
+  if (pGroupResInfo->pRows == NULL) {
+    pGroupResInfo->pRows = taosArrayInit(100, POINTER_BYTES);
+  }
+
+  size_t len = taosArrayGetSize(pRuntimeEnv->pResultRowArrayList);
+  for (; pGroupResInfo->position < len; ++pGroupResInfo->position) {
+    SResultRowCell* pResultRowCell = taosArrayGet(pRuntimeEnv->pResultRowArrayList, pGroupResInfo->position);
+    if (pResultRowCell->groupId != groupId) {
+      break;
+    }
+
+    int64_t num = getNumOfResultWindowRes(pRuntimeEnv, pResultRowCell->pRow, rowCellInfoOffset);
+    if (num <= 0) {
+      continue;
+    }
+
+    taosArrayPush(pGroupResInfo->pRows, &pResultRowCell->pRow);
+    pResultRowCell->pRow->numOfRows = (uint32_t) num;
+  }
+
+  return TSDB_CODE_SUCCESS;
+}
+
+static UNUSED_FUNC int32_t mergeIntoGroupResultImpl(SQueryRuntimeEnv *pRuntimeEnv, SGroupResInfo* pGroupResInfo, SArray *pTableList,
                                         int32_t* rowCellInfoOffset) {
   bool ascQuery = QUERY_IS_ASC_QUERY(pRuntimeEnv->pQueryAttr);
@@ -562,12 +632,7 @@ int32_t mergeIntoGroupResult(SGroupResInfo* pGroupResInfo, SQueryRuntimeEnv* pRu
   int64_t st = taosGetTimestampUs();
 
   while (pGroupResInfo->currentGroup < pGroupResInfo->totalGroup) {
-    SArray *group = GET_TABLEGROUP(pRuntimeEnv, pGroupResInfo->currentGroup);
-
-    int32_t ret = mergeIntoGroupResultImpl(pRuntimeEnv, pGroupResInfo, group, offset);
-    if (ret != TSDB_CODE_SUCCESS) {
-      return ret;
-    }
+    mergeIntoGroupResultImplRv(pRuntimeEnv, pGroupResInfo, pGroupResInfo->currentGroup, offset);
 
     // this group generates at least one result, return results
     if (taosArrayGetSize(pGroupResInfo->pRows) > 0) {
@@ -583,7 +648,6 @@ int32_t mergeIntoGroupResult(SGroupResInfo* pGroupResInfo, SQueryRuntimeEnv* pRu
   qDebug("QInfo:%" PRIu64 " merge res data into group, index:%d, total group:%d, elapsed time:%" PRId64 "us", GET_QID(pRuntimeEnv),
          pGroupResInfo->currentGroup, pGroupResInfo->totalGroup, elapsedTime);
 
-  // pQInfo->summary.firstStageMergeTime += elapsedTime;
 
   return TSDB_CODE_SUCCESS;
 }
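The new merge path in this file sorts every result row once, by (groupId, window start key), via orderTheResultRows(), so that mergeIntoGroupResultImplRv() only has to walk a contiguous run of pResultRowArrayList per group instead of re-merging each table group. A minimal standalone sketch of that comparator-plus-sort pattern (simplified struct and plain qsort, not the TDengine types) looks like this:

    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>

    /* Simplified stand-in for SResultRowCell: one result row tagged with its group. */
    typedef struct {
      uint64_t groupId;
      int64_t  skey;     /* window start key (timestamp) */
    } Cell;

    /* Ascending comparator: primary key groupId, secondary key skey. */
    static int cellAscOrder(const void *p1, const void *p2) {
      const Cell *c1 = (const Cell *)p1;
      const Cell *c2 = (const Cell *)p2;
      if (c1->groupId != c2->groupId) {
        return (c1->groupId < c2->groupId) ? -1 : 1;
      }
      if (c1->skey == c2->skey) {
        return 0;
      }
      return (c1->skey < c2->skey) ? -1 : 1;
    }

    int main(void) {
      Cell cells[] = {{2, 300}, {1, 200}, {2, 100}, {1, 100}};
      size_t n = sizeof(cells) / sizeof(cells[0]);

      qsort(cells, n, sizeof(Cell), cellAscOrder);

      /* After sorting, each group's rows are contiguous and time-ordered,
       * so a per-group merge only needs a linear scan. */
      for (size_t i = 0; i < n; ++i) {
        printf("group %llu, skey %lld\n",
               (unsigned long long)cells[i].groupId, (long long)cells[i].skey);
      }
      return 0;
    }

The descending variant (tsDescOrder above) only flips the secondary-key comparison; the group order stays ascending so rows of one group remain adjacent.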
src/tsdb/inc/tsdbMeta.h
@@ -100,7 +100,7 @@ static FORCE_INLINE int tsdbCompareSchemaVersion(const void *key1, const void *k
 }
 
 static FORCE_INLINE STSchema* tsdbGetTableSchemaImpl(STable* pTable, bool lock, bool copy, int16_t _version) {
-  STable*   pDTable = (TABLE_TYPE(pTable) == TSDB_CHILD_TABLE) ? pTable->pSuper : pTable;
+  STable*   pDTable = (pTable->pSuper != NULL) ? pTable->pSuper : pTable;  // for performance purpose
   STSchema* pSchema = NULL;
   STSchema* pTSchema = NULL;
src/tsdb/src/tsdbRead.c
@@ -288,8 +288,6 @@ static SArray* createCheckInfoFromTableGroup(STsdbQueryHandle* pQueryHandle, STa
       STableKeyInfo* pKeyInfo = (STableKeyInfo*) taosArrayGet(group, j);
 
       STableCheckInfo info = { .lastKey = pKeyInfo->lastKey, .pTableObj = pKeyInfo->pTable };
-      info.tableId = ((STable*)(pKeyInfo->pTable))->tableId;
 
       assert(info.pTableObj != NULL && (info.pTableObj->type == TSDB_NORMAL_TABLE ||
                                         info.pTableObj->type == TSDB_CHILD_TABLE || info.pTableObj->type == TSDB_STREAM_TABLE));

@@ -2218,7 +2216,7 @@ static int32_t createDataBlocksInfo(STsdbQueryHandle* pQueryHandle, int32_t numO
     SBlock* pBlock = pTableCheck->pCompInfo->blocks;
     sup.numOfBlocksPerTable[numOfQualTables] = pTableCheck->numOfBlocks;
 
-    char* buf = calloc(1, sizeof(STableBlockInfo) * pTableCheck->numOfBlocks);
+    char* buf = malloc(sizeof(STableBlockInfo) * pTableCheck->numOfBlocks);
     if (buf == NULL) {
       cleanBlockOrderSupporter(&sup, numOfQualTables);
       return TSDB_CODE_TDB_OUT_OF_MEMORY;

@@ -3618,8 +3616,6 @@ SArray* createTableGroup(SArray* pTableList, STSchema* pTagSchema, SColIndex* pC
     for (int32_t i = 0; i < size; ++i) {
       STableKeyInfo* pKeyInfo = taosArrayGet(pTableList, i);
-      assert(((STable*) pKeyInfo->pTable)->type == TSDB_CHILD_TABLE);
 
       tsdbRefTable(pKeyInfo->pTable);
 
       STableKeyInfo info = {.pTable = pKeyInfo->pTable, .lastKey = skey};
src/util/inc/tlosertree.h
@@ -26,7 +26,7 @@ typedef int (*__merge_compare_fn_t)(const void *, const void *, void *param);
 typedef struct SLoserTreeNode {
   int32_t index;
   void   *pData;
 } SLoserTreeNode;
 
 typedef struct SLoserTreeInfo {
@@ -34,7 +34,6 @@ typedef struct SLoserTreeInfo {
   int32_t              totalEntries;
   __merge_compare_fn_t comparFn;
   void                *param;
   SLoserTreeNode      *pNode;
 } SLoserTreeInfo;
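SLoserTreeInfo keeps the merge comparator together with an opaque param pointer, so the same tree code can merge sources with different ordering rules. A minimal sketch of that callback-plus-context idea (generic C, not the loser-tree merge itself):

    #include <stdio.h>

    typedef int (*merge_compare_fn_t)(const void *a, const void *b, void *param);

    typedef struct {
      int ascending;   /* context threaded through every comparison */
    } MergeParam;

    static int compareInt(const void *a, const void *b, void *param) {
      const MergeParam *p = (const MergeParam *)param;
      int x = *(const int *)a;
      int y = *(const int *)b;
      int r = (x < y) ? -1 : (x > y) ? 1 : 0;
      return p->ascending ? r : -r;
    }

    int main(void) {
      MergeParam param = { .ascending = 0 };
      int a = 3, b = 7;
      /* the tree would call comparFn(nodeA->pData, nodeB->pData, tree->param) */
      printf("%d\n", compareInt(&a, &b, &param));   /* prints 1: descending order */
      return 0;
    }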
src/util/src/hash.c
@@ -741,7 +741,7 @@ void taosHashTableResize(SHashObj *pHashObj) {
 }
 
 SHashNode *doCreateHashNode(const void *key, size_t keyLen, const void *pData, size_t dsize, uint32_t hashVal) {
-  SHashNode *pNewNode = calloc(1, sizeof(SHashNode) + keyLen + dsize);
+  SHashNode *pNewNode = malloc(sizeof(SHashNode) + keyLen + dsize);
 
   if (pNewNode == NULL) {
     uError("failed to allocate memory, reason:%s", strerror(errno));
@@ -752,6 +752,8 @@ SHashNode *doCreateHashNode(const void *key, size_t keyLen, const void *pData, s
   pNewNode->hashVal = hashVal;
   pNewNode->dataLen = (uint32_t) dsize;
   pNewNode->count = 1;
+  pNewNode->removed = 0;
+  pNewNode->next = NULL;
 
   memcpy(GET_HASH_NODE_DATA(pNewNode), pData, dsize);
   memcpy(GET_HASH_NODE_KEY(pNewNode), key, keyLen);
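doCreateHashNode() here, and taosArrayInit()/createDataBlocksInfo() in the neighbouring hunks, all apply the same micro-optimization: the zero-filling calloc() is swapped for malloc(), so every field that previously relied on implicit zero-initialization (removed, next, size, ...) must now be assigned explicitly. A generic sketch of the pattern (illustrative only, not TDengine code):

    #include <stdlib.h>
    #include <string.h>

    typedef struct Node {
      struct Node *next;
      int          removed;
      size_t       dataLen;
      char         data[];
    } Node;

    /* calloc-style: the whole block is zeroed, some fields can stay implicit. */
    Node *createNodeZeroed(const void *data, size_t len) {
      Node *n = calloc(1, sizeof(Node) + len);
      if (n == NULL) return NULL;
      n->dataLen = len;                 /* next and removed are already 0 */
      memcpy(n->data, data, len);
      return n;
    }

    /* malloc-style: skips the memset of the whole block (cheaper for large
     * payloads), but every field the rest of the code expects to start at
     * 0/NULL now has to be set by hand. */
    Node *createNodeUnzeroed(const void *data, size_t len) {
      Node *n = malloc(sizeof(Node) + len);
      if (n == NULL) return NULL;
      n->next = NULL;
      n->removed = 0;
      n->dataLen = len;
      memcpy(n->data, data, len);
      return n;
    }

The trade-off is purely allocation cost versus the risk of forgetting one field, which is exactly why the hunk above adds the explicit removed/next assignments alongside the malloc() change.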
src/util/src/tarray.c
@@ -24,11 +24,12 @@ void* taosArrayInit(size_t size, size_t elemSize) {
     size = TARRAY_MIN_SIZE;
   }
 
-  SArray* pArray = calloc(1, sizeof(SArray));
+  SArray* pArray = malloc(sizeof(SArray));
   if (pArray == NULL) {
     return NULL;
   }
 
+  pArray->size = 0;
   pArray->pData = calloc(size, elemSize);
   if (pArray->pData == NULL) {
     free(pArray);
src/util/src/terror.c
@@ -116,6 +116,9 @@ TAOS_DEFINE_ERROR(TSDB_CODE_TSC_DUP_COL_NAMES, "duplicated column nam
 TAOS_DEFINE_ERROR(TSDB_CODE_TSC_INVALID_TAG_LENGTH,     "Invalid tag length")
 TAOS_DEFINE_ERROR(TSDB_CODE_TSC_INVALID_COLUMN_LENGTH,  "Invalid column length")
 TAOS_DEFINE_ERROR(TSDB_CODE_TSC_DUP_TAG_NAMES,          "duplicated tag names")
+TAOS_DEFINE_ERROR(TSDB_CODE_TSC_INVALID_JSON,           "Invalid JSON format")
+TAOS_DEFINE_ERROR(TSDB_CODE_TSC_INVALID_JSON_TYPE,      "Invalid JSON data type")
+TAOS_DEFINE_ERROR(TSDB_CODE_TSC_VALUE_OUT_OF_RANGE,     "Value out of range")
 
 // mnode
 TAOS_DEFINE_ERROR(TSDB_CODE_MND_MSG_NOT_PROCESSED,      "Message not processed")
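The three new entries extend the client error table for the JSON (OpenTSDB-style) schemaless insert path added elsewhere in this commit. TAOS_DEFINE_ERROR-style tables typically expand a macro into a code-to-message array that a strerror-like helper scans; a simplified sketch of that lookup pattern (the numeric code values below are made up for illustration and are not the real TDengine codes):

    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
      int32_t     code;
      const char *msg;
    } ErrorEntry;

    #define DEFINE_ERROR(code, msg) { (code), (msg) },

    /* Hypothetical code values, for illustration only. */
    enum {
      ERR_INVALID_JSON       = 0x0221,
      ERR_INVALID_JSON_TYPE  = 0x0222,
      ERR_VALUE_OUT_OF_RANGE = 0x0223,
    };

    static const ErrorEntry errorTable[] = {
      DEFINE_ERROR(ERR_INVALID_JSON,       "Invalid JSON format")
      DEFINE_ERROR(ERR_INVALID_JSON_TYPE,  "Invalid JSON data type")
      DEFINE_ERROR(ERR_VALUE_OUT_OF_RANGE, "Value out of range")
    };

    const char *strerrorCode(int32_t code) {
      for (size_t i = 0; i < sizeof(errorTable) / sizeof(errorTable[0]); ++i) {
        if (errorTable[i].code == code) return errorTable[i].msg;
      }
      return "Unknown error";
    }

    int main(void) {
      printf("%s\n", strerrorCode(ERR_INVALID_JSON_TYPE));
      return 0;
    }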
src/util/src/tlosertree.c
@@ -90,12 +90,13 @@ void tLoserTreeAdjust(SLoserTreeInfo* pTree, int32_t idx) {
   SLoserTreeNode kLeaf = pTree->pNode[idx];
 
   while (parentId > 0) {
-    if (pTree->pNode[parentId].index == -1) {
+    SLoserTreeNode* pCur = &pTree->pNode[parentId];
+    if (pCur->index == -1) {
       pTree->pNode[parentId] = kLeaf;
       return;
     }
 
-    int32_t ret = pTree->comparFn(&pTree->pNode[parentId], &kLeaf, pTree->param);
+    int32_t ret = pTree->comparFn(pCur, &kLeaf, pTree->param);
     if (ret < 0) {
       SLoserTreeNode t = pTree->pNode[parentId];
       pTree->pNode[parentId] = kLeaf;
tests/examples/JDBC/JDBCDemo/pom.xml
@@ -17,7 +17,7 @@
         <dependency>
             <groupId>com.taosdata.jdbc</groupId>
             <artifactId>taos-jdbcdriver</artifactId>
-            <version>2.0.31</version>
+            <version>2.0.34</version>
         </dependency>
     </dependencies>
tests/examples/JDBC/JDBCDemo/src/main/java/com/taosdata/example/JdbcDemo.java
@@ -7,6 +7,9 @@ public class JdbcDemo {
     private static String host;
     private static final String dbName = "test";
     private static final String tbName = "weather";
+    private static final String user = "root";
+    private static final String password = "taosdata";
 
     private Connection connection;
 
     public static void main(String[] args) {
@@ -30,10 +33,9 @@ public class JdbcDemo {
     }
 
     private void init() {
-        final String url = "jdbc:TAOS://" + host + ":6030/?user=root&password=taosdata";
+        final String url = "jdbc:TAOS://" + host + ":6030/?user=" + user + "&password=" + password;
         // get connection
         try {
-            Class.forName("com.taosdata.jdbc.TSDBDriver");
             Properties properties = new Properties();
             properties.setProperty("charset", "UTF-8");
             properties.setProperty("locale", "en_US.UTF-8");
@@ -42,8 +44,7 @@ public class JdbcDemo {
             connection = DriverManager.getConnection(url, properties);
             if (connection != null)
                 System.out.println("[ OK ] Connection established.");
-        } catch (ClassNotFoundException | SQLException e) {
-            System.out.println("[ ERROR! ] Connection establish failed.");
+        } catch (SQLException e) {
            e.printStackTrace();
        }
    }
@@ -74,7 +75,7 @@ public class JdbcDemo {
    }

    private void select() {
        final String sql = "select * from " + dbName + "." + tbName;
        executeQuery(sql);
    }
@@ -89,8 +90,6 @@ public class JdbcDemo {
        }
    }

-    /************************************************************************/
    private void executeQuery(String sql) {
        long start = System.currentTimeMillis();
        try (Statement statement = connection.createStatement()) {
@@ -117,7 +116,6 @@ public class JdbcDemo {
        }
    }

    private void printSql(String sql, boolean succeed, long cost) {
        System.out.println("[ " + (succeed ? "OK" : "ERROR!") + " ] time cost: " + cost + " ms, execute statement ====> " + sql);
    }
@@ -132,7 +130,6 @@ public class JdbcDemo {
            long end = System.currentTimeMillis();
            printSql(sql, false, (end - start));
            e.printStackTrace();
        }
    }
@@ -141,5 +138,4 @@ public class JdbcDemo {
        System.exit(0);
    }
}
tests/examples/JDBC/JDBCDemo/src/main/java/com/taosdata/example/JdbcRestfulDemo.java
@@ -4,14 +4,15 @@ import java.sql.*;
 import java.util.Properties;
 
 public class JdbcRestfulDemo {
-    private static final String host = "127.0.0.1";
+    private static final String host = "localhost";
+    private static final String dbname = "test";
+    private static final String user = "root";
+    private static final String password = "taosdata";
 
     public static void main(String[] args) {
         try {
-            // load JDBC-restful driver
-            Class.forName("com.taosdata.jdbc.rs.RestfulDriver");
             // use port 6041 in url when use JDBC-restful
-            String url = "jdbc:TAOS-RS://" + host + ":6041/?user=root&password=taosdata";
+            String url = "jdbc:TAOS-RS://" + host + ":6041/?user=" + user + "&password=" + password;
 
             Properties properties = new Properties();
             properties.setProperty("charset", "UTF-8");
@@ -21,12 +22,12 @@ public class JdbcRestfulDemo {
             Connection conn = DriverManager.getConnection(url, properties);
             Statement stmt = conn.createStatement();
 
-            stmt.execute("drop database if exists restful_test");
+            stmt.execute("drop database if exists " + dbname);
-            stmt.execute("create database if not exists restful_test");
+            stmt.execute("create database if not exists " + dbname);
-            stmt.execute("use restful_test");
+            stmt.execute("use " + dbname);
-            stmt.execute("create table restful_test.weather(ts timestamp, temperature float) tags(location nchar(64))");
+            stmt.execute("create table " + dbname + ".weather(ts timestamp, temperature float) tags(location nchar(64))");
-            stmt.executeUpdate("insert into t1 using restful_test.weather tags('北京') values(now, 18.2)");
+            stmt.executeUpdate("insert into t1 using " + dbname + ".weather tags('北京') values(now, 18.2)");
-            ResultSet rs = stmt.executeQuery("select * from restful_test.weather");
+            ResultSet rs = stmt.executeQuery("select * from " + dbname + ".weather");
 
             ResultSetMetaData meta = rs.getMetaData();
             while (rs.next()) {
                 for (int i = 1; i <= meta.getColumnCount(); i++) {
@@ -38,8 +39,6 @@ public class JdbcRestfulDemo {
             rs.close();
             stmt.close();
             conn.close();
-        } catch (ClassNotFoundException e) {
-            e.printStackTrace();
         } catch (SQLException e) {
             e.printStackTrace();
         }
tests/examples/JDBC/JDBCDemo/src/main/java/com/taosdata/example/SubscribeDemo.java
@@ -34,9 +34,8 @@ public class SubscribeDemo {
             System.out.println(usage);
             return;
         }
-        /*********************************************************************************************/
         try {
             Class.forName("com.taosdata.jdbc.TSDBDriver");
             Properties properties = new Properties();
             properties.setProperty(TSDBDriver.PROPERTY_KEY_CHARSET, "UTF-8");
             properties.setProperty(TSDBDriver.PROPERTY_KEY_LOCALE, "en_US.UTF-8");
tests/examples/JDBC/springbootdemo/pom.xml
@@ -60,12 +60,15 @@
         </exclusions>
     </dependency>
+    <dependency>
+        <groupId>org.springframework.boot</groupId>
+        <artifactId>spring-boot-starter-aop</artifactId>
+    </dependency>
     <dependency>
         <groupId>com.taosdata.jdbc</groupId>
         <artifactId>taos-jdbcdriver</artifactId>
-        <version>2.0.28</version>
-        <!-- <scope>system</scope>-->
-        <!-- <systemPath>${project.basedir}/src/main/resources/taos-jdbcdriver-2.0.28-dist.jar</systemPath>-->
+        <version>2.0.34</version>
     </dependency>
     <dependency>
tests/examples/JDBC/springbootdemo/src/main/java/com/taosdata/example/springbootdemo/SpringbootdemoApplication.java
@@ -4,7 +4,7 @@ import org.mybatis.spring.annotation.MapperScan;
 import org.springframework.boot.SpringApplication;
 import org.springframework.boot.autoconfigure.SpringBootApplication;
 
-@MapperScan(basePackages = {"com.taosdata.example.springbootdemo.dao"})
+@MapperScan(basePackages = {"com.taosdata.example.springbootdemo"})
 @SpringBootApplication
 public class SpringbootdemoApplication {
tests/examples/JDBC/springbootdemo/src/main/java/com/taosdata/example/springbootdemo/controller/WeatherController.java
@@ -15,35 +15,21 @@ public class WeatherController {
     @Autowired
     private WeatherService weatherService;
 
-    /**
-     * create database and table
-     *
-     * @return
-     */
+    @GetMapping("/lastOne")
+    public Weather lastOne() {
+        return weatherService.lastOne();
+    }
+
     @GetMapping("/init")
     public int init() {
         return weatherService.init();
     }
 
-    /**
-     * Pagination Query
-     *
-     * @param limit
-     * @param offset
-     * @return
-     */
     @GetMapping("/{limit}/{offset}")
     public List<Weather> queryWeather(@PathVariable Long limit, @PathVariable Long offset) {
         return weatherService.query(limit, offset);
     }
 
-    /**
-     * upload single weather info
-     *
-     * @param temperature
-     * @param humidity
-     * @return
-     */
     @PostMapping("/{temperature}/{humidity}")
     public int saveWeather(@PathVariable float temperature, @PathVariable float humidity) {
         return weatherService.save(temperature, humidity);
tests/examples/JDBC/springbootdemo/src/main/java/com/taosdata/example/springbootdemo/dao/WeatherMapper.java
@@ -8,6 +8,8 @@ import java.util.Map;
 public interface WeatherMapper {
 
+    Map<String, Object> lastOne();
 
     void dropDB();
 
     void createDB();
tests/examples/JDBC/springbootdemo/src/main/java/com/taosdata/example/springbootdemo/dao/WeatherMapper.xml
@@ -9,20 +9,48 @@
         <result column="humidity" jdbcType="FLOAT" property="humidity"/>
     </resultMap>
 
+    <select id="lastOne" resultType="java.util.Map">
+        select last_row(*), location, groupid
+        from test.weather
+    </select>
+
     <update id="dropDB">
         drop database if exists test
     </update>
 
     <update id="createDB">
         create database if not exists test
     </update>
 
     <update id="createSuperTable">
-        create table if not exists test.weather(ts timestamp, temperature float, humidity float) tags(location nchar(64), groupId int)
+        create table if not exists test.weather(ts timestamp, temperature float, humidity float, note binary(64)) tags(location nchar(64), groupId int)
     </update>
 
     <update id="createTable" parameterType="com.taosdata.example.springbootdemo.domain.Weather">
         create table if not exists test.t#{groupId} using test.weather tags(#{location}, #{groupId})
     </update>
 
     <select id="select" resultMap="BaseResultMap">
@@ -36,25 +64,29 @@
     </select>
 
     <insert id="insert" parameterType="com.taosdata.example.springbootdemo.domain.Weather">
-        insert into test.t#{groupId} (ts, temperature, humidity) values (#{ts}, ${temperature}, ${humidity})
+        insert into test.t#{groupId} (ts, temperature, humidity, note) values (#{ts}, ${temperature}, ${humidity}, #{note})
     </insert>
 
     <select id="getSubTables" resultType="String">
         select tbname from test.weather
     </select>
 
     <select id="count" resultType="int">
         select count(*) from test.weather
     </select>
 
     <resultMap id="avgResultSet" type="com.taosdata.example.springbootdemo.domain.Weather">
         <id column="ts" jdbcType="TIMESTAMP" property="ts"/>
         <result column="avg(temperature)" jdbcType="FLOAT" property="temperature"/>
         <result column="avg(humidity)" jdbcType="FLOAT" property="humidity"/>
     </resultMap>
 
     <select id="avg" resultMap="avgResultSet">
         select avg(temperature), avg(humidity) from test.weather interval(1m)
     </select>
 </mapper>
\ No newline at end of file
tests/examples/JDBC/springbootdemo/src/main/java/com/taosdata/example/springbootdemo/domain/Weather.java
@@ -11,6 +11,7 @@ public class Weather {
     private Float temperature;
     private Float humidity;
     private String location;
+    private String note;
     private int groupId;
 
     public Weather() {
@@ -61,4 +62,12 @@ public class Weather {
     public void setGroupId(int groupId) {
         this.groupId = groupId;
     }
+
+    public String getNote() {
+        return note;
+    }
+
+    public void setNote(String note) {
+        this.note = note;
+    }
 }
tests/examples/JDBC/springbootdemo/src/main/java/com/taosdata/example/springbootdemo/service/WeatherService.java
@@ -29,6 +29,7 @@ public class WeatherService {
             Weather weather = new Weather(new Timestamp(ts + (thirtySec * i)), 30 * random.nextFloat(), random.nextInt(100));
             weather.setLocation(locations[random.nextInt(locations.length)]);
             weather.setGroupId(i % locations.length);
+            weather.setNote("note-" + i);
             weatherMapper.createTable(weather);
             count += weatherMapper.insert(weather);
         }
@@ -58,4 +59,21 @@ public class WeatherService {
     public List<Weather> avg() {
         return weatherMapper.avg();
     }
+
+    public Weather lastOne() {
+        Map<String, Object> result = weatherMapper.lastOne();
+
+        long ts = (long) result.get("ts");
+        float temperature = (float) result.get("temperature");
+        float humidity = (float) result.get("humidity");
+        String note = (String) result.get("note");
+        int groupId = (int) result.get("groupid");
+        String location = (String) result.get("location");
+
+        Weather weather = new Weather(new Timestamp(ts), temperature, humidity);
+        weather.setNote(note);
+        weather.setGroupId(groupId);
+        weather.setLocation(location);
+        return weather;
+    }
 }
tests/examples/JDBC/springbootdemo/src/main/java/com/taosdata/example/springbootdemo/util/TaosAspect.java
0 → 100644
package com.taosdata.example.springbootdemo.util;

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.springframework.stereotype.Component;

import java.sql.Timestamp;
import java.util.Map;

@Aspect
@Component
public class TaosAspect {

    @Around("execution(java.util.Map<String,Object> com.taosdata.example.springbootdemo.dao.*.*(..))")
    public Object handleType(ProceedingJoinPoint joinPoint) {
        Map<String, Object> result = null;
        try {
            result = (Map<String, Object>) joinPoint.proceed();
            for (String key : result.keySet()) {
                Object obj = result.get(key);
                if (obj instanceof byte[]) {
                    obj = new String((byte[]) obj);
                    result.put(key, obj);
                }
                if (obj instanceof Timestamp) {
                    obj = ((Timestamp) obj).getTime();
                    result.put(key, obj);
                }
            }
        } catch (Throwable e) {
            e.printStackTrace();
        }
        return result;
    }
}
tests/examples/JDBC/springbootdemo/src/main/resources/application.properties
 # datasource config - JDBC-JNI
 #spring.datasource.driver-class-name=com.taosdata.jdbc.TSDBDriver
-#spring.datasource.url=jdbc:TAOS://127.0.0.1:6030/test?timezone=UTC-8&charset=UTF-8&locale=en_US.UTF-8
+#spring.datasource.url=jdbc:TAOS://localhost:6030/?timezone=UTC-8&charset=UTF-8&locale=en_US.UTF-8
 #spring.datasource.username=root
 #spring.datasource.password=taosdata
 
 # datasource config - JDBC-RESTful
 spring.datasource.driver-class-name=com.taosdata.jdbc.rs.RestfulDriver
-spring.datasource.url=jdbc:TAOS-RS://master:6041/test?timezone=UTC-8&charset=UTF-8&locale=en_US.UTF-8
+spring.datasource.url=jdbc:TAOS-RS://localhsot:6041/test?timezone=UTC-8&charset=UTF-8&locale=en_US.UTF-8
 spring.datasource.username=root
 spring.datasource.password=taosdata
 spring.datasource.druid.initial-size=5
 spring.datasource.druid.min-idle=5
 spring.datasource.druid.max-active=5
 spring.datasource.druid.max-wait=30000
 spring.datasource.druid.validation-query=select server_status();
+spring.aop.auto=true
+spring.aop.proxy-target-class=true
 
 #mybatis
 mybatis.mapper-locations=classpath:mapper/*.xml
 
 logging.level.com.taosdata.jdbc.springbootdemo.dao=debug
tests/examples/c/apitest.c
    (diff collapsed; contents not shown)
tests/examples/c/makefile
@@ -6,8 +6,8 @@ TARGET=exe
 LFLAGS = '-Wl,-rpath,/usr/local/taos/driver/' -ltaos -lpthread -lm -lrt
 CFLAGS = -O3 -g -Wall -Wno-deprecated -fPIC -Wno-unused-result -Wconversion \
 	-Wno-char-subscripts -D_REENTRANT -Wno-format -D_REENTRANT -DLINUX \
-	-Wno-unused-function -D_M_X64 -I/usr/local/taos/include -std=gnu99
+	-Wno-unused-function -D_M_X64 -I/usr/local/taos/include -std=gnu99 \
+	-I../../../deps/cJson/inc
 
 all: $(TARGET)
 
 exe:
tests/pytest/fulltest.sh
@@ -267,7 +267,7 @@ python3 ./test.py -f query/queryStateWindow.py
 # python3 ./test.py -f query/nestedQuery/queryWithOrderLimit.py
 python3 ./test.py -f query/nestquery_last_row.py
 python3 ./test.py -f query/queryCnameDisplay.py
-python3 ./test.py -f query/operator_cost.py
+# python3 ./test.py -f query/operator_cost.py
 # python3 ./test.py -f query/long_where_query.py
 python3 test.py -f query/nestedQuery/queryWithSpread.py
tests/pytest/insert/insertJSONPayload.py
0 → 100644
###################################################################
# Copyright (c) 2021 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import sys
from util.log import *
from util.cases import *
from util.sql import *


class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug("start to execute %s" % __file__)
        tdSql.init(conn.cursor(), logSql)
        self._conn = conn

    def run(self):
        print("running {}".format(__file__))
        tdSql.execute("drop database if exists test")
        tdSql.execute("create database if not exists test precision 'us'")
        tdSql.execute('use test')

        ### Default format ###
        ### metric value ###
        print("============= step1 : test metric value types ================")

        payload = '''
        {
            "metric": "stb0_0",
            "timestamp": 1626006833610123,
            "value": 10,
            "tags": {"t1": true, "t2": false, "t3": 10, "t4": "123_abc_.!@#$%^&*:;,./?|+-=()[]{}<>"}
        }
        '''
        code = self._conn.insert_json_payload(payload)
        print("insert_json_payload result {}".format(code))
        tdSql.query("describe stb0_0")
        tdSql.checkData(1, 1, "FLOAT")

        payload = '''
        {
            "metric": "stb0_1",
            "timestamp": 1626006833610123,
            "value": true,
            "tags": {"t1": true, "t2": false, "t3": 10, "t4": "123_abc_.!@#$%^&*:;,./?|+-=()[]{}<>"}
        }
        '''
        code = self._conn.insert_json_payload(payload)
        print("insert_json_payload result {}".format(code))
        tdSql.query("describe stb0_1")
        tdSql.checkData(1, 1, "BOOL")

        payload = '''
        {
            "metric": "stb0_2",
            "timestamp": 1626006833610123,
            "value": false,
            "tags": {"t1": true, "t2": false, "t3": 10, "t4": "123_abc_.!@#$%^&*:;,./?|+-=()[]{}<>"}
        }
        '''
        code = self._conn.insert_json_payload(payload)
        print("insert_json_payload result {}".format(code))
        tdSql.query("describe stb0_2")
        tdSql.checkData(1, 1, "BOOL")

        payload = '''
        {
            "metric": "stb0_3",
            "timestamp": 1626006833610123,
            "value": "123_abc_.!@#$%^&*:;,./?|+-=()[]{}<>",
            "tags": {"t1": true, "t2": false, "t3": 10, "t4": "123_abc_.!@#$%^&*:;,./?|+-=()[]{}<>"}
        }
        '''
        code = self._conn.insert_json_payload(payload)
        print("insert_json_payload result {}".format(code))
        tdSql.query("describe stb0_3")
        tdSql.checkData(1, 1, "NCHAR")

        ### timestamp 0 ###
        payload = '''
        {
            "metric": "stb0_4",
            "timestamp": 0,
            "value": 123,
            "tags": {"t1": true, "t2": false, "t3": 10, "t4": "123_abc_.!@#$%^&*:;,./?|+-=()[]{}<>"}
        }
        '''
        code = self._conn.insert_json_payload(payload)
        print("insert_json_payload result {}".format(code))

        ### ID ###
        payload = '''
        {
            "metric": "stb0_5",
            "timestamp": 0,
            "value": 123,
            "tags": {
                "ID": "tb0_5",
                "t1": true,
                "iD": "tb000",
                "t2": false,
                "t3": 10,
                "t4": "123_abc_.!@#$%^&*:;,./?|+-=()[]{}<>",
                "id": "tb555"
            }
        }
        '''
        code = self._conn.insert_json_payload(payload)
        print("insert_json_payload result {}".format(code))
        tdSql.query("select tbname from stb0_5")
        tdSql.checkData(0, 0, "tb0_5")

        ### Nested format ###
        ### timestamp ###
        #seconds
        payload = '''
        {
            "metric": "stb1_0",
            "timestamp": {"value": 1626006833, "type": "s"},
            "value": 10,
            "tags": {"t1": true, "t2": false, "t3": 10, "t4": "123_abc_.!@#$%^&*:;,./?|+-=()[]{}<>"}
        }
        '''
        code = self._conn.insert_json_payload(payload)
        print("insert_json_payload result {}".format(code))
        tdSql.query("select ts from stb1_0")
        tdSql.checkData(0, 0, "2021-07-11 20:33:53.000000")

        #milliseconds
        payload = '''
        {
            "metric": "stb1_1",
            "timestamp": {"value": 1626006833610, "type": "ms"},
            "value": 10,
            "tags": {"t1": true, "t2": false, "t3": 10, "t4": "123_abc_.!@#$%^&*:;,./?|+-=()[]{}<>"}
        }
        '''
        code = self._conn.insert_json_payload(payload)
        print("insert_json_payload result {}".format(code))
        tdSql.query("select ts from stb1_1")
        tdSql.checkData(0, 0, "2021-07-11 20:33:53.610000")

        #microseconds
        payload = '''
        {
            "metric": "stb1_2",
            "timestamp": {"value": 1626006833610123, "type": "us"},
            "value": 10,
            "tags": {"t1": true, "t2": false, "t3": 10, "t4": "123_abc_.!@#$%^&*:;,./?|+-=()[]{}<>"}
        }
        '''
        code = self._conn.insert_json_payload(payload)
        print("insert_json_payload result {}".format(code))
        tdSql.query("select ts from stb1_2")
        tdSql.checkData(0, 0, "2021-07-11 20:33:53.610123")

        #nanoseconds
        payload = '''
        {
            "metric": "stb1_3",
            "timestamp": {"value": 1.6260068336101233e+18, "type": "ns"},
            "value": 10,
            "tags": {"t1": true, "t2": false, "t3": 10, "t4": "123_abc_.!@#$%^&*:;,./?|+-=()[]{}<>"}
        }
        '''
        code = self._conn.insert_json_payload(payload)
        print("insert_json_payload result {}".format(code))
        tdSql.query("select ts from stb1_3")
        tdSql.checkData(0, 0, "2021-07-11 20:33:53.610123")

        #now
        tdSql.execute('use test')
        payload = '''
        {
            "metric": "stb1_4",
            "timestamp": {"value": 0, "type": "ns"},
            "value": 10,
            "tags": {"t1": true, "t2": false, "t3": 10, "t4": "123_abc_.!@#$%^&*:;,./?|+-=()[]{}<>"}
        }
        '''
        code = self._conn.insert_json_payload(payload)
        print("insert_json_payload result {}".format(code))

        ### metric value ###
        payload = '''
        {
            "metric": "stb2_0",
            "timestamp": {"value": 1626006833, "type": "s"},
            "value": {"value": true, "type": "bool"},
            "tags": {"t1": true, "t2": false, "t3": 10, "t4": "123_abc_.!@#$%^&*:;,./?|+-=()[]{}<>"}
        }
        '''
        code = self._conn.insert_json_payload(payload)
        print("insert_json_payload result {}".format(code))
        tdSql.query("describe stb2_0")
        tdSql.checkData(1, 1, "BOOL")

        payload = '''
        {
            "metric": "stb2_1",
            "timestamp": {"value": 1626006833, "type": "s"},
            "value": {"value": 127, "type": "tinyint"},
            "tags": {"t1": true, "t2": false, "t3": 10, "t4": "123_abc_.!@#$%^&*:;,./?|+-=()[]{}<>"}
        }
        '''
        code = self._conn.insert_json_payload(payload)
        print("insert_json_payload result {}".format(code))
        tdSql.query("describe stb2_1")
        tdSql.checkData(1, 1, "TINYINT")

        payload = '''
        {
            "metric": "stb2_2",
            "timestamp": {"value": 1626006833, "type": "s"},
            "value": {"value": 32767, "type": "smallint"},
            "tags": {"t1": true, "t2": false, "t3": 10, "t4": "123_abc_.!@#$%^&*:;,./?|+-=()[]{}<>"}
        }
        '''
        code = self._conn.insert_json_payload(payload)
        print("insert_json_payload result {}".format(code))
        tdSql.query("describe stb2_2")
        tdSql.checkData(1, 1, "SMALLINT")

        payload = '''
        {
            "metric": "stb2_3",
            "timestamp": {"value": 1626006833, "type": "s"},
            "value": {"value": 2147483647, "type": "int"},
            "tags": {"t1": true, "t2": false, "t3": 10, "t4": "123_abc_.!@#$%^&*:;,./?|+-=()[]{}<>"}
        }
        '''
        code = self._conn.insert_json_payload(payload)
        print("insert_json_payload result {}".format(code))
        tdSql.query("describe stb2_3")
        tdSql.checkData(1, 1, "INT")

        payload = '''
        {
            "metric": "stb2_4",
            "timestamp": {"value": 1626006833, "type": "s"},
            "value": {"value": 9.2233720368547758e+18, "type": "bigint"},
            "tags": {"t1": true, "t2": false, "t3": 10, "t4": "123_abc_.!@#$%^&*:;,./?|+-=()[]{}<>"}
        }
        '''
        code = self._conn.insert_json_payload(payload)
        print("insert_json_payload result {}".format(code))
        tdSql.query("describe stb2_4")
        tdSql.checkData(1, 1, "BIGINT")

        payload = '''
        {
            "metric": "stb2_5",
            "timestamp": {"value": 1626006833, "type": "s"},
            "value": {"value": 11.12345, "type": "float"},
            "tags": {"t1": true, "t2": false, "t3": 10, "t4": "123_abc_.!@#$%^&*:;,./?|+-=()[]{}<>"}
        }
        '''
        code = self._conn.insert_json_payload(payload)
        print("insert_json_payload result {}".format(code))
        tdSql.query("describe stb2_5")
        tdSql.checkData(1, 1, "FLOAT")

        payload = '''
        {
            "metric": "stb2_6",
            "timestamp": {"value": 1626006833, "type": "s"},
            "value": {"value": 22.123456789, "type": "double"},
            "tags": {"t1": true, "t2": false, "t3": 10, "t4": "123_abc_.!@#$%^&*:;,./?|+-=()[]{}<>"}
        }
        '''
        code = self._conn.insert_json_payload(payload)
        print("insert_json_payload result {}".format(code))
        tdSql.query("describe stb2_6")
        tdSql.checkData(1, 1, "DOUBLE")

        payload = '''
        {
            "metric": "stb2_7",
            "timestamp": {"value": 1626006833, "type": "s"},
            "value": {"value": "123_abc_.!@#$%^&*:;,./?|+-=()[]{}<>", "type": "binary"},
            "tags": {"t1": true, "t2": false, "t3": 10, "t4": "123_abc_.!@#$%^&*:;,./?|+-=()[]{}<>"}
        }
        '''
        code = self._conn.insert_json_payload(payload)
        print("insert_json_payload result {}".format(code))
        tdSql.query("describe stb2_7")
        tdSql.checkData(1, 1, "BINARY")

        payload = '''
        {
            "metric": "stb2_8",
            "timestamp": {"value": 1626006833, "type": "s"},
            "value": {"value": "你好", "type": "nchar"},
            "tags": {"t1": true, "t2": false, "t3": 10, "t4": "123_abc_.!@#$%^&*:;,./?|+-=()[]{}<>"}
        }
        '''
        code = self._conn.insert_json_payload(payload)
        print("insert_json_payload result {}".format(code))
        tdSql.query("describe stb2_8")
        tdSql.checkData(1, 1, "NCHAR")

        ### tag value ###
        payload = '''
        {
            "metric": "stb3_0",
            "timestamp": {"value": 1626006833, "type": "s"},
            "value": {"value": "hello", "type": "nchar"},
            "tags": {
                "t1": {"value": true, "type": "bool"},
                "t2": {"value": 127, "type": "tinyint"},
                "t3": {"value": 32767, "type": "smallint"},
                "t4": {"value": 2147483647, "type": "int"},
                "t5": {"value": 9.2233720368547758e+18, "type": "bigint"},
                "t6": {"value": 11.12345, "type": "float"},
                "t7": {"value": 22.123456789, "type": "double"},
                "t8": {"value": "binary_val", "type": "binary"},
                "t9": {"value": "你好", "type": "nchar"}
            }
        }
        '''
        code = self._conn.insert_json_payload(payload)
        print("insert_json_payload result {}".format(code))
        tdSql.query("describe stb3_0")
        tdSql.checkData(2, 1, "BOOL")
        tdSql.checkData(3, 1, "TINYINT")
        tdSql.checkData(4, 1, "SMALLINT")
        tdSql.checkData(5, 1, "INT")
        tdSql.checkData(6, 1, "BIGINT")
        tdSql.checkData(7, 1, "FLOAT")
        tdSql.checkData(8, 1, "DOUBLE")
        tdSql.checkData(9, 1, "BINARY")
        tdSql.checkData(10, 1, "NCHAR")

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())