Commit 986aff05 authored by Haojun Liao

[TD-225]merge develop

......@@ -32,7 +32,7 @@ ELSEIF (TD_WINDOWS)
#INSTALL(TARGETS taos RUNTIME DESTINATION driver)
#INSTALL(TARGETS shell RUNTIME DESTINATION .)
IF (TD_MVN_INSTALLED)
INSTALL(FILES ${LIBRARY_OUTPUT_PATH}/taos-jdbcdriver-2.0.15-dist.jar DESTINATION connector/jdbc)
INSTALL(FILES ${LIBRARY_OUTPUT_PATH}/taos-jdbcdriver-2.0.16-dist.jar DESTINATION connector/jdbc)
ENDIF ()
ELSEIF (TD_DARWIN)
SET(TD_MAKE_INSTALL_SH "${TD_COMMUNITY_DIR}/packaging/tools/make_install.sh")
......
......@@ -4,7 +4,7 @@ PROJECT(TDengine)
IF (DEFINED VERNUMBER)
SET(TD_VER_NUMBER ${VERNUMBER})
ELSE ()
SET(TD_VER_NUMBER "2.0.13.0")
SET(TD_VER_NUMBER "2.0.14.0")
ENDIF ()
IF (DEFINED VERCOMPATIBLE)
......
......@@ -68,7 +68,9 @@ TDengine's default timestamp has millisecond precision, but this can be changed via the configuration parameter enableMic
2) the UPDATE flag marks the database as supporting updates to data with an existing timestamp (a sketch follows below);
3) the maximum length of a database name is 33;
4) the maximum length of a single SQL statement is 65480 characters;
5) the database has many more storage-related configuration parameters; see System Administration.
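A minimal sketch of note 2) above; the database name power is illustrative, and running show databases afterwards confirms the setting:
```mysql
CREATE DATABASE IF NOT EXISTS power UPDATE 1;
SHOW DATABASES;
```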
- **Show current system parameters**
......@@ -119,6 +121,7 @@ TDengine's default timestamp has millisecond precision, but this can be changed via the configuration parameter enableMic
**Tips**: after changing any of the parameters above, run show databases to confirm that the change took effect.
- **Show all databases**
```mysql
SHOW DATABASES;
```
......@@ -130,10 +133,15 @@ TDengine's default timestamp has millisecond precision, but this can be changed via the configuration parameter enableMic
CREATE TABLE [IF NOT EXISTS] tb_name (timestamp_field_name TIMESTAMP, field1_name data_type1 [, field2_name data_type2 ...]);
```
Notes:
1) the first column of a table must be of TIMESTAMP type, and the system automatically makes it the primary key;
2) the maximum length of a table name is 192;
3) each row of a table cannot exceed 16k characters;
4) a sub-table name may contain only letters, digits, and underscores, and cannot begin with a digit;
5) when using the binary or nchar data types, the maximum number of bytes must be specified, e.g. binary(20) means 20 bytes (see the sketch below);
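A quick sketch of notes 1) and 5); the table and column names here are illustrative:
```mysql
CREATE TABLE IF NOT EXISTS sensor_log (ts TIMESTAMP, device_id BINARY(20), val FLOAT);
```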
- **Create a table using a super table as its template**
......@@ -149,7 +157,12 @@ TDengine's default timestamp has millisecond precision, but this can be changed via the configuration parameter enableMic
CREATE TABLE [IF NOT EXISTS] tb_name1 USING stb_name TAGS (tag_value1, ...) tb_name2 USING stb_name TAGS (tag_value2, ...) ...;
```
Creates a large number of tables in batch at higher speed. (Server version 2.0.14 and above.)
Note: the batch creation statement requires that the tables use a super table as their template.
Notes:
1) The batch creation statement requires that the tables use a super table as their template.
2) Provided the SQL statement length limit is not exceeded, keeping the number of tables per statement between 1000 and 3000 is recommended for ideal creation speed (see the sketch below).
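A minimal sketch of such a batch statement, assuming the super table meters created in the examples later in this document; table names and tag values are illustrative:
```mysql
CREATE TABLE d2001 USING meters TAGS ('Beijing', 2) d2002 USING meters TAGS ('Shanghai', 3);
```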
- **Drop a table**
......@@ -164,7 +177,9 @@ TDengine's default timestamp has millisecond precision, but this can be changed via the configuration parameter enableMic
```
Shows information about all tables in the current database.
Note: wildcards can be used in the like clause to match table names; the wildcard string cannot exceed 24 bytes.
Wildcard matching: 1) '%' (percent sign) matches zero or more characters; 2) '\_' (underscore) matches exactly one character.
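For example, the two wildcards can be used as follows (the name prefixes are illustrative):
```mysql
SHOW TABLES LIKE 'd100_';
SHOW TABLES LIKE 'm%';
```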
- **Change the displayed character width online**
......@@ -185,7 +200,9 @@ TDengine's default timestamp has millisecond precision, but this can be changed via the configuration parameter enableMic
ALTER TABLE tb_name ADD COLUMN field_name data_type;
```
Notes:
1) the maximum number of columns is 1024, and the minimum is 2;
2) the maximum length of a column name is 64 (a sketch follows below);
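A usage sketch, assuming an ordinary table tb1 (not created from a super table template) that still has fewer than 1024 columns; the new column name is illustrative:
```mysql
ALTER TABLE tb1 ADD COLUMN speed FLOAT;
```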
- **Drop a column from a table**
......@@ -204,9 +221,13 @@ TDengine's default timestamp has millisecond precision, but this can be changed via the configuration parameter enableMic
Creates an STable. The SQL syntax is similar to creating a table, but the names and types of the TAGS columns must also be specified (an example follows below).
Notes:
1) the data type of a TAGS column cannot be timestamp;
2) a TAGS column name cannot duplicate any other column name;
3) a TAGS column name cannot be a reserved keyword;
4) at most 128 TAGS are allowed, and at least 1; their total length cannot exceed 16k characters.
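The meters super table used in the examples later in this document is created this way; note that the TAGS types avoid timestamp and the tag names do not clash with the column names, per the notes above:
```mysql
CREATE TABLE meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(30), groupId INT);
```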
- **Drop a super table (STable)**
......@@ -333,7 +354,9 @@ SELECT select_expr [, select_expr ...]
[LIMIT limit_val [, OFFSET offset_val]]
[>> export_file]
```
Note: for insert-type SQL statements we adopt a streaming parsing strategy, so the correct leading part of the SQL is executed before a later error is discovered. In the SQL below, the insert statement is invalid, but table d1001 is still created.
```mysql
taos> CREATE TABLE meters(ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS(location BINARY(30), groupId INT);
Query OK, 0 row(s) affected (0.008245s)
......@@ -614,10 +637,20 @@ TDengine supports aggregation queries over data. The supported aggregation and selection functions
SELECT COUNT([*|field_name]) FROM tb_name [WHERE clause];
```
Function: returns the number of rows in a table/super table, or the number of non-NULL values in a column.
Return type: long integer INT64.
Applicable columns: all columns.
Applies to: tables and super tables.
Note: 1) the asterisk (*) can replace a specific column and returns the total number of records; 2) the counts for all (NULL-free) columns of the same table are identical; 3) if a specific column is counted, the number of records with non-NULL values in that column is returned.
Notes:
1) the asterisk (*) can replace a specific column and returns the total number of records;
2) the counts for all (NULL-free) columns of the same table are identical;
3) if a specific column is counted, the number of records with non-NULL values in that column is returned.
Example:
```mysql
......@@ -639,8 +672,11 @@ TDengine supports aggregation queries over data. The supported aggregation and selection functions
SELECT AVG(field_name) FROM tb_name [WHERE clause];
```
Function: returns the average value of a column in a table/super table.
Return type: double-precision floating point (Double).
Applicable columns: cannot be applied to timestamp, binary, nchar, or bool columns.
Applies to: tables and super tables.
Example:
......@@ -663,8 +699,11 @@ TDengine supports aggregation queries over data. The supported aggregation and selection functions
SELECT TWA(field_name) FROM tb_name WHERE clause;
```
Function: time-weighted average; returns the time-weighted average of a column in a table/super table over a period of time (see the example below).
Return type: double-precision floating point (Double).
Applicable columns: cannot be applied to timestamp, binary, nchar, or bool columns.
Applies to: tables and super tables.
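Since the TWA syntax above requires a WHERE clause to bound the time range, a usage sketch against the d1001 table from this document's other examples (the time bounds are illustrative) might look like:
```mysql
SELECT TWA(current) FROM d1001 WHERE ts >= '2018-10-03 14:38:05.000' AND ts <= '2018-10-03 14:38:16.800';
```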
- **SUM**
......@@ -672,8 +711,11 @@ TDengine supports aggregation queries over data. The supported aggregation and selection functions
SELECT SUM(field_name) FROM tb_name [WHERE clause];
```
Function: returns the sum of a column in a table/super table.
Return type: double-precision floating point (Double) or long integer (INT64).
Applicable columns: cannot be applied to timestamp, binary, nchar, or bool columns.
Applies to: tables and super tables.
Example:
......@@ -696,9 +738,12 @@ TDengine supports aggregation queries over data. The supported aggregation and selection functions
SELECT STDDEV(field_name) FROM tb_name [WHERE clause];
```
Function: returns the standard deviation of a column in a table.
Return type: double-precision floating point (Double).
Applicable columns: cannot be applied to timestamp, binary, nchar, or bool columns.
Applies to: tables.
Applies to: tables. (Starting with version 2.0.15, this function also supports super tables.)
Example:
```mysql
......@@ -714,9 +759,13 @@ TDengine supports aggregation queries over data. The supported aggregation and selection functions
SELECT LEASTSQUARES(field_name, start_val, step_val) FROM tb_name [WHERE clause];
```
Function: returns the straight line fitted to a column's values over the primary key (timestamp). start_val is the initial value of the independent variable and step_val is its step size.
Return type: a string expression (slope, intercept).
Applicable columns: cannot be applied to timestamp, binary, nchar, or bool columns.
Note: the independent variable is the timestamp and the dependent variable is the column's values.
Applies to: tables.
Example:
......@@ -735,7 +784,9 @@ TDengine supports aggregation queries over data. The supported aggregation and selection functions
SELECT MIN(field_name) FROM {tb_name | stb_name} [WHERE clause];
```
Function: returns the minimum value of a column in a table/super table.
Return type: same as the applied column.
Applicable columns: cannot be applied to timestamp, binary, nchar, or bool columns.
Example:
......@@ -758,7 +809,9 @@ TDengine supports aggregation queries over data. The supported aggregation and selection functions
SELECT MAX(field_name) FROM { tb_name | stb_name } [WHERE clause];
```
Function: returns the maximum value of a column in a table/super table.
Return type: same as the applied column.
Applicable columns: cannot be applied to timestamp, binary, nchar, or bool columns.
Example:
......@@ -781,9 +834,18 @@ TDengine supports aggregation queries over data. The supported aggregation and selection functions
SELECT FIRST(field_name) FROM { tb_name | stb_name } [WHERE clause];
```
Function: returns the first non-NULL value written to a column in a table/super table.
Return type: same as the applied column.
Applicable columns: all columns.
Note: 1) to return the first (smallest-timestamp) non-NULL value of every column, use FIRST(\*); 2) if every value of a column in the result set is NULL, the result for that column is also NULL; 3) if every column in the result set is entirely NULL, no result is returned.
Notes:
1) to return the first (smallest-timestamp) non-NULL value of every column, use FIRST(\*);
2) if every value of a column in the result set is NULL, the result for that column is also NULL;
3) if every column in the result set is entirely NULL, no result is returned.
Example:
```mysql
......@@ -805,9 +867,16 @@ TDengine supports aggregation queries over data. The supported aggregation and selection functions
SELECT LAST(field_name) FROM { tb_name | stb_name } [WHERE clause];
```
Function: returns the last non-NULL value written to a column in a table/super table.
Return type: same as the applied column.
Applicable columns: all columns.
Note: 1) to return the last (largest-timestamp) non-NULL value of every column, use LAST(\*); 2) if every value of a column in the result set is NULL, the result for that column is also NULL; if every column in the result set is entirely NULL, no result is returned.
Notes:
1) to return the last (largest-timestamp) non-NULL value of every column, use LAST(\*);
2) if every value of a column in the result set is NULL, the result for that column is also NULL; if every column in the result set is entirely NULL, no result is returned.
Example:
```mysql
......@@ -829,9 +898,16 @@ TDengine supports aggregation queries over data. The supported aggregation and selection functions
SELECT TOP(field_name, K) FROM { tb_name | stb_name } [WHERE clause];
```
Function: returns the *k* largest non-NULL values of a column in a table/super table. If more than k values tie for the largest, the ones with smaller timestamps are returned.
Return type: same as the applied column.
Applicable columns: cannot be applied to timestamp, binary, nchar, or bool columns.
Note: 1) *k* must satisfy 1≤*k*≤100; 2) the system also returns the timestamp column associated with each record.
Notes:
1) *k* must satisfy 1≤*k*≤100;
2) the system also returns the timestamp column associated with each record.
Example:
```mysql
......@@ -856,9 +932,16 @@ TDengine supports aggregation queries over data. The supported aggregation and selection functions
SELECT BOTTOM(field_name, K) FROM { tb_name | stb_name } [WHERE clause];
```
Function: returns the *k* smallest non-NULL values of a column in a table/super table. If more than k values tie for the smallest, the ones with smaller timestamps are returned.
Return type: same as the applied column.
Applicable columns: cannot be applied to timestamp, binary, nchar, or bool columns.
Note: 1) *k* must satisfy 1≤*k*≤100; 2) the system also returns the timestamp column associated with each record.
Notes:
1) *k* must satisfy 1≤*k*≤100;
2) the system also returns the timestamp column associated with each record.
Example:
```mysql
......@@ -882,8 +965,11 @@ TDengine supports aggregation queries over data. The supported aggregation and selection functions
SELECT PERCENTILE(field_name, P) FROM { tb_name } [WHERE clause];
```
Function: returns the P-th percentile of a column's values in a table.
Return type: double-precision floating point (Double).
Applicable columns: cannot be applied to timestamp, binary, nchar, or bool columns.
Note: *P* must satisfy 0≤*P*≤100; P=0 is equivalent to MIN and P=100 to MAX.
Example:
......@@ -900,9 +986,13 @@ TDengine supports aggregation queries over data. The supported aggregation and selection functions
SELECT APERCENTILE(field_name, P) FROM { tb_name | stb_name } [WHERE clause];
```
Function: returns the P-th percentile of a column's values in a table; similar to the PERCENTILE function, but returns an approximate result.
Return type: double-precision floating point (Double).
Applicable columns: cannot be applied to timestamp, binary, nchar, or bool columns.
Note: *P* must satisfy 0≤*P*≤100; P=0 is equivalent to MIN and P=100 to MAX. The `APERCENTILE` function is recommended, as it performs far better than `PERCENTILE`.
```mysql
taos> SELECT APERCENTILE(current, 20) FROM d1001;
apercentile(current, 20) |
......@@ -916,8 +1006,11 @@ TDengine supports aggregation queries over data. The supported aggregation and selection functions
SELECT LAST_ROW(field_name) FROM { tb_name | stb_name };
```
Function: returns the last record of a table (super table).
Return type: same as the applied column.
Applicable columns: all columns.
Note: unlike the last function, last_row does not accept a time-range restriction and always returns the last record.
Example:
......@@ -941,8 +1034,11 @@ TDengine supports aggregation queries over data. The supported aggregation and selection functions
SELECT DIFF(field_name) FROM tb_name [WHERE clause];
```
Function: returns the difference between a column's value and its value in the previous row.
Return type: same as the applied column.
Applicable columns: cannot be applied to timestamp, binary, nchar, or bool columns.
Note: the number of output rows is the total number of rows in the range minus one; the first row produces no output.
Example:
......@@ -960,8 +1056,11 @@ TDengine supports aggregation queries over data. The supported aggregation and selection functions
SELECT SPREAD(field_name) FROM { tb_name | stb_name } [WHERE clause];
```
Function: returns the difference between the maximum and minimum values of a column in a table/super table.
Return type: double-precision floating point.
Applicable columns: cannot be applied to binary, nchar, or bool columns.
Note: can be applied to a TIMESTAMP column, in which case it represents the time span covered by the records.
Example:
......@@ -985,9 +1084,16 @@ TDengine supports aggregation queries over data. The supported aggregation and selection functions
SELECT field_name [+|-|*|/|%][Value|field_name] FROM { tb_name | stb_name } [WHERE clause];
```
Function: computes the result of addition, subtraction, multiplication, division, or modulo over one or more columns of a table/super table.
Return type: double-precision floating point.
Applicable columns: cannot be applied to timestamp, binary, nchar, or bool columns.
Note: 1) calculations between two or more columns are supported, and parentheses can control operator precedence; 2) NULL values do not take part in the calculation; if any operand in a row is NULL, that row's result is NULL.
Notes:
1) calculations between two or more columns are supported, and parentheses can control operator precedence;
2) NULL values do not take part in the calculation; if any operand in a row is NULL, that row's result is NULL.
```mysql
taos> SELECT current + voltage * phase FROM d1001;
......@@ -1051,7 +1157,7 @@ SELECT AVG(current), MAX(current), LEASTSQUARES(current, start_val, step_val), P
- the maximum length of a database name is 32
- the maximum length of a table name is 192; the maximum length of a data row is 16k characters
- the maximum length of a column name is 64; at most 1024 columns are allowed and at least 2 are required, the first of which must be a timestamp
- at most 128 tags are allowed; there may be 0; their total length cannot exceed 16k characters
- at most 128 tags are allowed, and at least 1; their total length cannot exceed 16k characters
- the maximum length of a SQL statement is 65480 characters, but it can be changed via the system configuration parameter maxSQLLength, up to 1M
- the numbers of databases, super tables, and tables are not limited by the system, only by system resources
......
......@@ -255,7 +255,7 @@ taos -C or taos --dump-config
CREATE USER <user_name> PASS <'password'>;
```
Creates a user, specifying the user name and password. The password must be enclosed in single quotes; use ASCII (half-width) single quotes.
```sql
DROP USER <user_name>;
......@@ -267,13 +267,15 @@ DROP USER <user_name>;
ALTER USER <user_name> PASS <'password'>;
```
Changes a user's password. To prevent it from being converted to lowercase, the password must be enclosed in single quotes; use ASCII (half-width) single quotes.
```sql
ALTER USER <user_name> PRIVILEGE <super|write|read>;
ALTER USER <user_name> PRIVILEGE <write|read>;
```
Changes a user's privilege to super/write/read; single quotes are not needed.
Changes a user's privilege to write or read; single quotes are not needed.
Note: the system has three privilege levels, super/write/read, but the alter command currently cannot grant the super privilege to a user.
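For example (the user name is illustrative, and note that super cannot be granted this way):
```mysql
ALTER USER test_user PRIVILEGE write;
```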
```mysql
SHOW USERS;
......@@ -432,11 +434,12 @@ All TDengine executables are stored under the _/usr/local/taos/bin_ directory by default
- database name: must not contain '.' or special characters, and cannot exceed 32 characters
- table name: must not contain '.' or special characters; together with its database name, it cannot exceed 192 characters
- table column name: must not contain special characters, and cannot exceed 64 characters
- database, table, and column names must not begin with a digit
- number of columns per table: cannot exceed 1024
- maximum record length: including the 8-byte timestamp, cannot exceed 16KB
- default maximum length of a single SQL statement: 65480 bytes
- number of database replicas: cannot exceed 3
- user name: cannot exceed 20 bytes
- user name: cannot exceed 23 bytes
- user password: cannot exceed 15 bytes
- number of tags (Tags): cannot exceed 128
- total tag length: cannot exceed 16KB
......
......@@ -248,7 +248,7 @@ A master vnode follows the write process below:
1. The master vnode receives the application's data insertion request, validates it, and moves to the next step;
2. If the system configuration parameter walLevel is greater than 0, the vnode writes the request's raw data packet to the database log file WAL. If walLevel is set to 2 and fsync to 0, TDengine also flushes the WAL data to disk immediately, so that data can be recovered from the database log file even after a crash, avoiding data loss;
3. If there are multiple replicas, the vnode forwards the data packet to the slave vnodes in the same virtual node group; the forwarded packet carries the data's version number (version);
4. The data is written to memory and the record is added to the skip list;
5. The master vnode returns an acknowledgment to the application, indicating a successful write.
6. If any of steps 2, 3, or 4 fails, an error is returned directly to the application.
......@@ -372,7 +372,7 @@ select count(*) from d1001 interval(1h);
select count(*) from d1001 interval(1h) fill(prev);
```
Counts the records collected by device d1001 hour by hour; if an hour has no data, the statistics of the previous hour are returned. TDengine provides forward fill (prev), linear interpolation (linear), NULL fill (NULL), and specific-value fill (value), as sketched below.
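Following the same pattern, a value fill that substitutes a constant (here 0, illustrative) for empty hours could be written as:
```mysql
select count(*) from d1001 interval(1h) fill(value, 0);
```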
### Multi-table aggregation queries
TDengine creates a separate table for each data collection point, but in practice data from different collection points often needs to be aggregated. To perform aggregation efficiently, TDengine introduces the concept of the super table (STable). A super table represents one particular type of data collection point: it is a collection of tables whose schemas are exactly the same, while each table carries its own static tags; there may be several tags, and they can be added, removed, or modified at any time. By specifying filter conditions on tags, an application can aggregate or run statistics over all or a subset of the tables under an STable, which greatly simplifies application development. The process is shown in the figure below:
......
......@@ -218,7 +218,7 @@ SHOW MNODES;
If a data node goes offline, the TDengine cluster will detect it automatically. There are two cases:
- The data node stays offline beyond a threshold (the taos.cfg parameter offlineThreshold controls the duration). The system then automatically removes the data node, raises a system alert, and triggers the load-balancing process. If the removed data node comes back online, it cannot rejoin the cluster; a system administrator must add it back into the cluster before it resumes work.
- The data node comes back online within the offlineThreshold duration. The system automatically starts the data recovery process, and once the data is fully recovered, the node resumes normal work.
**Note:** if every data node that a virtual node group (including the mnode group) belongs to is offline or in the unsynced state, a master can be elected, and the virtual node group can serve requests, only after all data nodes in the group are online and able to exchange status information. For example, if the whole cluster has 3 data nodes with a replica count of 3, and all 3 nodes crash and then 2 of them restart, the cluster cannot work; only after all 3 data nodes restart successfully can it serve requests again.
......@@ -227,5 +227,5 @@ SHOW MNODES;
If the number of replicas is even, a master cannot be elected when half of the vnodes in a vnode group stop working. Likewise, when half of the mnodes stop working, an mnode master cannot be elected, because of the "split brain" problem. To solve this, TDengine introduces the concept of the arbitrator. An arbitrator simulates a working vnode or mnode, but it only maintains the network connection and does not process any data insertion or access. As long as more than half of the vnodes or mnodes, counting the arbitrator, are working, the vnode group or mnode group can provide data insertion and query services normally. For example, with a replica count of 2, if node A goes offline but node B is working and can connect to the arbitrator, node B can keep working.
TDengine provides an executable named tarbitrator; just run it on any Linux server. Click [download the installation packages](https://www.taosdata.com/cn/all-downloads/) and, in the TDengine Arbitrator Linux section, choose a suitable version to download and install. The program requires almost no system resources; only a network connection is needed. Its command-line parameter `-p` specifies the port it serves on, 6042 by default. When configuring each taosd instance, the parameter arbitrator in taos.cfg can be set to the arbitrator's End Point. If it is configured and the replica count is even, the system automatically connects to the configured arbitrator; if the replica count is odd, the system does not establish the connection even if an arbitrator is configured.
......@@ -89,6 +89,8 @@ SHOW DNODES;
```
It lists all dnodes in the cluster, with each dnode's fqdn:port, status (ready, offline, etc.), number of vnodes, number of still-unused vnodes, and so on. You can run this command after adding or removing a node.
If the cluster has an Arbitrator configured, it also shows up in this node list, with the value "arb" in its role column.
### View virtual node groups
To make full use of multi-core hardware and provide scalability, data must be sharded. TDengine therefore splits a DB's data into multiple shards stored in multiple vnodes. These vnodes may be distributed across multiple dnodes, achieving horizontal scale-out. A vnode belongs to exactly one DB, but one DB can have multiple vnodes. vnodes are allocated automatically by the mnode according to current system resources, without any manual intervention.
......@@ -139,4 +141,6 @@ SHOW MNODES;
If the number of replicas is even, a master cannot be elected when half of the vnodes in a vnode group stop working. Likewise, when half of the mnodes stop working, an mnode master cannot be elected, because of the "split brain" problem. To solve this, TDengine introduces the concept of the arbitrator. An arbitrator simulates a working vnode or mnode, but it only maintains the network connection and does not process any data insertion or access. As long as more than half of the vnodes or mnodes, counting the arbitrator, are working, the vnode group or mnode group can provide data insertion and query services normally. For example, with a replica count of 2, if node A goes offline but node B is working and can connect to the arbitrator, node B can keep working.
The TDengine installation package includes an executable named tarbitrator; just run it on any Linux server. The program requires almost no system resources; only a network connection is needed. Its command-line parameter `-p` specifies the port it serves on, 6030 by default. When configuring each taosd instance, the parameter arbitrator in taos.cfg can be set to the Arbitrator's End Point. If it is configured and the replica count is even, the system automatically connects to the configured Arbitrator.
When an Arbitrator is configured, it also appears in the node list produced by the "show dnodes;" command.
......@@ -252,7 +252,7 @@ The C/C++ API is similar to MySQL's C API. To use it, an application needs to include the TDengine
- `void taos_free_result(TAOS_RES *res)`
Releases the query result set and related resources. After a query finishes, be sure to call this API to release the resources; otherwise the application may leak memory.
Releases the query result set and related resources. After a query finishes, be sure to call this API to release the resources; otherwise the application may leak memory. Note, however, that once the resources are released, calling a result-fetching function such as `taos_consume` will crash the application.
- `char *taos_errstr(TAOS_RES *res)`
......@@ -262,11 +262,11 @@ The C/C++ API is similar to MySQL's C API. To use it, an application needs to include the TDengine
Returns the reason for the most recent failed API call; the return value is an error code.
**Note**: for each database application, TDengine 2.0 and above recommends establishing only one connection and passing that connection (TAOS*) structure to different threads for shared use. Queries, writes, and other operations issued through the TAOS structure are thread-safe. The C connector can dynamically establish new database-oriented connections on demand (a process invisible to the user), and it is recommended to call taos_close to close the connection only when the program finally exits.
**Note**: TDengine 2.0 and above recommends that each thread of a database application establish its own connection, or build a thread-based connection pool, rather than passing the connection (TAOS\*) structure to different threads for shared use. Queries, writes, and other operations issued through the TAOS structure are thread-safe, but stateful items such as the "USE statement" may interfere between threads. In addition, the C connector can dynamically establish new database-oriented connections on demand (a process invisible to the user), and it is recommended to call taos_close to close the connection only when the program finally exits.
### Asynchronous query APIs
Besides the synchronous APIs, TDengine provides higher-performance asynchronous APIs for data insertion and queries. Under the same hardware and software conditions, the asynchronous APIs insert data 2\~4 times faster than the synchronous ones. Asynchronous APIs use a non-blocking calling convention and return immediately, before the requested database operation actually completes, so the calling thread can do other work, improving the performance of the whole application. Asynchronous APIs are especially advantageous under severe network latency.
All asynchronous APIs require the application to provide a callback function, whose parameters are set as follows: the first two parameters are the same for all APIs, while the third depends on the API. The first parameter, param, is provided by the application when calling the asynchronous API, so that at callback time the application can recover the context of the operation, depending on the implementation. The second parameter is the SQL operation's result set; if it is empty, e.g. for an insert, no records are returned; if it is not empty, e.g. for a select, records are returned.
......@@ -288,13 +288,6 @@ The C/C++ API is similar to MySQL's C API. To use it, an application needs to include the TDengine
* res: the result set returned when `taos_query_a` calls back
* fp: the callback function. Its parameter `param` is a user-definable structure passed through to the callback; `numOfRows` is the number of rows of fetched data (not the row count of the whole query result set). In the callback, the application can call `taos_fetch_row` to iterate forward over each row of the batch. After reading all records in a block, the application must call `taos_fetch_rows_a` again from within the callback to fetch the next batch, until the returned record count (numOfRows) is zero (all results returned) or negative (query error).
- `void taos_fetch_row_a(TAOS_RES *res, void (*fp)(void *param, TAOS_RES *, TAOS_ROW row), void *param);`
Asynchronously fetches a single record, where:
* res: the result set returned when `taos_query_a` calls back
* fp: the callback function. Its parameter `param` is an application-provided callback argument. At callback time, the third parameter `row` points to one row of data. Unlike `taos_fetch_rows_a`, the application does not need to call `taos_fetch_row` to obtain the row; this is simpler, but data extraction performance does not match the batch-fetching API.
TDengine's asynchronous APIs all use a non-blocking calling mode. An application can open multiple tables from multiple threads simultaneously and can query or insert into each open table at the same time. Note, however, that **the client application must strictly serialize operations on the same table**: while an insert or query on a table is incomplete (has not yet returned), a second insert or query on that same table must not be issued.
### Parameter binding API
......@@ -425,7 +418,7 @@ cd C:\TDengine\connector\python\windows
python -m pip install python3\
```
* If the machine has no pip command, copy the taos folder under src/connector/python/python3 or src/connector/python/python2 into the application's directory.
For the Windows client, after installing the TDengine Windows client, copy C:\TDengine\driver\taos.dll into the C:\windows\system32 directory.
### Usage
......@@ -442,7 +435,7 @@ import taos
conn = taos.connect(host="127.0.0.1", user="root", password="taosdata", config="/etc/taos")
c1 = conn.cursor()
```
* <em>host</em> is the IP address of the TDengine server, and <em>config</em> is the directory containing the client configuration file
* Insert data
```python
......@@ -510,17 +503,17 @@ conn.close()
Users can view the module's usage directly through Python's built-in help, or consult the sample programs under tests/examples/python. Some commonly used classes and methods:
- the _TDengineConnection_ class
- _TDengineConnection_
See help(taos.TDengineConnection) in Python.
This class corresponds to a connection established between the client and TDengine. In a multi-threaded client, each thread can request its own connection instance, or multiple threads can share one connection.
This class corresponds to a connection established between the client and TDengine. In a multi-threaded client, it is recommended that each thread request its own connection instance; sharing one connection across multiple threads is not recommended.
- the _TDengineCursor_ class
- _TDengineCursor_
See help(taos.TDengineCursor) in Python.
This class corresponds to the client's write and query operations. In a multi-threaded client, a cursor instance must remain dedicated to one thread and must not be shared across threads; otherwise the returned results may be incorrect.
- the _connect_ method
Creates an instance of taos.TDengineConnection.
......@@ -800,7 +793,7 @@ go env -w GOPROXY=https://goproxy.io,direct
- `sql.Open(DRIVER_NAME string, dataSourceName string) *DB`
This API opens a DB and returns an object of type \*DB. Normally DRIVER_NAME is set to the string `taosSql`, and dataSourceName to a string of the form `user:password@/tcp(host:port)/dbname`. If the client wants to access TDengine concurrently from multiple goroutines, a sql.Open object must be created in each goroutine and used to access TDengine.
**Note**: when this API is created successfully, no permission or similar checks are performed; the connection is actually created, and user/password/host/port validated, only when Query or Exec is actually executed. Also, because most of the driver's implementation sinks into libtaos, on which taosSql depends, sql.Open itself is extremely lightweight.
......@@ -822,7 +815,7 @@ go env -w GOPROXY=https://goproxy.io,direct
- `func (s *Stmt) Query(args ...interface{}) (*Rows, error)`
A method built into sql.Open. Query executes a prepared query statement with the given arguments and returns the query results as a \*Rows.
- `func (s *Stmt) Close() error`
......@@ -894,7 +887,7 @@ Node-example-raw.js
Verification steps:
1. Create a verification directory, e.g. \~/tdengine-test, and copy the nodejsChecker.js source program from GitHub. Download address: (https://github.com/taosdata/TDengine/tree/develop/tests/examples/nodejs/nodejsChecker.js).
2. Run the following command from the command line:
......
# FAQ
## 0. How do I report a problem?
If the information in this FAQ does not help you and you need technical support and assistance from the TDengine team, please package the contents of the following two directories:
1. /var/log/taos (if the default path has not been changed)
2. /etc/taos
Attach the necessary problem description, including the TDengine version in use, the platform environment, the operations performed when the problem occurred, its symptoms, and the approximate time, and file an issue on <a href='https://github.com/taosdata/TDengine'> GitHub</a>.
To guarantee enough debug information, if the problem is reproducible, please edit the /etc/taos/taos.cfg file, append the line "debugFlag 135" (without the quotes), restart taosd, reproduce the problem, and then submit it. You can also temporarily set taosd's log level with the following SQL statement.
```
alter dnode <dnode_id> debugFlag 135;
```
When the system is running normally, however, be sure to set debugFlag back to 131; otherwise a large volume of log output will be produced and system efficiency will degrade.
## 1. What should I pay attention to when upgrading from a pre-2.0 version of TDengine to 2.0 or above? ☆☆☆
Version 2.0 is a complete rewrite of the earlier versions, and its configuration files and data files are not compatible. Be sure to do the following before upgrading:
......@@ -118,16 +132,8 @@ TDengine uniquely identifies a machine by its hostname; when data files are moved from machine A
- For versions 2.0.7.0 and later, go to /var/lib/taos/dnode, fix the FQDN corresponding to the dnodeId in dnodeEps.json, and restart. Make sure this file is exactly the same on every machine in the cluster.
- The storage structures of versions 1.x and 2.x are incompatible; use the migration tool, or develop your own application to export and import the data.
## 17. How do I report a problem?
## 17. Does TDengine support deleting or updating data that has already been written?
If the information in this FAQ does not help you and you need technical support and assistance from the TDengine team, please package the contents of the following two directories:
1. /var/log/taos
2. /etc/taos
TDengine does not currently support deletion; it may be supported in the future depending on user demand.
Attach the necessary problem description, the operations performed when the problem occurred, its symptoms, and the approximate time, and file an issue on <a href='https://github.com/taosdata/TDengine'> GitHub</a>.
To guarantee enough debug information, if the problem is reproducible, please edit the /etc/taos/taos.cfg file, append the line "debugFlag 135" (without the quotes), restart taosd, reproduce the problem, and then submit it. You can also temporarily set taosd's log level with the following SQL statement.
```
alter dnode <dnode_id> debugFlag 135;
```
When the system is running normally, however, be sure to set debugFlag back to 131; otherwise a large volume of log output will be produced and system efficiency will degrade.
Starting with version 2.0.8.0, TDengine supports updating already-written data. The update feature requires creating the database with the UPDATE 1 parameter; afterwards, the INSERT INTO command can be used to update already-written data carrying the same timestamp. The UPDATE parameter cannot be changed via the ALTER DATABASE command. In a database created without UPDATE 1, writing data with an existing timestamp neither modifies the earlier data nor reports an error.
\ No newline at end of file
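A minimal sketch of the update behavior described above; the database and table names are illustrative:
```mysql
CREATE DATABASE demo UPDATE 1;
USE demo;
CREATE TABLE t1 (ts TIMESTAMP, v INT);
INSERT INTO t1 VALUES ('2020-01-01 00:00:00.000', 1);
INSERT INTO t1 VALUES ('2020-01-01 00:00:00.000', 2);
```
With UPDATE 1 set, the second insert overwrites the earlier value for that timestamp; without it, the second insert would be silently ignored.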
......@@ -162,10 +162,10 @@
# stop writing logs when the disk size of the log folder is less than this value
# minimalLogDirGB 0.1
# stop writing temporary files when the disk size of the log folder is less than this value
# stop writing temporary files when the disk size of the tmp folder is less than this value
# minimalTmpDirGB 0.1
# stop writing data when the disk size of the log folder is less than this value
# if disk free space is less than this value, taosd service exit directly within startup process
# minimalDataDirGB 0.1
# One mnode is equal to the number of vnode consumed
......
name: tdengine
base: core18
version: '2.0.13.0'
version: '2.0.14.0'
icon: snap/gui/t-dengine.svg
summary: an open-source big data platform designed and optimized for IoT.
description: |
......@@ -72,7 +72,7 @@ parts:
- usr/bin/taosd
- usr/bin/taos
- usr/bin/taosdemo
- usr/lib/libtaos.so.2.0.13.0
- usr/lib/libtaos.so.2.0.14.0
- usr/lib/libtaos.so.1
- usr/lib/libtaos.so
......
......@@ -47,7 +47,6 @@ void tscLockByThread(int64_t *lockedBy);
void tscUnlockByThread(int64_t *lockedBy);
#ifdef __cplusplus
}
#endif
......
......@@ -317,7 +317,8 @@ typedef struct STscObj {
} STscObj;
typedef struct SSubqueryState {
int32_t numOfRemain; // the number of remain unfinished subquery
pthread_mutex_t mutex;
int8_t *states;
int32_t numOfSub; // the number of total sub-queries
uint64_t numOfRetrievedRows; // total number of points in this query
} SSubqueryState;
......
......@@ -1422,6 +1422,10 @@ int32_t tscDoLocalMerge(SSqlObj *pSql) {
tscResetForNextRetrieve(pRes);
if (pSql->signature != pSql || pRes == NULL || pRes->pLocalMerger == NULL) { // all data has been processed
if (pRes->code == TSDB_CODE_SUCCESS) {
return pRes->code;
}
tscError("%p local merge abort due to error occurs, code:%s", pSql, tstrerror(pRes->code));
return pRes->code;
}
......
......@@ -264,6 +264,7 @@ int32_t tscToSQLCmd(SSqlObj* pSql, struct SSqlInfo* pInfo) {
case TSDB_SQL_DROP_DB: {
const char* msg2 = "invalid name";
const char* msg3 = "param name too long";
const char* msg4 = "table is not super table";
SStrToken* pzName = taosArrayGet(pInfo->pMiscInfo->a, 0);
if ((pInfo->type != TSDB_SQL_DROP_DNODE) && (tscValidateName(pzName) != TSDB_CODE_SUCCESS)) {
......@@ -284,6 +285,18 @@ int32_t tscToSQLCmd(SSqlObj* pSql, struct SSqlInfo* pInfo) {
if(code != TSDB_CODE_SUCCESS) {
return code;
}
if (pInfo->pMiscInfo->tableType == TSDB_SUPER_TABLE) {
code = tscGetTableMeta(pSql, pTableMetaInfo);
if (code != TSDB_CODE_SUCCESS) {
return code;
}
if (!UTIL_TABLE_IS_SUPER_TABLE(pTableMetaInfo)) {
return invalidSqlErrMsg(tscGetErrorMsgPayload(pCmd), msg4);
}
}
} else if (pInfo->type == TSDB_SQL_DROP_DNODE) {
pzName->n = strdequote(pzName->z);
strncpy(pCmd->payload, pzName->z, pzName->n);
......@@ -4803,6 +4816,7 @@ int32_t setAlterTableInfo(SSqlObj* pSql, struct SSqlInfo* pInfo) {
const char* msg17 = "invalid column name";
const char* msg18 = "primary timestamp column cannot be dropped";
const char* msg19 = "invalid new tag name";
const char* msg20 = "table is not super table";
int32_t code = TSDB_CODE_SUCCESS;
......@@ -4828,6 +4842,10 @@ int32_t setAlterTableInfo(SSqlObj* pSql, struct SSqlInfo* pInfo) {
STableMeta* pTableMeta = pTableMetaInfo->pTableMeta;
if (pAlterSQL->tableType == TSDB_SUPER_TABLE && !(UTIL_TABLE_IS_SUPER_TABLE(pTableMetaInfo))) {
return invalidSqlErrMsg(tscGetErrorMsgPayload(pCmd), msg20);
}
if (pAlterSQL->type == TSDB_ALTER_TABLE_ADD_TAG_COLUMN || pAlterSQL->type == TSDB_ALTER_TABLE_DROP_TAG_COLUMN ||
pAlterSQL->type == TSDB_ALTER_TABLE_CHANGE_TAG_COLUMN) {
if (UTIL_TABLE_IS_NORMAL_TABLE(pTableMetaInfo)) {
......@@ -6119,8 +6137,10 @@ void tscPrintSelectClause(SSqlObj* pSql, int32_t subClauseIndex) {
int32_t tmpLen = 0;
tmpLen =
sprintf(tmpBuf, "%s(uid:%" PRId64 ", %d)", aAggs[pExpr->functionId].aName, pExpr->uid, pExpr->colInfo.colId);
if (tmpLen + offset >= totalBufSize - 1) break;
offset += sprintf(str + offset, "%s", tmpBuf);
if (i < size - 1) {
......@@ -6130,6 +6150,7 @@ void tscPrintSelectClause(SSqlObj* pSql, int32_t subClauseIndex) {
assert(offset < totalBufSize);
str[offset] = ']';
assert(offset < totalBufSize);
tscDebug("%p select clause:%s", pSql, str);
}
......
......@@ -2180,7 +2180,7 @@ int tscProcessDropTableRsp(SSqlObj *pSql) {
taosHashRemove(tscTableMetaInfo, name, strnlen(name, TSDB_TABLE_FNAME_LEN));
tscDebug("%p remove table meta after drop table:%s, numOfRemain:%d", pSql, name, (int32_t) taosHashGetSize(tscTableMetaInfo));
assert(pTableMetaInfo->pTableMeta == NULL);
pTableMetaInfo->pTableMeta = NULL;
return 0;
}
......
......@@ -55,6 +55,58 @@ static void skipRemainValue(STSBuf* pTSBuf, tVariant* tag1) {
}
}
static void subquerySetState(SSqlObj *pSql, SSubqueryState *subState, int idx, int8_t state) {
assert(idx < subState->numOfSub);
assert(subState->states);
pthread_mutex_lock(&subState->mutex);
tscDebug("subquery:%p,%d state set to %d", pSql, idx, state);
subState->states[idx] = state;
pthread_mutex_unlock(&subState->mutex);
}
static bool allSubqueryDone(SSqlObj *pParentSql) {
bool done = true;
SSubqueryState *subState = &pParentSql->subState;
//lock in caller
for (int i = 0; i < subState->numOfSub; i++) {
if (0 == subState->states[i]) {
tscDebug("%p subquery:%p,%d is NOT finished, total:%d", pParentSql, pParentSql->pSubs[i], i, subState->numOfSub);
done = false;
break;
} else {
tscDebug("%p subquery:%p,%d is finished, total:%d", pParentSql, pParentSql->pSubs[i], i, subState->numOfSub);
}
}
return done;
}
static bool subAndCheckDone(SSqlObj *pSql, SSqlObj *pParentSql, int idx) {
SSubqueryState *subState = &pParentSql->subState;
assert(idx < subState->numOfSub);
pthread_mutex_lock(&subState->mutex);
tscDebug("%p subquery:%p,%d state set to 1", pParentSql, pSql, idx);
subState->states[idx] = 1;
bool done = allSubqueryDone(pParentSql);
pthread_mutex_unlock(&subState->mutex);
return done;
}
static int64_t doTSBlockIntersect(SSqlObj* pSql, SJoinSupporter* pSupporter1, SJoinSupporter* pSupporter2, STimeWindow * win) {
SQueryInfo* pQueryInfo = tscGetQueryInfoDetail(&pSql->cmd, pSql->cmd.clauseIndex);
......@@ -367,10 +419,6 @@ static int32_t tscLaunchRealSubqueries(SSqlObj* pSql) {
// scan all subquery, if one sub query has only ts, ignore it
tscDebug("%p start to launch secondary subqueries, %d out of %d needs to query", pSql, numOfSub, pSql->subState.numOfSub);
//the subqueries that do not actually launch the secondary query to virtual node is set as completed.
SSubqueryState* pState = &pSql->subState;
pState->numOfRemain = numOfSub;
bool success = true;
for (int32_t i = 0; i < pSql->subState.numOfSub; ++i) {
......@@ -403,6 +451,7 @@ static int32_t tscLaunchRealSubqueries(SSqlObj* pSql) {
success = false;
break;
}
tscClearSubqueryInfo(&pNew->cmd);
pSql->pSubs[i] = pNew;
......@@ -480,6 +529,8 @@ static int32_t tscLaunchRealSubqueries(SSqlObj* pSql) {
}
}
subquerySetState(pPrevSub, &pSql->subState, i, 0);
size_t numOfCols = taosArrayGetSize(pQueryInfo->colList);
tscDebug("%p subquery:%p tableIndex:%d, vgroupIndex:%d, type:%d, exprInfo:%" PRIzu ", colList:%" PRIzu ", fieldsInfo:%d, name:%s",
pSql, pNew, 0, pTableMetaInfo->vgroupIndex, pQueryInfo->type, taosArrayGetSize(pQueryInfo->exprList),
......@@ -517,20 +568,25 @@ void freeJoinSubqueryObj(SSqlObj* pSql) {
SJoinSupporter* p = pSub->param;
tscDestroyJoinSupporter(p);
if (pSub->res.code == TSDB_CODE_SUCCESS) {
taos_free_result(pSub);
}
taos_free_result(pSub);
pSql->pSubs[i] = NULL;
}
if (pSql->subState.states) {
pthread_mutex_destroy(&pSql->subState.mutex);
}
tfree(pSql->subState.states);
pSql->subState.numOfSub = 0;
}
static void quitAllSubquery(SSqlObj* pSqlObj, SJoinSupporter* pSupporter) {
assert(pSqlObj->subState.numOfRemain > 0);
if (atomic_sub_fetch_32(&pSqlObj->subState.numOfRemain, 1) <= 0) {
tscError("%p all subquery return and query failed, global code:%s", pSqlObj, tstrerror(pSqlObj->res.code));
static void quitAllSubquery(SSqlObj* pSqlSub, SSqlObj* pSqlObj, SJoinSupporter* pSupporter) {
if (subAndCheckDone(pSqlSub, pSqlObj, pSupporter->subqueryIndex)) {
tscError("%p all subquery return and query failed, global code:%s", pSqlObj, tstrerror(pSqlObj->res.code));
freeJoinSubqueryObj(pSqlObj);
return;
}
//tscDestroyJoinSupporter(pSupporter);
......@@ -777,6 +833,15 @@ static void tidTagRetrieveCallback(void* param, TAOS_RES* tres, int32_t numOfRow
SQueryInfo* pQueryInfo = tscGetQueryInfoDetail(pCmd, pCmd->clauseIndex);
assert(TSDB_QUERY_HAS_TYPE(pQueryInfo->type, TSDB_QUERY_TYPE_TAG_FILTER_QUERY));
if (pParentSql->res.code != TSDB_CODE_SUCCESS) {
tscError("%p abort query due to other subquery failure. code:%d, global code:%d", pSql, numOfRows, pParentSql->res.code);
quitAllSubquery(pSql, pParentSql, pSupporter);
tscAsyncResultOnError(pParentSql);
return;
}
// check for the error code firstly
if (taos_errno(pSql) != TSDB_CODE_SUCCESS) {
// todo retry if other subqueries are not failed
......@@ -785,7 +850,7 @@ static void tidTagRetrieveCallback(void* param, TAOS_RES* tres, int32_t numOfRow
tscError("%p sub query failed, code:%s, index:%d", pSql, tstrerror(numOfRows), pSupporter->subqueryIndex);
pParentSql->res.code = numOfRows;
quitAllSubquery(pParentSql, pSupporter);
quitAllSubquery(pSql, pParentSql, pSupporter);
tscAsyncResultOnError(pParentSql);
return;
......@@ -802,7 +867,7 @@ static void tidTagRetrieveCallback(void* param, TAOS_RES* tres, int32_t numOfRow
tscError("%p failed to malloc memory", pSql);
pParentSql->res.code = TAOS_SYSTEM_ERROR(errno);
quitAllSubquery(pParentSql, pSupporter);
quitAllSubquery(pSql, pParentSql, pSupporter);
tscAsyncResultOnError(pParentSql);
return;
......@@ -844,9 +909,10 @@ static void tidTagRetrieveCallback(void* param, TAOS_RES* tres, int32_t numOfRow
// no data exists in next vnode, mark the <tid, tags> query completed
// only when there is no subquery exits any more, proceeds to get the intersect of the <tid, tags> tuple sets.
if (atomic_sub_fetch_32(&pParentSql->subState.numOfRemain, 1) > 0) {
if (!subAndCheckDone(pSql, pParentSql, pSupporter->subqueryIndex)) {
tscDebug("%p tagRetrieve:%p,%d completed, total:%d", pParentSql, tres, pSupporter->subqueryIndex, pParentSql->subState.numOfSub);
return;
}
}
SArray *s1 = NULL, *s2 = NULL;
int32_t code = getIntersectionOfTableTuple(pQueryInfo, pParentSql, &s1, &s2);
......@@ -891,8 +957,10 @@ static void tidTagRetrieveCallback(void* param, TAOS_RES* tres, int32_t numOfRow
((SJoinSupporter*)psub2->param)->pVgroupTables = tscVgroupTableInfoDup(pTableMetaInfo2->pVgroupTables);
pParentSql->subState.numOfSub = 2;
pParentSql->subState.numOfRemain = pParentSql->subState.numOfSub;
memset(pParentSql->subState.states, 0, sizeof(pParentSql->subState.states[0]) * pParentSql->subState.numOfSub);
tscDebug("%p reset all sub states to 0", pParentSql);
for (int32_t m = 0; m < pParentSql->subState.numOfSub; ++m) {
SSqlObj* sub = pParentSql->pSubs[m];
issueTSCompQuery(sub, sub->param, pParentSql);
......@@ -915,6 +983,15 @@ static void tsCompRetrieveCallback(void* param, TAOS_RES* tres, int32_t numOfRow
SQueryInfo* pQueryInfo = tscGetQueryInfoDetail(pCmd, pCmd->clauseIndex);
assert(!TSDB_QUERY_HAS_TYPE(pQueryInfo->type, TSDB_QUERY_TYPE_JOIN_SEC_STAGE));
if (pParentSql->res.code != TSDB_CODE_SUCCESS) {
tscError("%p abort query due to other subquery failure. code:%d, global code:%d", pSql, numOfRows, pParentSql->res.code);
quitAllSubquery(pSql, pParentSql, pSupporter);
tscAsyncResultOnError(pParentSql);
return;
}
// check for the error code firstly
if (taos_errno(pSql) != TSDB_CODE_SUCCESS) {
// todo retry if other subqueries are not failed yet
......@@ -922,7 +999,7 @@ static void tsCompRetrieveCallback(void* param, TAOS_RES* tres, int32_t numOfRow
tscError("%p sub query failed, code:%s, index:%d", pSql, tstrerror(numOfRows), pSupporter->subqueryIndex);
pParentSql->res.code = numOfRows;
quitAllSubquery(pParentSql, pSupporter);
quitAllSubquery(pSql, pParentSql, pSupporter);
tscAsyncResultOnError(pParentSql);
return;
......@@ -937,7 +1014,7 @@ static void tsCompRetrieveCallback(void* param, TAOS_RES* tres, int32_t numOfRow
pParentSql->res.code = TAOS_SYSTEM_ERROR(errno);
quitAllSubquery(pParentSql, pSupporter);
quitAllSubquery(pSql, pParentSql, pSupporter);
tscAsyncResultOnError(pParentSql);
......@@ -955,7 +1032,7 @@ static void tsCompRetrieveCallback(void* param, TAOS_RES* tres, int32_t numOfRow
pParentSql->res.code = TAOS_SYSTEM_ERROR(errno);
quitAllSubquery(pParentSql, pSupporter);
quitAllSubquery(pSql, pParentSql, pSupporter);
tscAsyncResultOnError(pParentSql);
......@@ -1009,9 +1086,9 @@ static void tsCompRetrieveCallback(void* param, TAOS_RES* tres, int32_t numOfRow
return;
}
if (atomic_sub_fetch_32(&pParentSql->subState.numOfRemain, 1) > 0) {
if (!subAndCheckDone(pSql, pParentSql, pSupporter->subqueryIndex)) {
return;
}
}
tscDebug("%p all subquery retrieve ts complete, do ts block intersect", pParentSql);
......@@ -1049,6 +1126,17 @@ static void joinRetrieveFinalResCallback(void* param, TAOS_RES* tres, int numOfR
SSqlRes* pRes = &pSql->res;
SQueryInfo* pQueryInfo = tscGetQueryInfoDetail(pCmd, pCmd->clauseIndex);
if (pParentSql->res.code != TSDB_CODE_SUCCESS) {
tscError("%p abort query due to other subquery failure. code:%d, global code:%d", pSql, numOfRows, pParentSql->res.code);
quitAllSubquery(pSql, pParentSql, pSupporter);
tscAsyncResultOnError(pParentSql);
return;
}
if (taos_errno(pSql) != TSDB_CODE_SUCCESS) {
assert(numOfRows == taos_errno(pSql));
......@@ -1088,9 +1176,8 @@ static void joinRetrieveFinalResCallback(void* param, TAOS_RES* tres, int numOfR
}
}
assert(pState->numOfRemain > 0);
if (atomic_sub_fetch_32(&pState->numOfRemain, 1) > 0) {
tscDebug("%p sub:%p completed, remain:%d, total:%d", pParentSql, tres, pState->numOfRemain, pState->numOfSub);
if (!subAndCheckDone(pSql, pParentSql, pSupporter->subqueryIndex)) {
tscDebug("%p sub:%p,%d completed, total:%d", pParentSql, tres, pSupporter->subqueryIndex, pState->numOfSub);
return;
}
......@@ -1205,15 +1292,16 @@ void tscFetchDatablockForSubquery(SSqlObj* pSql) {
}
}
// get the number of subquery that need to retrieve the next vnode.
if (orderedPrjQuery) {
for (int32_t i = 0; i < pSql->subState.numOfSub; ++i) {
SSqlObj* pSub = pSql->pSubs[i];
if (pSub != NULL && pSub->res.row >= pSub->res.numOfRows && pSub->res.completed) {
pSql->subState.numOfRemain++;
subquerySetState(pSub, &pSql->subState, i, 0);
}
}
}
for (int32_t i = 0; i < pSql->subState.numOfSub; ++i) {
SSqlObj* pSub = pSql->pSubs[i];
......@@ -1270,7 +1358,19 @@ void tscFetchDatablockForSubquery(SSqlObj* pSql) {
// retrieve data from current vnode.
tscDebug("%p retrieve data from %d subqueries", pSql, numOfFetch);
SJoinSupporter* pSupporter = NULL;
pSql->subState.numOfRemain = numOfFetch;
for (int32_t i = 0; i < pSql->subState.numOfSub; ++i) {
SSqlObj* pSql1 = pSql->pSubs[i];
if (pSql1 == NULL) {
continue;
}
SSqlRes* pRes1 = &pSql1->res;
if (pRes1->row >= pRes1->numOfRows) {
subquerySetState(pSql1, &pSql->subState, i, 0);
}
}
for (int32_t i = 0; i < pSql->subState.numOfSub; ++i) {
SSqlObj* pSql1 = pSql->pSubs[i];
......@@ -1372,7 +1472,8 @@ void tscJoinQueryCallback(void* param, TAOS_RES* tres, int code) {
// retrieve actual query results from vnode during the second stage join subquery
if (pParentSql->res.code != TSDB_CODE_SUCCESS) {
tscError("%p abort query due to other subquery failure. code:%d, global code:%d", pSql, code, pParentSql->res.code);
quitAllSubquery(pParentSql, pSupporter);
quitAllSubquery(pSql, pParentSql, pSupporter);
tscAsyncResultOnError(pParentSql);
return;
......@@ -1384,7 +1485,8 @@ void tscJoinQueryCallback(void* param, TAOS_RES* tres, int code) {
tscError("%p abort query, code:%s, global code:%s", pSql, tstrerror(code), tstrerror(pParentSql->res.code));
pParentSql->res.code = code;
quitAllSubquery(pParentSql, pSupporter);
quitAllSubquery(pSql, pParentSql, pSupporter);
tscAsyncResultOnError(pParentSql);
return;
......@@ -1408,9 +1510,9 @@ void tscJoinQueryCallback(void* param, TAOS_RES* tres, int code) {
// In case of consequence query from other vnode, do not wait for other query response here.
if (!(pTableMetaInfo->vgroupIndex > 0 && tscNonOrderedProjectionQueryOnSTable(pQueryInfo, 0))) {
if (atomic_sub_fetch_32(&pParentSql->subState.numOfRemain, 1) > 0) {
if (!subAndCheckDone(pSql, pParentSql, pSupporter->subqueryIndex)) {
return;
}
}
}
tscSetupOutputColumnIndex(pParentSql);
......@@ -1422,6 +1524,7 @@ void tscJoinQueryCallback(void* param, TAOS_RES* tres, int code) {
if (pTableMetaInfo->vgroupIndex > 0 && tscNonOrderedProjectionQueryOnSTable(pQueryInfo, 0)) {
pSql->fp = joinRetrieveFinalResCallback; // continue retrieve data
pSql->cmd.command = TSDB_SQL_FETCH;
tscProcessSql(pSql);
} else { // first retrieve from vnode during the secondary stage sub-query
// set the command flag must be after the semaphore been correctly set.
......@@ -1457,8 +1560,7 @@ int32_t tscCreateJoinSubquery(SSqlObj *pSql, int16_t tableIndex, SJoinSupporter
return TSDB_CODE_TSC_OUT_OF_MEMORY;
}
pSql->pSubs[pSql->subState.numOfRemain++] = pNew;
assert(pSql->subState.numOfRemain <= pSql->subState.numOfSub);
pSql->pSubs[tableIndex] = pNew;
if (QUERY_IS_JOIN_QUERY(pQueryInfo->type)) {
addGroupInfoForSubquery(pSql, pNew, 0, tableIndex);
......@@ -1590,6 +1692,19 @@ void tscHandleMasterJoinQuery(SSqlObj* pSql) {
int32_t code = TSDB_CODE_SUCCESS;
pSql->subState.numOfSub = pQueryInfo->numOfTables;
if (pSql->subState.states == NULL) {
pSql->subState.states = calloc(pSql->subState.numOfSub, sizeof(*pSql->subState.states));
if (pSql->subState.states == NULL) {
code = TSDB_CODE_TSC_OUT_OF_MEMORY;
goto _error;
}
pthread_mutex_init(&pSql->subState.mutex, NULL);
}
memset(pSql->subState.states, 0, sizeof(*pSql->subState.states) * pSql->subState.numOfSub);
tscDebug("%p reset all sub states to 0", pSql);
bool hasEmptySub = false;
tscDebug("%p start subquery, total:%d", pSql, pQueryInfo->numOfTables);
......@@ -1622,14 +1737,25 @@ void tscHandleMasterJoinQuery(SSqlObj* pSql) {
pSql->cmd.command = TSDB_SQL_RETRIEVE_EMPTY_RESULT;
(*pSql->fp)(pSql->param, pSql, 0);
} else {
int fail = 0;
for (int32_t i = 0; i < pSql->subState.numOfSub; ++i) {
SSqlObj* pSub = pSql->pSubs[i];
if (fail) {
(*pSub->fp)(pSub->param, pSub, 0);
continue;
}
if ((code = tscProcessSql(pSub)) != TSDB_CODE_SUCCESS) {
pSql->subState.numOfRemain = i - 1; // the already sent request will continue and do not go to the error process routine
break;
pRes->code = code;
(*pSub->fp)(pSub->param, pSub, 0);
fail = 1;
}
}
if(fail) {
return;
}
pSql->cmd.command = TSDB_SQL_TABLE_JOIN_RETRIEVE;
}
......@@ -1728,7 +1854,21 @@ int32_t tscHandleMasterSTableQuery(SSqlObj *pSql) {
return ret;
}
pState->numOfRemain = pState->numOfSub;
if (pState->states == NULL) {
pState->states = calloc(pState->numOfSub, sizeof(*pState->states));
if (pState->states == NULL) {
pRes->code = TSDB_CODE_TSC_OUT_OF_MEMORY;
tscAsyncResultOnError(pSql);
tfree(pMemoryBuf);
return ret;
}
pthread_mutex_init(&pState->mutex, NULL);
}
memset(pState->states, 0, sizeof(*pState->states) * pState->numOfSub);
tscDebug("%p reset all sub states to 0", pSql);
pRes->code = TSDB_CODE_SUCCESS;
int32_t i = 0;
......@@ -1877,7 +2017,6 @@ void tscHandleSubqueryError(SRetrieveSupport *trsupport, SSqlObj *pSql, int numO
assert(pSql != NULL);
SSubqueryState* pState = &pParentSql->subState;
assert(pState->numOfRemain <= pState->numOfSub && pState->numOfRemain >= 0);
// retrieved in subquery failed. OR query cancelled in retrieve phase.
if (taos_errno(pSql) == TSDB_CODE_SUCCESS && pParentSql->res.code != TSDB_CODE_SUCCESS) {
......@@ -1908,14 +2047,12 @@ void tscHandleSubqueryError(SRetrieveSupport *trsupport, SSqlObj *pSql, int numO
}
}
int32_t remain = -1;
if ((remain = atomic_sub_fetch_32(&pState->numOfRemain, 1)) > 0) {
tscDebug("%p sub:%p orderOfSub:%d freed, finished subqueries:%d", pParentSql, pSql, trsupport->subqueryIndex,
pState->numOfSub - remain);
if (!subAndCheckDone(pSql, pParentSql, subqueryIndex)) {
tscDebug("%p sub:%p,%d freed, not finished, total:%d", pParentSql, pSql, trsupport->subqueryIndex, pState->numOfSub);
tscFreeRetrieveSup(pSql);
return;
}
}
// all subqueries are failed
tscError("%p retrieve from %d vnode(s) completed,code:%s.FAILED.", pParentSql, pState->numOfSub,
......@@ -1980,14 +2117,12 @@ static void tscAllDataRetrievedFromDnode(SRetrieveSupport *trsupport, SSqlObj* p
return;
}
int32_t remain = -1;
if ((remain = atomic_sub_fetch_32(&pParentSql->subState.numOfRemain, 1)) > 0) {
tscDebug("%p sub:%p orderOfSub:%d freed, finished subqueries:%d", pParentSql, pSql, trsupport->subqueryIndex,
pState->numOfSub - remain);
if (!subAndCheckDone(pSql, pParentSql, idx)) {
tscDebug("%p sub:%p orderOfSub:%d freed, not finished", pParentSql, pSql, trsupport->subqueryIndex);
tscFreeRetrieveSup(pSql);
return;
}
}
// all sub-queries are returned, start to local merge process
pDesc->pColumnModel->capacity = trsupport->pExtMemBuffer[idx]->numOfElemsPerPage;
......@@ -2033,7 +2168,6 @@ static void tscRetrieveFromDnodeCallBack(void *param, TAOS_RES *tres, int numOfR
SSqlObj * pParentSql = trsupport->pParentSql;
SSubqueryState* pState = &pParentSql->subState;
assert(pState->numOfRemain <= pState->numOfSub && pState->numOfRemain >= 0);
STableMetaInfo *pTableMetaInfo = tscGetTableMetaInfoFromCmd(&pSql->cmd, 0, 0);
SVgroupInfo *pVgroup = &pTableMetaInfo->vgroupList->vgroups[0];
......@@ -2254,7 +2388,8 @@ static void multiVnodeInsertFinalize(void* param, TAOS_RES* tres, int numOfRows)
}
}
if (atomic_sub_fetch_32(&pParentObj->subState.numOfRemain, 1) > 0) {
if (!subAndCheckDone(tres, pParentObj, pSupporter->index)) {
tscDebug("%p insert:%p,%d completed, total:%d", pParentObj, tres, pSupporter->index, pParentObj->subState.numOfSub);
return;
}
......@@ -2288,6 +2423,8 @@ static void multiVnodeInsertFinalize(void* param, TAOS_RES* tres, int numOfRows)
STableMetaInfo* pMasterTableMetaInfo = tscGetTableMetaInfoFromCmd(&pParentObj->cmd, pSql->cmd.clauseIndex, 0);
tscAddTableMetaInfo(pQueryInfo, &pMasterTableMetaInfo->name, NULL, NULL, NULL, NULL);
subquerySetState(pSql, &pParentObj->subState, i, 0);
tscDebug("%p, failed sub:%d, %p", pParentObj, i, pSql);
}
}
......@@ -2303,7 +2440,6 @@ static void multiVnodeInsertFinalize(void* param, TAOS_RES* tres, int numOfRows)
}
pParentObj->cmd.parseFinished = false;
pParentObj->subState.numOfRemain = numOfFailed;
tscResetSqlCmdObj(&pParentObj->cmd);
......@@ -2379,7 +2515,19 @@ int32_t tscHandleMultivnodeInsert(SSqlObj *pSql) {
// the number of already initialized subqueries
int32_t numOfSub = 0;
pSql->subState.numOfRemain = pSql->subState.numOfSub;
if (pSql->subState.states == NULL) {
pSql->subState.states = calloc(pSql->subState.numOfSub, sizeof(*pSql->subState.states));
if (pSql->subState.states == NULL) {
pRes->code = TSDB_CODE_TSC_OUT_OF_MEMORY;
goto _error;
}
pthread_mutex_init(&pSql->subState.mutex, NULL);
}
memset(pSql->subState.states, 0, sizeof(*pSql->subState.states) * pSql->subState.numOfSub);
tscDebug("%p reset all sub states to 0", pSql);
pSql->pSubs = calloc(pSql->subState.numOfSub, POINTER_BYTES);
if (pSql->pSubs == NULL) {
goto _error;
......
......@@ -426,6 +426,12 @@ static void tscFreeSubobj(SSqlObj* pSql) {
pSql->pSubs[i] = NULL;
}
if (pSql->subState.states) {
pthread_mutex_destroy(&pSql->subState.mutex);
}
tfree(pSql->subState.states);
pSql->subState.numOfSub = 0;
}
......
......@@ -399,6 +399,7 @@ static int32_t toNchar(tVariant *pVariant, char **pDest, int32_t *pDestSize) {
pVariant->wpz = (wchar_t *)tmp;
} else {
int32_t output = 0;
bool ret = taosMbsToUcs4(pDst, nLen, *pDest, (nLen + 1) * TSDB_NCHAR_SIZE, &output);
if (!ret) {
return -1;
......
<?xml version="1.0" encoding="UTF-8"?>
<classpath>
<classpathentry kind="src" output="target/classes" path="src/main/java">
<attributes>
<attribute name="optional" value="true"/>
<attribute name="maven.pomderived" value="true"/>
</attributes>
</classpathentry>
<classpathentry excluding="**" kind="src" output="target/classes" path="src/main/resources">
<attributes>
<attribute name="maven.pomderived" value="true"/>
</attributes>
</classpathentry>
<classpathentry kind="src" output="target/test-classes" path="src/test/java">
<attributes>
<attribute name="optional" value="true"/>
<attribute name="maven.pomderived" value="true"/>
<attribute name="test" value="true"/>
</attributes>
</classpathentry>
<classpathentry kind="con" path="org.eclipse.jdt.launching.JRE_CONTAINER/org.eclipse.jdt.internal.debug.ui.launcher.StandardVMType/JavaSE-1.8">
<attributes>
<attribute name="maven.pomderived" value="true"/>
</attributes>
</classpathentry>
<classpathentry kind="con" path="org.eclipse.m2e.MAVEN2_CLASSPATH_CONTAINER">
<attributes>
<attribute name="maven.pomderived" value="true"/>
</attributes>
</classpathentry>
<classpathentry kind="output" path="target/classes"/>
</classpath>
<?xml version="1.0" encoding="UTF-8"?>
<projectDescription>
<name>taos-jdbcdriver</name>
<comment></comment>
<projects>
</projects>
<buildSpec>
<buildCommand>
<name>org.eclipse.jdt.core.javabuilder</name>
<arguments>
</arguments>
</buildCommand>
<buildCommand>
<name>org.eclipse.m2e.core.maven2Builder</name>
<arguments>
</arguments>
</buildCommand>
</buildSpec>
<natures>
<nature>org.eclipse.jdt.core.javanature</nature>
<nature>org.eclipse.m2e.core.maven2Nature</nature>
</natures>
</projectDescription>
......@@ -8,7 +8,7 @@ IF (TD_MVN_INSTALLED)
ADD_CUSTOM_COMMAND(OUTPUT ${JDBC_CMD_NAME}
POST_BUILD
COMMAND mvn -Dmaven.test.skip=true install -f ${CMAKE_CURRENT_SOURCE_DIR}/pom.xml
COMMAND ${CMAKE_COMMAND} -E copy ${CMAKE_CURRENT_SOURCE_DIR}/target/taos-jdbcdriver-2.0.15-dist.jar ${LIBRARY_OUTPUT_PATH}
COMMAND ${CMAKE_COMMAND} -E copy ${CMAKE_CURRENT_SOURCE_DIR}/target/taos-jdbcdriver-2.0.16-dist.jar ${LIBRARY_OUTPUT_PATH}
COMMAND mvn -Dmaven.test.skip=true clean -f ${CMAKE_CURRENT_SOURCE_DIR}/pom.xml
COMMENT "build jdbc driver")
ADD_CUSTOM_TARGET(${JDBC_TARGET_NAME} ALL WORKING_DIRECTORY ${EXECUTABLE_OUTPUT_PATH} DEPENDS ${JDBC_CMD_NAME})
......
......@@ -5,7 +5,7 @@
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>2.0.15</version>
<version>2.0.16</version>
<packaging>jar</packaging>
<name>JDBCDriver</name>
......
......@@ -3,7 +3,7 @@
<modelVersion>4.0.0</modelVersion>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>2.0.15</version>
<version>2.0.16</version>
<packaging>jar</packaging>
<name>JDBCDriver</name>
<url>https://github.com/taosdata/TDengine/tree/master/src/connector/jdbc</url>
......@@ -49,6 +49,7 @@
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
......@@ -56,12 +57,6 @@
<scope>test</scope>
</dependency>
<dependency>
<groupId>mysql</groupId>
<artifactId>mysql-connector-java</artifactId>
<version>5.1.47</version>
</dependency>
<!-- for restful -->
<dependency>
<groupId>org.apache.httpcomponents</groupId>
......@@ -79,12 +74,6 @@
<version>1.2.58</version>
</dependency>
<dependency>
<groupId>mysql</groupId>
<artifactId>mysql-connector-java</artifactId>
<version>5.1.49</version>
</dependency>
</dependencies>
<build>
......
......@@ -497,12 +497,12 @@ public abstract class AbstractDatabaseMetaData implements DatabaseMetaData {
public ResultSet getProcedures(String catalog, String schemaPattern, String procedureNamePattern)
throws SQLException {
throw new SQLException(TSDBConstants.UNSUPPORT_METHOD_EXCEPTIONZ_MSG);
return null;
}
public ResultSet getProcedureColumns(String catalog, String schemaPattern, String procedureNamePattern,
String columnNamePattern) throws SQLException {
throw new SQLException(TSDBConstants.UNSUPPORT_METHOD_EXCEPTIONZ_MSG);
return null;
}
public abstract ResultSet getTables(String catalog, String schemaPattern, String tableNamePattern, String[] types)
......
......@@ -20,13 +20,11 @@ import java.util.List;
public class TSDBDatabaseMetaData implements java.sql.DatabaseMetaData {
private String dbProductName = null;
private String url = null;
private String userName = null;
private Connection conn = null;
private String url;
private String userName;
private Connection conn;
public TSDBDatabaseMetaData(String dbProductName, String url, String userName) {
this.dbProductName = dbProductName;
public TSDBDatabaseMetaData(String url, String userName) {
this.url = url;
this.userName = userName;
}
......@@ -35,12 +33,17 @@ public class TSDBDatabaseMetaData implements java.sql.DatabaseMetaData {
this.conn = conn;
}
@Override
public <T> T unwrap(Class<T> iface) throws SQLException {
return null;
try {
return iface.cast(this);
} catch (ClassCastException cce) {
throw new SQLException("Unable to unwrap to " + iface.toString());
}
}
public boolean isWrapperFor(Class<?> iface) throws SQLException {
return false;
return iface.isInstance(this);
}
public boolean allProceduresAreCallable() throws SQLException {
......@@ -80,11 +83,11 @@ public class TSDBDatabaseMetaData implements java.sql.DatabaseMetaData {
}
public String getDatabaseProductName() throws SQLException {
return this.dbProductName;
return "TDengine";
}
public String getDatabaseProductVersion() throws SQLException {
return "1.5.1";
return "2.0.x.x";
}
public String getDriverName() throws SQLException {
......@@ -92,7 +95,7 @@ public class TSDBDatabaseMetaData implements java.sql.DatabaseMetaData {
}
public String getDriverVersion() throws SQLException {
return "1.0.0";
return "2.0.x";
}
public int getDriverMajorVersion() {
......@@ -111,7 +114,9 @@ public class TSDBDatabaseMetaData implements java.sql.DatabaseMetaData {
return false;
}
public boolean supportsMixedCaseIdentifiers() throws SQLException {
// whether identifiers of objects such as databases and tables are stored in mixed case
return false;
}
......@@ -120,7 +125,7 @@ public class TSDBDatabaseMetaData implements java.sql.DatabaseMetaData {
}
public boolean storesLowerCaseIdentifiers() throws SQLException {
return false;
return true;
}
public boolean storesMixedCaseIdentifiers() throws SQLException {
......@@ -128,6 +133,7 @@ public class TSDBDatabaseMetaData implements java.sql.DatabaseMetaData {
}
public boolean supportsMixedCaseQuotedIdentifiers() throws SQLException {
// whether quoted identifiers of objects such as databases and tables are stored in mixed case
return false;
}
......@@ -188,10 +194,12 @@ public class TSDBDatabaseMetaData implements java.sql.DatabaseMetaData {
}
public boolean nullPlusNonNullIsNull() throws SQLException {
// null + non-null != null
return false;
}
public boolean supportsConvert() throws SQLException {
// whether the conversion function convert is supported
return false;
}
......@@ -216,7 +224,7 @@ public class TSDBDatabaseMetaData implements java.sql.DatabaseMetaData {
}
public boolean supportsGroupBy() throws SQLException {
return false;
return true;
}
public boolean supportsGroupByUnrelated() throws SQLException {
......@@ -488,7 +496,7 @@ public class TSDBDatabaseMetaData implements java.sql.DatabaseMetaData {
}
public int getDefaultTransactionIsolation() throws SQLException {
return 0;
return Connection.TRANSACTION_NONE;
}
public boolean supportsTransactions() throws SQLException {
......@@ -496,6 +504,8 @@ public class TSDBDatabaseMetaData implements java.sql.DatabaseMetaData {
}
public boolean supportsTransactionIsolationLevel(int level) throws SQLException {
if (level == Connection.TRANSACTION_NONE)
return true;
return false;
}
......@@ -517,28 +527,27 @@ public class TSDBDatabaseMetaData implements java.sql.DatabaseMetaData {
public ResultSet getProcedures(String catalog, String schemaPattern, String procedureNamePattern)
throws SQLException {
throw new SQLException(TSDBConstants.UNSUPPORT_METHOD_EXCEPTIONZ_MSG);
return null;
}
public ResultSet getProcedureColumns(String catalog, String schemaPattern, String procedureNamePattern,
String columnNamePattern) throws SQLException {
throw new SQLException(TSDBConstants.UNSUPPORT_METHOD_EXCEPTIONZ_MSG);
return null;
}
public ResultSet getTables(String catalog, String schemaPattern, String tableNamePattern, String[] types)
throws SQLException {
Statement stmt = null;
if (null != conn && !conn.isClosed()) {
stmt = conn.createStatement();
if (catalog == null || catalog.length() < 1) {
catalog = conn.getCatalog();
}
public ResultSet getTables(String catalog, String schemaPattern, String tableNamePattern, String[] types) throws SQLException {
if (conn == null || conn.isClosed()) {
throw new SQLException(TSDBConstants.FixErrMsg(TSDBConstants.JNI_CONNECTION_NULL));
}
try (Statement stmt = conn.createStatement()) {
if (catalog == null || catalog.isEmpty())
return null;
stmt.executeUpdate("use " + catalog);
ResultSet resultSet0 = stmt.executeQuery("show tables");
GetTablesResultSet getTablesResultSet = new GetTablesResultSet(resultSet0, catalog, schemaPattern, tableNamePattern, types);
return getTablesResultSet;
} else {
throw new SQLException(TSDBConstants.FixErrMsg(TSDBConstants.JNI_CONNECTION_NULL));
}
}
......@@ -547,14 +556,12 @@ public class TSDBDatabaseMetaData implements java.sql.DatabaseMetaData {
}
public ResultSet getCatalogs() throws SQLException {
if (conn == null || conn.isClosed())
throw new SQLException(TSDBConstants.FixErrMsg(TSDBConstants.JNI_CONNECTION_NULL));
if (conn != null && !conn.isClosed()) {
Statement stmt = conn.createStatement();
ResultSet resultSet0 = stmt.executeQuery("show databases");
CatalogResultSet resultSet = new CatalogResultSet(resultSet0);
return resultSet;
} else {
return getEmptyResultSet();
try (Statement stmt = conn.createStatement()) {
ResultSet rs = stmt.executeQuery("show databases");
return new CatalogResultSet(rs);
}
}
......@@ -562,7 +569,7 @@ public class TSDBDatabaseMetaData implements java.sql.DatabaseMetaData {
DatabaseMetaDataResultSet resultSet = new DatabaseMetaDataResultSet();
// set up ColumnMetaDataList
List<ColumnMetaData> columnMetaDataList = new ArrayList<ColumnMetaData>(1);
List<ColumnMetaData> columnMetaDataList = new ArrayList<>(1);
ColumnMetaData colMetaData = new ColumnMetaData();
colMetaData.setColIndex(0);
colMetaData.setColName("TABLE_TYPE");
......@@ -571,7 +578,7 @@ public class TSDBDatabaseMetaData implements java.sql.DatabaseMetaData {
columnMetaDataList.add(colMetaData);
// set up rowDataList
List<TSDBResultSetRowData> rowDataList = new ArrayList<TSDBResultSetRowData>(2);
List<TSDBResultSetRowData> rowDataList = new ArrayList<>(2);
TSDBResultSetRowData rowData = new TSDBResultSetRowData();
rowData.setString(0, "TABLE");
rowDataList.add(rowData);
......@@ -591,11 +598,10 @@ public class TSDBDatabaseMetaData implements java.sql.DatabaseMetaData {
Statement stmt = null;
if (null != conn && !conn.isClosed()) {
stmt = conn.createStatement();
if (catalog == null || catalog.length() < 1) {
catalog = conn.getCatalog();
}
stmt.executeUpdate("use " + catalog);
if (catalog == null || catalog.isEmpty())
return null;
stmt.executeUpdate("use " + catalog);
DatabaseMetaDataResultSet resultSet = new DatabaseMetaDataResultSet();
// set up ColumnMetaDataList
List<ColumnMetaData> columnMetaDataList = new ArrayList<>(24);
......@@ -851,7 +857,7 @@ public class TSDBDatabaseMetaData implements java.sql.DatabaseMetaData {
}
public Connection getConnection() throws SQLException {
return null;
return this.conn;
}
public boolean supportsSavepoints() throws SQLException {
......@@ -884,15 +890,17 @@ public class TSDBDatabaseMetaData implements java.sql.DatabaseMetaData {
}
public boolean supportsResultSetHoldability(int holdability) throws SQLException {
if (holdability == ResultSet.HOLD_CURSORS_OVER_COMMIT)
return true;
return false;
}
public int getResultSetHoldability() throws SQLException {
return 0;
return ResultSet.HOLD_CURSORS_OVER_COMMIT;
}
public int getDatabaseMajorVersion() throws SQLException {
return 0;
return 2;
}
public int getDatabaseMinorVersion() throws SQLException {
......@@ -900,7 +908,7 @@ public class TSDBDatabaseMetaData implements java.sql.DatabaseMetaData {
}
public int getJDBCMajorVersion() throws SQLException {
return 0;
return 2;
}
public int getJDBCMinorVersion() throws SQLException {
......
......@@ -214,7 +214,7 @@ public class TSDBDriver extends AbstractTaosDriver {
urlProps.setProperty(TSDBDriver.PROPERTY_KEY_HOST, url);
}
this.dbMetaData = new TSDBDatabaseMetaData(dbProductName, urlForMeta, urlProps.getProperty(TSDBDriver.PROPERTY_KEY_USER));
this.dbMetaData = new TSDBDatabaseMetaData(urlForMeta, urlProps.getProperty(TSDBDriver.PROPERTY_KEY_USER));
return urlProps;
}
......
......@@ -39,7 +39,6 @@ import java.util.Iterator;
import java.util.List;
import java.util.Map;
@SuppressWarnings("unused")
public class TSDBResultSet implements ResultSet {
private TSDBJNIConnector jniConnector = null;
......@@ -104,6 +103,7 @@ public class TSDBResultSet implements ResultSet {
}
public TSDBResultSet() {
}
public TSDBResultSet(TSDBJNIConnector connector, long resultSetPointer) throws SQLException {
......
......@@ -17,6 +17,7 @@ package com.taosdata.jdbc;
import java.sql.*;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.TimeUnit;
public class TSDBStatement implements Statement {
private TSDBJNIConnector connector = null;
......@@ -68,7 +69,6 @@ public class TSDBStatement implements Statement {
pSql = this.connector.executeQuery(sql);
long resultSetPointer = this.connector.getResultSet();
if (resultSetPointer == TSDBConstants.JNI_CONNECTION_NULL) {
this.connector.freeResultSet(pSql);
throw new SQLException(TSDBConstants.FixErrMsg(TSDBConstants.JNI_CONNECTION_NULL));
......
......@@ -8,7 +8,6 @@ import java.util.List;
public class RestfulDatabaseMetaData extends AbstractDatabaseMetaData {
private final String url;
private final String userName;
private final Connection connection;
......
package com.taosdata.jdbc;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;
import java.sql.*;
import java.util.Properties;
public class DatabaseMetaDataTest {
static Connection connection = null;
static PreparedStatement statement = null;
static String dbName = "test";
static String tName = "t0";
static String host = "localhost";
@BeforeClass
public static void createConnection() throws SQLException {
try {
Class.forName("com.taosdata.jdbc.TSDBDriver");
} catch (ClassNotFoundException e) {
return;
}
Properties properties = new Properties();
properties.setProperty(TSDBDriver.PROPERTY_KEY_HOST, host);
properties.setProperty(TSDBDriver.PROPERTY_KEY_USER, "root");
properties.setProperty(TSDBDriver.PROPERTY_KEY_PASSWORD, "taosdata");
properties.setProperty(TSDBDriver.PROPERTY_KEY_CHARSET, "UTF-8");
properties.setProperty(TSDBDriver.PROPERTY_KEY_LOCALE, "en_US.UTF-8");
properties.setProperty(TSDBDriver.PROPERTY_KEY_TIME_ZONE, "UTC-8");
connection = DriverManager.getConnection("jdbc:TAOS://" + host + ":0/", properties);
String sql = "drop database if exists " + dbName;
statement = connection.prepareStatement(sql);
statement.executeUpdate("create database if not exists " + dbName);
statement.executeUpdate("create table if not exists " + dbName + "." + tName + " (ts timestamp, k int, v int)");
}
@Test
public void testMetaDataTest() throws SQLException {
DatabaseMetaData databaseMetaData = connection.getMetaData();
ResultSet resultSet = databaseMetaData.getTables(dbName, "t*", "t*", new String[]{"t"});
while (resultSet.next()) {
for (int i = 1; i <= resultSet.getMetaData().getColumnCount(); i++) {
System.out.printf("%d: %s\n", i, resultSet.getString(i));
}
}
resultSet.close();
databaseMetaData.isWrapperFor(null);
databaseMetaData.allProceduresAreCallable();
databaseMetaData.allTablesAreSelectable();
databaseMetaData.getURL();
databaseMetaData.getUserName();
databaseMetaData.isReadOnly();
databaseMetaData.nullsAreSortedHigh();
databaseMetaData.nullsAreSortedLow();
databaseMetaData.nullsAreSortedAtStart();
databaseMetaData.nullsAreSortedAtEnd();
databaseMetaData.getDatabaseProductName();
databaseMetaData.getDatabaseProductVersion();
databaseMetaData.getDriverName();
databaseMetaData.getDriverVersion();
databaseMetaData.getDriverMajorVersion();
databaseMetaData.getDriverMinorVersion();
databaseMetaData.usesLocalFiles();
databaseMetaData.usesLocalFilePerTable();
databaseMetaData.supportsMixedCaseIdentifiers();
databaseMetaData.storesUpperCaseIdentifiers();
databaseMetaData.storesLowerCaseIdentifiers();
databaseMetaData.storesMixedCaseIdentifiers();
databaseMetaData.supportsMixedCaseQuotedIdentifiers();
databaseMetaData.storesUpperCaseQuotedIdentifiers();
databaseMetaData.storesLowerCaseQuotedIdentifiers();
databaseMetaData.storesMixedCaseQuotedIdentifiers();
databaseMetaData.getIdentifierQuoteString();
databaseMetaData.getSQLKeywords();
databaseMetaData.getNumericFunctions();
databaseMetaData.getStringFunctions();
databaseMetaData.getSystemFunctions();
databaseMetaData.getTimeDateFunctions();
databaseMetaData.getSearchStringEscape();
databaseMetaData.getExtraNameCharacters();
databaseMetaData.supportsAlterTableWithAddColumn();
databaseMetaData.supportsAlterTableWithDropColumn();
databaseMetaData.supportsColumnAliasing();
databaseMetaData.nullPlusNonNullIsNull();
databaseMetaData.supportsConvert();
databaseMetaData.supportsConvert(0, 0);
databaseMetaData.supportsTableCorrelationNames();
databaseMetaData.supportsDifferentTableCorrelationNames();
databaseMetaData.supportsExpressionsInOrderBy();
databaseMetaData.supportsOrderByUnrelated();
databaseMetaData.supportsGroupBy();
databaseMetaData.supportsGroupByUnrelated();
databaseMetaData.supportsGroupByBeyondSelect();
databaseMetaData.supportsLikeEscapeClause();
databaseMetaData.supportsMultipleResultSets();
databaseMetaData.supportsMultipleTransactions();
databaseMetaData.supportsNonNullableColumns();
databaseMetaData.supportsMinimumSQLGrammar();
databaseMetaData.supportsCoreSQLGrammar();
databaseMetaData.supportsExtendedSQLGrammar();
databaseMetaData.supportsANSI92EntryLevelSQL();
databaseMetaData.supportsANSI92IntermediateSQL();
databaseMetaData.supportsANSI92FullSQL();
databaseMetaData.supportsIntegrityEnhancementFacility();
databaseMetaData.supportsOuterJoins();
databaseMetaData.supportsFullOuterJoins();
databaseMetaData.supportsLimitedOuterJoins();
databaseMetaData.getSchemaTerm();
databaseMetaData.getProcedureTerm();
databaseMetaData.getCatalogTerm();
databaseMetaData.isCatalogAtStart();
databaseMetaData.getCatalogSeparator();
databaseMetaData.supportsSchemasInDataManipulation();
databaseMetaData.supportsSchemasInProcedureCalls();
databaseMetaData.supportsSchemasInTableDefinitions();
databaseMetaData.supportsSchemasInIndexDefinitions();
databaseMetaData.supportsSchemasInPrivilegeDefinitions();
databaseMetaData.supportsCatalogsInDataManipulation();
databaseMetaData.supportsCatalogsInProcedureCalls();
databaseMetaData.supportsCatalogsInTableDefinitions();
databaseMetaData.supportsCatalogsInIndexDefinitions();
databaseMetaData.supportsCatalogsInPrivilegeDefinitions();
databaseMetaData.supportsPositionedDelete();
databaseMetaData.supportsPositionedUpdate();
databaseMetaData.supportsSelectForUpdate();
databaseMetaData.supportsStoredProcedures();
databaseMetaData.supportsSubqueriesInComparisons();
databaseMetaData.supportsSubqueriesInExists();
databaseMetaData.supportsSubqueriesInIns();
databaseMetaData.supportsSubqueriesInQuantifieds();
databaseMetaData.supportsCorrelatedSubqueries();
databaseMetaData.supportsUnion();
databaseMetaData.supportsUnionAll();
databaseMetaData.supportsOpenCursorsAcrossCommit();
databaseMetaData.supportsOpenCursorsAcrossRollback();
databaseMetaData.supportsOpenStatementsAcrossCommit();
databaseMetaData.supportsOpenStatementsAcrossRollback();
databaseMetaData.getMaxBinaryLiteralLength();
databaseMetaData.getMaxCharLiteralLength();
databaseMetaData.getMaxColumnNameLength();
databaseMetaData.getMaxColumnsInGroupBy();
databaseMetaData.getMaxColumnsInIndex();
databaseMetaData.getMaxColumnsInOrderBy();
databaseMetaData.getMaxColumnsInSelect();
databaseMetaData.getMaxColumnsInTable();
databaseMetaData.getMaxConnections();
databaseMetaData.getMaxCursorNameLength();
databaseMetaData.getMaxIndexLength();
databaseMetaData.getMaxSchemaNameLength();
databaseMetaData.getMaxProcedureNameLength();
databaseMetaData.getMaxCatalogNameLength();
databaseMetaData.getMaxRowSize();
databaseMetaData.doesMaxRowSizeIncludeBlobs();
databaseMetaData.getMaxStatementLength();
databaseMetaData.getMaxStatements();
databaseMetaData.getMaxTableNameLength();
databaseMetaData.getMaxTablesInSelect();
databaseMetaData.getMaxUserNameLength();
databaseMetaData.getDefaultTransactionIsolation();
databaseMetaData.supportsTransactions();
databaseMetaData.supportsTransactionIsolationLevel(0);
databaseMetaData.supportsDataDefinitionAndDataManipulationTransactions();
databaseMetaData.supportsDataManipulationTransactionsOnly();
databaseMetaData.dataDefinitionCausesTransactionCommit();
databaseMetaData.dataDefinitionIgnoredInTransactions();
try {
databaseMetaData.getProcedures("", "", "");
} catch (Exception e) {
}
try {
databaseMetaData.getProcedureColumns("", "", "", "");
} catch (Exception e) {
}
try {
databaseMetaData.getTables("", "", "", new String[]{""});
} catch (Exception e) {
}
databaseMetaData.getSchemas();
databaseMetaData.getCatalogs();
// databaseMetaData.getTableTypes();
databaseMetaData.getColumns(dbName, "", tName, "");
databaseMetaData.getColumnPrivileges("", "", "", "");
databaseMetaData.getTablePrivileges("", "", "");
databaseMetaData.getBestRowIdentifier("", "", "", 0, false);
databaseMetaData.getVersionColumns("", "", "");
databaseMetaData.getPrimaryKeys("", "", "");
databaseMetaData.getImportedKeys("", "", "");
databaseMetaData.getExportedKeys("", "", "");
databaseMetaData.getCrossReference("", "", "", "", "", "");
databaseMetaData.getTypeInfo();
databaseMetaData.getIndexInfo("", "", "", false, false);
databaseMetaData.supportsResultSetType(0);
databaseMetaData.supportsResultSetConcurrency(0, 0);
databaseMetaData.ownUpdatesAreVisible(0);
databaseMetaData.ownDeletesAreVisible(0);
databaseMetaData.ownInsertsAreVisible(0);
databaseMetaData.othersUpdatesAreVisible(0);
databaseMetaData.othersDeletesAreVisible(0);
databaseMetaData.othersInsertsAreVisible(0);
databaseMetaData.updatesAreDetected(0);
databaseMetaData.deletesAreDetected(0);
databaseMetaData.insertsAreDetected(0);
databaseMetaData.supportsBatchUpdates();
databaseMetaData.getUDTs("", "", "", new int[]{0});
databaseMetaData.getConnection();
databaseMetaData.supportsSavepoints();
databaseMetaData.supportsNamedParameters();
databaseMetaData.supportsMultipleOpenResults();
databaseMetaData.supportsGetGeneratedKeys();
databaseMetaData.getSuperTypes("", "", "");
databaseMetaData.getSuperTables("", "", "");
databaseMetaData.getAttributes("", "", "", "");
databaseMetaData.supportsResultSetHoldability(0);
databaseMetaData.getResultSetHoldability();
databaseMetaData.getDatabaseMajorVersion();
databaseMetaData.getDatabaseMinorVersion();
databaseMetaData.getJDBCMajorVersion();
databaseMetaData.getJDBCMinorVersion();
databaseMetaData.getSQLStateType();
databaseMetaData.locatorsUpdateCopy();
databaseMetaData.supportsStatementPooling();
databaseMetaData.getRowIdLifetime();
databaseMetaData.getSchemas("", "");
databaseMetaData.supportsStoredFunctionsUsingCallSyntax();
databaseMetaData.autoCommitFailureClosesAllResultSets();
databaseMetaData.getClientInfoProperties();
databaseMetaData.getFunctions("", "", "");
databaseMetaData.getFunctionColumns("", "", "", "");
databaseMetaData.getPseudoColumns("", "", "", "");
databaseMetaData.generatedKeyAlwaysReturned();
}
@AfterClass
public static void close() throws Exception {
statement.executeUpdate("drop database " + dbName);
statement.close();
connection.close();
Thread.sleep(10);
}
}
package com.taosdata.jdbc.cases;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import java.sql.*;
import java.util.concurrent.TimeUnit;
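// Each worker thread below creates its own Service (one connection plus one
// statement) and then queries and inserts concurrently against the same table.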
public class MultiThreadsWithSameStatmentTest {
private class Service {
public Connection conn;
public Statement stmt;
public Service() {
try {
Class.forName("com.taosdata.jdbc.TSDBDriver");
conn = DriverManager.getConnection("jdbc:TAOS://localhost:6030/?user=root&password=taosdata");
stmt = conn.createStatement();
stmt.execute("create database if not exists jdbctest");
stmt.executeUpdate("create table if not exists jdbctest.weather (ts timestamp, f1 int)");
} catch (ClassNotFoundException | SQLException e) {
e.printStackTrace();
}
}
public void release() {
try {
stmt.close();
conn.close();
} catch (SQLException e) {
e.printStackTrace();
}
}
}
@Before
public void before() {
}
@Test
public void test() {
Thread t1 = new Thread(() -> {
try {
Service service = new Service();
ResultSet resultSet = service.stmt.executeQuery("select * from jdbctest.weather");
while (resultSet.next()) {
ResultSetMetaData metaData = resultSet.getMetaData();
for (int i = 1; i <= metaData.getColumnCount(); i++) {
System.out.print(metaData.getColumnLabel(i) + ": " + resultSet.getString(i));
}
System.out.println();
}
resultSet.close();
service.release();
} catch (SQLException e) {
e.printStackTrace();
}
});
Thread t2 = new Thread(() -> {
while (true) {
try {
Service service = new Service();
service.stmt.executeUpdate("insert into jdbctest.weather values(now,1)");
service.release();
} catch (SQLException e) {
e.printStackTrace();
}
}
});
t1.start();
sleep(1000);
t2.start();
}
private void sleep(long mills) {
try {
TimeUnit.MILLISECONDS.sleep(mills);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
@After
public void after() {
}
}
......@@ -5,7 +5,7 @@ with open("README.md", "r") as fh:
setuptools.setup(
name="taos",
version="2.0.3",
version="2.0.4",
author="Taosdata Inc.",
author_email="support@taosdata.com",
description="TDengine python client package",
......
......@@ -184,7 +184,7 @@ class TDengineCursor(object):
return False
def fetchall(self):
def fetchall_row(self):
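# renamed from fetchall(): the row-at-a-time implementation stays available as
# fetchall_row(), while the block-based variant below takes over the fetchall() name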
"""Fetch all (remaining) rows of a query result, returning them as a sequence of sequences (e.g. a list of tuples). Note that the cursor's arraysize attribute can affect the performance of this operation.
"""
if self._result is None or self._fields is None:
......@@ -203,7 +203,7 @@ class TDengineCursor(object):
for i in range(len(self._fields)):
buffer[i].extend(block[i])
return list(map(tuple, zip(*buffer)))
def fetchall_block(self):
def fetchall(self):
if self._result is None or self._fields is None:
raise OperationalError("Invalid use of fetchall")
......
......@@ -5,7 +5,7 @@ with open("README.md", "r") as fh:
setuptools.setup(
name="taos",
version="2.0.3",
version="2.0.4",
author="Taosdata Inc.",
author_email="support@taosdata.com",
description="TDengine python client package",
......
......@@ -192,7 +192,7 @@ class TDengineCursor(object):
return False
def fetchall(self):
def fetchall_row(self):
"""Fetch all (remaining) rows of a query result, returning them as a sequence of sequences (e.g. a list of tuples). Note that the cursor's arraysize attribute can affect the performance of this operation.
"""
if self._result is None or self._fields is None:
......@@ -212,7 +212,7 @@ class TDengineCursor(object):
buffer[i].extend(block[i])
return list(map(tuple, zip(*buffer)))
def fetchall_block(self):
def fetchall(self):
if self._result is None or self._fields is None:
raise OperationalError("Invalid use of fetchall")
......
......@@ -5,7 +5,7 @@ with open("README.md", "r") as fh:
setuptools.setup(
name="taos",
version="2.0.3",
version="2.0.4",
author="Taosdata Inc.",
author_email="support@taosdata.com",
description="TDengine python client package",
......
......@@ -138,7 +138,7 @@ class TDengineCursor(object):
def fetchmany(self):
pass
def fetchall(self):
def fetchall_row(self):
"""Fetch all (remaining) rows of a query result, returning them as a sequence of sequences (e.g. a list of tuples). Note that the cursor's arraysize attribute can affect the performance of this operation.
"""
if self._result is None or self._fields is None:
......@@ -158,7 +158,7 @@ class TDengineCursor(object):
buffer[i].extend(block[i])
return list(map(tuple, zip(*buffer)))
def fetchall_block(self):
def fetchall(self):
if self._result is None or self._fields is None:
raise OperationalError("Invalid use of fetchall")
......
......@@ -5,7 +5,7 @@ with open("README.md", "r") as fh:
setuptools.setup(
name="taos",
version="2.0.3",
version="2.0.4",
author="Taosdata Inc.",
author_email="support@taosdata.com",
description="TDengine python client package",
......
......@@ -139,7 +139,7 @@ class TDengineCursor(object):
def fetchmany(self):
pass
def fetchall(self):
def fetchall_row(self):
"""Fetch all (remaining) rows of a query result, returning them as a sequence of sequences (e.g. a list of tuples). Note that the cursor's arraysize attribute can affect the performance of this operation.
"""
if self._result is None or self._fields is None:
......@@ -159,7 +159,7 @@ class TDengineCursor(object):
buffer[i].extend(block[i])
return list(map(tuple, zip(*buffer)))
def fetchall_block(self):
def fetchall(self):
if self._result is None or self._fields is None:
raise OperationalError("Invalid use of fetchall")
......
......@@ -357,7 +357,7 @@ do { \
#define TSDB_PORT_HTTP 11
#define TSDB_PORT_ARBITRATOR 12
#define TSDB_MAX_WAL_SIZE (1024*1024*2)
#define TSDB_MAX_WAL_SIZE (1024*1024*3)
typedef enum {
TAOS_QTYPE_RPC = 0,
......
......@@ -82,145 +82,145 @@
#define TK_STABLES 64
#define TK_VGROUPS 65
#define TK_DROP 66
#define TK_DNODE 67
#define TK_USER 68
#define TK_ACCOUNT 69
#define TK_USE 70
#define TK_DESCRIBE 71
#define TK_ALTER 72
#define TK_PASS 73
#define TK_PRIVILEGE 74
#define TK_LOCAL 75
#define TK_IF 76
#define TK_EXISTS 77
#define TK_PPS 78
#define TK_TSERIES 79
#define TK_DBS 80
#define TK_STORAGE 81
#define TK_QTIME 82
#define TK_CONNS 83
#define TK_STATE 84
#define TK_KEEP 85
#define TK_CACHE 86
#define TK_REPLICA 87
#define TK_QUORUM 88
#define TK_DAYS 89
#define TK_MINROWS 90
#define TK_MAXROWS 91
#define TK_BLOCKS 92
#define TK_CTIME 93
#define TK_WAL 94
#define TK_FSYNC 95
#define TK_COMP 96
#define TK_PRECISION 97
#define TK_UPDATE 98
#define TK_CACHELAST 99
#define TK_LP 100
#define TK_RP 101
#define TK_UNSIGNED 102
#define TK_TAGS 103
#define TK_USING 104
#define TK_AS 105
#define TK_COMMA 106
#define TK_NULL 107
#define TK_SELECT 108
#define TK_UNION 109
#define TK_ALL 110
#define TK_FROM 111
#define TK_VARIABLE 112
#define TK_INTERVAL 113
#define TK_FILL 114
#define TK_SLIDING 115
#define TK_ORDER 116
#define TK_BY 117
#define TK_ASC 118
#define TK_DESC 119
#define TK_GROUP 120
#define TK_HAVING 121
#define TK_LIMIT 122
#define TK_OFFSET 123
#define TK_SLIMIT 124
#define TK_SOFFSET 125
#define TK_WHERE 126
#define TK_NOW 127
#define TK_RESET 128
#define TK_QUERY 129
#define TK_ADD 130
#define TK_COLUMN 131
#define TK_TAG 132
#define TK_CHANGE 133
#define TK_SET 134
#define TK_KILL 135
#define TK_CONNECTION 136
#define TK_STREAM 137
#define TK_COLON 138
#define TK_ABORT 139
#define TK_AFTER 140
#define TK_ATTACH 141
#define TK_BEFORE 142
#define TK_BEGIN 143
#define TK_CASCADE 144
#define TK_CLUSTER 145
#define TK_CONFLICT 146
#define TK_COPY 147
#define TK_DEFERRED 148
#define TK_DELIMITERS 149
#define TK_DETACH 150
#define TK_EACH 151
#define TK_END 152
#define TK_EXPLAIN 153
#define TK_FAIL 154
#define TK_FOR 155
#define TK_IGNORE 156
#define TK_IMMEDIATE 157
#define TK_INITIALLY 158
#define TK_INSTEAD 159
#define TK_MATCH 160
#define TK_KEY 161
#define TK_OF 162
#define TK_RAISE 163
#define TK_REPLACE 164
#define TK_RESTRICT 165
#define TK_ROW 166
#define TK_STATEMENT 167
#define TK_TRIGGER 168
#define TK_VIEW 169
#define TK_COUNT 170
#define TK_SUM 171
#define TK_AVG 172
#define TK_MIN 173
#define TK_MAX 174
#define TK_FIRST 175
#define TK_LAST 176
#define TK_TOP 177
#define TK_BOTTOM 178
#define TK_STDDEV 179
#define TK_PERCENTILE 180
#define TK_APERCENTILE 181
#define TK_LEASTSQUARES 182
#define TK_HISTOGRAM 183
#define TK_DIFF 184
#define TK_SPREAD 185
#define TK_TWA 186
#define TK_INTERP 187
#define TK_LAST_ROW 188
#define TK_RATE 189
#define TK_IRATE 190
#define TK_SUM_RATE 191
#define TK_SUM_IRATE 192
#define TK_AVG_RATE 193
#define TK_AVG_IRATE 194
#define TK_TBID 195
#define TK_SEMI 196
#define TK_NONE 197
#define TK_PREV 198
#define TK_LINEAR 199
#define TK_IMPORT 200
#define TK_METRIC 201
#define TK_TBNAME 202
#define TK_JOIN 203
#define TK_METRICS 204
#define TK_STABLE 205
#define TK_STABLE 67
#define TK_DNODE 68
#define TK_USER 69
#define TK_ACCOUNT 70
#define TK_USE 71
#define TK_DESCRIBE 72
#define TK_ALTER 73
#define TK_PASS 74
#define TK_PRIVILEGE 75
#define TK_LOCAL 76
#define TK_IF 77
#define TK_EXISTS 78
#define TK_PPS 79
#define TK_TSERIES 80
#define TK_DBS 81
#define TK_STORAGE 82
#define TK_QTIME 83
#define TK_CONNS 84
#define TK_STATE 85
#define TK_KEEP 86
#define TK_CACHE 87
#define TK_REPLICA 88
#define TK_QUORUM 89
#define TK_DAYS 90
#define TK_MINROWS 91
#define TK_MAXROWS 92
#define TK_BLOCKS 93
#define TK_CTIME 94
#define TK_WAL 95
#define TK_FSYNC 96
#define TK_COMP 97
#define TK_PRECISION 98
#define TK_UPDATE 99
#define TK_CACHELAST 100
#define TK_LP 101
#define TK_RP 102
#define TK_UNSIGNED 103
#define TK_TAGS 104
#define TK_USING 105
#define TK_AS 106
#define TK_COMMA 107
#define TK_NULL 108
#define TK_SELECT 109
#define TK_UNION 110
#define TK_ALL 111
#define TK_FROM 112
#define TK_VARIABLE 113
#define TK_INTERVAL 114
#define TK_FILL 115
#define TK_SLIDING 116
#define TK_ORDER 117
#define TK_BY 118
#define TK_ASC 119
#define TK_DESC 120
#define TK_GROUP 121
#define TK_HAVING 122
#define TK_LIMIT 123
#define TK_OFFSET 124
#define TK_SLIMIT 125
#define TK_SOFFSET 126
#define TK_WHERE 127
#define TK_NOW 128
#define TK_RESET 129
#define TK_QUERY 130
#define TK_ADD 131
#define TK_COLUMN 132
#define TK_TAG 133
#define TK_CHANGE 134
#define TK_SET 135
#define TK_KILL 136
#define TK_CONNECTION 137
#define TK_STREAM 138
#define TK_COLON 139
#define TK_ABORT 140
#define TK_AFTER 141
#define TK_ATTACH 142
#define TK_BEFORE 143
#define TK_BEGIN 144
#define TK_CASCADE 145
#define TK_CLUSTER 146
#define TK_CONFLICT 147
#define TK_COPY 148
#define TK_DEFERRED 149
#define TK_DELIMITERS 150
#define TK_DETACH 151
#define TK_EACH 152
#define TK_END 153
#define TK_EXPLAIN 154
#define TK_FAIL 155
#define TK_FOR 156
#define TK_IGNORE 157
#define TK_IMMEDIATE 158
#define TK_INITIALLY 159
#define TK_INSTEAD 160
#define TK_MATCH 161
#define TK_KEY 162
#define TK_OF 163
#define TK_RAISE 164
#define TK_REPLACE 165
#define TK_RESTRICT 166
#define TK_ROW 167
#define TK_STATEMENT 168
#define TK_TRIGGER 169
#define TK_VIEW 170
#define TK_COUNT 171
#define TK_SUM 172
#define TK_AVG 173
#define TK_MIN 174
#define TK_MAX 175
#define TK_FIRST 176
#define TK_LAST 177
#define TK_TOP 178
#define TK_BOTTOM 179
#define TK_STDDEV 180
#define TK_PERCENTILE 181
#define TK_APERCENTILE 182
#define TK_LEASTSQUARES 183
#define TK_HISTOGRAM 184
#define TK_DIFF 185
#define TK_SPREAD 186
#define TK_TWA 187
#define TK_INTERP 188
#define TK_LAST_ROW 189
#define TK_RATE 190
#define TK_IRATE 191
#define TK_SUM_RATE 192
#define TK_SUM_IRATE 193
#define TK_AVG_RATE 194
#define TK_AVG_IRATE 195
#define TK_TBID 196
#define TK_SEMI 197
#define TK_NONE 198
#define TK_PREV 199
#define TK_LINEAR 200
#define TK_IMPORT 201
#define TK_METRIC 202
#define TK_TBNAME 203
#define TK_JOIN 204
#define TK_METRICS 205
#define TK_INSERT 206
#define TK_INTO 207
#define TK_VALUES 208
......
......@@ -1085,8 +1085,8 @@ static void printfQueryMeta() {
printf("database name: \033[33m%s\033[0m\n", g_queryInfo.dbName);
printf("\n");
printf("super table query info: \n");
printf("rate: \033[33m%d\033[0m\n", g_queryInfo.superQueryInfo.rate);
printf("specified table query info: \n");
printf("query interval: \033[33m%d\033[0m\n", g_queryInfo.superQueryInfo.rate);
printf("concurrent: \033[33m%d\033[0m\n", g_queryInfo.superQueryInfo.concurrent);
printf("sqlCount: \033[33m%d\033[0m\n", g_queryInfo.superQueryInfo.sqlCount);
......@@ -1102,11 +1102,11 @@ static void printfQueryMeta() {
printf(" sql[%d]: \033[33m%s\033[0m\n", i, g_queryInfo.superQueryInfo.sql[i]);
}
printf("\n");
printf("sub table query info: \n");
printf("rate: \033[33m%d\033[0m\n", g_queryInfo.subQueryInfo.rate);
printf("super table query info: \n");
printf("query interval: \033[33m%d\033[0m\n", g_queryInfo.subQueryInfo.rate);
printf("threadCnt: \033[33m%d\033[0m\n", g_queryInfo.subQueryInfo.threadCnt);
printf("childTblCount: \033[33m%d\033[0m\n", g_queryInfo.subQueryInfo.childTblCount);
printf("childTblPrefix: \033[33m%s\033[0m\n", g_queryInfo.subQueryInfo.childTblPrefix);
printf("stable name: \033[33m%s\033[0m\n", g_queryInfo.subQueryInfo.sTblName);
if (SUBSCRIBE_MODE == g_jsonType) {
printf("mod: \033[33m%d\033[0m\n", g_queryInfo.subQueryInfo.subscribeMode);
......@@ -4020,23 +4020,23 @@ void *superQueryProcess(void *sarg) {
}
selectAndGetResult(winfo->taos, g_queryInfo.superQueryInfo.sql[i], tmpFile);
int64_t t2 = taosGetTimestampUs();
printf("taosc select sql return, Spent %f s\n", (t2 - t1)/1000000.0);
printf("=[taosc] thread[%"PRIu64"] complete one sql, Spent %f s\n", (uint64_t)pthread_self(), (t2 - t1)/1000000.0);
} else {
#ifdef TD_LOWA_CURL
int64_t t1 = taosGetTimestampUs();
int retCode = curlProceSql(g_queryInfo.host, g_queryInfo.port, g_queryInfo.superQueryInfo.sql[i], winfo->curl_handle);
int64_t t2 = taosGetTimestampUs();
printf("http select sql return, Spent %f s \n", (t2 - t1)/1000000.0);
printf("=[restful] thread[%"PRIu64"] complete one sql, Spent %f s\n", (uint64_t)pthread_self(), (t2 - t1)/1000000.0);
if (0 != retCode) {
printf("========curl return fail, threadID[%d]\n", winfo->threadID);
printf("====curl return fail, threadID[%d]\n", winfo->threadID);
return NULL;
}
#endif
}
}
et = taosGetTimestampMs();
printf("========thread[%"PRIu64"] complete all sqls to super table once queries duration:%.6fs\n\n", (uint64_t)pthread_self(), (double)(et - st)/1000.0);
printf("==thread[%"PRIu64"] complete all sqls to specify tables once queries duration:%.6fs\n\n", (uint64_t)pthread_self(), (double)(et - st)/1000.0);
}
return NULL;
}
......@@ -4065,7 +4065,7 @@ void *subQueryProcess(void *sarg) {
char sqlstr[1024];
threadInfo *winfo = (threadInfo *)sarg;
int64_t st = 0;
int64_t et = 0;
int64_t et = g_queryInfo.subQueryInfo.rate*1000;
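// seeding et with one rate interval lets the first pass skip the throttle sleep below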
while (1) {
if (g_queryInfo.subQueryInfo.rate && (et - st) < g_queryInfo.subQueryInfo.rate*1000) {
taosMsleep(g_queryInfo.subQueryInfo.rate*1000 - (et - st)); // ms
......@@ -4085,17 +4085,12 @@ void *subQueryProcess(void *sarg) {
}
}
et = taosGetTimestampMs();
printf("========thread[%"PRIu64"] complete all sqls to allocate all sub-tables once queries duration:%.4fs\n\n", (uint64_t)pthread_self(), (double)(et - st)/1000.0);
printf("####thread[%"PRIu64"] complete all sqls to allocate all sub-tables[%d - %d] once queries duration:%.4fs\n\n", (uint64_t)pthread_self(), winfo->start_table_id, winfo->end_table_id, (double)(et - st)/1000.0);
}
return NULL;
}
int queryTestProcess() {
printfQueryMeta();
printf("Press enter key to continue\n\n");
(void)getchar();
TAOS * taos = NULL;
taos_init();
taos = taos_connect(g_queryInfo.host, g_queryInfo.user, g_queryInfo.password, g_queryInfo.dbName, g_queryInfo.port);
......@@ -4108,9 +4103,13 @@ int queryTestProcess() {
(void)getAllChildNameOfSuperTable(taos, g_queryInfo.dbName, g_queryInfo.subQueryInfo.sTblName, &g_queryInfo.subQueryInfo.childTblName, &g_queryInfo.subQueryInfo.childTblCount);
}
printfQueryMeta();
printf("Press enter key to continue\n\n");
(void)getchar();
pthread_t *pids = NULL;
threadInfo *infos = NULL;
//==== create sub threads for query from super table
//==== create sub threads for query from specified tables
if (g_queryInfo.superQueryInfo.sqlCount > 0 && g_queryInfo.superQueryInfo.concurrent > 0) {
pids = malloc(g_queryInfo.superQueryInfo.concurrent * sizeof(pthread_t));
......@@ -4146,7 +4145,7 @@ int queryTestProcess() {
pthread_t *pidsOfSub = NULL;
threadInfo *infosOfSub = NULL;
//==== create sub threads for query from sub table
//==== create sub threads for query from all sub-tables of the super table
if ((g_queryInfo.subQueryInfo.sqlCount > 0) && (g_queryInfo.subQueryInfo.threadCnt > 0)) {
pidsOfSub = malloc(g_queryInfo.subQueryInfo.threadCnt * sizeof(pthread_t));
infosOfSub = malloc(g_queryInfo.subQueryInfo.threadCnt * sizeof(threadInfo));
......@@ -4177,6 +4176,7 @@ int queryTestProcess() {
t_info->start_table_id = last;
t_info->end_table_id = i < b ? last + a : last + a - 1;
last = t_info->end_table_id + 1;
t_info->taos = taos;
pthread_create(pidsOfSub + i, NULL, subQueryProcess, t_info);
}
......
......@@ -385,12 +385,22 @@ static int32_t mnodeRetrieveQueries(SShowObj *pShow, char *data, int32_t rows, v
SConnObj *pConnObj = NULL;
int32_t cols = 0;
char * pWrite;
void * pIter;
char str[TSDB_IPv4ADDR_LEN + 6] = {0};
while (numOfRows < rows) {
pShow->pIter = mnodeGetNextConn(pShow->pIter, &pConnObj);
if (pConnObj == NULL) break;
pIter = mnodeGetNextConn(pShow->pIter, &pConnObj);
if (pConnObj == NULL) {
pShow->pIter = pIter;
break;
}
if (numOfRows + pConnObj->numOfQueries >= rows) {
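// batch is full: cancel the iterator before breaking so its reference is released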
mnodeCancelGetNextConn(pIter);
break;
}
pShow->pIter = pIter;
for (int32_t i = 0; i < pConnObj->numOfQueries; ++i) {
SQueryDesc *pDesc = pConnObj->pQueries + i;
cols = 0;
......@@ -518,12 +528,22 @@ static int32_t mnodeRetrieveStreams(SShowObj *pShow, char *data, int32_t rows, v
SConnObj *pConnObj = NULL;
int32_t cols = 0;
char * pWrite;
void * pIter;
char ipStr[TSDB_IPv4ADDR_LEN + 6];
while (numOfRows < rows) {
pShow->pIter = mnodeGetNextConn(pShow->pIter, &pConnObj);
if (pConnObj == NULL) break;
pIter = mnodeGetNextConn(pShow->pIter, &pConnObj);
if (pConnObj == NULL) {
pShow->pIter = pIter;
break;
}
if (numOfRows + pConnObj->numOfStreams >= rows) {
mnodeCancelGetNextConn(pIter);
break;
}
pShow->pIter = pIter;
for (int32_t i = 0; i < pConnObj->numOfStreams; ++i) {
SStreamDesc *pDesc = pConnObj->pStreams + i;
cols = 0;
......
......@@ -98,6 +98,7 @@ typedef struct SCreateTableSQL {
typedef struct SAlterTableInfo {
SStrToken name;
int16_t tableType;
int16_t type;
STagData tagData;
SArray *pAddColumns; // SArray<TAOS_FIELD>
......@@ -151,11 +152,9 @@ typedef struct SUserInfo {
} SUserInfo;
typedef struct SMiscInfo {
// int32_t nTokens; /* Number of expressions on the list */
// int32_t nAlloc; /* Number of entries allocated below */
// SStrToken *a; /* one entry for element */
SArray *a; // SArray<SStrToken>
bool existsCheck;
int16_t tableType;
SUserInfo user;
union {
SCreateDbInfo dbOpt;
......@@ -245,7 +244,7 @@ SCreateTableSQL *tSetCreateSqlElems(SArray *pCols, SArray *pTags, SQuerySQL *pSe
void tSqlExprNodeDestroy(tSQLExpr *pExpr);
SAlterTableInfo * tAlterTableSqlElems(SStrToken *pTableName, SArray *pCols, SArray *pVals, int32_t type);
SAlterTableInfo * tAlterTableSqlElems(SStrToken *pTableName, SArray *pCols, SArray *pVals, int32_t type, int16_t tableTable);
SCreatedTableInfo createNewChildTableInfo(SStrToken *pTableName, SArray *pTagVals, SStrToken *pToken, SStrToken* igExists);
void destroyAllSelectClause(SSubclauseInfo *pSql);
......@@ -262,12 +261,10 @@ void setCreatedTableName(SSqlInfo *pInfo, SStrToken *pTableNameToken, SStrToken
void SqlInfoDestroy(SSqlInfo *pInfo);
void setDCLSQLElems(SSqlInfo *pInfo, int32_t type, int32_t nParams, ...);
void setDropDbTableInfo(SSqlInfo *pInfo, int32_t type, SStrToken* pToken, SStrToken* existsCheck);
void setDropDbTableInfo(SSqlInfo *pInfo, int32_t type, SStrToken* pToken, SStrToken* existsCheck,int16_t tableType);
void setShowOptions(SSqlInfo *pInfo, int32_t type, SStrToken* prefix, SStrToken* pPatterns);
SMiscInfo *tTokenListAppend(SMiscInfo *pTokenList, SStrToken *pToken);
void setCreateDBSQL(SSqlInfo *pInfo, int32_t type, SStrToken *pToken, SCreateDbInfo *pDB, SStrToken *pIgExists);
void setCreateDbInfo(SSqlInfo *pInfo, int32_t type, SStrToken *pToken, SCreateDbInfo *pDB, SStrToken *pIgExists);
void setCreateAcctSql(SSqlInfo *pInfo, int32_t type, SStrToken *pName, SStrToken *pPwd, SCreateAcctInfo *pAcctInfo);
void setCreateUserSql(SSqlInfo *pInfo, SStrToken *pName, SStrToken *pPasswd);
......
......@@ -131,10 +131,16 @@ cmd ::= SHOW dbPrefix(X) VGROUPS ids(Y). {
//drop configure for tables
cmd ::= DROP TABLE ifexists(Y) ids(X) cpxName(Z). {
X.n += Z.n;
setDropDbTableInfo(pInfo, TSDB_SQL_DROP_TABLE, &X, &Y);
setDropDbTableInfo(pInfo, TSDB_SQL_DROP_TABLE, &X, &Y, -1);
}
cmd ::= DROP DATABASE ifexists(Y) ids(X). { setDropDbTableInfo(pInfo, TSDB_SQL_DROP_DB, &X, &Y); }
//drop stable
cmd ::= DROP STABLE ifexists(Y) ids(X) cpxName(Z). {
X.n += Z.n;
setDropDbTableInfo(pInfo, TSDB_SQL_DROP_TABLE, &X, &Y, TSDB_SUPER_TABLE);
}
cmd ::= DROP DATABASE ifexists(Y) ids(X). { setDropDbTableInfo(pInfo, TSDB_SQL_DROP_DB, &X, &Y, -1); }
cmd ::= DROP DNODE ids(X). { setDCLSQLElems(pInfo, TSDB_SQL_DROP_DNODE, 1, &X); }
cmd ::= DROP USER ids(X). { setDCLSQLElems(pInfo, TSDB_SQL_DROP_USER, 1, &X); }
cmd ::= DROP ACCOUNT ids(X). { setDCLSQLElems(pInfo, TSDB_SQL_DROP_ACCT, 1, &X); }
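Both the generic DROP TABLE rule and the new DROP STABLE rule above funnel into setDropDbTableInfo(), distinguished only by the trailing tableType argument (-1 versus TSDB_SUPER_TABLE). A hedged sketch of the new statement over JDBC (URL, credentials and names are placeholders, not part of this commit):

```java
import java.sql.*;

public class DropStableExample {
    public static void main(String[] args) throws Exception {
        Class.forName("com.taosdata.jdbc.TSDBDriver");
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:TAOS://localhost:6030/?user=root&password=taosdata");
             Statement stmt = conn.createStatement()) {
            // the explicit STABLE keyword carries tableType = TSDB_SUPER_TABLE into the
            // parsed info, presumably so later checks can distinguish super tables
            stmt.executeUpdate("DROP STABLE IF EXISTS test.meters");
        }
    }
}
```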
......@@ -305,6 +311,8 @@ signed(A) ::= MINUS INTEGER(X). { A = -strtol(X.z, NULL, 10);}
////////////////////////////////// The CREATE TABLE statement ///////////////////////////////
cmd ::= CREATE TABLE create_table_args. {}
cmd ::= CREATE TABLE create_stable_args. {}
cmd ::= CREATE STABLE create_stable_args. {}
cmd ::= CREATE TABLE create_table_list(Z). { pInfo->type = TSDB_SQL_CREATE_TABLE; pInfo->pCreateTableInfo = Z;}
%type create_table_list{SCreateTableSQL*}
......@@ -333,7 +341,8 @@ create_table_args(A) ::= ifnotexists(U) ids(V) cpxName(Z) LP columnlist(X) RP. {
}
// create super table
create_table_args(A) ::= ifnotexists(U) ids(V) cpxName(Z) LP columnlist(X) RP TAGS LP columnlist(Y) RP. {
%type create_stable_args{SCreateTableSQL*}
create_stable_args(A) ::= ifnotexists(U) ids(V) cpxName(Z) LP columnlist(X) RP TAGS LP columnlist(Y) RP. {
A = tSetCreateSqlElems(X, Y, NULL, TSQL_CREATE_STABLE);
setSqlInfo(pInfo, A, NULL, TSDB_SQL_CREATE_TABLE);
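CREATE STABLE now has its own production (create_stable_args), reachable from both the CREATE TABLE and the new CREATE STABLE keyword forms, so the statement below is accepted either way. A sketch over JDBC (database, table and column names are placeholders):

```java
import java.sql.*;

public class CreateStableExample {
    public static void main(String[] args) throws Exception {
        Class.forName("com.taosdata.jdbc.TSDBDriver");
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:TAOS://localhost:6030/?user=root&password=taosdata");
             Statement stmt = conn.createStatement()) {
            stmt.executeUpdate("CREATE DATABASE IF NOT EXISTS test");
            // equivalent to the old CREATE TABLE ... TAGS (...) form:
            stmt.executeUpdate("CREATE STABLE IF NOT EXISTS test.meters "
                    + "(ts TIMESTAMP, current FLOAT, voltage INT) TAGS (groupId INT)");
        }
    }
}
```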
......@@ -683,7 +692,7 @@ cmd ::= RESET QUERY CACHE. { setDCLSQLElems(pInfo, TSDB_SQL_RESET_CACHE, 0);}
///////////////////////////////////ALTER TABLE statement//////////////////////////////////
cmd ::= ALTER TABLE ids(X) cpxName(F) ADD COLUMN columnlist(A). {
X.n += F.n;
SAlterTableInfo* pAlterTable = tAlterTableSqlElems(&X, A, NULL, TSDB_ALTER_TABLE_ADD_COLUMN);
SAlterTableInfo* pAlterTable = tAlterTableSqlElems(&X, A, NULL, TSDB_ALTER_TABLE_ADD_COLUMN, -1);
setSqlInfo(pInfo, pAlterTable, NULL, TSDB_SQL_ALTER_TABLE);
}
......@@ -693,14 +702,14 @@ cmd ::= ALTER TABLE ids(X) cpxName(F) DROP COLUMN ids(A). {
toTSDBType(A.type);
SArray* K = tVariantListAppendToken(NULL, &A, -1);
SAlterTableInfo* pAlterTable = tAlterTableSqlElems(&X, NULL, K, TSDB_ALTER_TABLE_DROP_COLUMN);
SAlterTableInfo* pAlterTable = tAlterTableSqlElems(&X, NULL, K, TSDB_ALTER_TABLE_DROP_COLUMN, -1);
setSqlInfo(pInfo, pAlterTable, NULL, TSDB_SQL_ALTER_TABLE);
}
//////////////////////////////////ALTER TAGS statement/////////////////////////////////////
cmd ::= ALTER TABLE ids(X) cpxName(Y) ADD TAG columnlist(A). {
X.n += Y.n;
SAlterTableInfo* pAlterTable = tAlterTableSqlElems(&X, A, NULL, TSDB_ALTER_TABLE_ADD_TAG_COLUMN);
SAlterTableInfo* pAlterTable = tAlterTableSqlElems(&X, A, NULL, TSDB_ALTER_TABLE_ADD_TAG_COLUMN, -1);
setSqlInfo(pInfo, pAlterTable, NULL, TSDB_SQL_ALTER_TABLE);
}
cmd ::= ALTER TABLE ids(X) cpxName(Z) DROP TAG ids(Y). {
......@@ -709,7 +718,7 @@ cmd ::= ALTER TABLE ids(X) cpxName(Z) DROP TAG ids(Y). {
toTSDBType(Y.type);
SArray* A = tVariantListAppendToken(NULL, &Y, -1);
SAlterTableInfo* pAlterTable = tAlterTableSqlElems(&X, NULL, A, TSDB_ALTER_TABLE_DROP_TAG_COLUMN);
SAlterTableInfo* pAlterTable = tAlterTableSqlElems(&X, NULL, A, TSDB_ALTER_TABLE_DROP_TAG_COLUMN, -1);
setSqlInfo(pInfo, pAlterTable, NULL, TSDB_SQL_ALTER_TABLE);
}
......@@ -722,7 +731,7 @@ cmd ::= ALTER TABLE ids(X) cpxName(F) CHANGE TAG ids(Y) ids(Z). {
toTSDBType(Z.type);
A = tVariantListAppendToken(A, &Z, -1);
SAlterTableInfo* pAlterTable = tAlterTableSqlElems(&X, NULL, A, TSDB_ALTER_TABLE_CHANGE_TAG_COLUMN);
SAlterTableInfo* pAlterTable = tAlterTableSqlElems(&X, NULL, A, TSDB_ALTER_TABLE_CHANGE_TAG_COLUMN, -1);
setSqlInfo(pInfo, pAlterTable, NULL, TSDB_SQL_ALTER_TABLE);
}
......@@ -733,7 +742,54 @@ cmd ::= ALTER TABLE ids(X) cpxName(F) SET TAG ids(Y) EQ tagitem(Z). {
SArray* A = tVariantListAppendToken(NULL, &Y, -1);
A = tVariantListAppend(A, &Z, -1);
SAlterTableInfo* pAlterTable = tAlterTableSqlElems(&X, NULL, A, TSDB_ALTER_TABLE_UPDATE_TAG_VAL);
SAlterTableInfo* pAlterTable = tAlterTableSqlElems(&X, NULL, A, TSDB_ALTER_TABLE_UPDATE_TAG_VAL, -1);
setSqlInfo(pInfo, pAlterTable, NULL, TSDB_SQL_ALTER_TABLE);
}
///////////////////////////////////ALTER STABLE statement//////////////////////////////////
cmd ::= ALTER STABLE ids(X) cpxName(F) ADD COLUMN columnlist(A). {
X.n += F.n;
SAlterTableInfo* pAlterTable = tAlterTableSqlElems(&X, A, NULL, TSDB_ALTER_TABLE_ADD_COLUMN, TSDB_SUPER_TABLE);
setSqlInfo(pInfo, pAlterTable, NULL, TSDB_SQL_ALTER_TABLE);
}
cmd ::= ALTER STABLE ids(X) cpxName(F) DROP COLUMN ids(A). {
X.n += F.n;
toTSDBType(A.type);
SArray* K = tVariantListAppendToken(NULL, &A, -1);
SAlterTableInfo* pAlterTable = tAlterTableSqlElems(&X, NULL, K, TSDB_ALTER_TABLE_DROP_COLUMN, TSDB_SUPER_TABLE);
setSqlInfo(pInfo, pAlterTable, NULL, TSDB_SQL_ALTER_TABLE);
}
//////////////////////////////////ALTER TAGS statement/////////////////////////////////////
cmd ::= ALTER STABLE ids(X) cpxName(Y) ADD TAG columnlist(A). {
X.n += Y.n;
SAlterTableInfo* pAlterTable = tAlterTableSqlElems(&X, A, NULL, TSDB_ALTER_TABLE_ADD_TAG_COLUMN, TSDB_SUPER_TABLE);
setSqlInfo(pInfo, pAlterTable, NULL, TSDB_SQL_ALTER_TABLE);
}
cmd ::= ALTER STABLE ids(X) cpxName(Z) DROP TAG ids(Y). {
X.n += Z.n;
toTSDBType(Y.type);
SArray* A = tVariantListAppendToken(NULL, &Y, -1);
SAlterTableInfo* pAlterTable = tAlterTableSqlElems(&X, NULL, A, TSDB_ALTER_TABLE_DROP_TAG_COLUMN, TSDB_SUPER_TABLE);
setSqlInfo(pInfo, pAlterTable, NULL, TSDB_SQL_ALTER_TABLE);
}
cmd ::= ALTER STABLE ids(X) cpxName(F) CHANGE TAG ids(Y) ids(Z). {
X.n += F.n;
toTSDBType(Y.type);
SArray* A = tVariantListAppendToken(NULL, &Y, -1);
toTSDBType(Z.type);
A = tVariantListAppendToken(A, &Z, -1);
SAlterTableInfo* pAlterTable = tAlterTableSqlElems(&X, NULL, A, TSDB_ALTER_TABLE_CHANGE_TAG_COLUMN, TSDB_SUPER_TABLE);
setSqlInfo(pInfo, pAlterTable, NULL, TSDB_SQL_ALTER_TABLE);
}
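The ALTER STABLE rules above mirror the ALTER TABLE ones one-for-one; the only difference is that tAlterTableSqlElems() receives TSDB_SUPER_TABLE instead of -1. A sketch of the newly accepted statements (names are placeholders):

```java
import java.sql.*;

public class AlterStableExample {
    public static void main(String[] args) throws Exception {
        Class.forName("com.taosdata.jdbc.TSDBDriver");
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:TAOS://localhost:6030/?user=root&password=taosdata");
             Statement stmt = conn.createStatement()) {
            stmt.executeUpdate("ALTER STABLE test.meters ADD COLUMN phase FLOAT");
            stmt.executeUpdate("ALTER STABLE test.meters ADD TAG location BINARY(30)");
            stmt.executeUpdate("ALTER STABLE test.meters CHANGE TAG location loc");
        }
    }
}
```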
......
......@@ -1875,6 +1875,7 @@ static int32_t setCtxTagColumnInfo(SQueryRuntimeEnv *pRuntimeEnv, SQLFunctionCtx
return TSDB_CODE_SUCCESS;
}
// todo refactor
static int32_t setupQueryRuntimeEnv(SQueryRuntimeEnv *pRuntimeEnv, int16_t order) {
qDebug("QInfo:%p setup runtime env", GET_QINFO_ADDR(pRuntimeEnv));
SQuery *pQuery = pRuntimeEnv->pQuery;
......@@ -3168,6 +3169,10 @@ void copyResToQueryResultBuf(SQInfo *pQInfo, SQuery *pQuery) {
// all results in current group have been returned to client, try next group
if (pGroupResInfo->index >= taosArrayGetSize(pGroupResInfo->pRows)) {
// current results of group has been sent to client, try next group
pGroupResInfo->index = 0;
pGroupResInfo->rowId = 0;
taosArrayClear(pGroupResInfo->pRows);
if (mergeGroupResult(pQInfo) != TSDB_CODE_SUCCESS) {
return; // failed to save data in the disk
}
......@@ -3845,11 +3850,6 @@ void setExecutionContext(SQInfo *pQInfo, int32_t groupIndex, TSKEY nextKey) {
// lastKey needs to be updated
pTableQueryInfo->lastKey = nextKey;
if (pRuntimeEnv->hasTagResults || pRuntimeEnv->pTsBuf != NULL) {
setAdditionalInfo(pQInfo, pTableQueryInfo->pTable, pTableQueryInfo);
}
if (pRuntimeEnv->prevGroupId != INT32_MIN && pRuntimeEnv->prevGroupId == groupIndex) {
return;
}
......@@ -4775,19 +4775,17 @@ static void enableExecutionForNextTable(SQueryRuntimeEnv *pRuntimeEnv) {
}
}
// TODO refactor: setAdditionalInfo
static FORCE_INLINE void setEnvForEachBlock(SQInfo* pQInfo, STableQueryInfo* pTableQueryInfo, SDataBlockInfo* pBlockInfo) {
SQueryRuntimeEnv* pRuntimeEnv = &pQInfo->runtimeEnv;
SQuery* pQuery = pQInfo->runtimeEnv.pQuery;
int32_t step = GET_FORWARD_DIRECTION_FACTOR(pQuery->order.order);
if (QUERY_IS_INTERVAL_QUERY(pQuery)) {
TSKEY nextKey = pBlockInfo->window.skey;
setIntervalQueryRange(pQInfo, nextKey);
if (pRuntimeEnv->hasTagResults || pRuntimeEnv->pTsBuf != NULL) {
setAdditionalInfo(pQInfo, pTableQueryInfo->pTable, pTableQueryInfo);
}
if (pRuntimeEnv->hasTagResults || pRuntimeEnv->pTsBuf != NULL) {
setAdditionalInfo(pQInfo, pTableQueryInfo->pTable, pTableQueryInfo);
}
if (QUERY_IS_INTERVAL_QUERY(pQuery)) {
setIntervalQueryRange(pQInfo, pBlockInfo->window.skey);
} else { // non-interval query
setExecutionContext(pQInfo, pTableQueryInfo->groupIndex, pBlockInfo->window.ekey + step);
}
......@@ -4810,7 +4808,7 @@ static void doTableQueryInfoTimeWindowCheck(SQuery* pQuery, STableQueryInfo* pTa
static int64_t scanMultiTableDataBlocks(SQInfo *pQInfo) {
SQueryRuntimeEnv *pRuntimeEnv = &pQInfo->runtimeEnv;
SQuery* pQuery = pRuntimeEnv->pQuery;
SQueryCostInfo* summary = &pRuntimeEnv->summary;
SQueryCostInfo* summary = &pRuntimeEnv->summary;
int64_t st = taosGetTimestampMs();
......@@ -5467,7 +5465,7 @@ static void doRestoreContext(SQInfo *pQInfo) {
SET_MASTER_SCAN_FLAG(pRuntimeEnv);
}
static void doCloseAllTimeWindowAfterScan(SQInfo *pQInfo) {
static void doCloseAllTimeWindow(SQInfo *pQInfo) {
SQuery *pQuery = pQInfo->runtimeEnv.pQuery;
if (QUERY_IS_INTERVAL_QUERY(pQuery)) {
......@@ -5519,7 +5517,7 @@ static void multiTableQueryProcess(SQInfo *pQInfo) {
}
// close all time window results
doCloseAllTimeWindowAfterScan(pQInfo);
doCloseAllTimeWindow(pQInfo);
if (needReverseScan(pQuery)) {
int32_t code = doSaveContext(pQInfo);
......@@ -5844,8 +5842,7 @@ static void stableQueryImpl(SQInfo *pQInfo) {
(isFixedOutputQuery(pRuntimeEnv) && (!isPointInterpoQuery(pQuery)) && (!pRuntimeEnv->groupbyColumn))) {
multiTableQueryProcess(pQInfo);
} else {
assert((pQuery->checkResultBuf == 1 && pQuery->interval.interval == 0) || isPointInterpoQuery(pQuery) ||
pRuntimeEnv->groupbyColumn);
assert(pQuery->checkResultBuf == 1 || isPointInterpoQuery(pQuery) || pRuntimeEnv->groupbyColumn);
sequentialTableProcess(pQInfo);
}
......@@ -6373,7 +6370,7 @@ static int32_t createQueryFuncExprFromMsg(SQueryTableMsg *pQueryMsg, int32_t num
if (functId == TSDB_FUNC_TOP || functId == TSDB_FUNC_BOTTOM) {
int32_t j = getColumnIndexInSource(pQueryMsg, &pExprs[i].base, pTagCols);
if (j < 0 || j >= pQueryMsg->numOfCols) {
assert(0);
return TSDB_CODE_QRY_INVALID_MSG;
} else {
SColumnInfo *pCol = &pQueryMsg->colList[j];
int32_t ret =
......@@ -6638,7 +6635,7 @@ static SQInfo *createQInfoImpl(SQueryTableMsg *pQueryMsg, SSqlGroupbyExpr *pGrou
pQInfo->runtimeEnv.summary.tableInfoSize += (pTableGroupInfo->numOfTables * sizeof(STableQueryInfo));
pQInfo->runtimeEnv.pResultRowHashTable = taosHashInit(pTableGroupInfo->numOfTables, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY), true, HASH_NO_LOCK);
pQInfo->runtimeEnv.keyBuf = malloc(TSDB_MAX_BYTES_PER_ROW);
pQInfo->runtimeEnv.keyBuf = malloc(TSDB_MAX_BYTES_PER_ROW); // todo opt size
pQInfo->runtimeEnv.pool = initResultRowPool(getResultRowSize(&pQInfo->runtimeEnv));
pQInfo->runtimeEnv.prevRow = malloc(POINTER_BYTES * pQuery->numOfCols + srcSize);
......
......@@ -585,11 +585,12 @@ SCreatedTableInfo createNewChildTableInfo(SStrToken *pTableName, SArray *pTagVal
return info;
}
SAlterTableInfo *tAlterTableSqlElems(SStrToken *pTableName, SArray *pCols, SArray *pVals, int32_t type) {
SAlterTableInfo *tAlterTableSqlElems(SStrToken *pTableName, SArray *pCols, SArray *pVals, int32_t type, int16_t tableType) {
SAlterTableInfo *pAlterTable = calloc(1, sizeof(SAlterTableInfo));
pAlterTable->name = *pTableName;
pAlterTable->type = type;
pAlterTable->tableType = tableType;
if (type == TSDB_ALTER_TABLE_ADD_COLUMN || type == TSDB_ALTER_TABLE_ADD_TAG_COLUMN) {
pAlterTable->pAddColumns = pCols;
......@@ -696,18 +697,6 @@ void setCreatedTableName(SSqlInfo *pInfo, SStrToken *pTableNameToken, SStrToken
pInfo->pCreateTableInfo->existCheck = (pIfNotExists->n != 0);
}
SMiscInfo *tTokenListAppend(SMiscInfo *pMiscInfo, SStrToken *pToken) {
assert(pToken != NULL);
if (pMiscInfo == NULL) {
pMiscInfo = calloc(1, sizeof(SMiscInfo));
pMiscInfo->a = taosArrayInit(8, sizeof(SStrToken));
}
taosArrayPush(pMiscInfo->a, pToken);
return pMiscInfo;
}
void setDCLSQLElems(SSqlInfo *pInfo, int32_t type, int32_t nParam, ...) {
pInfo->type = type;
if (nParam == 0) {
......@@ -724,15 +713,23 @@ void setDCLSQLElems(SSqlInfo *pInfo, int32_t type, int32_t nParam, ...) {
while ((nParam--) > 0) {
SStrToken *pToken = va_arg(va, SStrToken *);
pInfo->pMiscInfo = tTokenListAppend(pInfo->pMiscInfo, pToken);
taosArrayPush(pInfo->pMiscInfo->a, pToken);
}
va_end(va);
}
void setDropDbTableInfo(SSqlInfo *pInfo, int32_t type, SStrToken* pToken, SStrToken* existsCheck) {
void setDropDbTableInfo(SSqlInfo *pInfo, int32_t type, SStrToken* pToken, SStrToken* existsCheck, int16_t tableType) {
pInfo->type = type;
pInfo->pMiscInfo = tTokenListAppend(pInfo->pMiscInfo, pToken);
if (pInfo->pMiscInfo == NULL) {
pInfo->pMiscInfo = (SMiscInfo *)calloc(1, sizeof(SMiscInfo));
pInfo->pMiscInfo->a = taosArrayInit(4, sizeof(SStrToken));
}
taosArrayPush(pInfo->pMiscInfo->a, pToken);
pInfo->pMiscInfo->existsCheck = (existsCheck->n == 1);
pInfo->pMiscInfo->tableType = tableType;
}
void setShowOptions(SSqlInfo *pInfo, int32_t type, SStrToken* prefix, SStrToken* pPatterns) {
......@@ -758,7 +755,7 @@ void setShowOptions(SSqlInfo *pInfo, int32_t type, SStrToken* prefix, SStrToken*
}
}
void setCreateDBSQL(SSqlInfo *pInfo, int32_t type, SStrToken *pToken, SCreateDbInfo *pDB, SStrToken *pIgExists) {
void setCreateDbInfo(SSqlInfo *pInfo, int32_t type, SStrToken *pToken, SCreateDbInfo *pDB, SStrToken *pIgExists) {
pInfo->type = type;
if (pInfo->pMiscInfo == NULL) {
pInfo->pMiscInfo = calloc(1, sizeof(SMiscInfo));
......
This diff is collapsed.
......@@ -568,7 +568,7 @@ static void syncStartCheckPeerConn(SSyncPeer *pPeer) {
int32_t ret = strcmp(pPeer->fqdn, tsNodeFqdn);
if (pPeer->nodeId == 0 || (ret > 0) || (ret == 0 && pPeer->port > tsSyncPort)) {
int32_t checkMs = 100 + (pNode->vgId * 10) % 100;
if (pNode->vgId > 1) checkMs = tsStatusInterval * 1000 + checkMs;
sDebug("%s, check peer connection after %d ms", pPeer->id, checkMs);
taosTmrReset(syncCheckPeerConnection, checkMs, (void *)pPeer->rid, tsSyncTmrCtrl, &pPeer->timer);
}
......
......@@ -475,7 +475,8 @@ void *syncRetrieveData(void *param) {
SSyncNode *pNode = pPeer->pSyncNode;
taosBlockSIGPIPE();
sInfo("%s, start to retrieve data, sstatus:%s", pPeer->id, syncStatus[pPeer->sstatus]);
sInfo("%s, start to retrieve data, sstatus:%s, numOfRetrieves:%d", pPeer->id, syncStatus[pPeer->sstatus],
pPeer->numOfRetrieves);
if (pNode->notifyFlowCtrl) (*pNode->notifyFlowCtrl)(pNode->vgId, pPeer->numOfRetrieves);
......@@ -497,9 +498,11 @@ void *syncRetrieveData(void *param) {
pPeer->numOfRetrieves++;
} else {
pPeer->numOfRetrieves = 0;
if (pNode->notifyFlowCtrl) (*pNode->notifyFlowCtrl)(pNode->vgId, 0);
// if (pNode->notifyFlowCtrl) (*pNode->notifyFlowCtrl)(pNode->vgId, 0);
}
if (pNode->notifyFlowCtrl) (*pNode->notifyFlowCtrl)(pNode->vgId, 0);
pPeer->fileChanged = 0;
taosClose(pPeer->syncFd);
......
......@@ -308,7 +308,7 @@ static void vnodeFlowCtrlMsgToWQueue(void *param, void *tmrId) {
if (pVnode->flowctrlLevel <= 0) code = TSDB_CODE_VND_IS_FLOWCTRL;
pWrite->processedCount++;
if (pWrite->processedCount > 100) {
if (pWrite->processedCount >= 100) {
vError("vgId:%d, msg:%p, failed to process since %s, retry:%d", pVnode->vgId, pWrite, tstrerror(code),
pWrite->processedCount);
pWrite->processedCount = 1;
......
Requirements:
1. Can read the lowa configuration file
2. Support the taos-driver in both JNI and RESTful modes
\ No newline at end of file
2. Support JDBC-JNI and JDBC-restful
3. Read the configuration file and execute queries continuously
\ No newline at end of file
......@@ -19,14 +19,13 @@ import java.util.Map;
public class TaosDemoApplication {
private static Logger logger = Logger.getLogger(TaosDemoApplication.class);
private static final Logger logger = Logger.getLogger(TaosDemoApplication.class);
public static void main(String[] args) throws IOException {
// read configuration parameters
JdbcTaosdemoConfig config = new JdbcTaosdemoConfig(args);
boolean isHelp = Arrays.asList(args).contains("--help");
if (isHelp || config.host == null || config.host.isEmpty()) {
// if (isHelp) {
JdbcTaosdemoConfig.printHelp();
System.exit(0);
}
......@@ -75,7 +74,7 @@ public class TaosDemoApplication {
}
}
end = System.currentTimeMillis();
logger.error(">>> create table time cost : " + (end - start) + " ms.");
logger.info(">>> create table time cost : " + (end - start) + " ms.");
/**********************************************************************************/
// insert data
long tableSize = config.numOfTables;
......@@ -90,7 +89,7 @@ public class TaosDemoApplication {
// multi threads to insert
int affectedRows = subTableService.insertMultiThreads(superTableMeta, threadSize, tableSize, startTime, gap, config);
end = System.currentTimeMillis();
logger.error("insert " + affectedRows + " rows, time cost: " + (end - start) + " ms");
logger.info("insert " + affectedRows + " rows, time cost: " + (end - start) + " ms");
/**********************************************************************************/
// drop tables
if (config.dropTable) {
......@@ -108,5 +107,4 @@ public class TaosDemoApplication {
return startTime;
}
}
......@@ -21,27 +21,27 @@ public class DatabaseMapperImpl implements DatabaseMapper {
public void createDatabase(String dbname) {
String sql = "create database if not exists " + dbname;
jdbcTemplate.execute(sql);
logger.info("SQL >>> " + sql);
logger.debug("SQL >>> " + sql);
}
@Override
public void dropDatabase(String dbname) {
String sql = "drop database if exists " + dbname;
jdbcTemplate.update(sql);
logger.info("SQL >>> " + sql);
logger.debug("SQL >>> " + sql);
}
@Override
public void createDatabaseWithParameters(Map<String, String> map) {
String sql = SqlSpeller.createDatabase(map);
jdbcTemplate.execute(sql);
logger.info("SQL >>> " + sql);
logger.debug("SQL >>> " + sql);
}
@Override
public void useDatabase(String dbname) {
String sql = "use " + dbname;
jdbcTemplate.execute(sql);
logger.info("SQL >>> " + sql);
logger.debug("SQL >>> " + sql);
}
}
......@@ -21,14 +21,14 @@ public class SubTableMapperImpl implements SubTableMapper {
@Override
public void createUsingSuperTable(SubTableMeta subTableMeta) {
String sql = SqlSpeller.createTableUsingSuperTable(subTableMeta);
logger.info("SQL >>> " + sql);
logger.debug("SQL >>> " + sql);
jdbcTemplate.execute(sql);
}
@Override
public int insertOneTableMultiValues(SubTableValue subTableValue) {
String sql = SqlSpeller.insertOneTableMultiValues(subTableValue);
logger.info("SQL >>> " + sql);
logger.debug("SQL >>> " + sql);
int affectRows = 0;
try {
......@@ -42,7 +42,7 @@ public class SubTableMapperImpl implements SubTableMapper {
@Override
public int insertOneTableMultiValuesUsingSuperTable(SubTableValue subTableValue) {
String sql = SqlSpeller.insertOneTableMultiValuesUsingSuperTable(subTableValue);
logger.info("SQL >>> " + sql);
logger.debug("SQL >>> " + sql);
int affectRows = 0;
try {
......@@ -56,7 +56,7 @@ public class SubTableMapperImpl implements SubTableMapper {
@Override
public int insertMultiTableMultiValues(List<SubTableValue> tables) {
String sql = SqlSpeller.insertMultiSubTableMultiValues(tables);
logger.info("SQL >>> " + sql);
logger.debug("SQL >>> " + sql);
int affectRows = 0;
try {
affectRows = jdbcTemplate.update(sql);
......@@ -69,7 +69,7 @@ public class SubTableMapperImpl implements SubTableMapper {
@Override
public int insertMultiTableMultiValuesUsingSuperTable(List<SubTableValue> tables) {
String sql = SqlSpeller.insertMultiTableMultiValuesUsingSuperTable(tables);
logger.info("SQL >>> " + sql);
logger.debug("SQL >>> " + sql);
int affectRows = 0;
try {
affectRows = jdbcTemplate.update(sql);
......
......@@ -18,14 +18,14 @@ public class SuperTableMapperImpl implements SuperTableMapper {
@Override
public void createSuperTable(SuperTableMeta tableMetadata) {
String sql = SqlSpeller.createSuperTable(tableMetadata);
logger.info("SQL >>> " + sql);
logger.debug("SQL >>> " + sql);
jdbcTemplate.execute(sql);
}
@Override
public void dropSuperTable(String database, String name) {
String sql = "drop table if exists " + database + "." + name;
logger.info("SQL >>> " + sql);
logger.debug("SQL >>> " + sql);
jdbcTemplate.execute(sql);
}
}
package com.taosdata.taosdemo.dao;
import com.taosdata.taosdemo.dao.TableMapper;
import com.taosdata.taosdemo.domain.TableMeta;
import com.taosdata.taosdemo.domain.TableValue;
import com.taosdata.taosdemo.utils.SqlSpeller;
import org.apache.log4j.Logger;
import org.springframework.jdbc.core.JdbcTemplate;
import java.util.List;
public class TableMapperImpl implements TableMapper {
private static final Logger logger = Logger.getLogger(TableMapperImpl.class);
private JdbcTemplate template;
@Override
public void create(TableMeta tableMeta) {
String sql = SqlSpeller.createTable(tableMeta);
logger.debug("SQL >>> " + sql);
template.execute(sql);
}
@Override
public int insertOneTableMultiValues(TableValue values) {
String sql = SqlSpeller.insertOneTableMultiValues(values);
logger.debug("SQL >>> " + sql);
return template.update(sql);
}
@Override
public int insertOneTableMultiValuesWithColumns(TableValue values) {
String sql = SqlSpeller.insertOneTableMultiValuesWithColumns(values);
logger.debug("SQL >>> " + sql);
return template.update(sql);
}
@Override
public int insertMultiTableMultiValues(List<TableValue> tables) {
String sql = SqlSpeller.insertMultiTableMultiValues(tables);
logger.debug("SQL >>> " + sql);
return template.update(sql);
}
@Override
public int insertMultiTableMultiValuesWithColumns(List<TableValue> tables) {
String sql = SqlSpeller.insertMultiTableMultiValuesWithColumns(tables);
logger.debug("SQL >>> " + sql);
return template.update(sql);
}
}
### Settings ###
log4j.rootLogger=error,stdout
log4j.rootLogger=info,stdout
### Output to the console ###
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.Target=System.out
......
......@@ -3,6 +3,6 @@ PROJECT(TDengine)
IF (TD_LINUX)
INCLUDE_DIRECTORIES(. ${TD_COMMUNITY_DIR}/src/inc ${TD_COMMUNITY_DIR}/src/client/inc ${TD_COMMUNITY_DIR}/inc)
AUX_SOURCE_DIRECTORY(. SRC)
ADD_EXECUTABLE(demo demo.c)
ADD_EXECUTABLE(demo apitest.c)
TARGET_LINK_LIBRARIES(demo taos_static trpc tutil pthread )
ENDIF ()
......@@ -16,6 +16,9 @@ fi
logDir=`grep "^logDir" /etc/taos/taos.cfg | awk '{print $2}'`
dataDir=`grep "^dataDir" /etc/taos/taos.cfg | awk '{print $2}'`
[ -z "$logDir" ] && logDir="/var/log/taos"
[ -z "$dataDir" ] && dataDir="/var/lib/taos"
# Coloured Echoes
function red_echo { echo -e "\033[31m$@\033[0m"; }
function green_echo { echo -e "\033[32m$@\033[0m"; }
......
......@@ -16,6 +16,9 @@ fi
logDir=`grep "^logDir" /etc/taos/taos.cfg | awk '{print $2}'`
dataDir=`grep "^dataDir" /etc/taos/taos.cfg | awk '{print $2}'`
[ -z "$logDir" ] && logDir="/var/log/taos"
[ -z "$dataDir" ] && dataDir="/var/lib/taos"
# Coloured Echoes #
function red_echo { echo -e "\033[31m$@\033[0m"; } #
function green_echo { echo -e "\033[32m$@\033[0m"; } #
......
......@@ -16,6 +16,9 @@ fi
logDir=`grep "^logDir" /etc/taos/taos.cfg | awk '{print $2}'`
dataDir=`grep "^dataDir" /etc/taos/taos.cfg | awk '{print $2}'`
[ -z "$logDir" ] && logDir="/var/log/taos"
[ -z "$dataDir" ] && dataDir="/var/lib/taos"
# Coloured Echoes #
function red_echo { echo -e "\033[31m$@\033[0m"; } #
function green_echo { echo -e "\033[32m$@\033[0m"; } #
......
......@@ -16,6 +16,9 @@ fi
logDir=`grep "^logDir" /etc/taos/taos.cfg | awk '{print $2}'`
dataDir=`grep "^dataDir" /etc/taos/taos.cfg | awk '{print $2}'`
[ -z "$logDir" ] && logDir="/var/log/taos"
[ -z "$dataDir" ] && dataDir="/var/lib/taos"
# Coloured Echoes #
function red_echo { echo -e "\033[31m$@\033[0m"; } #
function green_echo { echo -e "\033[32m$@\033[0m"; } #
......
......@@ -16,6 +16,9 @@ fi
logDir=`grep "^logDir" /etc/taos/taos.cfg | awk '{print $2}'`
dataDir=`grep "^dataDir" /etc/taos/taos.cfg | awk '{print $2}'`
[ -z "$logDir" ] && logDir="/var/log/taos"
[ -z "$dataDir" ] && dataDir="/var/lib/taos"
# Coloured Echoes #
function red_echo { echo -e "\033[31m$@\033[0m"; } #
function green_echo { echo -e "\033[32m$@\033[0m"; } #
......
FROM ubuntu:latest AS builder
ARG PACKAGE=TDengine-server-1.6.5.10-Linux-x64.tar.gz
ARG EXTRACTDIR=TDengine-enterprise-server
ARG CONTENT=taos.tar.gz
WORKDIR /root
COPY ${PACKAGE} .
RUN tar -zxf ${PACKAGE}
RUN mv ${EXTRACTDIR}/driver ./lib
RUN tar -zxf ${EXTRACTDIR}/${CONTENT}
FROM ubuntu:latest
WORKDIR /root
RUN apt-get update
RUN apt-get install -y vim tmux net-tools
RUN echo 'alias ll="ls -l --color=auto"' >> /root/.bashrc
COPY --from=builder /root/bin/taosd /usr/bin
COPY --from=builder /root/bin/taos /usr/bin
COPY --from=builder /root/cfg/taos.cfg /etc/taos/
COPY --from=builder /root/lib/libtaos.so.* /usr/lib/libtaos.so.1
ENV LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/lib"
ENV LC_CTYPE=en_US.UTF-8
ENV LANG=en_US.UTF-8
EXPOSE 6030-6041/tcp 6060/tcp 6030-6039/udp
# VOLUME [ "/var/lib/taos", "/var/log/taos", "/etc/taos" ]
CMD [ "bash" ]
#!/bin/bash
echo "Executing buildClusterEnv.sh"
DOCKER_DIR=/data
CURR_DIR=`pwd`
if [ $# != 4 ]; then
echo "argument list need input : "
echo " -n numOfNodes"
echo " -v version"
exit 1
fi
NUM_OF_NODES=
VERSION=
while getopts "n:v:" arg
do
case $arg in
n)
NUM_OF_NODES=$OPTARG
;;
v)
VERSION=$OPTARG
;;
?)
echo "unkonwn argument"
;;
esac
done
function createDIR {
for i in $(seq 1 $2)  # {1..$2} does not expand when the bound is a variable
do
mkdir -p /data/node$i/data
mkdir -p /data/node$i/log
mkdir -p /data/node$i/cfg
done
}
function cleanEnv {
for i in {1..3}
do
echo /data/node$i/data/*
rm -rf /data/node$i/data/*
echo /data/node$i/log/*
rm -rf /data/node$i/log/*
done
}
function prepareBuild {
if [ -d $CURR_DIR/../../../../release ]; then
echo release exists
rm -rf $CURR_DIR/../../../../release/*
fi
cd $CURR_DIR/../../../../packaging
./release.sh -v edge -n $VERSION >> /dev/null
if [ ! -f $CURR_DIR/../../../../release/TDengine-server-$VERSION-Linux-x64.tar.gz ]; then
echo "no TDengine install package found"
exit 1
fi
cd $CURR_DIR/../../../../release
mv TDengine-server-$VERSION-Linux-x64.tar.gz $DOCKER_DIR
rm -rf $DOCKER_DIR/*.yml
cd $CURR_DIR
cp docker-compose.yml $DOCKER_DIR
cp Dockerfile $DOCKER_DIR
if [ $NUM_OF_NODES -eq 4 ]; then
cp ../node4.yml $DOCKER_DIR
fi
if [ $NUM_OF_NODES -eq 5 ]; then
cp ../node5.yml $DOCKER_DIR
fi
}
function clusterUp {
cd $DOCKER_DIR
if [ $NUM_OF_NODES -eq 3 ]; then
PACKAGE=TDengine-server-$VERSION-Linux-x64.tar.gz DIR=TDengine-server-$VERSION docker-compose up -d
fi
if [ $NUM_OF_NODES -eq 4 ]; then
PACKAGE=TDengine-server-$VERSION-Linux-x64.tar.gz DIR=TDengine-server-$VERSION docker-compose -f docker-compose.yml -f node4.yml up -d
fi
if [ $NUM_OF_NODES -eq 5 ]; then
PACKAGE=TDengine-server-$VERSION-Linux-x64.tar.gz DIR=TDengine-server-$VERSION docker-compose -f docker-compose.yml -f node4.yml -f node5.yml up -d
fi
}
cleanEnv
# prepareBuild
# clusterUp
\ No newline at end of file
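A typical invocation of the script above (values illustrative; both flags are required because of the `$# != 4` check):

```bash
# Build the edge package and bring up a 3-node cluster.
./buildClusterEnv.sh -n 3 -v 2.0.13.1
```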
version: '3.7'
services:
td2.0-node1:
build:
context: .
args:
- PACKAGE=${PACKAGE}
- EXTRACTDIR=${DIR}
image: 'tdengine:2.0.13.1'
container_name: 'td2.0-node1'
cap_add:
- ALL
stdin_open: true
tty: true
environment:
TZ: "Asia/Shanghai"
    # merge the two `command:` keys (duplicate keys are invalid YAML); `$$TZ`
    # stops compose from interpolating TZ at parse time, and exec makes taosd PID 1
    command: >
      sh -c "ln -snf /usr/share/zoneinfo/$$TZ /etc/localtime &&
        echo $$TZ > /etc/timezone &&
        exec taosd"
volumes:
# bind data directory
- type: bind
source: /data/node1/data
target: /var/lib/taos
# bind log directory
- type: bind
source: /data/node1/log
target: /var/log/taos
# bind configuration
- type: bind
source: /data/node1/cfg
target: /etc/taos
- type: bind
source: /data
target: /root
networks:
taos_update_net:
ipv4_address: 172.27.0.7
td2.0-node2:
build:
context: .
args:
- PACKAGE=${PACKAGE}
- EXTRACTDIR=${DIR}
image: 'tdengine:2.0.13.1'
container_name: 'td2.0-node2'
cap_add:
- ALL
stdin_open: true
tty: true
environment:
TZ: "Asia/Shanghai"
    # merged duplicate `command:` keys (see td2.0-node1)
    command: >
      sh -c "ln -snf /usr/share/zoneinfo/$$TZ /etc/localtime &&
        echo $$TZ > /etc/timezone &&
        exec taosd"
volumes:
# bind data directory
- type: bind
source: /data/node2/data
target: /var/lib/taos
# bind log directory
- type: bind
source: /data/node2/log
target: /var/log/taos
# bind configuration
- type: bind
source: /data/node2/cfg
target: /etc/taos
- type: bind
source: /data
target: /root
networks:
taos_update_net:
ipv4_address: 172.27.0.8
td2.0-node3:
build:
context: .
args:
- PACKAGE=${PACKAGE}
- EXTRACTDIR=${DIR}
image: 'tdengine:2.0.13.1'
container_name: 'td2.0-node3'
cap_add:
- ALL
stdin_open: true
tty: true
environment:
TZ: "Asia/Shanghai"
    # merged duplicate `command:` keys (see td2.0-node1)
    command: >
      sh -c "ln -snf /usr/share/zoneinfo/$$TZ /etc/localtime &&
        echo $$TZ > /etc/timezone &&
        exec taosd"
volumes:
# bind data directory
- type: bind
source: /data/node3/data
target: /var/lib/taos
# bind log directory
- type: bind
source: /data/node3/log
target: /var/log/taos
# bind configuration
- type: bind
source: /data/node3/cfg
target: /etc/taos
- type: bind
source: /data
target: /root
networks:
taos_update_net:
ipv4_address: 172.27.0.9
networks:
taos_update_net:
# external: true
ipam:
driver: default
config:
- subnet: "172.27.0.0/24"
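With the compose file above in place, the stack can be started and sanity-checked roughly as follows (the tarball name mirrors the clusterUp function earlier and is an assumption; `taos -s` runs one statement and exits):

```bash
# Start the 3-node stack, then list the dnodes from inside node1.
PACKAGE=TDengine-server-2.0.13.1-Linux-x64.tar.gz \
DIR=TDengine-server-2.0.13.1 \
docker-compose up -d
docker exec -it td2.0-node1 taos -s "show dnodes;"
```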
version: '3.7'
services:
td2.0-node4:
build:
context: .
args:
- PACKAGE=${PACKAGE}
- EXTRACTDIR=${DIR}
image: 'tdengine:2.0.13.1'
container_name: 'td2.0-node4'
cap_add:
- ALL
stdin_open: true
tty: true
environment:
TZ: "Asia/Shanghai"
    # merged duplicate `command:` keys (see td2.0-node1)
    command: >
      sh -c "ln -snf /usr/share/zoneinfo/$$TZ /etc/localtime &&
        echo $$TZ > /etc/timezone &&
        exec taosd"
volumes:
# bind data directory
- type: bind
source: /data/node4/data
target: /var/lib/taos
# bind log directory
- type: bind
source: /data/node4/log
target: /var/log/taos
# bind configuration
- type: bind
source: /data/node4/cfg
target: /etc/taos
- type: bind
source: /data
target: /root
networks:
taos_update_net:
ipv4_address: 172.27.0.10
\ No newline at end of file
version: '3.7'
services:
td2.0-node5:
build:
context: .
args:
- PACKAGE=${PACKAGE}
- EXTRACTDIR=${DIR}
image: 'tdengine:2.0.13.1'
container_name: 'td2.0-node5'
cap_add:
- ALL
stdin_open: true
tty: true
environment:
TZ: "Asia/Shanghai"
    # merged duplicate `command:` keys (see td2.0-node1)
    command: >
      sh -c "ln -snf /usr/share/zoneinfo/$$TZ /etc/localtime &&
        echo $$TZ > /etc/timezone &&
        exec taosd"
volumes:
# bind data directory
- type: bind
source: /data/node5/data
target: /var/lib/taos
# bind log directory
- type: bind
source: /data/node5/log
target: /var/log/taos
# bind configuration
- type: bind
source: /data/node5/cfg
target: /etc/taos
- type: bind
source: /data
target: /root
networks:
taos_update_net:
ipv4_address: 172.27.0.11
\ No newline at end of file
......@@ -15,6 +15,7 @@ print =============== step1 - login
system_content curl 127.0.0.1:7111/rest/
print 1-> $system_content
if $system_content != @{"status":"error","code":4357,"desc":"no auth info input"}@ then
print $system_content
return -1
endi
......
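For reference, the same RESTful endpoint accepts a Basic auth header; a sketch with the default root/taosdata credentials (port 7111 is the test harness's RESTful port):

```bash
# 'cm9vdDp0YW9zZGF0YQ==' is base64("root:taosdata"); SQL statements go to /rest/sql.
curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' \
  -d 'show databases;' http://127.0.0.1:7111/rest/sql
```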
......@@ -382,4 +382,28 @@ sql drop table cars;
sql create table cars(ts timestamp, c int) tags(id int);
sql create table car1 using cars tags(1);
sql create table car2 using cars tags(2);
sql insert into car1 (ts, c) values (now,1) car2(ts, c) values(now, 2);
print ========================> TD-2700
sql create table t1(ts timestamp, k int);
sql insert into t1 values(1500000001000, 0);
sql select sum(k) from t1 interval(1d) sliding(1h);
if $rows != 24 then
return -1
endi
print ========================> TD-2740
sql drop table if exists m1;
sql create table m1(ts timestamp, k int) tags(a int);
sql create table tm0 using m1 tags(0);
sql create table tm1 using m1 tags(1);
sql create table tm2 using m1 tags(2);
sql create table tm3 using m1 tags(3);
sql insert into tm0 values('2020-1-1 1:1:1', 0);
sql insert into tm1 values('2020-1-5 1:1:1', 0);
sql insert into tm2 values('2020-1-7 1:1:1', 0);
sql insert into tm3 values('2020-1-1 1:1:1', 0);
sql select count(*) from m1 where ts='2020-1-1 1:1:1' interval(1h) group by tbname;
if $rows != 2 then
return -1
endi
\ No newline at end of file
......@@ -415,6 +415,7 @@ sql select count(join_mt0.c1), sum(join_mt1.c2), first(join_mt0.c5), last(join_m
$val = 100
if $rows != $val then
print $rows
return -1
endi
......@@ -514,4 +515,4 @@ sql drop table tm2;
sql select count(*) from m1, m2 where m1.ts=m2.ts and m1.b=m2.a;
sql drop database ux1;
system sh/exec.sh -n dnode1 -s stop -x SIGINT